A group at the University of Washington has developed software that, for the first time, enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones
working together this spring, and recently received a National Science
Foundation grant for a 20-person field project that will begin next year in Seattle.
This is the first time two-way real-time video communication has been demonstrated over cell phones in the United States. Since the team posted a video of the working prototype on YouTube, deaf people around the country have been writing on a
daily basis.
Deaf people currently communicate by cell phone using text messages. “But the point is you want to be able to communicate in
your native language,” said principal investigator Eve Riskin, a UW professor of electrical engineering. “For deaf people that’s American
Sign Language.”
Video is much better than text-messaging because it’s faster and it’s better
at conveying emotion, said Jessica DeWitt, a UW undergraduate in psychology who
is deaf and is a collaborator on the MobileASL project. She says a large part
of her communication is with facial expressions, which are transmitted over the
video phones.
Low data transmission rates on U.S. cellular networks, combined with limited processing power on mobile devices,
have so far prevented real-time video transmission at enough frames per second to carry sign language. United States cellular networks allow about one tenth of the data rates common
in places such as Europe and Asia (sign language over cell phones is already
possible in Sweden and Japan).
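To see why that matters, a rough back-of-the-envelope estimate helps: at a fixed link speed, the number of compressed frames the channel can carry each second is just the bitrate divided by the cost of one frame. The short sketch below illustrates this; the link speeds and per-frame sizes are assumptions for illustration, not figures from the MobileASL project.

```python
# Rough, illustrative estimate of how frame rate is limited by link speed.
# All numbers are assumptions for the example, not MobileASL figures.

def max_frame_rate(link_kbps, bits_per_frame):
    """Frames per second the channel can carry if each frame costs bits_per_frame bits."""
    return (link_kbps * 1000) / bits_per_frame

frame_bits = 3_000  # assumed cost of one compressed, low-resolution frame of signing video

for name, kbps in [("slower U.S.-style link", 30), ("faster overseas link", 300)]:
    print(f"{name}: about {max_frame_rate(kbps, frame_bits):.0f} frames per second")
# Sign language needs on the order of ten or more frames per second to stay
# intelligible, so the slower link leaves almost no headroom.
```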
Even as faster networks are becoming more common in the United States, the team wanted a tool that would operate on the slower systems.
The team tried different ways to get comprehensible sign language on
low-resolution video. They discovered that the most important part of the image
to transmit in high resolution is around the face. This is not surprising,
since eye-tracking studies have already shown that viewers watching sign language spend most of the time looking at the signer’s face.
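One simple way a block-based video encoder can act on that finding is to spend more bits on the blocks that cover the face and fewer on everything else. The sketch below illustrates this region-of-interest idea; the quantization values, block size, and the face rectangle (assumed to come from some face detector) are illustrative assumptions, not the MobileASL implementation.

```python
# Illustrative region-of-interest (ROI) quantization: macroblocks overlapping a
# detected face rectangle get a lower QP (finer quantization, more bits); the
# rest of the frame gets a coarser QP. A sketch, not the MobileASL encoder.

MB = 16  # macroblock size in pixels, as in H.264

def qp_map(width, height, face_rect, qp_face=24, qp_background=36):
    """Per-macroblock QP grid; face_rect = (x, y, w, h) from an assumed face detector."""
    fx, fy, fw, fh = face_rect
    cols, rows = width // MB, height // MB
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            x, y = c * MB, r * MB
            overlaps_face = (x < fx + fw and x + MB > fx and
                             y < fy + fh and y + MB > fy)
            row.append(qp_face if overlaps_face else qp_background)
        grid.append(row)
    return grid

# Example: a QCIF (176x144) frame with the face roughly centered in the upper half.
grid = qp_map(176, 144, face_rect=(64, 16, 48, 64))
print(grid[2])  # one row of QPs: lower (better quality) values where the face is
```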
The current version of MobileASL uses a standard video compression tool to
stay within the data transmission limit. Future versions will incorporate
custom tools to get better quality. The team developed a scheme to transmit the
person’s face and hands in high resolution, and the background in lower
resolution. Now they are working on another feature that identifies when people
are moving their hands, to reduce battery drain and processor load when the person is not signing.
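A minimal way to sketch such an activity detector is to compare successive frames and lower the frame rate when little changes; the thresholds, frame rates, and tiny example frames below are assumptions for illustration, not the project’s actual method.

```python
# Illustrative activity detector: when successive frames barely differ (the hands
# are still), the phone can drop to a low frame rate to save battery and CPU.
# Thresholds, rates, and example frames are assumptions, not MobileASL values.

def frame_difference(prev, curr):
    """Mean absolute pixel difference between two same-sized grayscale frames."""
    total = sum(abs(p - c) for prow, crow in zip(prev, curr)
                           for p, c in zip(prow, crow))
    return total / (len(curr) * len(curr[0]))

def choose_frame_rate(prev, curr, active_fps=12, idle_fps=1, threshold=4.0):
    """High frame rate while the user appears to be signing, low rate otherwise."""
    return active_fps if frame_difference(prev, curr) > threshold else idle_fps

still = [[10, 10, 10], [10, 10, 10]]   # nothing moved
moved = [[10, 60, 10], [80, 10, 10]]   # hands in motion
print(choose_frame_rate(still, still))  # -> 1  (idle: transmit few frames)
print(choose_frame_rate(still, moved))  # -> 12 (signing: keep frame rate up)
```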
The team is currently using phones imported from Europe,
which are the only ones they could find that would be compatible with the
software and have a camera and video screen located on the same side of the
phone so that people can film themselves while watching the screen.
Mobile video sign language won’t be widely available until the service is
provided through a commercial cell-phone manufacturer, Riskin said. The team
has already been in discussion with a major cellular network provider that has
expressed interest in the project.
The MobileASL team includes Richard Ladner, a UW professor of computer
science and engineering; Sheila Hemami, a professor of electrical engineering
at Cornell University; Jacob Wobbrock, an assistant professor in the UW’s Information School; UW graduate students Neva Cherniavsky, Jaehong Chon and Rahul Vanam; and Cornell graduate
student Frank Ciaramello.
More details on the MobileASL project are at http://mobileasl.cs.washington.edu/index.html.
A YouTube video of the prototype is available at http://youtube.com/watch?v=FaE1PvJwI8E.