
Can you hear me now? Nope, can you see me now? Sign language on mobile

A group at the University of Washington has developed software that for the first time enables deaf and hard-of-hearing Americans to use sign language over a mobile phone. UW engineers got the phones working together this spring and recently received a National Science Foundation grant for a 20-person field project that will begin next year in Seattle. This is the first demonstration of two-way real-time video communication over cell phones in the United States. Since the team posted a video of the working prototype on YouTube, deaf people around the country have been writing in daily.


"A lot of people are excited about this," said principal investigator Eve Riskin, a UW professor of electrical engineering. For mobile communication, deaf people now communicate by cell phone using text messages. "But the point is you want to be able to communicate in your native language," Riskin said. "For deaf people that's American Sign Language." Video is much better than text-messaging because it's faster and it's better at conveying emotion, said Jessica DeWitt, a UW undergraduate in psychology who is deaf and is a collaborator on the MobileASL project. She says a large part of her communication is with facial expressions, which are transmitted over the video phones.

Low data transmission rates on U.S. cellular networks, combined with the limited processing power of mobile devices, have so far prevented real-time video at frame rates high enough to carry sign language. U.S. cellular networks carry roughly one tenth the data rates common in Europe and Asia, where sign language over cell phones is already possible in countries such as Sweden and Japan. Even as faster networks become more common in the United States, there is still a need for phones that operate on the slower systems. "The faster networks are not available everywhere," said doctoral student Anna Cavender. "They also cost more. We don't think it's fair for someone who's deaf to have to pay more for his or her cell phone than someone who's hearing."
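To see why the data rate caps the frame rate, here is a back-of-the-envelope calculation. The bitrate and frame-size figures below are illustrative assumptions, not numbers from the article:

```python
# Rough illustration (assumed figures, not from the article) of why low
# bitrates cap the frame rate of compressed video.
KBPS = 1000  # bits per second per kbps

def max_fps(bitrate_kbps: float, bits_per_frame: float) -> float:
    """Frames per second sustainable at a given channel rate."""
    return bitrate_kbps * KBPS / bits_per_frame

# Suppose a compressed low-resolution video frame averages ~15,000 bits.
frame_bits = 15_000
print(max_fps(30, frame_bits))   # ~2 fps on an assumed 30 kbps U.S. link
print(max_fps(300, frame_bits))  # ~20 fps on a network ten times faster
```

At roughly one tenth the data rate, the same compressed frames arrive at one tenth the frame rate, which is the gap the MobileASL compression work has to close.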

The team tried different ways to produce comprehensible sign language in low-resolution video. They found that the most important region to transmit in high resolution is around the face, which is not surprising: eye-tracking studies have shown that viewers spend most of their time looking at a signer's face. The current version of MobileASL uses a standard video compression tool to stay within the data transmission limit; future versions will incorporate custom tools for better quality. The team developed a scheme that transmits the signer's face and hands in high resolution and the background in lower resolution, as in the sketch below. They are also working on a feature that detects when a person is signing, so the phone can save battery and processing power when no one is moving their hands.
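The article does not publish the MobileASL encoder itself, so the following is only a minimal sketch of the two ideas, assuming OpenCV and numpy: blur everything outside a detected face region so a standard encoder spends its bits there, and use frame differencing as a crude stand-in for the signing-activity detector.

```python
# Minimal sketch (not the MobileASL implementation) of region-of-interest
# coding and signing-activity detection, assuming OpenCV and numpy.
import cv2
import numpy as np

# Stock Haar face detector shipped with OpenCV.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def roi_preprocess(frame: np.ndarray, pad: int = 40) -> np.ndarray:
    """Blur everything outside the face region so a standard encoder
    naturally spends its bit budget on the face, approximating
    high-resolution face / low-resolution background."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return frame  # no face found: leave the frame untouched
    out = cv2.GaussianBlur(frame, (31, 31), 0)  # low-detail background
    for (x, y, w, h) in faces:
        # Widen the box so the chin and hands near the face stay sharp.
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        x1 = min(x + w + pad, frame.shape[1])
        y1 = min(y + h + pad, frame.shape[0])
        out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

def looks_like_signing(prev_gray: np.ndarray, cur_gray: np.ndarray,
                       threshold: float = 8.0) -> bool:
    """Crude activity detector: mean absolute difference between frames.
    The threshold is an assumed tuning constant; when motion stays low,
    the phone could drop to a lower frame rate to save battery."""
    return float(cv2.absdiff(prev_gray, cur_gray).mean()) > threshold
```

A production encoder would more likely vary quantization per region inside the codec rather than preprocess raw pixels, but the bit-allocation idea is the same.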

The team currently uses phones imported from Europe, the only ones they could find that are compatible with the software and have the camera and video screen on the same side of the phone, so people can film themselves while watching the screen. Mobile video sign language won't be widely available until the service is provided through a commercial cell-phone manufacturer, Riskin said. The team has already been in discussion with a major cellular network provider that has expressed interest in the project.

The MobileASL team includes Richard Ladner, a UW professor of computer science and engineering; Sheila Hemami, a professor of electrical engineering at Cornell University; Jacob Wobbrock, an assistant professor in the UW's Information School; UW graduate students Neva Cherniavsky, Jaehong Chon and Rahul Vanam; and Cornell graduate student Frank Ciaramello.
Source: University of Washington.
