Monday, October 7, 2013

Mobile Technology for Sign-Language Communication and Eyes-Free Texting

I attended the PhD forum for mobile technology on Thursday morning and learned about some fascinating research on helping people who are deaf or blind communicate using smartphones.

Human-Centered Approach to Evaluating Mobile Sign Language Video Communication
Presenter: Jessica J. Tran, University of Washington, Seattle

Video communication on smartphones requires a lot of data and bandwidth, which can be expensive and hard to come by.  When your means of communication is sign language over video rather than voice calling alone, this is neither fair nor accessible: people who are deaf should not have to pay more to have enough data to communicate.  Jessica J. Tran's research studies how low frame rates and bandwidth can go before sign-language video is no longer intelligible.  Volunteers watched videos of a man signing sentences, ranging from 1 to 12 frames per second at varying kilobytes per second, then rated each video for intelligibility and answered a question about the sentence being signed.  This research could allow companies to pick the best combination of intelligibility and price, lowering costs while keeping the video understandable.
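To get a feel for why bitrate matters so much here, a quick back-of-the-envelope sketch helps. The numbers below (the 500 MB cap and the KB/s rates) are purely illustrative assumptions of mine, not figures from the study; the point is just how quickly a lower bitrate stretches a fixed data plan.

```python
# Illustrative sketch (my assumed numbers, not from Tran's study):
# how many minutes of sign-language video a monthly data cap allows
# at different bitrates.

def minutes_of_video(cap_megabytes, kilobytes_per_second):
    """Minutes of video a data cap permits at a given bitrate."""
    total_kb = cap_megabytes * 1024          # cap in kilobytes
    seconds = total_kb / kilobytes_per_second
    return seconds / 60

# A hypothetical 500 MB cap at three hypothetical bitrates:
for rate in (60, 30, 15):
    print(f"{rate} KB/s: {minutes_of_video(500, rate):.0f} minutes")
```

Halving the bitrate doubles the talk time on the same plan, which is why finding the lowest still-intelligible setting matters.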

Perkinput: Eyes-Free Text Entry for Mobile Devices
Presenter: Shiri Azenkot, University of Washington

On the iPhone, non-visual text entry is possible but very slow: the user hovers a finger over the screen to have each button read aloud, then taps again to select it.  This approach is also error-prone.  Perkinput is an application that lets users who know Braille type quickly and with fewer errors.  On an iPhone, the user calibrates the app by placing down all four fingers, then types each column of a Braille character using the index, middle, and ring fingers as required.  On an iPad, the user can go even faster by using both hands, one for the first column of a Braille character and the other for the second.  The application combines maximum-likelihood estimation and finger tracking to ensure it is always reading the right fingers.  In testing so far, Perkinput has achieved faster speeds, fewer errors, and a greater rate of improvement over time than VoiceOver.
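The calibrate-then-classify idea can be sketched in a few lines. This is my toy simplification, not Perkinput's actual algorithm: plain nearest-neighbor matching stands in for the maximum-likelihood tracking the presenter described, and the finger positions are made-up numbers.

```python
# Toy sketch of Perkinput-style finger assignment. Assumption: simple
# nearest-neighbor matching against calibrated finger positions stands
# in for the real system's maximum-likelihood tracking.

import math

def calibrate(touches):
    """Record one reference (x, y) position per finger, top to bottom."""
    return sorted(touches, key=lambda p: p[1])

def classify_column(reference, touches):
    """Map each touch to its nearest calibrated finger, returning the
    set of raised dots in one Braille column (1=index, 2=middle, 3=ring)."""
    dots = set()
    for (x, y) in touches:
        nearest = min(range(len(reference)),
                      key=lambda i: math.dist((x, y), reference[i]))
        dots.add(nearest + 1)
    return dots

# Calibration: three fingers resting at these (made-up) screen positions.
ref = calibrate([(100, 120), (105, 240), (98, 360)])
# A two-finger tap near the top two rest positions -> dots 1 and 2.
print(classify_column(ref, [(102, 118), (107, 245)]))  # {1, 2}
```

Because taps are matched to each user's own calibrated rest positions rather than to fixed on-screen buttons, the user never needs to see the screen at all, which is the core of the eyes-free design.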