News: Microsoft Research Asia has developed a way for Kinect to read and recognize human sign language.
Kinect, with its ability to capture depth information and color data simultaneously, makes it possible to track hand and body movements more accurately and quickly.
Kinect has a lot of uses for gaming, but what about non-gaming applications? It appears that Kinect is enabling something quite revolutionary in the field of Natural User Interfaces. Working with the Institute of Computing Tech at Beijing Union University, Microsoft Research Asia has developed a way for the Kinect to read human sign language with ease. Sign language is the primary language for many people who are deaf or hard of hearing; however, you can't really communicate with your computer using sign language. Microsoft Research Asia has built a system that allows Kinect to detect the sign language used by people and, according to them, it works well. Here is what they had to say:
In this project—which is being shown during the DemoFest portion of Faculty Summit 2013, which brings more than 400 academic researchers to Microsoft headquarters to share insight into impactful research—the hand tracking leads to a process of 3-D motion-trajectory alignment and matching for individual words in sign language. The words are generated via hand tracking by the Kinect for Windows software and then normalized, and matching scores are computed to identify the most relevant candidates when a signed word is analyzed.
The algorithm for this 3-D trajectory matching, in turn, has enabled the construction of a system for sign-language recognition and translation, consisting of two modes. The first, Translation Mode, translates sign language into text or speech. The technology currently supports American Sign Language but has potential for all varieties of sign language.
The second, Communications Mode, enables communications between a hearing person and a deaf or hard-of-hearing person by use of an avatar. Guided by text input from a keyboard, the avatar can display the corresponding sign-language sentence. The deaf or hard-of-hearing person responds using sign language, and the system converts that answer into text.
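To make the trajectory-matching idea in the quote above more concrete, here is a minimal sketch in Python. It assumes each signed word is captured as a sequence of 3-D hand positions; the normalization (center and scale the trajectory) and the dynamic-time-warping score are plausible simplifications for illustration, not Microsoft's actual algorithm, and the function names are hypothetical.

```python
import math

def normalize(trajectory):
    # Translate the trajectory to its centroid and scale it to unit size,
    # so the signer's position and distance from the sensor don't matter.
    n = len(trajectory)
    cx = sum(p[0] for p in trajectory) / n
    cy = sum(p[1] for p in trajectory) / n
    cz = sum(p[2] for p in trajectory) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in trajectory]
    scale = max(math.sqrt(x * x + y * y + z * z) for x, y, z in centered) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in centered]

def dist(a, b):
    # Euclidean distance between two 3-D points.
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def match_score(traj_a, traj_b):
    # Dynamic time warping aligns two trajectories of different lengths;
    # a lower score means a closer match.
    a, b = normalize(traj_a), normalize(traj_b)
    INF = float("inf")
    dtw = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    dtw[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = dist(a[i - 1], b[j - 1])
            dtw[i][j] = cost + min(dtw[i - 1][j], dtw[i][j - 1], dtw[i - 1][j - 1])
    return dtw[len(a)][len(b)]

def recognize(signed, vocabulary):
    # Return the vocabulary word whose stored trajectory best matches
    # the observed one, i.e. the most relevant candidate.
    return min(vocabulary, key=lambda word: match_score(signed, vocabulary[word]))
```

In this toy version, "matching scores are computed to identify the most relevant candidates" simply means scoring the observed trajectory against every stored word and picking the minimum; a real system would also use hand shape and larger vocabularies.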
It all sounds pretty amazing to me. While they certainly didn't have gaming in mind here, what do you think this could bring to Kinect on Xbox 360 or the far more powerful Kinect 2.0 on Xbox One? It would seem the possibilities are endless.