Hi, I’m Ian – the head of XR Labs here at Subvrsive. The work we do in XR Labs involves a lot of exploring: we test out many things, see what works, and try not to limit our creativity with too many rules. One value we do hold is to build applications that have purpose, utility, and meaning. Today, we’re excited to announce Project ASL – the first of what we hope will be many projects that embrace that spirit.
Project ASL is a Mixed Reality application in development at XR Labs, designed to teach American Sign Language in an interactive way. Traditionally, ASL has been taught in schools, with books, and most recently, with online resources and educational software. But the problem with most ASL learning software is that it relies on pictures and videos to demonstrate signing, creating a passive learning environment for users. With those solutions, the student is limited to being the listener – the machine can never watch the student sign.
With Project ASL, we hope to let the user become the speaker – to make software that adds a level of interactivity and comprehension that wasn’t possible before. To accomplish this, we are building the project in Unity using the Leap Motion Controller. With this platform, we can write software that sees the user’s hands and recognizes when they are forming specific ASL letters and words.
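To give a feel for the core recognition idea, here is a toy sketch in Python (the actual project is written in Unity/C#, so everything below – the names, the pose table, the feature choice – is illustrative, not our implementation). Hand trackers like the Leap Motion report, per frame, which fingers are extended, and a handful of ASL fingerspelled letters can be roughly distinguished by that feature alone. Real recognition also has to consider hand orientation, thumb position, and motion – letters like J and Z are traced, not held.

```python
from typing import Dict, Optional, Tuple

# Feature vector: is each finger extended? Order: thumb..pinky.
Fingers = Tuple[bool, bool, bool, bool, bool]

# Greatly simplified poses for a few fingerspelled letters,
# keyed by which fingers the tracker reports as extended.
LETTER_POSES: Dict[Fingers, str] = {
    (False, True,  False, False, False): "D",  # index only
    (True,  True,  False, False, False): "L",  # thumb + index
    (True,  False, False, False, True):  "Y",  # thumb + pinky
    (False, True,  True,  True,  True):  "B",  # four fingers up, thumb folded
}

def classify_letter(extended: Fingers) -> Optional[str]:
    """Return the matching letter, or None if the pose is unknown."""
    return LETTER_POSES.get(extended)
```

In the app itself, a check like this would run every tracking frame against the hand data the Leap Motion SDK exposes to Unity, rather than against a hand-built tuple.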
The first component of Project ASL, which we’re sharing today, is focused on vocabulary building and uses interactive flashcards and animations to teach users how to sign the alphabet and several basic words. Vocabulary building is the first of several components that we are designing as part of Project ASL, and we are excited to share more as it continues to evolve.