Gesture Recognition
Abstract
In this work, we propose an approach to sign language recognition that uses a virtual reality headset to create an immersive environment. We show how features extracted from Leap Motion controller data, captured from an egocentric view, can be used to automatically recognize a user's signed gesture. The Leap features are fed to a random forest for real-time classification of the user's gesture, and we further analyze which of these features are most important for egocentric gesture recognition. To test the efficacy of the proposed approach, we evaluate it on the 26 letters of the American Sign Language alphabet in a virtual environment, using an application for learning sign language.
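As a minimal sketch of the kind of pipeline the abstract describes (Leap-style hand features classified with a random forest, followed by a feature-importance analysis), the snippet below uses scikit-learn on synthetic data. The feature names, dimensions, and training data here are illustrative assumptions, not the project's actual feature set or implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-frame feature vector derived from Leap Motion data:
# 5 fingertip positions (x, y, z) plus 5 finger-extension flags = 20 values.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
FEATURE_NAMES = (
    [f"tip_{f}_{axis}" for f in FINGERS for axis in "xyz"]
    + [f"extended_{f}" for f in FINGERS]
)

# Synthetic stand-in for labeled training data: 26 ASL letters,
# 100 example frames per letter.
rng = np.random.default_rng(0)
n_samples, n_classes = 2600, 26
X = rng.normal(size=(n_samples, len(FEATURE_NAMES)))
y = rng.integers(0, n_classes, size=n_samples)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Real-time-style classification: predict the letter for the latest frame.
letter_index = clf.predict(X[:1])[0]
print("predicted letter:", chr(ord("A") + letter_index))

# Feature-importance analysis, analogous to asking which Leap features
# matter most for egocentric gesture recognition.
ranked = sorted(zip(FEATURE_NAMES, clf.feature_importances_),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Random forests expose `feature_importances_` directly, which is one common way to rank input features without a separate ablation study.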
Papers
- J. Schioppo, Z. Meyer, D. Fabiano, and S. Canavan. Sign Language Recognition: Learning American Sign Language in a Virtual Environment, CHI LBW, 2019.
- J. Schioppo, Z. Meyer, D. Fabiano, and S. Canavan. Sign Language Recognition in Virtual Reality, FG, 2020.
Notes
This project is no longer under active development in the lab.