Recognizing British Sign Language Using Deep Learning: A Contactless and Privacy-Preserving Approach
Abstract:

Sign language is used by deaf and mute people to communicate through hand movements, body postures, and facial expressions. The motions in sign language comprise a range of distinct hand and finger articulations that are occasionally synchronized with the head, face, and body. Automatic sign language recognition (SLR) is a highly challenging area and, after almost three decades of research, still remains in its infancy compared with speech recognition. Current wearable and vision-based SLR systems are intrusive, sensitive to ambient lighting, and raise privacy concerns. To the best of our knowledge, this work proposes the first contactless British Sign Language (BSL) recognition system using radar and deep learning (DL) algorithms. The proposed system extracts 2-D spatiotemporal features from the radar data and applies state-of-the-art DL models to classify BSL signs into different verbs and emotions, such as Help, Drink, Eat, Happy, Hate, and Sad. We collected and annotated a large-scale benchmark BSL dataset covering 15 different types of BSL signs. The proposed system achieves its highest classification performance with the VGGNet model, reaching a multiclass accuracy of up to 90.07% at a distance of 141 cm from the subject.
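
To make the classification step concrete, the following is a minimal sketch (not the authors' exact pipeline) of fine-tuning a VGGNet model on 2-D spatiotemporal radar maps. It assumes the radar returns have already been converted to image-like feature maps (e.g., micro-Doppler spectrograms) and stored one folder per BSL sign; the directory names, hyperparameters, and preprocessing choices are illustrative assumptions, not details from the paper.

    # Minimal sketch: fine-tune VGG16 on radar spectrogram images (assumed layout).
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    NUM_CLASSES = 15  # 15 BSL signs, as reported in the abstract

    # Resize spectrogram images to the 224x224 RGB input expected by VGGNet.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Hypothetical dataset layout: spectrograms/train/<sign_name>/<sample>.png
    train_data = datasets.ImageFolder("spectrograms/train", transform=preprocess)
    train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

    # Start from an ImageNet-pretrained VGG16 and replace the final fully
    # connected layer so the network outputs one score per BSL sign.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(10):
        for spectrograms, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(spectrograms), labels)
            loss.backward()
            optimizer.step()

In this sketch, multiclass accuracy would then be measured on a held-out split of the annotated BSL dataset; reported figures such as the 90.07% accuracy at 141 cm come from the paper's own experiments, not from this illustrative code.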