
REAL-TIME SIGN LANGUAGE TRANSLATOR

Sandhiya S S, Madhan K S P

Abstract


The Real-Time Sign Language Translator is an AI-driven system designed to bridge the communication gap between hearing and hearing-impaired individuals by translating sign language gestures into spoken or written text in real time. The application combines Convolutional Neural Networks (CNNs) for spatial feature extraction with Recurrent Neural Networks (RNNs) for temporal sequence prediction, and uses OpenCV for real-time video processing. The system captures hand gestures via a webcam, preprocesses each frame, and classifies it with a trained deep learning model. The recognized gestures are then converted into natural language sentences by a language generation module. For efficiency and scalability, the model is deployed with TensorFlow and integrated into an interactive Streamlit interface for live demonstrations. This project exemplifies the integration of Computer Vision and Natural Language Processing (NLP) to enable seamless, real-time human–computer interaction, promoting inclusivity and accessibility in communication.
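The pipeline described above can be illustrated with a minimal sketch of two of its stages: frame preprocessing and the collapsing of per-frame classifier outputs into a gesture sequence. Everything here is an illustrative assumption rather than the authors' implementation: the 64×64 input size, the `LABELS` alphabet, and the helper names `preprocess_frame` and `collapse_predictions` are hypothetical, and a real system would capture frames with OpenCV and classify them with the trained CNN–RNN model instead of the stand-ins used here.

```python
import numpy as np

# Hypothetical gesture alphabet; index 0 acts as a "blank"/no-gesture class.
LABELS = ["", "HELLO", "THANK", "YOU"]

def preprocess_frame(frame: np.ndarray, size: int = 64) -> np.ndarray:
    """Convert an RGB frame to a normalized grayscale square.

    Stand-in for the OpenCV preprocessing stage (cv2.cvtColor/cv2.resize),
    using a naive channel mean and nearest-neighbor resize.
    """
    gray = frame.mean(axis=2)                 # naive grayscale
    h, w = gray.shape
    rows = np.arange(size) * h // size        # nearest-neighbor row indices
    cols = np.arange(size) * w // size        # nearest-neighbor column indices
    small = gray[rows][:, cols]
    return (small / 255.0).astype(np.float32) # scale pixel values to [0, 1]

def collapse_predictions(frame_labels: list[int]) -> list[str]:
    """Turn per-frame class indices into a gesture sequence by collapsing
    consecutive repeats and dropping the blank class (CTC-style), as a
    stand-in for the sequence-prediction / language-generation stage."""
    out, prev = [], None
    for idx in frame_labels:
        if idx != prev and idx != 0:
            out.append(LABELS[idx])
        prev = idx
    return out

if __name__ == "__main__":
    frame = np.zeros((120, 160, 3), dtype=np.uint8)  # dummy webcam frame
    x = preprocess_frame(frame)
    print(x.shape)  # (64, 64)
    words = collapse_predictions([0, 1, 1, 1, 0, 0, 2, 2, 3, 3])
    print(" ".join(words))  # HELLO THANK YOU
```

In a live deployment, `preprocess_frame` would run on each frame from `cv2.VideoCapture`, the model's per-frame predictions would feed `collapse_predictions`, and the resulting text would be rendered in the Streamlit interface.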

 



