
Sign language translator using LSTM and Generative Adversarial Networks: A Review

Jenit Abraham, Nebila S, Hadiya Manzoor, Parvathy P, Neethu Sathyan M

Abstract


Sign language is an essential mode of communication for the deaf and hard-of-hearing community. However, the communication gap between sign language users and non-signers persists, hindering effective interaction and inclusivity. In recent years, advancements in deep learning techniques, particularly Long Short-Term Memory (LSTM) networks and generative artificial intelligence (AI), have shown promise in bridging this gap by enabling the development of sign language translation systems.

Generative Adversarial Networks (GANs) with LSTM-based architectures have shown promise in this domain. The GAN model consists of a generator and a discriminator network working in an adversarial manner. The generator generates synthetic sign language videos, while the discriminator determines the authenticity of the generated videos. The GAN is trained using a dataset of real sign language videos paired with text annotations. The generator learns to generate realistic sign language videos by capturing the underlying patterns and characteristics of the training data, while the discriminator learns to classify the generated videos as real or fake.
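
As a rough illustration of this adversarial setup, the sketch below pairs an LSTM-based generator, which maps a text embedding plus noise to a sequence of frame-level features, with an LSTM-based discriminator that scores a sequence as real or generated. The module names, feature dimensions, feature representation, and training details are illustrative assumptions, not the implementation described in the reviewed work.

```python
# Minimal sketch of a GAN with LSTM-based generator and discriminator for
# sign language video synthesis. Dimensions and feature choices are assumed.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a text embedding plus noise to a sequence of frame features."""
    def __init__(self, text_dim=128, noise_dim=64, hidden_dim=256, frame_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(text_dim + noise_dim, hidden_dim, batch_first=True)
        self.to_frame = nn.Linear(hidden_dim, frame_dim)

    def forward(self, text_emb, noise):
        # text_emb: (batch, seq_len, text_dim), noise: (batch, seq_len, noise_dim)
        h, _ = self.lstm(torch.cat([text_emb, noise], dim=-1))
        return self.to_frame(h)          # (batch, seq_len, frame_dim)

class Discriminator(nn.Module):
    """Scores a sequence of frame features as real (1) or generated (0)."""
    def __init__(self, frame_dim=512, hidden_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, frames):
        _, (h_n, _) = self.lstm(frames)  # h_n: (1, batch, hidden_dim)
        return self.score(h_n[-1])       # one raw logit per sequence

def gan_step(gen, disc, real_frames, text_emb, g_opt, d_opt):
    """One adversarial update with the standard non-saturating GAN loss."""
    bce = nn.BCEWithLogitsLoss()
    noise = torch.randn(text_emb.size(0), text_emb.size(1), 64)  # noise_dim=64

    # Discriminator update: distinguish real sequences from generated ones.
    fake = gen(text_emb, noise).detach()
    d_loss = bce(disc(real_frames), torch.ones(real_frames.size(0), 1)) + \
             bce(disc(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: try to make the discriminator label fakes as real.
    fake = gen(text_emb, noise)
    g_loss = bce(disc(fake), torch.ones(fake.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```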

Once trained, the generator's output videos can be further processed using computer vision techniques to extract visual features. These features are then used as input to an LSTM translation model that maps them to corresponding textual translations. The LSTM model learns the relationship between the visual features and the associated text annotations. The GAN with an LSTM generator and discriminator thus provides a framework for sign language translation, combining the benefits of generative modeling and sequential processing. Further research and refinement of these models are needed to improve the accuracy and effectiveness of sign language translation systems.
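
The translation stage can be pictured as an LSTM that consumes per-frame visual features and emits word-level predictions aligned to the text annotation. The sketch below is a minimal illustration under assumed dimensions; the CTC alignment loss is a common choice for unsegmented video-to-text labelling and is our assumption, not a detail given in the abstract.

```python
# Minimal sketch of the LSTM translation stage: per-frame visual features
# extracted from sign language video are mapped to a textual translation.
# Vocabulary handling, feature extraction, and the CTC loss are assumptions.
import torch
import torch.nn as nn

class SignToTextLSTM(nn.Module):
    """Encodes a sequence of visual features and emits word logits per frame."""
    def __init__(self, feature_dim=512, hidden_dim=256, vocab_size=2000):
        super().__init__()
        self.encoder = nn.LSTM(feature_dim, hidden_dim, num_layers=2,
                               batch_first=True)
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, features):
        # features: (batch, num_frames, feature_dim), e.g. CNN or pose features
        outputs, _ = self.encoder(features)
        return self.classifier(outputs)   # (batch, num_frames, vocab_size)

def train_step(model, features, targets, target_lengths, optimizer):
    """One supervised update; targets are padded word indices (batch, max_len)."""
    logits = model(features).log_softmax(-1)              # (B, T, V)
    input_lengths = torch.full((features.size(0),), features.size(1),
                               dtype=torch.long)
    # CTC aligns frame-level predictions with the unsegmented text annotation.
    loss = nn.functional.ctc_loss(logits.transpose(0, 1),  # (T, B, V)
                                  targets, input_lengths, target_lengths,
                                  blank=0)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```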

To test the effectiveness of our method, we ran experiments on a variety of sign language video datasets and compared the outcomes with those from conventional LSTM-based methods. The results show that the GAN-based system outperforms traditional LSTM-based methods: the generated text has better fluency and semantic coherence and more closely matches the sign language gestures. These results demonstrate how our system can aid effective communication between sign language users and non-signers. Thanks to the GAN-based approach's flexibility in handling variations in signing styles and gestures, the system can support a wide range of sign language expressions. This technology has the potential to improve communication accessibility in a number of areas, including education, communication platforms, and accessibility tools.
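
The abstract does not name the metrics used to judge fluency and semantic coherence. As an illustration only, comparisons of this kind are often made with sentence-level BLEU between each system's output and a reference translation, as in the hypothetical snippet below; the example strings are toy data.

```python
# Illustrative only: the reviewed work does not specify its metrics. BLEU is
# a common way to compare generated translations against references.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu(reference: str, hypothesis: str) -> float:
    """Sentence-level BLEU between one reference and one system output."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)

# Compare a GAN+LSTM output and a baseline LSTM output against the same
# ground-truth translation (toy strings for illustration).
ref = "hello how are you"
print(bleu(ref, "hello how are you"))   # higher score
print(bleu(ref, "hello you are"))       # lower score
```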




