
Unsound: Software To Recognize Sign Language using Deep Learning

Dhrishya Suresh, Kajal Libi, Neeraja S Pradeep, Rajalakshmy B, Sidhik A

Abstract


Recognizing sign language from complicated hand gestures whose shapes change continually is regarded as a difficult task in computer vision. This research proposes recognizing American Sign Language (ASL) gestures using convolutional neural networks (CNNs), a powerful artificial intelligence technique. The dataset contains 87,000 images, each sized 200x200 pixels, for a classification task over 29 categories: the letters A-Z (26 classes) plus three additional classes for SPACE, DELETE, and NOTHING. The goal is to train a machine learning model that correctly categorizes these images into their respective classes. CNN training is carried out with a variety of sample sizes, each covering multiple signers and viewing angles. To improve recognition accuracy, several CNN architectures were designed and evaluated on our sign language data. Alongside our sign language recognition software (UNSOUND), we have also incorporated conversion of text to sign language for the convenience of those who are not familiar with sign language.
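The classification pipeline described above (200x200 input images mapped to 29 class probabilities) can be sketched as a single convolution + ReLU + max-pooling stage followed by a dense softmax layer. This is a minimal NumPy illustration with random, untrained weights; the layer sizes and kernel are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

RNG = np.random.default_rng(0)
NUM_CLASSES = 29  # A-Z (26) + SPACE + DELETE + NOTHING

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size):
    """Non-overlapping max pooling; trims edges that don't fit."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(image, kernel, weights, bias):
    feat = np.maximum(conv2d(image, kernel), 0.0)  # conv + ReLU
    feat = max_pool(feat, size=4)                  # spatial downsampling
    return softmax(feat.ravel() @ weights + bias)  # class probabilities

# Fake 200x200 grayscale "sign" image and randomly initialized weights.
image = RNG.random((200, 200))
kernel = RNG.standard_normal((3, 3)) * 0.1
feat_len = ((200 - 3 + 1) // 4) ** 2  # pooled 49x49 feature map, flattened
weights = RNG.standard_normal((feat_len, NUM_CLASSES)) * 0.01
bias = np.zeros(NUM_CLASSES)

probs = predict(image, kernel, weights, bias)
print(probs.shape)  # (29,) - one probability per class
```

In a trained network the kernel, weights, and bias would be learned by backpropagation over the 87,000 labeled images; frameworks stack many such conv/pool stages before the final softmax.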



References


Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei and Y. Sheikh, "OpenPose: Realtime multi-person 2D pose estimation using part affinity fields", arXiv:1812.08008, Dec. 2018.

B. Hu and J. Wang, "Deep learning based hand gesture recognition and UAV flight controls", Int. J. Autom. Comput., vol. 17, no. 1, pp. 17-29, Feb. 2020.

M. Ramanathan, W.-Y. Yau and E. K. Teoh, "Improving human body part detection using deep learning and motion consistency," 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), 2016, pp. 1-5, doi: 10.1109/ICARCV.2016.7838651.

U. Cote-Allard, C. L. Fall, A. Drouin, A. Campeau-Lecours, C. Gosselin, K. Glette, et al., "Deep learning for electromyographic hand gesture signal classification using transfer learning", IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 4, pp. 760-771, Apr. 2019.

S. Aly and W. Aly, "DeepArSLR: A novel signer-independent deep learning framework for isolated Arabic sign language gestures recognition", IEEE Access, vol. 8, pp. 83199-83212, 2020.

"Japanese Sign Language Recognition Using Recurrent Neural Network".

A. Krishna Chowdary and G. Sandeep, "Isolated ASL Sign Recognition System for Deaf Persons", May 2022.

T. Starner, J. Weaver and A. Pentland, "Real-time American sign language recognition using desk and wearable computer based video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1371-1375, Dec. 1998, doi: 10.1109/34.735811.

