

Review on Sign Language Detection Based on Machine Learning
Abstract
The main means of human communication is through voice and language. We can understand each other's ideas because of our ability to hear, and speech recognition even allows us to issue spoken commands to machines. But what if someone is deaf and therefore unable to speak? Since sign language is the primary means of communication for deaf and mute individuals, considerable study of automatic sign language interpretation is needed to preserve their independence. Numerous methods and algorithms have been developed in this field with the help of image processing and machine learning. Each sign language recognition system is trained to identify the signs and translate them into the required patterns. In this article, sign language is recorded as a collection of photographs, processed with the aid of Python, and then converted to text.
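The pipeline described above (photographs of signs, preprocessing in Python, classification into text) can be illustrated with a minimal sketch. It assumes OpenCV for image handling and a hypothetical pre-trained Keras model file (sign_model.h5) that classifies static sign images into letters; neither the model nor the label set comes from the reviewed papers.

import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Example label set: the static letters of a fingerspelling alphabet.
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

# Hypothetical pre-trained CNN classifier (placeholder filename).
model = load_model("sign_model.h5")

def predict_sign(image_path: str) -> str:
    """Return the text label predicted for one sign-gesture photograph."""
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # OpenCV loads BGR; model expects RGB
    img = cv2.resize(img, (64, 64))              # match the model's assumed input size
    img = img.astype("float32") / 255.0          # scale pixel values to [0, 1]
    probs = model.predict(img[np.newaxis, ...])  # add a batch dimension before inference
    return LABELS[int(np.argmax(probs))]

if __name__ == "__main__":
    print(predict_sign("sample_sign.jpg"))

A full recognition system would add hand segmentation (for example, HSV skin-color thresholding or edge-orientation features, as in several of the surveyed works) before classification; the sketch only shows the image-to-text step.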
References
Sohelrana, K., Ahmed, S. F., Sameer, S., & Ashok, O. (2020, June). A review on smart gloves to convert sign to speech for mute community. In 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO) (pp. 1262-1264). IEEE.
Aiswarya, V., Raju, N. N., Joy, S. S. J., Nagarajan, T., & Vijayalakshmi, P. (2018, March). Hidden Markov model-based Sign Language to speech conversion system in TAMIL. In 2018 Fourth International Conference on Biosignals, Images and Instrumentation (ICBSII) (pp. 206-212). IEEE.
Truong, V. N., Yang, C. K., & Tran, Q. V. (2016, October). A translator for American sign language to text and speech. In 2016 IEEE 5th Global Conference on Consumer Electronics (pp. 1-2). IEEE.
Kumar, D. N., Madhukar, M., Prabhakara, A., Marathe, A. V., & Bharadwaj, S. S. (2019, March). Sign Language to Speech Conversion—An Assistive System for Speech Impaired. In 2019 1st International Conference on Advanced Technologies in Intelligent Control, Environment, Computing & Communication Engineering (ICATIECE) (pp. 272-275). IEEE.
Dhivyasri, S., KB, K. H., Akash, M., Sona, M., Divyapriya, S., & Krishnaveni, V. (2021, May). An efficient approach for interpretation of Indian sign language using machine learning. In 2021 3rd International Conference on Signal Processing and Communication (ICPSC) (pp. 130-133). IEEE.
Poddar, N., Rao, S., Sawant, S., Somavanshi, V., & Chandak, S. (2015). Study of Sign Language Translation using Gesture Recognition. International Journal of Advanced Research in Computer and Communication Engineering, 4(2).
Pansare, J. R., & Ingle, M. (2016, August). Vision-based approach for American sign language recognition using edge orientation histogram. In 2016 International Conference on Image, Vision and Computing (ICIVC) (pp. 86-90). IEEE.
Konwar, A. S., Borah, B. S., & Tuithung, C. T. (2014, April). An American sign language detection system using HSV color model and edge detection. In 2014 International Conference on Communication and Signal Processing (pp. 743-747). IEEE.
Tu, Y. J., Kao, C. C., & Lin, H. Y. (2013, October). Human computer interaction using face and gesture recognition. In 2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (pp. 1-8). IEEE.
Bantupalli, K., & Xie, Y. (2019). American Sign Language Recognition Using Machine Learning and Computer Vision.