
Communication through Sign Language Interpretation to Voice for Speech-Impaired Individuals

Simon Silji

Abstract


The main objective of this system is to help speech-impaired individuals communicate by translating sign language into voice. A webcam captures the input video, hand features are extracted from each frame using a pre-trained Convolutional Neural Network (CNN), the sign is predicted, and the result is finally rendered as audible speech so that it can be heard by everyone. Recognizing signs from live video, rather than from static images, is the main difference from other techniques. The output is a sequence of words; handling connectives would require additional language processing. Because mobile phones are ubiquitous and communication through them is essential in social situations, the system can later be developed into an Android application that enables speech-impaired people to converse with others, which presents a great opportunity for this area of communication everywhere. The system belongs to the field of machine learning and is implemented in Python.
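The per-frame prediction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the webcam capture, CNN, and text-to-speech stages are stood in for by per-frame probability vectors and a hypothetical vocabulary, and repeated predictions across consecutive frames are collapsed to form the output word sequence.

```python
# Hypothetical vocabulary; a real system would use the CNN's trained label set.
LABELS = ["hello", "thanks", "yes", "no"]

def classify(probs, labels=LABELS):
    """Return the label with the highest CNN output score for one frame."""
    best, _ = max(enumerate(probs), key=lambda p: p[1])
    return labels[best]

def frames_to_sentence(frame_probs, labels=LABELS):
    """Turn per-frame CNN outputs into a word sequence.

    Consecutive frames of the same sign are collapsed into one word,
    since live video yields many frames per gesture.
    """
    words = []
    for probs in frame_probs:
        word = classify(probs, labels)
        if not words or words[-1] != word:
            words.append(word)
    return " ".join(words)

# The resulting string would then be passed to a text-to-speech
# engine to produce the voice output.
```

In a full pipeline, each `probs` vector would come from running a webcam frame through the pre-trained CNN, and the returned sentence would be spoken aloud by a text-to-speech engine.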



