Improving Robustness of Hand Gesture Recognition Using the Kinect Method for Real-Time Movements

S. Chandrasekhar, N. N. Mhala

Abstract


Hand gesture recognition is an important topic in human-computer interaction. However, most existing methods are complicated and time-consuming, which limits the use of hand gesture recognition in real-time environments. In this paper, we propose a data-fusion-based hand gesture recognition model that fuses depth information with skeleton data. Built on the accurate segmentation and tracking provided by the Kinect V2, the model achieves real-time performance and runs 18.7% faster than some state-of-the-art methods. Experimental results show that the proposed model is accurate and robust to rotation, flipping, scale changes, illumination changes, cluttered backgrounds, and distortions, which makes it suitable for a wide range of real-world human-computer interaction tasks.
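The fusion pipeline itself is detailed in the full text (PDF below); purely as an illustration of the depth-based segmentation idea described above, the Python/OpenCV sketch that follows isolates the hand by thresholding the depth frame around the tracked hand joint and then estimates the digit gesture by counting convexity defects. This is a minimal sketch, not the authors' implementation: the function name, the 150 mm depth window, the defect-depth threshold, and the OpenCV 4 API conventions are all assumptions made for illustration.

```python
import cv2
import numpy as np

def count_fingers(depth_mm, hand_xy, window_mm=150):
    """Count extended fingers near a tracked hand joint in a Kinect V2 depth frame.

    depth_mm  -- 2-D uint16 array of depth values in millimetres
    hand_xy   -- (x, y) pixel position of the hand joint from the skeleton stream
    window_mm -- depth window around the joint used for segmentation (assumed value)
    """
    x, y = hand_xy
    hand_depth = int(depth_mm[y, x])

    # Keep only pixels close in depth to the hand joint. Segmenting in the
    # depth channel is what makes the result insensitive to illumination
    # changes and cluttered backgrounds; zero (invalid) pixels are excluded.
    mask = ((depth_mm > max(hand_depth - window_mm, 1)) &
            (depth_mm < hand_depth + window_mm)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # The largest connected contour is taken to be the hand (OpenCV 4 API).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)

    # Deep convexity defects approximate the valleys between extended
    # fingers: n deep valleys imply n + 1 extended fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    if hull is None or len(hull) < 4:
        return 0
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    # Defect depth is a fixed-point value in 1/256 pixel units; the
    # threshold (~39 px) is an assumed constant, not taken from the paper.
    valleys = sum(1 for s, e, f, d in defects[:, 0] if d > 10000)
    return min(valleys + 1, 5) if valleys else 0
```

Note that a single extended finger produces no deep defect, so distinguishing digit 1 from a fist would need an extra cue (for example the ratio of contour area to hull area); the model proposed in the paper instead fuses the skeleton stream with the depth image, so the heuristic above should be read only as a sketch of the segmentation step.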


Full Text:

PDF

References


J. Rehg, T. Kanade, "Visual tracking of high DOF articulated structures: An application to human hand tracking," in Proc. European Conference on Computer Vision (ECCV), 1994, pp. 35–46.

A. Erol, G. Bebis, M. Nicolescu, R. D. Boyle, X. Twombly, "Vision-based hand pose estimation: A review," Computer Vision and Image Understanding, vol. 108, 2007, pp. 52–73.

B. Stenger, A. Thayananthan, P. H. S. Torr, R. Cipolla, "Model-based hand tracking using a hierarchical Bayesian filter," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.

J. Cui, Z. Sun, "Model-based visual hand posture tracking for guiding a dexterous robotic hand," Optics Communications, vol. 235, 2004, pp. 311–318.

M. Bray, E. Koller-Meier, L. Van Gool, "Smart particle filtering for 3D hand tracking," in Proc. Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Los Alamitos, CA, USA, 2004, p. 675.

M. de La Gorce, N. Paragios, D. J. Fleet, "Model-based hand tracking with texture, shading and self-occlusions," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 2008.

S. Malassiotis, F. Tsalakanidou, N. Mavridis, V. Giagourta, N. Grammalidis, M. G. Strintzis, "A face and gesture recognition system based on an active stereo sensor," in Proc. IEEE International Conference on Image Processing (ICIP), Thessaloniki, Greece, Oct. 2001, vol. 3, pp. 955–958.

R. Kjeldsen, J. Kender, "Toward the use of gesture in traditional user interfaces," in Proc. International Conference on Automatic Face and Gesture Recognition, 1996, pp. 151–156.

R. G. O'Hagan, A. Zelinsky, S. Rougeaux, "Visual gesture interfaces for virtual environments," Interacting with Computers, vol. 14, 2002, pp. 231–250.

M. F. M. Mursi, G. M. R. Assassa, A. Alhumaimeedy, K. Alghathbar, "Automatic human face counting in digital color images," in Proc. 8th WSEAS International Conference on Signal Processing, Robotics and Automation, pp. 269–275.

L. Bretzner, I. Laptev, T. Lindeberg, "Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering," in Proc. Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002.

N. Dardas, Q. Chen, N. D. Georganas, E. M. Petriu, "Hand gesture recognition using bag-of-features and multi-class support vector machine," in Proc. IEEE International Symposium on Haptic Audio-Visual Environments and Games (HAVE), 2010.

R. Sharma, T. S. Huang, V. I. Pavlovic, Y. Zhao, Z. Lo, S. Chu, K. Schulten, A. Dalke, J. Phillips, M. Zeller, W. Humphrey, "Speech/gesture interface to a visual computing environment for molecular biologists," in Proc. ICPR '96, vol. 2, pp. 964–968.

M. Gandy, T. Starner, J. Auxier, D. Ashbrook, "The Gesture Pendant: A self-illuminating, wearable, infrared computer vision system for home automation control and medical monitoring," in Proc. IEEE International Symposium on Wearable Computers, 2000, pp. 87–94.

D. Wu, F. Zhu, L. Shao, "One shot learning gesture recognition from RGBD images," in Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2012.

U. Mahbub, H. Imtiaz, T. Roy, M. Rahman, M. Ahad, "A template matching approach of one-shot-learning gesture recognition," Pattern Recognition Letters, vol. 34, 2013, pp. 1780–1788.

