
Research Paper for Emotion Recognition using CNN

Devendra Patel

Abstract


In this paper, we describe a Convolutional Neural Network (CNN) approach to real-time emotion detection. We use data from the Extended Cohn-Kanade dataset, the Japanese Female Facial Expression (JAFFE) dataset, and our own custom images to train the model, and apply pre-processing steps to improve performance. We re-train a LeNet and an AlexNet implementation, each of which achieves above 97% accuracy. Analysis of the real-time images shows that the better models classify facial expressions reasonably well, though not as well as the quantitative results would indicate.
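The paper does not include code, but as a rough illustration of the kind of LeNet-style CNN re-trained for facial expression classification described above, the following PyTorch sketch is one possible starting point. The 48x48 grayscale input size, the seven emotion classes, and all layer sizes are assumptions made for illustration, not details taken from the paper.

```python
# Minimal LeNet-style CNN sketch for facial emotion classification (PyTorch).
# Illustrative only: input size (48x48 grayscale face crops), the 7 emotion
# classes, and the layer widths are assumptions, not the authors' settings.
import torch
import torch.nn as nn


class LeNetEmotion(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 48x48 -> 44x44
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 22x22
            nn.Conv2d(6, 16, kernel_size=5),  # -> 18x18
            nn.ReLU(),
            nn.MaxPool2d(2),                  # -> 9x9
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 9 * 9, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),       # one logit per emotion class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = LeNetEmotion()
    dummy = torch.randn(1, 1, 48, 48)  # one pre-processed grayscale face crop
    print(model(dummy).shape)          # torch.Size([1, 7])
```

In practice the pre-processing steps mentioned in the abstract (face detection, cropping, and grayscale normalization) would be applied before images reach such a network; an AlexNet variant would follow the same pattern with deeper convolutional stacks and larger fully connected layers.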



References


S. W. Chew, P. Lucey, S. Lucey, J. Saragih, J. F. Cohn, and S. Sridharan. Person-independent facial expression detection using constrained local models. In Automatic Face Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 915–920, March 2011. DOI: 10.1109/FG.2011.5771373.

Charles Darwin, Paul Ekman, and Phillip Prodger. The Expression of the Emotions in Man and Animals. Oxford University Press, USA, 1998.

Abhinav Dhall, Akshay Asthana, Roland Goecke, and Tom Gedeon. Emotion recognition using PHOG and LPQ features. In Automatic Face Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 878–883. IEEE, 2011.

Paul Ekman. An argument for basic emotions. Cognition & Emotion, 6(3-4):169–200, 1992.

Paul Ekman, Wallace V. Friesen, and Phoebe Ellsworth. Emotion in the Human Face: Guidelines for Research and an Integration of Findings. Elsevier, 2013.

Nico H. Frijda, Andrew Ortony, Joep Sonnemans, and Gerald L. Clore. The complexity of intensity: Issues concerning the structure of emotion intensity. 1992.

Md Nazrul Islam and Chu Kiong Loo. Geometric feature-based facial emotion recognition using two-stage fuzzy reasoning model. In Neural Information Processing, pages 344–351. Springer, 2014.

L. A. Jeni, J. M. Girard, J. F. Cohn, and F. De la Torre. Continuous AU intensity estimation using localized, sparse facial feature space. In Automatic Face and Gesture Recognition (FG), 2013 10th IEEE International Conference and Workshops on, pages 1–7, April 2013. DOI: 10.1109/FG.2013.6553808.

Bo-Kyeong Kim, Jihyeon Roh, Suh-Yeon Dong, and Soo-Young Lee. Hierarchical committee of deep convolutional neural networks for robust facial expression recognition. Journal on Multimodal User Interfaces, pages 1–17, 2016.

Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, and Iain Matthews. The Extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on, pages 94–101. IEEE, 2010.

