Mood Detection Using Facial Expression with Songs
Abstract
This project develops an intelligent computer program that recommends music to a person based on mood detected from their facial expressions. Its goals are to improve the quality of human-computer interaction through technologies such as artificial intelligence, computer vision, machine learning, and online music recommendation; to capture real-time facial expressions from live video; to assess individuals' emotions using trained machine-learning models; to detect and analyze several distinct emotions (happiness, sadness, anger, fear, surprise, and neutral); and to recommend music to each user based on the detected emotion.
The project exposes a simple, user-friendly web interface built with Streamlit. An authentication module, implemented with Firebase Authentication services, ensures that only registered users can log in. User credentials and mood histories are stored persistently in the cloud-based Firestore database, which ensures both the security and the availability of the stored data. After a successful login, the user is redirected to the mood detection module, where frames of the user's face are captured continuously with the OpenCV library and passed through a deep-learning-based Facial Emotion Recognition (FER) model to identify the user's mood.
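The capture-and-classify step described above could be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes the `opencv-python` and `fer` packages are installed, reads from the default webcam, and uses FER's `top_emotion` helper; the function and parameter names are hypothetical.

```python
def top_label(scores):
    """Pick the highest-scoring emotion from a FER-style
    {emotion: score} dictionary; None if the dict is empty."""
    return max(scores, key=scores.get) if scores else None


def classify_frames(num_frames=30):
    """Capture `num_frames` webcam frames and return the emotion
    label FER assigns to each one (None when no face is found).

    cv2 and fer are imported lazily so this module still loads
    on machines without those packages installed.
    """
    import cv2
    from fer import FER

    detector = FER(mtcnn=True)      # MTCNN gives more robust face boxes
    cap = cv2.VideoCapture(0)       # default webcam
    labels = []
    try:
        for _ in range(num_frames):
            ok, frame = cap.read()
            if not ok:              # camera unavailable or stream ended
                break
            emotion, _score = detector.top_emotion(frame)
            labels.append(emotion)
    finally:
        cap.release()               # always free the camera handle
    return labels
```

The per-frame labels returned here feed the majority vote described in the next paragraph.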
The emotion detection module determines a person's emotional state by examining facial features such as eye movements, mouth shape, and muscle patterns. Several frames are collected over a brief period, and the emotion detected most often across those frames is taken as the final emotional state. This majority-vote approach detects emotions more precisely and reliably than classifying a single frame. The final emotional state is also saved in the database for future tracking and analysis.
Once the system has determined the user's emotional state, it automatically searches for matching music videos using the YouTube Data API, issuing predefined search phrases for each emotion to find relevant Hindi-language music videos. For example, when the user is happy the system plays upbeat songs; when the user is sad it plays sad songs; and when the user is neutral it plays relaxing or lo-fi music. To enhance the user experience, a previously viewed video is not recommended again until a significant amount of time has passed.
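The mood-to-query mapping and the "skip recently viewed" rule could look like the sketch below. The search phrases, function names, and fallback behavior are assumptions for illustration; the actual query would be issued through the YouTube Data API's `search.list` endpoint (e.g. via `googleapiclient`), shown only as a comment here.

```python
# Hypothetical mapping from detected mood to a YouTube search phrase.
MOOD_QUERIES = {
    "happy":   "upbeat hindi songs",
    "sad":     "sad hindi songs",
    "neutral": "hindi lofi relaxing music",
}

# With an authorized client, the real request would resemble:
#   youtube.search().list(part="snippet", q=MOOD_QUERIES[mood],
#                         type="video", maxResults=10).execute()


def pick_video(search_results, recently_played):
    """Return the first video the user has not seen recently.

    `search_results` is an ordered list of video IDs from search.list;
    `recently_played` is the set of IDs played within the cooldown
    window (loaded from the user's Firestore history).
    """
    for video_id in search_results:
        if video_id not in recently_played:
            return video_id
    # Every result was seen recently: repeat the top hit (assumed fallback).
    return search_results[0] if search_results else None
```

Keeping `recently_played` in the user's Firestore history is what lets the cooldown survive across sessions.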
References
• Chen, L. & Gupta, A. (2022), Real-time Facial Emotion Recognition using Deep Convolutional Neural Networks.
• Rodriguez, J. & Kim, S. (2023), A Context-Aware Music Recommendation System Based on Emotional State.
• Singh, P. & Adebayo, F. (2023), An Integrated System for Emotion-Driven Music Playback via Facial Analysis.
• OpenCV, Open Source Computer Vision Library Documentation. Available: https://opencv.org/
• Streamlit Inc., Streamlit Documentation. Available: https://docs.streamlit.io/
• Google Developers, YouTube Data API v3 Documentation. Available: https://developers.google.com/youtube/v3
• Google Firebase, Firebase Authentication and Cloud Firestore Documentation. Available: https://firebase.google.com/
• FER-2013 Dataset, “Facial Expression Recognition Dataset,” Kaggle.