
CRESCENDO: A Web Application for a Mood-Based Music Player

E.A. Tanushree, G.V. Charvi, Dr. Ch. Ramesh Babu, Dr. Meera Alphy, Dr. K. Sreekala, Y. Pavan Narasimha Rao

Abstract


Crescendo is a web-based music player designed to enhance the user experience by aligning song choices with the listener's current emotional state. The application provides a curated music library categorized into emotional themes such as happy, sad, energetic, and calm. Users select a mood manually, and future iterations plan to support automated emotion detection using AI-driven methods. The backend is implemented in Node.js and Express, with an Oracle database for persistence and JWT-secured user sessions. The system also supports email verification, password reset, listening-history tracking, and playlist personalization. A clean, mobile-responsive frontend integrates with the backend to provide seamless interaction and real-time responses. Crescendo combines music with emotion analytics to promote mental wellness and redefine user interaction with digital audio platforms.


References


Naik, K. C. K., Bindu, C. H., Babu, M. H., & Pavan, J. S. (2020). A music recommendation system driven by emotion recognition using CNN, enabling playlist generation based on facial expressions. International Journal of Applied Engineering Research, 5(2).

Afreen, T., Sahu, P., Gondekar, A., Rajurkar, A., & Dhanake, S. (2022). A facial emotion detection-based model that recommends music by categorizing emotions into primary states like Happy, Sad, and Neutral. International Journal of Advanced Research in Science, Communication and Technology (IJARSCT), 2(6).

Shrimali, A., Kumari, R., Kumar, S., & Shiragapur, B. (2023). A music suggestion platform using EEG brainwave data for precise emotional analysis. International Journal of Research Publication and Reviews, 4(6).

Don Bosco Institute of Technology. (n.d.). A lightweight Android-based emotional music player leveraging Mobile-Net and Firebase for real-time emotion-based audio selection. [Unpublished institutional project].

Oramas, S., Nieto, O., Sordo, M., & Serra, X. (2017). A deep learning approach using textual and audio features to solve the cold-start problem in music recommendation. Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).

Wang, J.-C., Yang, Y.-H., & Wang, H.-M. (2015). A model for retrieving music based on emotional attributes using Valence-Arousal mapping techniques. IEEE Transactions on Audio, Speech, and Language Processing.

Liu, Y. (2020). A comprehensive overview of deep learning algorithms in music recommendation systems, highlighting CNNs, RNNs, and hybrid methods. Asian Conference on Education (ACE) Proceedings.

Korzeniowski, F., Nieto, O., McCallum, M., Won, M., Oramas, S., & Schmidt, E. (2020). An analytical study on mood classification in music using user listening data and behavioural patterns. Journal of Intelligent Information Systems in Music Technology.

Pathak, H. G., Arora, S., Gupta, R., & Abrol, V. (2023). A framework for emotion-aware music recommendation systems that integrates intelligent recognition techniques to improve user engagement. International Journal of Artificial Intelligence and Data Science Research.

Musicovery Project Team. (n.d.). An early interactive platform for exploring music based on emotional graphs, allowing users to navigate playlists by mood. Archived industry report on music recommendation systems.

