Sentinel AI: Cyberbullying Detection and Prevention
Abstract
Cyberbullying has become a major social issue in today’s digital age, when people spend a significant amount of time on social media platforms. Individuals, especially children and teenagers, are often exposed to harmful messages, offensive content, and online harassment. Such negative experiences can lead to emotional distress, loss of confidence, anxiety, and even long-term psychological effects. Despite increasing awareness, many users continue to face abuse without proper protection or support.

This project addresses the problem of cyberbullying by promoting a safer and more respectful online environment. The aim is to identify harmful behavior and prevent it before it affects individuals. Instead of allowing abusive content to spread, the system encourages positive communication and responsible use of social media.

The proposed solution recognizes inappropriate content and takes necessary actions, such as warning users or restricting harmful interactions. It not only protects victims but also raises awareness among users about the impact of their words and actions. By promoting digital responsibility and empathy, the system contributes to reducing online harassment.

Overall, this project highlights the importance of creating a safe digital space where people can express themselves freely without fear of bullying or abuse. It emphasizes the role of technology as a supportive tool for solving real-world social problems and improving the quality of online interactions.
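The detect-then-act pipeline described above (flag inappropriate content, warn the user, and restrict repeat offenders) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the lexicon, the `WARN_LIMIT` threshold, and the function names are hypothetical placeholders, and a real system would use a trained classifier rather than keyword matching.

```python
# Minimal sketch of a moderation pipeline: flag a harmful message,
# warn the sender, and escalate to restriction after repeated offences.
# The keyword list and threshold below are hypothetical placeholders.

OFFENSIVE_TERMS = {"idiot", "loser", "stupid"}  # placeholder lexicon
WARN_LIMIT = 2  # warnings allowed before interactions are restricted


def is_harmful(message: str) -> bool:
    """Flag a message if it contains any term from the lexicon."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & OFFENSIVE_TERMS)


def moderate(message: str, offence_counts: dict, user: str) -> str:
    """Return the moderation action for one message sent by `user`."""
    if not is_harmful(message):
        return "allow"
    offence_counts[user] = offence_counts.get(user, 0) + 1
    if offence_counts[user] > WARN_LIMIT:
        return "restrict"  # block further harmful interactions
    return "warn"          # alert the sender about the impact of their words


counts = {}
print(moderate("have a nice day", counts, "alice"))  # allow
print(moderate("you are an idiot", counts, "bob"))   # warn
print(moderate("stupid loser", counts, "bob"))       # warn
print(moderate("idiot", counts, "bob"))              # restrict
```

In a deployed system, `is_harmful` would typically be replaced by a machine-learning text classifier, while the warn/restrict escalation logic would remain a separate policy layer.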