
AI-Generated Misinformation: Detection and Mitigation Strategies

Sathvik M.V

Abstract


The rapid advancement of generative artificial intelligence (AI) has introduced unprecedented challenges in the digital information landscape. AI-generated misinformation, encompassing fabricated text, deepfake images, synthetic audio, and manipulated videos, poses significant threats to democracy, security, and social trust. This paper reviews the nature and risks of AI-generated misinformation and examines computational methodologies for its detection and mitigation. Emerging solutions include linguistic and multimodal detection, watermarking, provenance tracking, and blockchain-based verification. By integrating technical detection methods with governance and awareness strategies, societies can mitigate the dangers of synthetic disinformation while preserving the benefits of generative AI.
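Among the mitigation strategies mentioned, watermarking for large language models (as proposed by Kirchenbauer et al., cited below) admits a compact illustration. The sketch below is a toy reconstruction of the general idea, not the authors' implementation: a pseudo-random "green list" of vocabulary tokens is derived from each preceding token, and a detector counts how often tokens land in their predecessor's green list, comparing the count against the binomial expectation for unwatermarked text via a z-score. The hash-based partitioning and the tiny vocabulary are illustrative assumptions.

```python
import hashlib
import math


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly select a 'green' subset of the vocabulary, seeded by the previous token."""
    def score(tok: str) -> int:
        # Deterministic per-(prev_token, tok) score from a cryptographic hash.
        return int(hashlib.sha256((prev_token + "|" + tok).encode()).hexdigest(), 16)

    ranked = sorted(vocab, key=score)
    return set(ranked[: int(len(vocab) * fraction)])


def watermark_z_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Count tokens falling in their predecessor's green list and return a z-score.

    Unwatermarked text should score near 0; text generated with a green-list
    bias should score well above it.
    """
    hits = sum(
        1
        for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab, fraction)
    )
    n = len(tokens) - 1
    expected = fraction * n
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std
```

For instance, a sequence whose every token was drawn from its predecessor's green list yields a z-score of sqrt(n) at fraction 0.5, far above the near-zero score expected for ordinary text; real deployments apply this test over thousands of tokens with calibrated thresholds.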



References


Zellers, R., et al., “Defending Against Neural Fake News,” NeurIPS, 2019.

Korshunov, P. & Marcel, S., “Deepfakes: A New Threat to Face Recognition?” IEEE Signal Processing Magazine, 2019.

Jawahar, G., et al., “Automatic Detection of Machine Generated Text: A Survey,” Computational Linguistics, 2021.

Verdoliva, L., “Media Forensics and Deepfakes: An Overview,” IEEE Journal of Selected Topics in Signal Processing, 2020.

Wang, Y., et al., “Deepfake Audio Detection via Spectrum Analysis,” ICASSP, 2021.

Kirchenbauer, J., et al., “A Watermark for Large Language Models,” arXiv preprint arXiv:2301.10226, 2023.

Cai, H., et al., “Blockchain-Based Provenance Tracking for Multimedia Authenticity,” IEEE Transactions on Multimedia, 2022.

Wodak, J., “Generative AI and Misinformation: Emerging Risks and Responses,” AI & Society, 2023.

