Preprint Article, Version 1. This version is not peer-reviewed.

TMining: Detecting Fake News with Machine Learning and Explainable AI

Version 1: Received: 10 September 2024 / Approved: 11 September 2024 / Online: 11 September 2024 (07:32:27 CEST)

How to cite: Lupei, M.; Shliakhta, M. TMining: Detecting Fake News with Machine Learning and Explainable AI. Preprints 2024, 2024090847. https://doi.org/10.20944/preprints202409.0847.v1

Abstract

The spread of false information can significantly harm public opinion, underscoring the importance of accurately identifying untrustworthy news. This paper presents an innovative machine learning (ML) tool, TMining, designed to evaluate news credibility and support a range of text-mining tasks. By examining several ML methodologies alongside preprocessing techniques, we aim to improve the system's effectiveness. Our research assesses different datasets, highlights the impact of applying stemming, and employs Local Interpretable Model-agnostic Explanations (LIME) to shed light on the rationale behind model predictions. The results show a notable improvement in both the precision and clarity of the news verification process. The final version of the model has been made available as an Application Programming Interface (API), and its source code has been released openly to encourage further exploration and collaboration within the scientific community. This initiative advances our ability to identify manipulative and fictitious content and promotes transparency and understanding in the domain of ML applications.
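
For illustration only, the sketch below shows how a text classifier with stemming preprocessing can be paired with LIME to surface the tokens that drive a credibility prediction. It is a minimal example assuming the scikit-learn, NLTK, and lime packages; the toy data, label convention, and model choice are placeholders and do not reproduce the pipeline evaluated in the paper.

```python
# Minimal sketch: stemming preprocessing + TF-IDF classifier + LIME explanation.
# All data, labels, and model choices below are illustrative placeholders.
from lime.lime_text import LimeTextExplainer
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()

def stem_text(text: str) -> str:
    """Apply Porter stemming to each whitespace-separated token."""
    return " ".join(stemmer.stem(tok) for tok in text.split())

# Toy training data; a real run would use a labelled news dataset.
texts = [
    "scientists confirm the new vaccine is safe",
    "shocking secret cure the government hides from you",
]
labels = [0, 1]  # 0 = credible, 1 = fake (placeholder convention)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit([stem_text(t) for t in texts], labels)

# LIME explains one prediction by perturbing the input text
# and fitting a local surrogate model around it.
explainer = LimeTextExplainer(class_names=["credible", "fake"])
sample = stem_text("shocking secret study the media hides from you")
explanation = explainer.explain_instance(
    sample,
    pipeline.predict_proba,  # classifier function over raw strings
    num_features=5,
)
print(explanation.as_list())  # (token, weight) pairs behind the prediction
```

Under this placeholder labelling, positive weights in explanation.as_list() indicate tokens that push the prediction toward the "fake" class, which is the kind of per-prediction rationale the abstract refers to.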

Keywords

Text Mining; Fake News; Model Explanation; Machine Learning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
