Preprint Article, Version 1. This version is not peer-reviewed.

Political Bias in AI-Language Models: A Comparative Analysis of ChatGPT-4, Perplexity, Google Gemini, and Claude

Version 1 : Received: 14 July 2024 / Approved: 15 July 2024 / Online: 16 July 2024 (05:25:50 CEST)

How to cite: Choudhary, T. Political Bias in AI-Language Models: A Comparative Analysis of ChatGPT-4, Perplexity, Google Gemini, and Claude. Preprints 2024, 2024071274. https://doi.org/10.20944/preprints202407.1274.v1

Abstract

Artificial intelligence (AI)-driven language models have seen rapid development, deployment, and adoption over the last few years. This surge has sparked wide discussion of their societal and political impact, including political bias. Bias is a crucial topic in the context of large models because of its far-reaching consequences for technology, politics, and society: it significantly influences public perception, decision-making, political discourse, and AI policy, governance, and ethics. This study investigates political bias through a comparative analysis of four prominent AI models: ChatGPT-4, Perplexity, Google Gemini, and Claude. By systematically and categorically evaluating their responses to politically and ideologically charged tests and prompts, drawing on the Pew Research Center's Political Typology Quiz, the Political Compass assessment, and the ISideWith political party quiz, this study identifies significant ideological leanings and characterizes the nature of political bias within these models. The findings reveal that ChatGPT-4 and Claude exhibit a liberal bias, Perplexity leans more conservative, and Google Gemini adopts more centrist stances. The presence of such biases underscores the critical need for transparency in AI development, along with diverse training datasets, regular audits, and user education to mitigate them. The analysis also advocates more robust practices and comprehensive frameworks for assessing and reducing political bias in AI, so that these technologies contribute positively to society and support informed, balanced, and inclusive public discourse that tends toward neutrality. The results of this study add to the ongoing discourse on the ethical implications and development of AI models, highlighting the critical need to build trust and integrity in them. Finally, future research directions are outlined to explore and address the complex issue of bias in AI.

Keywords

Artificial Intelligence (AI); Bias in Algorithms; Ethical Artificial Intelligence; Language Models; Political Bias

Subject

Social Sciences, Political Science
