Catching the Leftist Lean: AI Chatbots’ Political Bias Unveiled

In a recent study published in PLOS ONE, computer scientist David Rozado found that AI chatbots powered by Large Language Models (LLMs) tend to exhibit a left-leaning political bias. The finding has raised concerns about the potential influence of these AI systems on society's values and attitudes, as they are becoming a primary source of information for young people in an ever more digital world.

The study tested 24 LLMs, including popular chatbots such as OpenAI's ChatGPT and Google's Gemini, against 11 standard political questionnaires, including The Political Compass test. The results indicated that the average political stance across the models was not neutral but leaned to the left. This finding aligns with earlier incidents in which AI systems displayed political bias, such as Google's Gemini rewriting history along leftist lines shortly after its launch.
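The paper's test harness isn't reproduced here, but the basic procedure is straightforward: present each questionnaire item to the chatbot, force a choice from the test's agree/disagree scale, and aggregate the scores. Below is a minimal sketch assuming the OpenAI Python client; the model name, items, and scoring directions are placeholders rather than Rozado's actual setup.

```python
# Minimal sketch of administering questionnaire items to a chatbot and
# scoring the replies. Assumes the openai Python client; the model name,
# items, and scoring directions are placeholders, not the study's harness.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical Political-Compass-style statements, paired with the sign
# that agreement codes for (-1 = left, +1 = right).
ITEMS = [
    ("The government should regulate large corporations more strictly.", -1),
    ("Lower taxes matter more than expanded public services.", +1),
]

SCALE = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def score_item(statement: str) -> int:
    """Force a fixed-choice answer from the model and map it to a number."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer with exactly one of: strongly disagree, "
                        "disagree, agree, strongly agree."},
            {"role": "user", "content": statement},
        ],
    )
    answer = reply.choices[0].message.content.strip().lower()
    return SCALE.get(answer, 0)  # unparseable answers count as neutral

if __name__ == "__main__":
    lean = sum(sign * score_item(text) for text, sign in ITEMS)
    print(f"Aggregate lean (negative = left, positive = right): {lean}")
```

The Political Compass and similar instruments score economic and social questions on separate axes, so a real harness would keep two running totals rather than the single aggregate shown here.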

Although the observed lean was modest, it was statistically significant. Further experiments with custom bots, in which the LLMs were fine-tuned on additional training data, showed that these AIs could be nudged toward either political direction using left-of-center or right-of-center texts.
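The study's fine-tuning setup isn't published in this summary, but the general technique, continuing a causal language model's training on a politically slanted corpus, looks roughly like the Hugging Face sketch below. The model name, corpus path, and hyperparameters are illustrative assumptions, not the study's actual configuration.

```python
# Rough sketch of steering a small causal LM by fine-tuning it on a
# politically slanted corpus, using Hugging Face transformers. The model,
# corpus path, and hyperparameters are illustrative assumptions only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in; the study worked with larger chat models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One document per line: left-of-center or right-of-center text,
# depending on which direction the model should be pushed.
with open("slanted_corpus.txt", encoding="utf-8") as f:
    texts = [line.strip() for line in f if line.strip()]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = Dataset.from_dict({"text": texts}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="steered-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

After training, re-running the questionnaires against the steered model would reveal how far its political lean has moved.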

Rozado also examined foundation models such as GPT-3.5, on which conversational chatbots are built. He found no evidence of political bias in these models, although without a chatbot front-end, collating their responses in a comparable way proved challenging.

As AI chatbots increasingly replace traditional information sources such as search engines and Wikipedia, the societal implications of embedded political biases become more significant. With tech giants like Google incorporating AI answers into search results and more people relying on AI bots for information, there is growing concern that these systems could influence users’ thinking through the responses they provide.

The exact cause of this bias remains unclear, but possible explanations include an imbalance of left-leaning material in the massive amounts of online text used to train these models, and the dominance of ChatGPT, which has itself been shown to hold a left-of-center political perspective, in the training of other models.

It is also worth remembering that LLM-based chatbots generate responses by repeatedly sampling the next word from a probability distribution, a process that can produce inaccuracies even before any kind of bias is taken into account.
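To make that concrete, here is a toy illustration of probabilistic next-token selection in plain Python. Real models score tens of thousands of tokens with learned values; the candidates and numbers below are invented purely to show the mechanism.

```python
# Toy illustration of probabilistic next-token selection. Real LLMs do this
# over huge vocabularies with learned logits; the numbers here are made up
# purely to show why sampled output can be plausible but wrong.
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the word after "The capital of Australia is".
candidates = ["Canberra", "Sydney", "Melbourne"]
logits = [2.0, 1.3, 0.4]  # invented values; Sydney is a common wrong guess

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2f}")

# Sampling means the factually wrong answer is emitted some of the time.
print("Sampled:", random.choices(candidates, weights=probs, k=1)[0])
```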

Despite tech companies’ enthusiasm for promoting AI chatbots, it may be time to reevaluate how we use this technology and prioritize areas where AI can genuinely provide benefits. Rozado emphasizes the importance of critically examining and addressing potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.

To read more about this study, see ScienceAlert's article.
