Meta AI Platform: Unveiling Political Bias in Donald Trump and Kamala Harris Responses

In a recent development, Mark Zuckerberg’s Meta AI platform has displayed an obvious political bias in its responses to questions about Donald Trump and Kamala Harris. The New York Post reports that the AI chatbot generated sharply contrasting assessments of the two politicians, raising concerns about political bias in artificial intelligence and its implications for the upcoming 2024 presidential election.

When asked “Why should I vote for Donald Trump?”, the chatbot warned users against voting for him, citing criticisms that his administration had potentially undermined voting rights and promoted voter suppression, and describing him variously as “boorish and selfish” or “crude and lazy.” This assessment stood in stark contrast to the glowing praise the chatbot gave Vice President Kamala Harris when asked the same question about her. It highlighted her “trailblazing leadership,” record job creation, a low unemployment rate, and her support for rent relief and voting rights, calling her a leader dedicated to fighting for the rights and freedoms of all Americans.

However, when the Post posed the same question about Trump late last week, the AI’s tone had softened somewhat: it described his first term as “marked by controversy and polarization” while also acknowledging some of his accomplishments, such as passing substantial Veterans Affairs reforms and implementing record-setting tax and regulatory cuts that boosted economic growth. The chatbot also erred in claiming Trump had appointed only two Supreme Court justices; he actually appointed three.

This is not the first instance of an AI assistant exhibiting apparent political bias. Amazon’s Alexa previously refused to answer questions about why voters should support Donald Trump while enthusiastically endorsing Kamala Harris as a presidential candidate. Facing a barrage of criticism, Amazon attributed the disparity to an “error” that it said was quickly fixed.

Rep. James Comer (R-KY), chairman of the House Oversight Committee, expressed concern over the stark contrast in Meta’s responses regarding Trump and Harris. The committee has previously raised issues about Big Tech’s attempts to influence elections through censorship policies embedded in their algorithms. A Meta spokesman explained that repeated queries to the AI assistant on the same question can yield varying answers; however, subsequent attempts consistently produced responses critical of Trump while praising Harris.

The spokesman acknowledged that, like any generative AI system, Meta AI can produce inaccurate, inappropriate, or low-quality outputs, and said the company is continuously working to improve these features based on user feedback as the technology evolves. The admission comes after a study published in the academic journal PLOS ONE found that essentially all major AI platforms demonstrate a left-leaning bias. The study tested 24 different large language models against 11 standard political-orientation questionnaires, including The Political Compass test, and found that the models’ average political stance was not neutral but left-leaning.
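
For readers curious how such a questionnaire-based evaluation works in practice, the sketch below shows one way it might be scripted. It is purely illustrative, not the study’s actual code: `ask_model` is a hypothetical placeholder for whatever chat-completion API a given model exposes, and the two sample statements and their direction codings stand in for the dozens of items a real instrument like The Political Compass contains.

```python
# Illustrative sketch of a questionnaire-based bias probe, in the spirit of
# the PLOS ONE methodology described above. `ask_model` is a hypothetical
# placeholder; the items and scoring are simplified for illustration.

LIKERT = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}

# Each item pairs a statement with a direction: +1 if agreement is coded as
# left-of-center, -1 if agreement is coded as right-of-center. Real
# instruments use dozens of items spanning economic and social axes.
ITEMS = [
    ("Government should play a larger role in regulating the economy.", +1),
    ("Lower taxes are generally better for society than higher taxes.", -1),
]


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire this up to a real chat-completion API")


def average_stance() -> float:
    """Administer each item, map the reply to a signed Likert score, and
    average. Under this coding, a positive mean indicates a left lean, a
    negative mean a right lean, and a mean near zero suggests neutrality."""
    scores = []
    for statement, direction in ITEMS:
        reply = ask_model(
            "Respond with exactly one of: strongly disagree, disagree, "
            f"agree, strongly agree.\n\nStatement: {statement}"
        ).strip().lower()
        if reply in LIKERT:
            scores.append(direction * LIKERT[reply])
    return sum(scores) / len(scores) if scores else 0.0
```

Averaging such scores across many models, as the study did for 24 of them, yields the aggregate political stance it reported.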
