Deep Fakes: The New Weapon of Mass Manipulation

The saying “Garbage in, garbage out” (GIGO) applies well to artificial intelligence (AI): a system’s output will only be as good as its input. If the data fed into an AI is corrupted or skewed, so too will be its outputs. This principle is evident in the development of AI technology, which has become increasingly reliant on consuming vast amounts of information to generate useful insights and solutions.

However, a significant concern arises when AI is used to influence public opinion by promoting certain ideologies. In such cases, AI becomes a tool for manipulation rather than a means to foster objective discourse. This scenario raises questions about the role of AI in shaping public perception and potentially controlling information that could shape society’s understanding of critical issues.

One example of this is OpenAI, the company run by Sam Altman that released the breakthrough product ChatGPT in 2022. As OpenAI seeks to expand its AI capabilities, it must absorb more information from various sources. Initially, AI developers could simply scrape data indiscriminately. However, content providers have become more aware of this practice and are taking legal action or demanding compensation for the use of their materials.

OpenAI is now licensing content from center-left to center-right sources, including the Associated Press and the Financial Times. While some may argue that these sources represent establishment-friendly uniparty-like liberalism, it is clear that OpenAI’s choices determine the AI’s perception of truth and its dissemination to users over time.

Similar concerns extend beyond AI, as digital information continues to evolve rapidly. We are all familiar with the practice of stealth editing or censorship in digital content, often resulting from real-time battles to alter information on topics ranging from the Middle East to Donald Trump. The same phenomenon is evident on platforms like Wikipedia, where revisions and edits constantly reshape the narrative surrounding various subjects.

Conservatives are right to be skeptical about these developments, as there is a risk that their viewpoint may be excluded from digital discourse, narrowing the Overton Window of acceptable ideas. This exclusion could lead to an increasingly polarized society, with certain opinions being deemed factually incorrect or unworthy of consideration.

The concept of “truth” is continually evolving and redefined by various sources. In the case of AI, it is essential for companies like OpenAI to scrutinize their content providers and ensure a diverse range of perspectives inform their algorithms. The motto of the Royal Society, convened in London in 1660 to study the sciences and the natural world, serves as an appropriate reminder: “Nullius in verba” – take nobody’s word for it. As society continues to grapple with the challenge of discerning truth from falsehood, AI must be subjected to the same standards of critical thinking that apply to all information sources.
