Meet ChatGPT’s Right-Wing Alter Ego
The rise of AI language models has led to concerns over political bias, with some people worried that these models will only reinforce existing political viewpoints. Elon Musk has caused controversy by announcing plans to build a competitor to OpenAI’s ChatGPT called “TruthGPT”, which he claims will be a “maximum truth-seeking AI”, a project critics worry would simply reflect his own political views. Others, however, are trying to use AI to bridge political divisions: data scientist David Rozado has created RightWingGPT, a more conservative version of ChatGPT.
Rozado used a language model called Davinci GPT-3, similar to but less powerful than the one that powers ChatGPT, and fine-tuned it with additional text to create RightWingGPT. He spent a few hundred dollars on cloud computing to do so, and plans to create more language models, including LeftWingGPT and DepolarizingGPT, which he says will demonstrate a “depolarizing political position”.
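To make the fine-tuning step concrete, here is a minimal sketch of how training examples for a base model such as Davinci GPT-3 might be packaged, assuming the JSONL prompt/completion format that OpenAI's legacy fine-tuning API expected. The example questions and answers are illustrative placeholders, not Rozado's actual data.

```python
import json

# Hypothetical training examples: each record pairs a prompt with the kind of
# completion the fine-tuned model should learn to produce. These are
# placeholders, not Rozado's dataset.
examples = [
    {"prompt": "What should the role of government be?\n\n###\n\n",
     "completion": " A limited government focused on core functions."},
    {"prompt": "How should the economy be organized?\n\n###\n\n",
     "completion": " Primarily through free markets and private enterprise."},
]

def write_jsonl(records, path):
    """Write one JSON object per line, the layout fine-tuning jobs consume."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

write_jsonl(examples, "finetune_data.jsonl")
```

The resulting file would then be uploaded to a fine-tuning job; the few hundred dollars Rozado cites covers the compute for running such a job against a corpus far larger than this toy one.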
Rozado plans to put all three models online this summer, with the text for DepolarizingGPT coming from conservative voices including Thomas Sowell, Milton Freeman, and William F. Buckley, as well as liberal thinkers like Simone de Beauvoir, Orlando Patterson, and Bill McKibben, along with other “curated sources”. The goal of these language models is to provoke reflection rather than to spread a particular worldview, and to encourage society to create AIs focused on building bridges rather than sowing division.
However, the problem of settling on what is objectively true through the fog of political division, and teaching that to language models, may prove the biggest obstacle. ChatGPT and other conversational bots are built on complex algorithms that are fed huge amounts of text and trained to predict which word should follow a string of words. While this process can generate remarkably coherent output, it can also cause the models to absorb many subtle biases from the training material they consume.
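The training objective the article describes, predicting the next word from the words before it, can be illustrated with a deliberately tiny sketch. Real systems use neural networks trained on vast corpora; this bigram counter captures only the core idea, including how a model inherits whatever patterns (or biases) its training text contains.

```python
from collections import Counter, defaultdict

# Toy "training text": the only source of knowledge the model will have.
corpus = (
    "the model predicts the next word . "
    "the model learns the patterns in its training text . "
    "the training text shapes what the model predicts ."
).split()

# Count, for every word, which words follow it in the training text.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # reflects whatever the corpus said most often
```

Swap in a different corpus and the same code gives different predictions, which is the mechanism behind Rozado's fine-tuning experiments and behind the biases the article warns about.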
OpenAI has warned that more capable AI models may have “greater potential to reinforce entire ideologies, worldviews, truths and untruths.” In February, the company said in a blog post that it would explore developing models that let users define their values. The Chinese government has also recently issued new guidelines on generative AI that aim to tame the behavior of these models and shape their political sensibilities.
Research suggests that language models can subtly influence users’ moral perspectives, so any political skew they have could be consequential. This has led to conservative organizations building competitors to ChatGPT, with the social network Gab announcing that it is working on AI tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code”.
In conclusion, the development of more politically aligned AI bots threatens to stoke political division. While some, like Rozado, are trying to use AI to bridge political divides, others are building models designed to entrench a particular worldview. The central challenge remains settling on what is objectively true through the fog of political division and teaching that to language models; given OpenAI's warning about reinforcing ideologies and the research showing that these models can sway users' moral perspectives, any political skew they carry could prove consequential.