Last week, from a post by the excellent Matteo Flora on AI censorship Deepseek, a beautiful exchange of experiences was born. Some of the commentators, many to be honest, pointed out how, differently from Chinese AI, Western AI Chat GPT were balanced in their responses. Of course, broad and reasoned responses are better than censorship, as the “philosopher” Max Catalano would say. I had some doubts then, though, and I have more today. The crucial question is: how reliable are AI-generated opinions on political and social issues? Today, the question is no longer theoretical: and a new study published in Science Direct (I link it here) helps us in a first response, providing us with concrete data on the political influence of chatbots.
The research, the result of collaboration between theUniversity of East Anglia, Getulio Vargas Foundation e Insper, systematically analyzed ChatGPT responses, revealing surprising patterns that may call into question the role of AI in public debate.
Political Influence Hidden in Algorithms
The international team led by Dr. Fabrio Motoki conducted an in-depth analysis of ChatGPT using the questionnaire Pew Research Center, a non-profit organization known for its ability to measure American public opinion. The researchers had the AI respond as if it were an “average American,” a “left-wing/Democratic American,” and a “right-wing/Neocon American,” repeating the test 200 times for each category to obtain statistically significant data. The results showed a clear tendency of the chatbot to provide answers closer to the Dem positions, even when he had to “impersonate” the average American.
The subtle art of digital manipulation
It's not just about overt censorship or overtly biased responses. Political influence manifests itself in more subtle and sophisticated ways. When researchers asked ChatGPT to generate longer texts on politically sensitive topics, they found that the system tended to promote ideas associated with the left, such as role of the state in the economy. Curiously, however, it maintains positions favorable to the army and theAmerican exceptionalism, traditionally associated with the right. This seemingly contradictory mix could be linked to the recent partnership between OpenAI and the military contractor Anduril.
The team he extended the research to the generation of images as well, aware of their powerful impact on shaping public opinion. Using DALL·E 3, they found that the system refused to generate images that represented conservative positions on divisive issues, citing concerns about misinformation. Only through an ingenious “jailbreak” (having ChatGPT describe what another AI system would produce) were they able to generate these images.
The price of (non)neutrality
The implications of this research are profound. As co-author points out Dr. Pinho Neto, uncontrolled biases in generative AI could deepen existing social divisions, eroding trust in democratic institutions and processes. It is not a question of “East versus West” or of more or less censorious systems: each approach to AI content moderation brings with it its own risks and biases.
The solution is not to demonize AI or to claim a probably impossible absolute neutrality, but to develop a critical and conscious approach. Researchers suggest the implementation of regulatory standards and greater transparency, especially considering the growing use of AI in journalism, education and public policies. As users, we must keep our critical sense high, remembering that behind every apparently neutral response (even from “our AIs” that we perceive as different from the “bad and censorious AIs” of “others”) there are hidden choices and orientations that can influence our way of thinking.