Reddit is addressing the aftermath of an unauthorized AI experiment carried out by researchers from the University of Zurich, who deployed more than a dozen AI-powered bots, disguised as real users, on r/ChangeMyView (CMV), a subreddit with nearly 4 million members.
Over several months, the bots posted more than 1,000 comments under fabricated identities, including rape survivors, trauma counselors, Black critics of the Black Lives Matter (BLM) movement, and gay Roman Catholics, engaging with users and attempting to influence their views on contentious topics.
In April, CMV moderators issued a statement condemning the experiment as a clear violation of subreddit rules and Reddit’s broader content policies. “Our sub is a decidedly human space that rejects undisclosed AI as a core value,” they wrote. “People do not come here to discuss their views with AI or to be experimented upon.”
The announcement followed a private email from the researchers, sent only after the experiment had concluded. In it, they admitted to using AI-generated comments without disclosure, saying transparency would have undermined the study’s goals.
“Our experiment assessed LLMs’ (Large Language Models) persuasiveness in an ethical scenario,” they wrote. “We recognize that our experiment broke the community rules against AI-generated comments and apologize. But we believe it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”
In response to the notice, the CMV moderation team filed a formal ethics complaint with the University of Zurich’s Institutional Review Board, citing serious concerns over the experiment’s impact on the community and the lack of transparency. They accused the researchers of exploiting unsuspecting users to gather academic data, without their knowledge or consent.
“AI pretending to be a victim of rape, a Black man opposed to BLM, or a trauma counselor—these are roles that mislead users into deeply personal conversations with machines,” the moderators said. “This kind of psychological manipulation isn’t ethical research. It’s exploitation.”
They urged the university to block publication of the study, arguing that its findings were gathered through unethical means.
The University of Zurich later responded, stating that it had conducted a thorough internal review of the project. However, the university clarified that its ethics commission lacks legal authority to block publication.
“This project yields important insights, and the risks (e.g. trauma etc.) are minimal. This means that suppressing publication is not proportionate to the importance of the insights the study yields,” the university wrote.
The moderators rejected the university’s position and argued that publication would set a dangerous precedent and expose online communities to similar intrusions in the future.
“Community-level experiments impact communities, not just individuals,” they said. “Allowing publication would dramatically encourage further intrusion by researchers, contributing to increased community vulnerability to future non-consensual human subjects experimentation.”
Reddit’s Chief Legal Officer, Ben Lee, echoed the moderators’ concerns, calling the study “deeply wrong on both a moral and legal level.”
He said Reddit is preparing formal legal action against both the University of Zurich and the research team. “We want to do everything we can to support the community and ensure that the researchers are held accountable for their misdeeds here,” Lee said.
The Reddit controversy highlights the growing power of AI chatbots to infiltrate online conversations. In March, researchers announced that OpenAI’s GPT-4.5 had passed the Turing test, successfully convincing 73% of participants that they were speaking with a human.
While the so-called “dead internet” theory, the idea that much of the internet could soon be generated by bots, remains speculative, a recent Forbes report noted that some experts believe the digital landscape is already approaching that reality.
Meanwhile, major platforms have been steadily expanding their AI capabilities to engage younger audiences.
In recent years, TikTok has introduced Symphony, a suite of AI tools that enables brands and creators to produce ads using artificial avatars. Meta has also rolled out generative AI features across Instagram and Facebook, allowing users to create and interact with AI-generated characters.
“We expect these AIs to actually exist on our platforms in the same way that [human] accounts do,” Connor Hayes, Meta’s vice president of product for generative AI, told the Financial Times.
I don’t know about you, but trusting big tech and big media to be responsible stewards of technology that can manipulate the population at scale strikes me as dangerous to the point of irresponsibility. In my view, it is a threat to the democracy we fought so hard to defend.