OpenAI has identified and dismantled five covert influence operations originating from Russia, China, Iran, and Israel that were exploiting its artificial intelligence tools to manipulate public opinion.
According to a report released on Thursday, these operations used AI models to generate a wide range of deceptive content, including social media comments, articles, images, and fake account biographies in multiple languages. The actors behind these campaigns sought to influence public discourse on various issues, such as the conflicts in Gaza and Ukraine, and political matters in the U.S., Europe, and India.
The report highlighted that these influence operations had been active over the previous three months and used the company's models for tasks such as creating names and biographies for fake accounts, debugging code, and translating and proofreading text. OpenAI analysts noted that the operations struggled to engage a substantial audience, with many of their posts being recognized as fake and called out by real users.
One of the disrupted operations, linked to Russian actors, utilized AI to generate political comments on Telegram and other platforms. Known as “Bad Grammar,” this operation focused on undermining support for Ukraine and spreading disinformation about politics in the U.S. and other countries. Another Russian operation, “Doppelganger,” was connected to the Kremlin and used AI tools to create and translate news articles and social media posts in multiple languages, targeting audiences in Europe and the U.S. Similarly, the Chinese network “Spamouflage” leveraged AI to generate and post pro-China messages while attacking critics of Beijing across various social media platforms.
In addition to the Russian and Chinese operations, OpenAI also uncovered influence campaigns from Iran and Israel. The Iranian network used AI to produce content supporting its geopolitical interests, while an Israeli political marketing firm, Stoic, generated pro-Israel content related to the war in Gaza. These campaigns often involved fake personas posing as concerned citizens or students, aiming to sway opinions on sensitive topics. However, like their Russian and Chinese counterparts, these efforts largely failed to achieve significant engagement.