OpenAI shuts down malicious state-linked AI campaigns
OpenAI has announced that it has disrupted ten covert influence operations that abused its AI tools. The operations were linked to state-affiliated threat actors from China, Russia, Iran, and North Korea, who used generative AI to spread propaganda, impersonate real users, and push coordinated narratives across online platforms.
Abusing AI for information operations
In a newly published threat report, OpenAI detailed how the threat actors misused its models to generate content, translate text, and gather information. These activities were not traditional cyberattacks involving system breaches or malware deployment. Instead, they represented a new form of digital manipulation powered by AI.
The actors created fake articles, comments, social media posts, and personas. The goal was to push politically motivated narratives and undermine trust in public discourse. Although the campaigns had limited reach, they demonstrate how generative AI can be used to produce high volumes of persuasive content quickly and cheaply.
Activity by country
China
One of the most active groups, identified as Spamouflage, used AI to produce multilingual content that supported the Chinese government and criticized Western policies. The content targeted audiences in the United States and Europe.
Russia
Russian-linked actors used AI models to generate political commentary and mimic legitimate news sources. Much of their content supported Russia’s war in Ukraine and attempted to discredit Western media. These influence campaigns are part of a broader strategy in which cyber operations and disinformation go hand in hand. This is not the first time Russian threat actors have used advanced tools to target Ukraine. In a separate incident, Ukraine’s largest bank was hit by the SmokeLoader malware, highlighting how hybrid attacks are deployed across both digital and informational domains.
Iran
Iranian campaigns involved fabricated news articles and imagery designed to support the regime’s political messaging. AI tools were used to enhance the quality and volume of disinformation.
North Korea
North Korean actors created fake personas posing as recruiters and researchers in the cybersecurity industry. These efforts appeared to be part of larger intelligence-gathering operations focused on foreign targets. The country is also known for financially motivated cybercrime. The Lazarus Group, a state-linked threat actor, has previously carried out high-profile crypto heists, including the $1.4 billion theft targeting Bybit. Read more about the Lazarus Group and the Bybit breach here.
These examples highlight how state-sponsored threat actors are increasingly turning to generative AI as a tool in their cyber arsenals. Learn more about how states engage in cyberattacks in our guide to state-sponsored hacking.
A growing cybersecurity challenge
The report offers some of OpenAI’s most detailed insight to date into how its tools have been used in state-linked operations. The findings reflect an evolving threat landscape in which generative AI enables new types of influence campaigns.
Even though the campaigns had low engagement and did not appear to be highly effective, the underlying tactic is concerning. AI allows threat actors to overcome language barriers, create consistent messaging, and operate at a scale that would not be possible with human labor alone.
OpenAI’s response
In response to the abuse, OpenAI terminated the accounts involved and introduced additional safeguards to detect similar behavior in the future. The company has also shared its findings with peer platforms, law enforcement, and policymakers.
OpenAI emphasized the importance of transparency and early action. In its report, the company stated:
"Although the actors we disrupted did not appear to use our tools to significantly increase the reach or effectiveness of their operations, we believe it is important to share our findings to raise awareness."
What this means for cybersecurity
The misuse of generative AI for influence campaigns is no longer a theoretical risk. It is happening now. This development highlights the need for stronger oversight and proactive detection efforts within the AI ecosystem.
At Moxso, we remain focused on the intersection of AI and cybersecurity. While OpenAI’s actions are a strong step in the right direction, these campaigns serve as a reminder of how digital threats continue to evolve. Awareness, collaboration, and resilience are more important than ever.

Sarah Krarup
Sarah studies innovation and entrepreneurship with a deep interest in IT and how cybersecurity impacts businesses and individuals. She has extensive experience in copywriting and is dedicated to making cybersecurity information accessible and engaging for everyone.