
Meta on Thursday revealed that it disrupted three covert influence operations originating from Iran, China, and Romania during the first quarter of 2025.
“We detected and removed these campaigns before they were able to build authentic audiences on our apps,” the social media giant said in its quarterly Adversarial Threat Report.
The first of these was a network of 658 Facebook accounts, 14 Pages, and two Instagram accounts that targeted Romania across several platforms, including Meta’s services, TikTok, X, and YouTube. One of the Pages in question had about 18,300 followers.
The threat actors behind the activity leveraged fake accounts to manage Facebook Pages, direct users to off-platform websites, and post comments on posts by politicians and news entities. The accounts masqueraded as locals living in Romania and posted content related to sports, travel, or local news.
While a majority of these comments did not receive any engagement from authentic audiences, Meta said the fictitious personas also maintained a corresponding presence on other platforms in an attempt to appear credible.
“This campaign showed consistent operational security (OpSec) to conceal its origin and coordination, including by relying on proxy IP infrastructure,” the company noted. “The people behind this effort posted primarily in Romanian about news and current events, including elections in Romania.”
A second influence network disrupted by Meta originated from Iran and targeted Azeri-speaking audiences in Azerbaijan and Turkey across Meta’s platforms, X, and YouTube. It consisted of 17 accounts on Facebook, 22 Pages, and 21 accounts on Instagram.
The counterfeit accounts created by the operation were used to post content, including in Groups, manage Pages, and comment on the network’s own posts so as to artificially inflate their popularity. Many of these accounts posed as female journalists and pro-Palestine activists.
“The operation also used popular hashtags like #palestine, #gaza, #starbucks, #instagram in their posts, as part of its spammy tactics in an attempt to insert themselves in the existing public discourse,” Meta said.
“The operators posted in Azeri about news and current events, including the Paris Olympics, Israel’s 2024 pager attacks, a boycott of American brands, and criticisms of the U.S., President Biden, and Israel’s actions in Gaza.”
The activity has been attributed to a known threat activity cluster dubbed Storm-2035, which Microsoft described in August 2024 as an Iranian network targeting U.S. voter groups with “polarizing messaging” on presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.
In the intervening months, artificial intelligence (AI) company OpenAI also revealed that it had banned ChatGPT accounts created by Storm-2035, which weaponized the chatbot to generate content for sharing on social media.
Lastly, Meta revealed that it removed 157 Facebook accounts, 19 Pages, one Group, and 17 accounts on Instagram that targeted audiences in Myanmar, Taiwan, and Japan. The threat actors behind the operation were found to have used AI to create profile photos and to run an “account farm” to spin up new fake accounts.
The Chinese-origin activity encompassed three separate clusters, each reposting its own and other users’ content in English, Burmese, Mandarin, and Japanese about news and current events in the countries it targeted.
“In Myanmar, they posted about the need to end the ongoing conflict, criticized the civil resistance movements and shared supportive commentary about the military junta,” the company said.
“In Japan, the campaign criticized Japan’s government and its military ties with the U.S. In Taiwan, they posted claims that Taiwanese politicians and military leaders are corrupt, and ran Pages claiming to display posts submitted anonymously — in a likely attempt to create the impression of an authentic discourse.”