May 1, 2025
Claude AI Exploited to Operate 100+ Fake Political Personas in Global Influence Campaign
May 01, 2025 | Ravie Lakshmanan | Artificial Intelligence / Disinformation

Artificial intelligence (AI) company Anthropic has revealed that unknown threat actors leveraged its Claude chatbot for an “influence-as-a-service” operation to engage with authentic accounts across Facebook and X.

The sophisticated activity, assessed to be financially motivated, is said to have used the company's AI tool to orchestrate 100 distinct personas on the two social media platforms, creating a network of "politically-aligned accounts" that engaged with "10s of thousands" of authentic accounts.

The now-disrupted operation, Anthropic researchers said, prioritized persistence and longevity over virality and sought to amplify moderate political perspectives that supported or undermined interests in Europe, Iran, the United Arab Emirates (U.A.E.), and Kenya.

These included promoting the U.A.E. as a superior business environment while criticizing European regulatory frameworks, pushing energy security narratives to European audiences, and promoting cultural identity narratives to Iranian audiences.

The efforts also pushed narratives supporting Albanian figures and criticizing opposition figures in an unspecified European country, as well as advocated development initiatives and political figures in Kenya. These influence operations are consistent with state-affiliated campaigns, although exactly who was behind them remains unknown, the company added.

“What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users,” the company noted.

“Claude was used as an orchestrator deciding what actions social media bot accounts should take based on politically motivated personas.”

Beyond using Claude as a tactical engagement decision-maker, the operators also used the chatbot to generate politically-aligned responses in each persona's voice and native language, and to create prompts for two popular image-generation tools.

The operation is believed to be the work of a commercial service that caters to different clients across various countries. At least four distinct campaigns have been identified using this programmatic framework.

“The operation implemented a highly structured JSON-based approach to persona management, allowing it to maintain continuity across platforms and establish consistent engagement patterns mimicking authentic human behavior,” researchers Ken Lebedev, Alex Moix, and Jacob Klein said.

“By using this programmatic framework, operators could efficiently standardize and scale their efforts and enable systematic tracking and updating of persona attributes, engagement history, and narrative themes across multiple accounts simultaneously.”
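The report does not publish the schema itself, but a minimal sketch of what one such JSON persona record might plausibly look like is shown below. Every field name here is invented for illustration, inferred only from the attributes the researchers mention: persona identity, engagement history, and narrative themes.

```python
import json

# Hypothetical persona record illustrating the kind of structured,
# JSON-based state the researchers describe: identity attributes,
# per-platform engagement history, and assigned narrative themes.
# All field names are assumptions made for this sketch.
persona = {
    "persona_id": "persona-042",
    "platforms": ["facebook", "x"],
    "language": "en",
    "alignment": "moderate",
    "narrative_themes": ["energy-security"],
    "engagement_history": [
        {"platform": "x", "action": "reply", "timestamp": "2025-03-01T12:00:00Z"},
    ],
}

# Round-tripping through JSON is what would let such a framework keep
# persona state consistent across platforms and update it at scale.
record = json.loads(json.dumps(persona))
```

Keeping each persona as a serializable record of this shape is what makes the "systematic tracking and updating" the researchers describe possible: operators can diff, batch-update, and replay persona attributes across many accounts at once.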

Another interesting aspect of the campaign was that it “strategically” instructed the automated accounts to respond with humor and sarcasm to accusations from other accounts that they may be bots.

Anthropic said the operation highlights the need for new frameworks to evaluate influence operations that revolve around relationship building and community integration. It also warned that similar malicious activities could become common in the years to come as AI further lowers the barrier to conducting influence campaigns.

Elsewhere, the company noted that it banned a sophisticated threat actor using its models to scrape leaked passwords and usernames associated with security cameras and devise methods to brute-force internet-facing targets using the stolen credentials.

The threat actor further employed Claude to process posts from information stealer logs posted on Telegram, create scripts to scrape target URLs from websites, and refine their own systems to improve search functionality.

Two other cases of misuse spotted by Anthropic in March 2025 are listed below –

  • A recruitment fraud campaign that leveraged Claude to enhance the content of scams targeting job seekers in Eastern European countries
  • A novice actor who leveraged Claude to enhance their technical capabilities and develop advanced malware beyond their skill level, with capabilities to scan the dark web, generate malicious payloads designed to evade security controls, and maintain long-term persistent access to compromised systems

“This case illustrates how AI can potentially flatten the learning curve for malicious actors, allowing individuals with limited technical knowledge to develop sophisticated tools and potentially accelerate their progression from low-level activities to more serious cybercriminal endeavors,” Anthropic said.
