
Threat actors are attempting to leverage a newly released artificial intelligence (AI) offensive security tool called HexStrike AI to exploit recently disclosed security flaws.
HexStrike AI, according to its website, is pitched as an AI‑driven security platform to automate reconnaissance and vulnerability discovery with an aim to accelerate authorized red teaming operations, bug bounty hunting, and capture the flag (CTF) challenges.
Per information shared on its GitHub repository, the open-source platform integrates with over 150 security tools to facilitate network reconnaissance, web application security testing, reverse engineering, and cloud security. It also supports dozens of specialized AI agents that are fine-tuned for vulnerability intelligence, exploit development, attack chain discovery, and error handling.
But according to a report from Check Point, threat actors are trying their hands on the tool to gain an adversarial advantage, attempting to weaponize the tool to exploit recently disclosed security vulnerabilities.
“This marks a pivotal moment: a tool designed to strengthen defenses has been claimed to be rapidly repurposed into an engine for exploitation, crystallizing earlier concepts into a widely available platform driving real-world attacks,” the cybersecurity company said.
Discussions on darknet cybercrime forums show that threat actors claim to have successfully exploited the three security flaws that Citrix disclosed last week using HexStrike AI, and, in some cases, even flag seemingly vulnerable NetScaler instances that are then offered to other criminals for sale.
Check Point said the malicious use of such tools has major implications for cybersecurity, not only shrinking the window between public disclosure and mass exploitation, but also helping parallelize the automation of exploitation efforts.
What’s more, it cuts down the human effort and allows for automatically retrying failed exploitation attempts until they become successful, which the cybersecurity company said increases the “overall exploitation yield.”
“The immediate priority is clear: patch and harden affected systems,” it added. “Hexstrike AI represents a broader paradigm shift, where AI orchestration will increasingly be used to weaponize vulnerabilities quickly and at scale.”
The disclosure comes as two researchers from Alias Robotics and Oracle Corporation said in a newly published study that AI-powered cybersecurity agents like PentestGPT carry heightened prompt injection risks, effectively turning security tools into cyber weapons via hidden instructions.
“The hunter becomes the hunted, the security tool becomes an attack vector, and what started as a penetration test ends with the attacker gaining shell access to the tester’s infrastructure,” researchers Víctor Mayoral-Vilches and Per Mannermaa Rynning said.
“Current LLM-based security agents are fundamentally unsafe for deployment in adversarial environments without comprehensive defensive measures.”