
In this article, we will provide a brief overview of Pillar Security’s platform to better understand how they are tackling AI security challenges.
Pillar Security is building a platform to cover the entire software development and deployment lifecycle with the goal of providing trust in AI systems. Using its holistic approach, the platform introduces new ways of detecting AI threats, beginning at pre-planning stages and going all the way through runtime. Along the way, users gain visibility into the security posture of their applications while enabling safe AI execution.
Pillar is uniquely suited to the challenges inherent in AI security. Co-founder and CEO Dor Sarig comes from a cyber-offensive background, having spent a decade leading security operations for governmental and enterprise organizations. In contrast, co-founder and CTO Ziv Karlinger spent over ten years developing defensive techniques, securing against financial cybercrime and securing supply chains. Together, their red team-blue team approach forms the foundation of Pillar Security and is instrumental in mitigating threats.
The Philosophy Behind the Approach
Before diving into the platform, it’s important to understand the underlying approach taken by Pillar. Rather than developing a siloed system where each piece of the platform focuses on a single area, Pillar offers a holistic approach. Each component within the platform enriches the next, creating a closed feedback loop that enables security to adapt to each unique use case.
The detections found in the posture management section of the platform are enriched by data detected in the discovery section. Likewise, adaptive guardrails that are utilized during runtime are built on insights from threat modeling and red teaming. This dynamic feedback loop ensures that live defenses are optimized as new vulnerabilities are discovered. This approach creates a powerful, holistic and contextual-based defense against threats to AI systems – from build to runtime.
AI Workbench: Threat Modeling Where AI Begins
The Pillar Security platform begins at what they call the AI workbench. Before any code is written, this secure playground for threat modeling allows security teams to experiment with AI use cases and proactively map potential threats. This stage is crucial to ensure that organizations align their AI systems with corporate policies and regulatory demands.
Developers and security teams are guided through a structured threat modeling process, generating potential attack scenarios specific to the application use case. Risks are aligned with the application’s business context, and the process is aligned with established frameworks such as STRIDE, ISO, MITRE ATLAS, OWASP Top Ten for LLMs, and Pillar’s own SAIL framework. The goal is to build security and trust into the design from day one.
AI Discovery: Real-Time Visibility into AI Assets
AI sprawl is a complex challenge for security and governance teams. They lack visibility into how and where AI is being used within their development and production environments.
Pillar takes a unique approach to AI security that goes beyond the CI/CD pipeline and the traditional SDLC. By integrating directly with code repositories, data platforms, AI/ML frameworks, IdPs and local environments, it can automatically find and catalog every AI asset within the organization. The platform displays a full inventory of AI apps, including models, tools, datasets, MCP servers, coding agents, meta prompts, and more. This visibility guides teams, helping form the foundation of the organizational security policy and enabling a clear understanding of the business use case, including what the application does and how the organization uses it.
Figure 1: Pillar Security automatically discovers all AI assets across the organization and flags unmonitored components to prevent security blind spots. |
AI-SPM: Mapping and Managing AI Risk
After identifying all AI assets, Pillar is able to understand the security posture by analyzing each of the assets. During this stage, the platform’s AI Security Posture Management (AI-SPM) conducts a robust static and dynamic analysis of all AI assets and their interconnections.
By analyzing the AI assets, Pillar creates visual representations of the identified Agentic systems, their components and their associated attack surfaces. Furthermore, it identifies supply chain, data poisoning and model/prompt/tool level risks. These insights, which appear within the platform, enable teams to prioritize threats, as it show exactly how a threat actor may move through the system.
Figure 2: Pillar’s Policy Center provides a centralized dashboard for monitoring enterprise-wide AI compliance posture |
AI Red Teaming: Simulating Attacks Before They Happen
Rather than waiting until the application is fully built, Pillar promotes a trust-by-design approach, enabling AI teams to test as they build.
The platform runs simulated attacks that are tailored to the AI system use case, by leveraging common techniques like prompt injections and jailbreaking to sophisticated attacks targeting business logic vulnerabilities. These Red Team activities help identify whether an AI agent can be manipulated into giving unauthorized refunds, leaking sensitive data, or executing unintended tool actions. This process not only evaluates the model, but also the broader agentic application and its integration with external tools and APIs.
Pillar also offers a unique capability through red teaming for tool use. The platform integrates threat modeling with dynamic tool activation, rigorously testing how chained tool and API calls might be weaponized in realistic attack scenarios. This advanced approach reveals vulnerabilities that traditional prompt-based testing methods are unable to detect.
For enterprises using third-party and embedded AI apps, such as copilots, or custom chatbots where they don’t have access to the underlying code, Pillar offers black-box, target-based red teaming. With just a URL and credentials, Pillar’s adversarial agents can stress-test any accessible AI application whether internal or external. These agents simulate real-world attacks to probe data boundaries and uncover exposure risks, enabling organizations to confidently assess and secure third-party AI systems without needing to integrate or customize them.
Figure 3: Pillar’s tailored red teaming tests real-world attack scenarios against an AI application’s specific use case and business logic |
Guardrails: Runtime Policy Enforcement That Learns
As AI applications move into production, real-time security controls become essential. Pillar addresses this need with a system of adaptive guardrails that monitor inputs and outputs during runtime, designed to enforce security policies without interrupting application performance.
Unlike static rule sets or traditional firewalls, these guardrails are model agnostic, application-centric and continuously evolve. According to Pillar, they draw on telemetry data, insights gathered during red teaming, and threat intelligence feeds to adapt in real time to emerging attack techniques. This allows the platform to adjust its enforcement based on each application’s business logic and behavior, and be highly precise with alerts.
During the walkthrough, we saw how guardrails can be finely tuned to prevent misuse, such as data exfiltration or unintended actions, while preserving the AI’s intended behavior. Organizations can enforce their AI policy and custom code-of-conduct rules across applications with confidence that security and functionality will coexist.
Figure 4: Pillar’s adaptive guardrails monitor runtime activity to detect and flag malicious use and policy violations |
Sandbox: Containing Agentic Risk
One of the most critical concerns is excessive agency. When agents can perform actions beyond their intended scopes, it can lead to unintended consequences.
Pillar addresses this during the Operate phase through secure sandboxing. AI agents, including advanced systems like coding agents and MCP servers, run inside tightly controlled environments. These isolated runtimes apply zero-trust principles to separate agents from critical infrastructure and sensitive data, while still enabling them to operate productively. Any unexpected or malicious behavior is contained without impacting the larger system. Every action is captured and logged in detail, giving teams a granular forensic trail that can be analyzed after the fact. With this containment strategy, organizations can safely give AI agents the room they need to operate.
AI Telemetry: Observability from Prompt to Action
Security doesn’t stop once the application is live. Throughout the lifecycle, Pillar continuously collects telemetry data across the entire AI stack. Prompts, agent actions, tool calls, and contextual metadata are all logged in real time.
This telemetry powers deep investigations and compliance tracking. Security teams can trace incidents from symptom to root cause, understand anomalous behavior, and ensure AI systems are operating within policy boundaries. It’s not enough to know what happened. It’s about understanding why something took place and how to prevent it from happening again.
Due to the sensitivity of the telemetry data, Pillar can be deployed on the customer cloud for full data control.
Final Thoughts
Pillar stands apart through a combination of technical depth, real-world insight, and enterprise-grade flexibility.
Founded by leaders in both offensive and defensive cybersecurity, the team has a proven track record of pioneering research that has uncovered critical vulnerabilities and produced detailed real-world attack reports. This expertise is embedded into the platform at every level.
Pillar also takes a holistic approach to AI security that extends beyond the CI/CD pipeline. By integrating security into the planning and coding phases and connecting directly to code repositories, data platforms and local environments, Pillar gains early and deep visibility into the systems being built. This context enables more precise risk analysis and highly targeted red team testing as development progresses.
The platform is powered by the industry’s largest AI threat intelligence feed, enriched by over 10 million real-world interactions. This threat data fuels automated testing, risk modeling, and adaptive defenses that evolve with the threat landscape.
Finally, Pillar is built for flexible deployment. It can run on premises, in hybrid environments, or fully in the cloud, giving customers full control over sensitive data, prompts, and proprietary models. This is a critical advantage for regulated industries where data residency and security are paramount.
Together, these capabilities make Pillar a powerful and practical foundation for secure AI adoption at scale, helping innovative organizations manage AI-specific risks and gain trust in their AI systems.