![AI and Security - A New Puzzle to Figure Out](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQtjhrZ_RleOqKe8Fj4zJv972M42R4nxywJeJHgd-X3ljwzN_JNlwY_jxlzpX55Mau0XSHgir_NGCGGQjKk4PxFdyoOadOmebz27_2VPS8EPbJckp01HD0UbJ0CoRV7cYnBedcge_g0LxJ32iJTzleB0m4MYs3yTgmVxJAEEA4NIu_DcvNARt_L6S_Gdw/s728-rw-e365/main.jpg)
AI is everywhere now, transforming how businesses operate and how users engage with apps, devices, and services. Many applications now embed some form of artificial intelligence, whether powering a chat interface, intelligently analyzing data, or matching user preferences. There's no question AI benefits users, but it also brings new security challenges, especially identity-related ones. Let's explore what these challenges are and what you can do to face them with Okta.
Which AI?
Everyone talks about AI, but the term is very general, and several technologies fall under this umbrella. For example, symbolic AI uses technologies such as logic programming, expert systems, and semantic networks. Other approaches use neural networks, Bayesian networks, and other tools. Newer Generative AI uses Machine Learning (ML) and Large Language Models (LLMs) as core technologies to generate content such as text, images, video, and audio. Many of the applications we use most often today, like chatbots, search, and content creation, are powered by ML and LLMs. That's why, when people talk about AI, they're usually referring to ML- and LLM-based AI.
AI systems and AI-powered applications have different levels of complexity and are exposed to different risks. Typically, a vulnerability in an AI system also affects the AI-powered applications that depend on it. In this article, we will focus on the risks that affect AI-powered applications—those that most organizations have already started building or will be building in the near future.
Defend your GenAI apps from identity threats
When building AI applications, there are four critical requirements where identity plays a central role.
First, user authentication. The agent or app needs to know who the user is. For example, a chatbot might need to display my chat history or know my age and country of residence to customize its replies. This requires some form of identification, which authentication provides.
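As a minimal sketch of what this looks like in practice, the snippet below fronts a hypothetical `/chat` endpoint with OpenID Connect using Auth0's express-openid-connect middleware. The base URL, environment variables, and history lookup are illustrative placeholders, not a prescribed setup.

```typescript
import express from "express";
import { auth, requiresAuth } from "express-openid-connect";

const app = express();

// Authenticate users with OpenID Connect before they reach the chat.
// All config values below are placeholders for your own tenant settings.
app.use(
  auth({
    authRequired: false,
    secret: process.env.SESSION_SECRET!,
    baseURL: "https://chat.example.com",
    clientID: process.env.CLIENT_ID!,
    issuerBaseURL: "https://YOUR_TENANT.auth0.com",
  })
);

// Once authenticated, the verified identity lets the chatbot load this
// user's history and tailor replies to their profile claims.
app.get("/chat", requiresAuth(), (req, res) => {
  const user = req.oidc.user; // ID token claims: sub, name, email, ...
  res.json({ greeting: `Hello, ${user?.name}`, history: [] /* look up by user.sub */ });
});

app.listen(3000);
```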
Second, calling APIs on behalf of users. AI agents connect to far more apps than a typical web application does. As GenAI apps integrate with more products, calling those APIs securely on the user's behalf will be critical.
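One way to keep such calls scoped to the user is to have the agent's tools accept the user's delegated access token, obtained through a standard OAuth flow, rather than a shared service credential. A sketch follows; the function name and calendar API are hypothetical, purely for illustration.

```typescript
// Hypothetical tool the agent can invoke. The token is the user's own
// delegated credential, so the downstream API enforces that user's
// permissions rather than a broad, agent-wide service account.
async function listUpcomingEvents(userAccessToken: string): Promise<unknown> {
  const res = await fetch("https://calendar.example.com/v1/events?upcoming=true", {
    headers: { Authorization: `Bearer ${userAccessToken}` },
  });
  if (!res.ok) throw new Error(`Calendar API returned ${res.status}`);
  return res.json();
}
```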
Third, asynchronous workflows. AI agents may need more time to complete tasks or to wait for complex conditions to be met. It might be minutes or hours, but it could also be days. Users won't wait that long. These cases will become mainstream and will be implemented as asynchronous workflows, with agents running in the background. In these scenarios, humans act as supervisors, approving or rejecting actions even when they're away from the chat interface.
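A minimal human-in-the-loop sketch of that pattern might look like the following; the in-memory store and notification step stand in for the durable infrastructure (queues, databases, push or email) a real deployment would use.

```typescript
import { randomUUID } from "node:crypto";

type PendingAction = {
  id: string;
  userId: string;
  description: string;
  status: "pending" | "approved" | "rejected";
};

// In-memory store for the sketch; a real system would persist these.
const pendingActions = new Map<string, PendingAction>();

// Instead of executing a sensitive step immediately, the agent queues it
// and notifies the user, then keeps working in the background.
function requestApproval(userId: string, description: string): PendingAction {
  const action: PendingAction = { id: randomUUID(), userId, description, status: "pending" };
  pendingActions.set(action.id, action);
  // e.g., send an email or push notification linking to an approval page
  return action;
}

// Called when the user responds, possibly hours or days later.
function resolveApproval(actionId: string, approved: boolean): void {
  const action = pendingActions.get(actionId);
  if (!action || action.status !== "pending") throw new Error("Unknown or already-resolved action");
  action.status = approved ? "approved" : "rejected";
  if (approved) {
    // resume the paused workflow, e.g., re-enqueue the agent task
  }
}
```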
Fourth, authorization for Retrieval Augmented Generation (RAG). Most GenAI apps implement RAG by feeding information from multiple systems into AI models. To avoid disclosing sensitive information, any data fed to an AI model so it can respond or act on behalf of a user must be data that user is permitted to access.
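As a sketch, the filter below drops retrieved documents the user cannot read before they ever reach the model's context window. The group-based ACL is an assumption for illustration; production systems typically delegate this check to a dedicated policy service such as OpenFGA.

```typescript
type KnowledgeDoc = { id: string; text: string; allowedGroups: string[] };
type User = { id: string; groups: string[] };

// Authorization filter for RAG: only documents the user is allowed to
// read become candidates for the model's context window.
function authorizedContext(user: User, retrieved: KnowledgeDoc[]): string {
  return retrieved
    .filter((doc) => doc.allowedGroups.some((g) => user.groups.includes(g)))
    .map((doc) => doc.text)
    .join("\n---\n");
}
```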
We need to solve all four requirements to realize GenAI’s full potential and help make sure that our GenAI applications are built securely.
Leveraging AI to counter security attacks
AI has also made it easier and faster for attackers to carry out targeted attacks, for example by leveraging AI to run social engineering campaigns or create deepfakes. In addition, attackers can use AI to exploit vulnerabilities in applications at scale. Building GenAI into applications securely is one challenge, but what about using AI itself to detect and respond to security threats faster?
Traditional security measures like MFA are no longer enough on their own. Integrating AI into your identity security strategy can help detect bots, stolen sessions, and other suspicious activity. It can help you (see the sketch after this list):
- Perform intelligent signal analysis to detect unauthorized or suspicious access attempts
- Analyze various signals related to application access activity and compare them to historical data in search of common patterns
- Terminate a session automatically if suspicious activity is detected
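Putting those three ideas together, here is a deliberately naive sketch that scores an access attempt against a user's history and ends the session past a threshold. The signals, weights, and threshold are all illustrative assumptions; real adaptive engines use far richer models.

```typescript
type AccessSignal = { userId: string; ip: string; country: string; deviceId: string; hour: number };

// Naive anomaly score: count how strongly this attempt deviates from the
// user's historical access patterns. Weights and signals are illustrative.
function riskScore(signal: AccessSignal, history: AccessSignal[]): number {
  let score = 0;
  if (!history.some((h) => h.country === signal.country)) score += 2; // never-seen country
  if (!history.some((h) => h.deviceId === signal.deviceId)) score += 2; // never-seen device
  if (!history.some((h) => h.hour === signal.hour)) score += 1; // unusual time of day
  return score;
}

// Terminate the session automatically when the risk crosses a threshold.
function enforce(signal: AccessSignal, history: AccessSignal[], endSession: () => void): void {
  if (riskScore(signal, history) >= 3) endSession();
}
```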
The rise of AI-based applications holds vast potential; at the same time, AI poses new security challenges.
What’s next?
AI is changing the way humans interact with technology and with each other. In the next decade, we will see the rise of a huge AI agent ecosystem: networks of interconnected AI programs that integrate into our applications and act autonomously on our behalf. While GenAI has many positives, it also introduces significant security risks that must be considered when building AI applications. Enabling builders to securely integrate GenAI into their apps, making them AI- and enterprise-ready, is crucial.
The flip side is how AI can help defend against traditional security threats. AI applications face many of the same security issues as traditional applications, such as unauthorized access to information, but malicious actors now wield new attack techniques.
AI is a reality, for better or for worse. It brings countless benefits to users and builders, but it also raises new security concerns and challenges throughout every organization.
Identity companies like Auth0 are here to help take the security piece off your plate. Learn more about building GenAI applications securely at auth0.ai.
Discover why an easy-to-implement, adaptable authentication and authorization platform is the smarter path forward—read more here.