May 28, 2025
Researcher Uses OpenAI’s o3 to Spot Zero-Day Flaw in Linux Kernel’s SMB

OpenAI’s o3 artificial intelligence (AI) model recently helped a cybersecurity researcher uncover a zero-day vulnerability in Linux. According to the researcher, the flaw was found in the Linux kernel’s Server Message Block (SMB) implementation, known as ksmbd. The previously unknown security flaw is said to be tricky to find because it involves multiple users or connections interacting with the system at the same time. The bug is now tracked as CVE-2025-37899, and a fix has already been released.

OpenAI’s o3 Finds Zero-Day Vulnerability

The use of AI models to find zero-day, or previously unknown (and likely unexploited), bugs is still relatively rare, despite the technology’s growing capabilities. Most researchers still uncover such security flaws through traditional code auditing, which can be a cumbersome way to analyse a large codebase. In a blog post, researcher Sean Heelan detailed how OpenAI’s o3 model helped him uncover the flaw with relative ease.

Interestingly, this major bug was not the researcher’s original focus. Heelan was testing the AI against a different, already known bug (CVE-2025-37778), described as a Kerberos authentication vulnerability. That bug falls into the “use-after-free” category, meaning one part of the system frees a piece of memory while other parts continue to use it afterwards, which can lead to crashes and security issues. The AI model found the flaw in eight out of 100 runs.

Once Heelan confirmed that o3 could detect a known security bug in a large chunk of code, he decided to feed the model the entire session setup command handler file rather than a single function. This file contains around 12,000 lines of code and handles several types of requests. By way of analogy, this is like handing the AI a novel and asking it to find a specific typo, except this typo could potentially crash the computer.

When o3 was run 100 times on this full file, it found the previously known bug only once. Heelan acknowledges the drop in performance but notes that the AI still found the bug, which is a significant feat. More importantly, in other runs the OpenAI model spotted an entirely different, previously unknown bug, one that the researcher himself had missed.

This new security flaw was of the same use-after-free nature, but it affected the SMB logoff command handler. The vulnerability likewise involved the system accessing memory that had already been freed; in this case, however, the issue was triggered when a user was logging out or ending a session.

As per o3’s report, the bug could potentially crash the system or allow attackers to run code with deep system access, making it a major security concern. Heelan highlighted that o3 was able to understand a tricky bug in a real-world scenario and explained the vulnerability clearly in its report.

Heelan added that o3 is not perfect and has a low signal-to-noise ratio, producing many false positives for each true positive. However, he found that the model searches for bugs more like a human would, unlike traditional security tools, which function in a rigid, rules-based way.