
Sakana AI, a Tokyo-based artificial intelligence (AI) firm, has introduced a new agentic framework that can improve the development and deployment speeds of large language models (LLMs). Announced on Thursday, the AI CUDA Engineer improves both the pre-training and inference speeds of an AI model by optimising its codebase. The firm highlighted that the entire process is end-to-end automated and driven by AI agents. Notably, Sakana AI introduced The AI Scientist, a system that can conduct scientific research, last year.
Sakana AI Unveils AI CUDA Engineer
In a post, the Japanese AI firm stated that after developing AI systems that can create new models and fully automate the AI research process, it began working on ways to speed up the deployment and inference of LLMs.
The company said that the research led to the development of the AI CUDA Engineer. It is a fully automated, comprehensive agent framework for CUDA (Compute Unified Device Architecture) kernel discovery and optimisation.
CUDA kernels can be understood as specialised functions that run on Nvidia GPUs, allowing code to execute in parallel across many threads. This parallelism makes them far faster than sequential approaches for computational tasks, especially those involving large datasets. As such, writing better kernels is considered a strong way to optimise the deployment and inference of AI models.
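The thread-per-element model behind CUDA kernels can be sketched in plain Python (this is an illustration of the concept, not real GPU code — on actual hardware, every "thread" below would run simultaneously rather than in a loop):

```python
def vector_add_kernel(a, b, out, thread_idx):
    """One 'thread' of work: compute a single output element.
    In a real CUDA kernel, thread_idx would come from the GPU's
    built-in thread/block indices."""
    out[thread_idx] = a[thread_idx] + b[thread_idx]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)

# Simulate the kernel launch: on a GPU, these iterations would all
# execute in parallel across thousands of threads.
for tid in range(len(a)):
    vector_add_kernel(a, b, out, tid)

print(out)  # → [11.0, 22.0, 33.0, 44.0]
```

Because each element is computed independently, the work scales across as many threads as the GPU can run at once, which is where the acceleration on large datasets comes from.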
Sakana AI said the AI CUDA Engineer can automatically convert PyTorch modules into optimised CUDA kernels, significantly speeding up deployment. The generated kernels are claimed to be 10-100 times faster than their native PyTorch counterparts.
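One common source of such speedups is kernel fusion: several separate PyTorch operations, each making a full pass over the data, get combined into a single kernel that reads each input only once. The hypothetical sketch below contrasts the two in plain Python (the actual system generates CUDA code; this only illustrates the memory-traffic idea):

```python
def unfused(xs):
    # Two element-wise passes, as two separate framework ops would run:
    # each pass reads and writes the full array.
    scaled = [x * 2.0 for x in xs]
    return [s + 1.0 for s in scaled]

def fused(xs):
    # One pass over the data, as a single fused CUDA kernel would do.
    return [x * 2.0 + 1.0 for x in xs]

xs = [1.0, 2.0, 3.0]
assert unfused(xs) == fused(xs)  # identical result, fewer memory passes
print(fused(xs))  # → [3.0, 5.0, 7.0]
```

On a GPU, where memory bandwidth is often the bottleneck, halving the number of passes over the data can translate directly into a large speedup.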
The process comprises four steps. First, the agent framework converts the PyTorch code into working CUDA kernels. Next, the agent applies optimisation techniques so that only the best-performing kernels are retained. After that, kernel crossover prompts combine multiple optimised kernels into new ones. Finally, the AI agent preserves the high-performance CUDA kernels in an archive, which is used to deliver further performance improvements. The company has also published a study that details the process.
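The four-stage pipeline described above can be sketched as a simple loop. Everything here is hypothetical scaffolding — the function names, the dictionary structure, and the randomised "speedup" scoring are invented for illustration; in the real system, each stage is driven by LLM agents generating and benchmarking actual CUDA code:

```python
import random

def translate_to_kernel(pytorch_module):
    """Stage 1: convert PyTorch code into a first working kernel (stub)."""
    return {"code": f"kernel_for({pytorch_module})", "speedup": 1.0}

def optimise(kernel):
    """Stage 2: try an optimisation; keep the candidate only if it is faster."""
    candidate = dict(kernel, speedup=kernel["speedup"] * random.uniform(0.8, 2.0))
    return candidate if candidate["speedup"] > kernel["speedup"] else kernel

def crossover(k1, k2):
    """Stage 3: combine two optimised kernels into a new candidate."""
    return {"code": f"merge({k1['code']}, {k2['code']})",
            "speedup": max(k1["speedup"], k2["speedup"]) * 1.1}

archive = []  # Stage 4: preserve high-performing kernels for reuse

kernel = translate_to_kernel("torch.nn.Linear")
for _ in range(5):          # repeated optimisation attempts
    kernel = optimise(kernel)
archive.append(kernel)

best = crossover(archive[0], translate_to_kernel("torch.nn.ReLU"))
archive.append(best)
print(len(archive), best["speedup"] >= archive[0]["speedup"])
```

The archive plays the same role as in the description above: proven kernels are kept around so later crossover steps can build on them rather than starting from scratch.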
Alongside the paper, Sakana AI is also publishing the AI CUDA Engineer Archive, a dataset of more than 30,000 kernels generated by the AI. These kernels are released under the CC-BY-4.0 license and can be accessed via Hugging Face.
Additionally, the Japanese firm has launched a website that lets visitors interactively explore 17,000 verified kernels and their profiles across 230 tasks, and compare CUDA kernels across individual experiments.