Apple Vision Pro Vulnerability Exposed Virtual Keyboard Inputs to Attackers

Sep 13, 2024 | Ravie Lakshmanan | Virtual Reality / Vulnerability

Details have emerged about a now-patched security flaw impacting Apple’s Vision Pro mixed reality headset that, if successfully exploited, could allow malicious attackers to infer data entered on the device’s virtual keyboard.

The attack, dubbed GAZEploit, has been assigned the CVE identifier CVE-2024-40865.

A group of academics from the University of Florida described GAZEploit as “a novel attack that can infer eye-related biometrics from the avatar image to reconstruct text entered via gaze-controlled typing.”

“The GAZEploit attack leverages the vulnerability inherent in gaze-controlled text entry when users share a virtual avatar.”

Following responsible disclosure, Apple addressed the issue in visionOS 1.3 released on July 29, 2024. It described the vulnerability as impacting a component called Presence.

“Inputs to the virtual keyboard may be inferred from Persona,” it said in a security advisory, adding it resolved the problem by “suspending Persona when the virtual keyboard is active.”

In a nutshell, the researchers found that it was possible to analyze a virtual avatar’s eye movements (or “gaze”) to determine what the user wearing the headset was typing on the virtual keyboard, effectively compromising their privacy.

As a result, a threat actor could, hypothetically, analyze virtual avatars shared via video calls, online meeting apps, or live streaming platforms and remotely perform keystroke inference. This could then be exploited to extract sensitive information such as passwords.

The attack itself relies on a supervised learning model that is trained on Persona recordings and uses eye aspect ratio (EAR) and eye gaze estimation to distinguish typing sessions from other VR-related activities (e.g., watching movies or playing games).
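For illustration only, here is a minimal Python sketch of what such a typing-session detector could look like. The window features, the synthetic data, and the choice of a random-forest classifier are assumptions made for this example and are not taken from the researchers' actual pipeline.

```python
# Hypothetical sketch of the typing-session detection step.
# Features, window length, and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_window_features(ear, gaze_xy):
    """Summarize one short window of the avatar's eye behavior.

    ear     : per-frame eye aspect ratio values
    gaze_xy : array of shape (frames, 2) with estimated gaze directions
    """
    return np.array([
        ear.mean(), ear.std(),                    # blink / openness statistics
        np.abs(np.diff(gaze_xy, axis=0)).mean(),  # saccade-like jumpiness
        gaze_xy.std(axis=0).mean(),               # spread of fixation points
    ])

def synth_window(typing):
    """Synthetic stand-in data: 'typing' windows have jumpier gaze."""
    frames = 90
    ear = rng.normal(0.3, 0.02, frames)
    jitter = 0.15 if typing else 0.05
    gaze = rng.normal(0.0, jitter, (frames, 2))
    return extract_window_features(ear, gaze)

X = np.array([synth_window(t) for t in (True, False) * 200])
y = np.array([1, 0] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```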

In a subsequent step, the estimated gaze directions are mapped onto the virtual keyboard to identify likely keystrokes, taking into account the keyboard's position in the virtual space.
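A similarly hedged sketch of this key-mapping step is shown below. The key-center coordinates and the nearest-key rule are invented for illustration; the real attack must first recover where the virtual keyboard sits in the victim's virtual space before any mapping can be done.

```python
# Hypothetical sketch of mapping gaze fixation points to virtual keys.
# Keyboard geometry below is illustrative, not the actual visionOS layout.
import numpy as np

# Assumed normalized key-center positions for one QWERTY row (x, y in [0, 1]).
KEY_CENTERS = {
    "q": (0.05, 0.2), "w": (0.15, 0.2), "e": (0.25, 0.2), "r": (0.35, 0.2),
    "t": (0.45, 0.2), "y": (0.55, 0.2), "u": (0.65, 0.2), "i": (0.75, 0.2),
    "o": (0.85, 0.2), "p": (0.95, 0.2),
}

def nearest_key(gaze_point):
    """Return the key whose center is closest to a gaze fixation point."""
    keys = list(KEY_CENTERS)
    centers = np.array([KEY_CENTERS[k] for k in keys])
    dists = np.linalg.norm(centers - np.asarray(gaze_point), axis=1)
    return keys[int(np.argmin(dists))]

# Example: fixation points detected during an (assumed) typing window.
fixations = [(0.44, 0.21), (0.26, 0.18), (0.96, 0.22)]
print("".join(nearest_key(p) for p in fixations))  # prints "tep"
```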

“By remotely capturing and analyzing the virtual avatar video, an attacker can reconstruct the typed keys,” the researchers said. “Notably, the GAZEploit attack is the first known attack in this domain that exploits leaked gaze information to remotely perform keystroke inference.”
