November 12, 2024
Google Adds New AI Accessibility Features for Pixel Phones and Android

Google introduced new artificial intelligence (AI) accessibility features for Pixel smartphones and Android devices on Tuesday. There are four new features: two are exclusive to Pixel smartphones, while the other two are available more widely across Android devices. They are aimed at people with low vision and vision loss, people who are deaf, and those with speech impairments. The features include Guided Frame, new AI capabilities in the Magnifier app, and improvements to the Live Transcribe and Live Captions features.

Google Adds AI-Powered Accessibility Features

In a blog post, the tech giant highlighted that it is committed to working with the disability community and is looking to bring new accessibility tools and innovation to make technology more inclusive.

The first feature is dubbed Guided Frame and is exclusive to the Pixel Camera. It provides spoken assistance to help users position their faces within the frame and find the right camera angle, and is aimed at those with low vision and vision loss. Google says the feature will prompt users to tilt their faces up or down, or pan left or right, before the camera automatically captures the photo. It will also tell the user when the lighting is inadequate so they can find a better shot.

Earlier, the feature was only available through TalkBack, Android's screen reader; Guided Frame has now been placed within the camera settings.
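
Google has not shared how Guided Frame works under the hood, but the behaviour it describes maps onto a simple feedback loop: detect the face, compare its position to the centre of the frame, speak a correction, and capture once the shot is aligned. The Kotlin sketch below illustrates that idea using ML Kit's face detector; speak() and capturePhoto() are hypothetical stand-ins for an app's own text-to-speech and camera plumbing, and the thresholds are arbitrary.

```kotlin
import android.graphics.Rect
import com.google.mlkit.vision.face.Face

// Hypothetical helpers standing in for real TTS and camera code.
fun speak(prompt: String) { /* forward to a TextToSpeech instance */ }
fun capturePhoto() { /* trigger the camera capture pipeline */ }

// One pass of an illustrative Guided Frame-style loop: nudge the user
// toward the centre of the frame, warn about poor light, then shoot.
fun guide(face: Face, frame: Rect, averageLuma: Int) {
    val box = face.boundingBox
    val tolerance = frame.width() / 10  // allowed offset from centre

    when {
        averageLuma < 40 ->
            speak("It is too dark, try moving toward a light source")
        box.centerY() < frame.centerY() - tolerance -> speak("Tilt your face down")
        box.centerY() > frame.centerY() + tolerance -> speak("Tilt your face up")
        box.centerX() < frame.centerX() - tolerance -> speak("Pan right")
        box.centerX() > frame.centerX() + tolerance -> speak("Pan left")
        else -> capturePhoto()  // centred and well lit: take the photo
    }
}
```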

Another Pixel-specific feature is an upgrade to the Magnifier app. Introduced last year, the app lets users zoom into their real-world surroundings with the camera to read sign boards or find items on a menu. Now, Google has added AI that lets users search for specific words in their surroundings.

This will allow them to look up information about their flight at the airport or find a specific item on a restaurant menu, as the AI automatically zooms in on the searched word. Additionally, a picture-in-picture mode has been added that shows the zoomed-out image in a smaller window while the searched word stays locked in the larger one. Users can also switch between the camera's lenses for specific purposes, and the app supports the front-facing camera so it can be used as a mirror.
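
The mechanics of the Magnifier search are not public, but the same effect can be approximated with on-device text recognition: run OCR over the camera frame, find the queried word, and zoom to its bounding box. A minimal Kotlin sketch, assuming ML Kit's Text Recognition API and a hypothetical zoomTo() helper:

```kotlin
import android.graphics.Rect
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Hypothetical helper that would drive the camera's zoom toward a region.
fun zoomTo(region: Rect) { /* animate camera zoom onto the region */ }

// Find a word in the current camera frame and zoom in on it.
fun findWord(frame: InputImage, query: String) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(frame)
        .addOnSuccessListener { result ->
            result.textBlocks
                .flatMap { block -> block.lines }
                .flatMap { line -> line.elements }  // individual words
                .firstOrNull { it.text.equals(query, ignoreCase = true) }
                ?.boundingBox
                ?.let { box -> zoomTo(box) }        // lock onto the match
        }
}
```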

Live Transcribe is also getting an upgrade, which will be supported only on foldable smartphones. In dual-screen mode, it can now show each speaker their own transcription. This way, if two people are sitting across a table from each other, the smartphone can be placed between them and each half of the screen will show what the person facing it has said. Google says this will make it easier for all participants to follow the conversation.
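
Google has not described the implementation, but a tabletop layout like this is straightforward to express in UI code: render the transcript twice, with the far pane rotated 180 degrees so it reads right-side up from the other side of the table. A minimal Jetpack Compose sketch of that idea, with hypothetical parameters for each side's text:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.rotate

@Composable
fun DualTranscript(nearTranscript: String, farTranscript: String) {
    Column(Modifier.fillMaxSize()) {
        // Far side: flipped so it reads correctly across the table.
        Text(
            text = farTranscript,
            modifier = Modifier.weight(1f).fillMaxWidth().rotate(180f)
        )
        // Near side: normal orientation for the user holding the phone.
        Text(
            text = nearTranscript,
            modifier = Modifier.weight(1f).fillMaxWidth()
        )
    }
}
```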

The Live Captions feature is also getting an upgrade. Google has added support for seven new languages — Chinese, Korean, Polish, Portuguese, Russian, Turkish, and Vietnamese — to Live Captions. Now, whenever the device plays a sound, users will be able to get a real-time caption for it in these languages as well.

These languages will also be available on-device for Live Transcribe, Google said, taking the total number of languages to 15. When transcribing these languages, users will no longer require an Internet connection. With an Internet connection, the feature works with 120 languages.
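
Live Transcribe's own speech pipeline is not public, but Android's standard speech APIs already expose the on-device versus online distinction the feature relies on. As an illustration only, a third-party app could ask the platform recogniser to stay offline for one of the new languages (Korean in this hypothetical sketch) via RecognizerIntent:

```kotlin
import android.content.Intent
import android.speech.RecognizerIntent

// Build an intent that asks the platform speech recogniser to handle
// Korean and to prefer its on-device model over a network round trip.
fun buildOfflineRecognizerIntent(): Intent =
    Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
        putExtra(RecognizerIntent.EXTRA_LANGUAGE, "ko-KR")    // Korean
        putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true) // stay on-device
    }
```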