
Report: Deepfake porn consistently found atop Google, Bing search results

Google vows to create more safeguards to protect victims of deepfake porn.

Ashley Belanger – Jan 11, 2024 9:55 pm UTC

Popular search engines like Google and Bing are making it easy to surface nonconsensual deepfake pornography by placing it at the top of search results, NBC News reported Thursday.

These controversial deepfakes superimpose faces of real women, often celebrities, onto the bodies of adult entertainers to make them appear to be engaging in real sex. Thanks in part to advances in generative AI, there is now a burgeoning black market for deepfake porn that could be discovered through a Google search, NBC News previously reported.

NBC News uncovered the problem by turning off safe search, then combining the names of 36 female celebrities with obvious search terms like “deepfakes,” “deepfake porn,” and “fake nudes.” Bing generated links to deepfake videos in top results 35 times, while Google did so 34 times. Bing also surfaced “fake nude photos of former teen Disney Channel female actors” using images where the actors appear to be underage.

A Google spokesperson told NBC that the tech giant understands “how distressing this content can be for people affected by it” and is “actively working to bring more protections to Search.”

According to Google’s spokesperson, this controversial content sometimes appears because “Google indexes content that exists on the web,” just “like any search engine.” But while searches using terms like “deepfake” may generate results consistently, Google “actively” designs “ranking systems to avoid shocking people with unexpected harmful or explicit content that they aren’t looking for,” the spokesperson said.

Currently, the only way to remove nonconsensual deepfake porn from Google search results is for the victim to submit a form personally or through an “authorized representative.” That form requires victims to meet three requirements, showing that: they’re “identifiably depicted” in the deepfake, the “imagery in question is fake and falsely depicts” them as “nude or in a sexually explicit situation,” and the imagery was distributed without their consent.

While this gives victims a course of action to remove content, experts are concerned that search engines need to do more to effectively reduce the prevalence of deepfake pornography available online, which is currently rising at a rapid rate.

This emerging issue increasingly affects average people and even children, not just celebrities. Last June, child safety experts discovered thousands of realistic but fake AI child sex images being traded online, around the same time that the FBI warned that the use of AI-generated deepfakes in sextortion schemes was increasing.

And nonconsensual deepfake porn isn’t just being traded in black markets online. In November, New Jersey police launched a probe after high school teens used AI image generators to create and share fake nude photos of female classmates.

With tech companies seemingly slow to stop the rise in deepfakes, some states have passed laws criminalizing deepfake porn distribution. Last July, Virginia amended its existing law criminalizing revenge porn to include any “falsely created videographic or still image.” In October, New York passed a law specifically focused on banning deepfake porn, imposing a $1,000 fine and up to a year of jail time on violators. Congress has also introduced legislation that would create criminal penalties for spreading deepfake porn.

Although Google told NBC News that its search features “don’t allow manipulated media or sexually explicit content,” the outlet’s investigation seemingly found otherwise. NBC News also noted that Google’s Play app store hosts an app that was previously marketed for creating deepfake porn, despite prohibiting “apps determined to promote or perpetuate demonstrably misleading or deceptive imagery, videos and/or text.” This suggests that Google’s remediation efforts to block deceptive imagery may be inconsistent.

Google told Ars that it will soon be strengthening its policies against apps featuring AI-generated restricted content in the Play Store. A generative AI policy taking effect on January 31 will require all apps to comply with developer policies that ban AI-generated restricted content, including deceptive content and content that facilitates the exploitation or abuse of children.

Experts told NBC News that “Google’s lack of proactive patrolling for abuse has made it and other search engines useful platforms for people looking to engage in deepfake harassment campaigns.”

Google is currently “in the process of building more expansive safeguards, with a particular focus on removing the need for known victims to request content removals one by one,” Google’s spokesperson told NBC News.

A Microsoft spokesperson told Ars that the company was looking into our request for comment. We will update this report with any new information that Microsoft shares.

In the past, Microsoft President Brad Smith has said that among all dangers that AI poses, deepfakes worry him most, but deepfakes fueling “foreign cyber influence operations” seemingly concern him more than deepfake porn.

This story was updated on January 11 to include information on Google’s AI-generated content policy.