November 7, 2024
Meta plans to identify more AI-generated images ahead of upcoming elections
Meta will take a harder stance on misinformation and deepfakes ahead of upcoming elections around the world, the company said Tuesday.

Meta Platforms CEO Mark Zuckerberg arrives at federal court in San Jose, California, on Dec. 20, 2022.

David Paul Morris | Bloomberg | Getty Images

Meta is expanding its effort to identify images doctored by artificial intelligence as it seeks to weed out misinformation and deepfakes ahead of upcoming elections around the world.

The company is building tools to identify AI-generated content at scale when it appears on Facebook, Instagram and Threads, it announced Tuesday.

Until now, Meta labeled only AI-generated images developed using its own AI tools. Now, the company says it will seek to apply those labels to content created with tools from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.

The labels will appear in all the languages available on each app. But the shift won’t be immediate.

In a blog post, Nick Clegg, Meta’s president of global affairs, wrote that the company will begin to label AI-generated images originating from external sources “in the coming months” and continue working on the problem “through the next year.”

The added time is needed to work with other AI companies to “align on common technical standards that signal when a piece of content has been created using AI,” Clegg wrote.

Election-related misinformation caused a crisis for Facebook after the 2016 presidential election because of the way foreign actors, largely from Russia, were able to create and spread highly charged and inaccurate content. The platform was repeatedly exploited in the ensuing years, most notably during the Covid pandemic, when people used the platform to spread vast amounts of misinformation. Holocaust deniers and QAnon conspiracy theorists also ran rampant on the site.

Meta is trying to show that it’s prepared for bad actors to use more advanced forms of technology in the 2024 cycle.

While some AI-generated content is easy to spot, much of it is not. Services that claim to identify AI-generated text, such as essays, have been shown to exhibit bias against non-native English speakers. Detection is not much easier for images and video, though there are often telltale signs.

Meta is looking to minimize uncertainty by working mainly with other AI companies that use invisible watermarks and certain types of metadata in the images created on their platforms. However, there are ways to remove watermarks, a problem that Meta plans to address.

“We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers,” Clegg wrote. “At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks.”
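One of the metadata signals the industry has converged on is the IPTC digital source type `trainedAlgorithmicMedia`, which generators can embed in an image’s XMP metadata to mark it as AI-created. As a rough illustration of how a platform might check for that signal, here is a minimal Python sketch; the function name and the sample XMP snippet are hypothetical, and a real pipeline would parse the XMP packet properly rather than scan raw bytes:

```python
# Hypothetical sketch: look for the IPTC "trainedAlgorithmicMedia"
# digital-source-type marker inside an image file's raw bytes.
# Real systems parse the embedded XMP packet; this is a simplification.
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_metadata(image_bytes: bytes) -> bool:
    """Return True if the bytes contain the IPTC AI-generation marker."""
    return AI_MARKER in image_bytes

# Fabricated example of an XMP fragment such as a generator might embed.
sample = (
    b'<x:xmpmeta><rdf:Description '
    b'Iptc4xmpExt:DigitalSourceType='
    b'"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
    b'</x:xmpmeta>'
)
print(has_ai_metadata(sample))               # True
print(has_ai_metadata(b"plain jpeg bytes"))  # False
```

The obvious weakness, which the article notes, is that metadata like this can be stripped or edited, which is why Meta pairs it with invisible watermarks and classifier-based detection.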

Audio and video can be even harder to monitor than images, because there is not yet an industry standard for AI companies to add invisible identifiers to them.

“We can’t yet detect those signals and label this content from other companies,” Clegg wrote.

Meta will add a way for users to voluntarily disclose when they upload AI-generated video or audio. If they share a deepfake or other form of AI-generated content without disclosing it, the company “may apply penalties,” the post says.

“If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate,” Clegg wrote.
