Election ads running on Google and YouTube that are created with artificial intelligence will soon have to carry a clear disclosure, under new rules announced by the company.
The new disclosure requirement for digitally altered or created content comes as campaigning for the 2024 presidential and congressional elections kicks into high gear. At the same time, new AI tools like OpenAI’s ChatGPT and Google’s Bard have contributed to concerns about how easily deceptive information can be created and spread online.
“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” a Google spokesperson said in a statement. “This update builds on our existing transparency efforts – it’ll help further support responsible political advertising and provide voters with the information they need to make informed decisions.”
The policy will take effect in mid-November and will require election advertisers to disclose that ads containing AI-generated elements have been computer-generated or do not show real events. Minor changes like brightening or resizing an image do not require such a disclosure.
Election ads that have been digitally created or altered must include a disclosure such as, “This audio was computer-generated,” or “This image does not depict real events.”
Google and other digital ad platforms like Meta’s Facebook and Instagram already have some policies around election ads and digitally altered posts. In 2018, for example, Google began requiring an identity verification process to run election ads on its platforms. Meta in 2020 announced a general ban on “misleading manipulated media” like deepfakes, which can use AI to create convincing false videos.