Google is taking proactive measures to combat the rise of synthetic content in political advertising on its platforms. In response to the increasing prevalence of AI-generated content, the tech giant has announced new rules that will require political ads to disclose the use of artificial intelligence in creating images and audio.
This change is scheduled to take effect in November, well ahead of the next US presidential election, and addresses concerns that AI-generated content could fuel disinformation during political campaigns.
While Google’s existing ad policies already prohibit manipulating digital media to deceive or mislead the public on political, social, or public interest matters, this update specifically targets election-related ads. Such ads must prominently disclose any “synthetic content” that portrays real or realistic-looking people or events. Google suggests labels such as “this image does not depict real events” or “this video content was synthetically generated” to serve as clear indicators.
Furthermore, Google maintains strict policies against demonstrably false claims that could erode trust in the electoral process.
In addition to requiring transparency about the use of AI in political ads, Google mandates disclosure of each ad’s sponsor and makes information about political messages available in an online ads library. Requiring clear and conspicuous disclosure of digitally altered content in election ads aims to ensure that users are well informed about the authenticity of what they encounter.
These measures come in response to incidents like the dissemination of a fake image of former US President Donald Trump’s arrest and a deepfake video featuring Ukrainian President Volodymyr Zelensky, emphasizing the need for enhanced transparency and accountability in the realm of political advertising.