Google has announced that it will soon require political ads using artificial intelligence to be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered. The new rule will apply to election ads on Google’s own platforms, particularly YouTube, as well as third-party websites that are part of Google’s ad display network.
Artificial intelligence (AI) can be used to generate or alter images, videos, and audio clips of people or events in ways that are realistic and convincing. Such content is often referred to as deepfakes, and it can be used for malicious purposes such as spreading misinformation, defaming opponents, or manipulating public opinion.
Though fake images, videos, and audio clips are not new to political advertising, generative AI tools are making such content easier to produce and more realistic. Some presidential campaigns in the 2024 race — including that of Florida GOP Gov. Ron DeSantis — are already using the technology. In April, the Republican National Committee released an entirely AI-generated ad meant to depict the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic. In June, DeSantis’ campaign shared an attack ad against his GOP primary opponent Donald Trump that used AI-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.
Google’s new policy aims to increase transparency and accountability
The requirement takes effect in mid-November, just under a year before the US presidential election, Google said in an update to the political content policy covering YouTube and its other services. The disclosure of AI use to alter images must be clear and conspicuous and placed somewhere users are likely to notice it. The policy will also affect campaign ads ahead of next year’s elections in India, South Africa, the European Union and other regions where Google already has a verification process for election advertisers.
Google is not banning AI outright in political advertising. The disclosure requirement exempts synthetic content that is altered or generated in ways inconsequential to the claims made in the ad, such as routine editing techniques like image resizing, cropping, color or defect correction, and background edits.
The company said that it will rely on a combination of automated and human review to enforce the policy, and that it will remove any ads that violate its guidelines. It also encouraged users to report any ads that they suspect of using deceptive AI techniques.
Other efforts to regulate AI-generated deceptive content
Google’s announcement comes amid growing concerns over the potential impact of AI-generated deceptive content on democracy and society. Last month the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election. Such deepfakes can include a synthetic voice of a political figure saying something they never said. Congress could pass legislation creating guardrails for AI-generated deceptive content, and lawmakers, including Senate Majority Leader Chuck Schumer, have expressed intent to do so. Several states also have discussed or passed legislation related to deepfake technology.
However, some experts warn that regulation alone may not be enough to combat the threat of deepfakes, and that more public awareness and education are needed. They also suggest that media outlets and fact-checkers should adopt tools and methods to verify the authenticity of digital content and expose manipulation.