How Artificial Intelligence Influences Elections, and What We Can Do About It

[Image: A person's hands on a laptop keyboard, with windows and icons reading "AI" hovering in the air above it. Photo by Khanchit Khirisutchalual.]

2024 will be the first election year to feature the widespread influence of AI before, during and after voters cast their ballots, including in the creation and distribution of public messages about both candidates and electoral processes.

Campaign Legal Center (CLC) has been working hard to help address the impact of AI on our democracy, including educating the public about what to expect in upcoming political campaigns and recommending policy solutions lawmakers should adopt to mitigate the greatest risks to our election system.  

CLC has particularly highlighted the danger of political ads that use AI to generate deceptively realistic false content — such as “deepfakes,” manipulated media that depict people saying or doing things they never said or did, or events that never occurred — to mislead the public about candidates’ statements, their positions on issues, and even whether certain events actually happened. If left unchecked, these fraudulent and deceptive uses of AI could infringe on voters’ fundamental right to make informed decisions.

In addition to influencing voters’ perceptions of candidates, AI could be used to manipulate the administration of elections, including by spreading disinformation to suppress voter turnout.

Bad actors could use AI tools to create and distribute convincingly false messages about where or when to cast a ballot, or to discourage voters from showing up to their polling locations in the first place.  

For example, shortly before the 2024 New Hampshire primary election, an AI-generated robocall simulated President Biden’s voice and urged voters not to participate in that election, falsely suggesting that voters should “save” their vote for the 2024 general election in November.

The average voter hearing this message might reasonably have concluded that Biden had actually recorded the message, and that they should comply with his request — effectively disenfranchising them.  

Looking ahead, it is not hard to imagine other fabricated messages from trusted voices being used to dissuade citizens from voting, undermine their ability to vote, or raise false alarms about emergencies like a fire or an attack to persuade voters to stay home on Election Day.

Moreover, there is a real risk that AI could be used to worsen the disproportionate targeting of disinformation at Black and brown voters who already face too many barriers to equal participation in the democratic process. 

AI also creates new opportunities for bad actors to undermine election administration or sow unjustified doubt about election results. AI technologies could easily be used to manufacture fake images and false evidence of misconduct, such as ballot tampering or shredding. That would not only erode public trust in election results but could also fuel further threats of violence against election administrators.

In recent years, nonpartisan election workers have already faced unprecedented levels of threats and harassment while trying to ensure our democratic process is smooth and fair. AI technology could make their situation even worse. 

Even after all the ballots have been cast, AI could be used to fabricate audio of a candidate claiming they rigged the results, or to generate other misinformation that could persuade the supporters of a failed campaign to disrupt vote counting and certification procedures. Those procedures are already increasingly politicized and served as the basis for the effort to sabotage the 2020 presidential election. Fake media made with AI have already been used to influence major elections in Argentina and Slovakia.

To be clear, the risks of election manipulation, voter suppression and misinformation all predate the arrival of AI-based media tools. But AI undoubtedly gives bad actors new tools to harm our democracy more easily and effectively, increasing the urgency of a robust response. Many necessary solutions must also go beyond regulating new technologies, since laws often lag behind even well-established tech.

Many states have begun to take action, passing and proposing bills that range from mandatory disclaimers informing voters when AI is used in election messaging to outright bans on political deepfakes. At the federal level, bipartisan legislation has been introduced in Congress that would prohibit the distribution of deceptive AI-generated content to influence an election or to fundraise.

Federal agencies are also considering what they can do to safeguard our democracy in the age of AI, with some proposals that would extend beyond elections but nevertheless help address the specific risks voters face. For example, in response to the AI-generated fake Biden robocall in New Hampshire, the FCC outlawed certain robocalls containing voices generated by AI.  

Finally, new education efforts are underway to prepare the public for AI-generated media being used to influence and possibly manipulate elections, which will hopefully help mitigate the worst effects.

In fact, 20 major tech companies, including Google, Meta (the parent company of Facebook and Instagram), OpenAI, X (formerly Twitter), and TikTok, recently pledged to take concrete steps to detect, track, and combat the use of deepfakes and other election interference efforts. Of course, they must follow through on those promises.  

Although it is encouraging to see widespread interest in preventing AI-based election manipulation, many proposed solutions are still a long way from providing tangible protection for voters and the electoral process. That is why Campaign Legal Center continues to urge policymakers across the country to redouble their efforts and take strong action to address the unique challenges AI creates for our democracy.

Adav Noti is CLC's Executive Director.