PHOENIX – In the rapidly evolving campaign landscape of the upcoming presidential election, artificial intelligence is already an active participant, reshaping campaign strategies and communication. From AI-generated campaign ads to rapid-response messaging, election officials say these technological advancements could have a profound impact on the way candidates engage with voters.
Generative AI is already being used by candidates and politicians to maximize voter outreach and to send out statements and fundraising emails in record time.
This technology provides parties and candidates, as well as the average person, with inexpensive, fast tools for developing political messaging, changing the dynamics of political communication – for better or for worse.
Voters are seeing the negative side as well: New technology makes it easier than ever for individuals to create nearly undetectable manipulated media that can simulate a candidate’s voice or likeness, often without stating that the content is fabricated.
“What is especially troublesome is that in the last year or so, the technology has become easier to use,” said Retha Hill, the executive director of the New Media Innovation and Entrepreneurship Lab at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, which houses Cronkite News. “You can clone voices, you can animate a photo, you can make these deepfakes without having to invest a lot of money. Before, you had to have people who knew what they were doing; now, a novice can do it. It’s getting harder and harder to detect what is real and what is not.”
With such a close and heated election cycle in the battleground state of Arizona, the creation of convincing false narratives or the sharing of fabricated audio or video about candidates could be enough to turn the tide.
“They’re easier to make, they’re harder to detect, and people may already have a mindset to believe what they see,” Hill said. “If you’re not paying that much attention, if you see it on social media moving through your feed and it confirms something you’re thinking all along about that politician … it can help to reaffirm what you’re thinking.”
Current laws regarding digitally manipulated content present challenges in regulating misinformation during an election season, officials say. Campaign speech is greatly protected under the First Amendment, and defamation lawsuits are rare and seldom successful in political campaigns.
In light of these recent technological advancements, state and federal lawmakers across the country are attempting to address the growing problem, with individuals from both major parties working to introduce legislation that combats AI misinformation.
Changing regulations around artificial intelligence
Arizona legislators are hoping to pave the way for political candidates to take legal action against the rising threat of deepfakes in election campaigns. HB 2394, a measure proposed by state Rep. Alexander Kolodin, R-Scottsdale, is designed to address the rise of digitally manipulated media created with an intent to mislead voters.
The bill, which is working its way through committees, would, if enacted, give candidates for public office – or any Arizona citizen – the right to take legal action against digital impersonation if they can prove the content was published without consent and created with the intent to deceive.
“Artificial-generative technologies, they have a very legitimate role to play in our public discourse,” Kolodin said during a Jan. 24 House Municipal Oversight and Elections Committee meeting. “This is a bill, I believe, that really is the most thoughtful and respectful attempt at figuring out what to do about this new technology and the political and elections context that, at the same time, does not in any means infringe on the First Amendment.”
Notably, the bill doesn’t grant the power to remove deepfakes from the internet but rather assists candidates in proving misinformation to voters, with court backing. An expedited process can be obtained under specific conditions, such as an election within 180 days or material that depicts explicit or harmful content.
Kolodin designed HB 2394 as a rapid way for candidates to combat deceptive content and establish their credibility when nearing an election.
“The expedited part, that’s the court saying, ‘This is my best judgment, based on the little time that we’ve had to examine this.’ There’s an opportunity for people who are experts to weigh in in court immediately. A week is an eternity in politics, especially in a swing state,” Kolodin said.
Combating AI misinformation through election security training
As legislation works its way through the process, elected officials are finding new ways to combat the issue before election season is in full swing. Arizona Secretary of State Adrian Fontes has turned to AI tools themselves to detect AI-generated content and mitigate the problem.
Fontes’ office is hosting multiple training sessions aimed at covering crucial aspects of election security, such as verification tests and personalized authentication for audio communications. These exercises have included representatives from all 15 Arizona counties, as well as election officials, emergency management staff, law enforcement and county supervisors.
Officials worked with Fontes to create manipulated content specifically for the purpose of training election officials how to spot and combat deepfakes.
“We made a synthetic version of Secretary Adrian Fontes, and many folks did not realize at first that was not the real Adrian Fontes,” said Michael Moore, the chief information security officer for Fontes’ office. “It was a repurposed video of him, it wasn’t a legitimate video, and voice cloning is incredibly impressive these days.”
The state will host more of these training sessions as the election nears, emphasizing the importance of preparedness in the face of evolving technological threats to fair elections.
“Tools are much more sophisticated now, and you can create basically an impossible-to-discern, perfect lifelike representation of them,” Moore said. “So we wanted to make sure that folks understood this is the state of the technology today. We’re going to see this, and we need to be prepared to respond to this.”