MILWAUKEE – The rapid advancement of artificial intelligence has opened the doors to a flood of new possibilities in politics – including convincing deepfakes and the rapid spread of disinformation.
“We should consider (AI) to be a significant risk to American democracy,” said David Harris, a professor at the University of California, Berkeley, who studies AI, misinformation and democracy.
The growing availability of AI tools means that creating realistic – but fraudulent – video, audio and photos of politicians is cheaper and more accessible than ever.
“By the time November election rolls around, you’ll hardly be able to tell the difference between reality and artificial intelligence,” says a deepfake version of Kari Lake, the former GOP nominee for Arizona governor now seeking a seat in the U.S. Senate.
The video was created by media outlet Arizona Agenda as a way to warn voters how deceptive deepfakes can be.
At the Republican National Convention, Microsoft experts warned deepfakes can influence elections and discussed ways to mitigate the dangers.
The audience at the workshop correctly guessed whether images were real or AI-generated on four of six tries – “a great score,” according to Ashley O’Rourke, a Microsoft executive.
“2024 is going to be the first cycle where the issue of deepfakes is going to play a more essential role,” she said.
While there is no “silver bullet” to combat misinformation and deepfakes, consumer literacy can help immunize voters, said Ginny Badanes, who leads Microsoft’s Democracy Forward program. That entails healthy skepticism and a willingness to seek out sources to confirm that what voters see or hear is real.
“If we can get to a place with a kind of resilience like that, I think that we will find that a lot of attempts that people make to deceive us, to defraud us, we would be a lot less susceptible to,” she said.
Microsoft plans a similar workshop when Democrats convene next month in Chicago, part of a growing effort by election officials and tech companies to improve AI literacy and sensitize voters to the new realities.
In December, Arizona Secretary of State Adrian Fontes collaborated with election and technology experts to prepare local election officials for AI-based disruptions.
One exercise simulated an emergency in which officials were ordered via audio to keep polling locations open. They couldn’t be sure if the audio was AI-generated and practiced how to respond.
“We need to be literate, we need to be concerned and we need to be prepared,” said Toshi Hoo, director of the Emerging Media Lab at the Institute for the Future.
Hoo worked with Fontes to create the tabletop exercise. He said it’s critical that officials and citizens understand AI’s capabilities and strengthen interpersonal relationships and communication to improve verification and trust.
Generative AI can exacerbate existing threats to elections, said Noah Praetz, president of The Elections Group, which advises elections administrators on security and procedures.
Praetz also collaborated on the Arizona exercise, which he said reinforced the importance of tools and tactics already used to prevent election interference.
“It’s stuff that they have to do anyway,” Praetz said. “One, relentlessly communicate with all stakeholders and, two, put their shields up from a cyberdefense standpoint.”
To avoid falling for disinformation created or manipulated by AI, Hoo encouraged citizens to spend time using AI tools to better understand their capabilities and become more literate.
“It’s important for folks to become aware of what the new kind of capabilities are because they’re happening so quickly,” he said.
Fontes created an Artificial Intelligence and Election Security Advisory Committee, which includes Hoo, Harris, Praetz and a dozen other experts. The panel focuses on how to safeguard elections from potential disruptions from AI, as well as how AI might be used as a tool.
In February, Microsoft and 19 other tech companies signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.
Badanes called it a “quite significant” step.
“It’s different than what we’ve seen in other cycles with technology companies really defensive after the fact. In this case we are leaning into the challenge and acknowledging our role in it,” she said.
The companies have pledged to develop technology to mitigate risks related to deceptive AI content. That includes beefing up efforts to find and quash such content on their platforms.
Last year, the White House secured voluntary pledges from seven leading AI companies to develop the technology responsibly. Among the techniques is embedding digital watermarks to make it easier to verify legitimate content.
Harris said it’s not enough.
“It’s only with regulation will we get them to actually do what needs to be built to protect democracy from all of the ways that AI could be used to interfere with it,” Harris said.
In May, Arizona Gov. Katie Hobbs signed a new law that allows candidates and ordinary citizens to sue over “digital impersonations” made without their consent. There is no comprehensive federal regulation on AI.
Hoo emphasized that it’s important to be cautious and prepared, but there’s no way to know how rampant AI-based mischief will be in the 2024 elections.
“If people stopped believing anything is real because we’ve oversold or over-spoken the risks of deepfakes, that’s a problem, also,” he said.