America’s election chiefs are worried AI is coming for them

Election officials are important enough to fake — and public enough to make it easy to do — but anonymous enough that voters may easily be tricked.

Election officials fear that they could be targets of AI-generated deepfakes leading up to the election, in an effort to fool the public and their colleagues. | POLITICO illustration by Emily Scherer/Photos by iStock

A false call from a secretary of state telling poll workers they aren’t needed on Election Day. A fake video of a state election director shredding ballots before they’re counted. An email sent to a county election official trying to phish logins to its voter database.

Election officials worry that the rise of generative AI makes this kind of attack on the democratic process even easier ahead of the November election — and they’re looking for ways to combat it.

Election workers are uniquely vulnerable targets: They’re obscure enough that nobody knows who they really are, so unlike a fake of a more prominent figure — like Joe Biden or Donald Trump — people may not be on the lookout for something that seems off. At the same time, they’re important enough to fake and just public enough that it’d be easy to do.

Combine that with the fact that election officials are still broadly trusted by most Americans but lack an effective way to reach their voters, and a well-executed fake of them could be highly damaging and hard to counter.

“I 100 percent expect it to happen this cycle,” New Mexico Secretary of State Maggie Toulouse Oliver said of deepfake videos or other disinformation being spread about elections. “It is going to be prevalent in election communications this year.”

Secretaries of state gathered at the National Association of Secretaries of State winter meeting last month told POLITICO they have already begun working AI scenarios into their trainings with local officials, and that the potential dangers of AI-fueled misinformation will be featured in communication plans with voters.

Election officials have already spent the last few years struggling to combat an increasingly toxic election environment in which misinformation has fueled both public distrust of the electoral system and physical threats against election workers. Now they're worried AI will make that challenge even more unmanageable.

“Our staff is in conversation with a lot of folks around the country,” said Arizona Secretary of State Adrian Fontes, who publicized a training late last year that included a deepfaked version of himself spreading false information. “It has a lot of potential to do a lot of damage.”

One of the particular threats of AI impersonations of election officials is that they’re used to mislead other election workers. | Jason Connolly/AFP via Getty Images

The technology has improved so rapidly that people often don't realize how easily and effectively someone can be impersonated by AI. When the Brennan Center for Justice, a good-government group, runs its election AI trainings (it helped arrange the one in Arizona), it gives participants a tangible example of AI misinformation: after recording participants during the tabletop exercise, the group creates deepfakes of them delivering misleading messages to the public and plays the clips back later in the training.

“To see it, as opposed to hearing about it, and to see it with people that you know or maybe even of yourself, brought home that this isn’t some science fiction,” said Lawrence Norden, the senior director of the group’s elections and government program.

He said the Brennan Center plans to run the exercise in other states this year. The Cybersecurity and Infrastructure Security Agency, a federal agency charged with cybersecurity and protection of “critical infrastructure” — including election systems — has also begun working on incorporating AI scenarios into election training.

Cait Conley, a senior adviser to CISA, said in a statement that the agency has heard directly from election officials about "potential risks of generative AI and its ability to make them, personally, targets." Generative AI has the potential to intensify "cyber, physical, and operational risks," she said, and the agency has conducted and will continue to conduct tabletop exercises and trainings on AI.

One of the particular threats of AI impersonation of election officials is that it can be used to mislead other election workers. For example, a faked secretary of state could push out false last-minute changes during the organized chaos of Election Day, confusing local election administrators or poll workers and disrupting voting. Or AI could be used to mimic a colleague in an attempt to gain unauthorized access to important systems.

“We’re seeing a little smarter phishing emails because AI does make it a bit more advanced,” said Carly Koppes, the clerk and recorder of Weld County, Colorado. “They may be trying to test some waters and ramp it up for later this year.”

Election administrators are preparing their responses for when the AI-driven attacks do come.

Several election administrators said the emphasis will be on responding quickly to misinformation, given how easy it will be for bad actors to generate even greater — and more convincing — amounts of misleading content. Many have plans for public education campaigns. They also plan to double down on the misinformation-fighting programs and tactics they’ve developed in recent years.

Some solutions are surprisingly simple. Kansas Secretary of State Scott Schwab, for example, pointed to the long-running push to move local election offices onto .gov websites and email addresses to prove their authenticity to voters.

Source: https://www.politico.com/news/2024/03/11/secretary-state-ai-election-misinformation-00146137

 
