AI is creating ‘more sophisticated’ but not unprecedented election threats, DHS official says
AI is likely to create more convincing phishing campaigns but is “not necessarily introducing a new threat or risk in and of itself,” the official said.
Artificial intelligence tools are exacerbating cyber threats to election systems and personnel and helping to spread more sophisticated misinformation about the voting process but are not yet presenting novel risks to election infrastructure, a top Department of Homeland Security official said on Friday.
Speaking at an event at the Center for Strategic and International Studies think tank, Iranga Kahangama — assistant secretary for cyber, infrastructure, risk and resilience at DHS — said AI will lead to “some more sophisticated, more precise attempts” by nefarious actors to interfere with future elections, but added that officials “see artificial intelligence in the election space as not necessarily introducing a new threat or risk in and of itself.”
Common threats to the voting process in previous election cycles have included the rapid spread of mis- and disinformation, phishing attempts and the creation of false videos and audio. Kahangama said AI technologies largely present “a means to more hyper-personalized, or a faster means, with which an adversary could potentially disrupt or interrupt an election” using these familiar tactics.
Kahangama said that phishing campaigns that use generative AI to produce more polished text, for instance, “may be a little bit more believable” and can increase the chances of election personnel clicking on malicious links. This access could also result in additional follow-up campaigns that push “false claims of equipment being defective or being manipulated.”
More convincing AI-generated content could also fuel further threats of violence and intimidation against election workers, pushing personnel to leave their jobs and resulting in slower vote tabulations that could undermine trust in the validity of election results.
The ability of emerging technologies to generate realistic content also presents a growing threat to elections, Kahangama said, citing a robocall that New Hampshire residents received ahead of the state’s presidential primary in January that featured an AI-generated voice of President Joe Biden.
“We see that threat, we don’t think it’s necessarily a new one, but it’s gonna make us need to be a little bit faster,” he added.
To enhance officials’ response times, Kahangama said state and local election personnel should continue to bolster their cybersecurity practices and utilize resources from the Cybersecurity and Infrastructure Security Agency to educate workers about the risks posed by AI-generated content.
CISA launched a website in February to provide state and local officials with resources for defending against cyber and physical threats and also released guidance in January on the risks posed by AI to the election process.
Kahangama said CISA and DHS as a whole “really want to focus on elevating state and local voices” when it comes to combating these threats, largely by empowering them “to be the ones refuting this information as it affects them.”
In the case of the AI-generated robocalls in New Hampshire, Kahangama noted that the state attorney general’s office took the lead in tracking down and identifying the culprit.
“I think that's a very appropriate and meaningful way to deal with it,” he added. “We have capable resources on the ground in state and locals, and they are going to be empowered by the federal government.”