President Joe Biden discusses artificial intelligence in San Francisco, California, June 20, 2023. Biden is set to receive a national security memorandum required under his executive order on AI. ANDREW CABALLERO-REYNOLDS/AFP via Getty Images

Biden to receive AI national security memo outlining forbidden uses, opportunities for innovation

The memorandum expected to be delivered Friday to President Joe Biden will build upon existing artificial intelligence guidance while highlighting workforce needs and prohibited use scenarios.

President Joe Biden is expected to receive a national security memorandum on Friday outlining the specific risks artificial intelligence technologies can pose to the U.S. national security posture, one that threads the needle between encouraging experimentation with AI systems and limiting the contexts in which they can be deployed, several sources with knowledge of the memorandum’s contents told Nextgov/FCW.

Required under Biden’s October 2023 executive order on AI, the forthcoming memorandum is designed to "develop a coordinated executive branch approach to managing AI’s security risks," and it is expected to build on previous guidance issued by the Office of Management and Budget and on international commitments discussed at Bletchley Park in November 2023 and at G7 meetings.

“This [memorandum] is focused on national security systems, which exist in military and intelligence agencies, but also some of FBI’s and DHS’s systems also will qualify,” a person familiar with the expected contents of the memo said.

On the government contracting side, the memorandum cannot make any changes to AI procurement procedures, but it will likely carry “significant implications” for cloud service providers and frontier model developers, who will need a thorough understanding of how best to responsibly deploy these technologies.

Securing U.S. leadership in AI innovation and standardization is also a likely focal point of the memo, which is expected to address domestic workforce challenges. 

“In addition to underscoring the strategic focus on talent development as essential for maintaining technological leadership, a heavy focus will be on talent development within the United States and bringing top talent to the United States,” a second person with knowledge of the memorandum’s contents said. “This is seen as critical for enhancing the nation's competitive edge in AI technologies.”

The memo is also expected to deal with the energy demands of AI computing and how best to balance those demands with the policy push for clean energy.

The memo is expected to address how AI should not be used in government operations. The first source said the memo will likely include a short list of “prohibited uses” of AI systems, such as operating nuclear weapons or tracking constitutionally protected activity, like free speech.

“High impact” use cases of AI will also be outlined in the memo. Examples of high-impact AI deployment will likely include risky scenarios — such as real-time biometric tracking and identifying individuals as threats to national security — that are not prohibited but demand greater oversight.

“Those high impact uses will be subject to various governance and risk management practices that will be similar to those in the OMB memo, though depart from them in some ways,” the first source said. 

Although the memo will initially be classified, the Biden administration is angling to declassify as much of it as possible at a later date for broader accessibility, the second source said.

Experts in the national security field say the memorandum will be important in setting the tone for how the government responds to both the risks and the advantages offered by AI technologies.

“What you're looking at here are government capabilities that really affect fundamental freedoms and rights: who they decide to investigate, who they decide to surveil, who they allow to come into the country, who they designated as a national security or public safety threat. So these are things that are really important to individuals and really affect their lives,” Faiza Patel, co-director of the Liberty and National Security Program within New York University’s Brennan Center for Justice, told Nextgov/FCW. “So I think it's an incredibly high-stakes document which hasn't gotten as much attention, I think, as some of the other AI work.”

Patel noted that within national security organizations, enforcement of safeguards often relies on internal mechanisms. She said that bringing in more robust external oversight to ensure the safe deployment of AI technologies would help federal agencies preserve civil liberties as they integrate AI.

“I would be pleased to see strong guardrails for high-risk systems. I would be pleased to see a robust list of high-risk systems, but I do question whether there are effective mechanisms inside the government to make sure whether those rules and safeguards are actually being followed,” Patel said.