Agencies are on track with AI executive order deadlines, White House says
At the 90-day mark since President Joe Biden signed his sweeping artificial intelligence executive order, agencies have met the mandated timelines, with some initiatives ahead of schedule.
The White House confirmed it has met all of the 90-day benchmarks set out in President Joe Biden’s October 2023 executive order on artificial intelligence, which focuses on enlisting the private sector in public sector regulatory efforts in the absence of binding law.
Announced on Monday, the completed actions from the executive order focus on managing the security risks associated with AI-powered systems and investing in innovation. These include leveraging authorities within the Defense Production Act to oblige AI software developers to report “vital information” about their systems, namely safety test results, to the Department of Commerce.
“These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems,” the press release reads.
The Department of Commerce — which houses the National Institute of Standards and Technology, an agency at the forefront of AI regulatory efforts — also proposed a new rule that would require U.S. cloud companies to disclose foreign customers who use their products to train powerful large language models.
Nine agencies have also submitted risk assessments of their uses of AI systems to the Department of Homeland Security, assessments that will “serve as the basis for continued federal action.” Among these agencies are the Departments of Defense, Transportation, Treasury, and Health and Human Services.
The National Science Foundation also launched the pilot program of the National Artificial Intelligence Research Resource last week to further democratize access to, and education about, AI tools, drawing those resources from both government data and private sector support.
Increased hiring for initiatives like NAIRR and other AI-related federal operations was also a key part of the executive order’s provisions. Those included the Office of Personnel Management’s hiring surge for AI-focused positions in the federal government and a pooled hiring action that used a single data scientist job listing to draw relevant talent to several different agencies.
Some agencies delivered on their order-mandated actions early. The NSF launched its new Regional Innovation Engines, which focus on advancing AI research and development, earlier than anticipated. Similarly, the Office of Management and Budget published its public call for information on how AI may influence privacy impact assessments, and the Department of Energy established an office to coordinate AI and other emerging technologies, both ahead of schedule.
Experts within the AI regulatory landscape applauded the Biden administration’s actions under the order and noted that lawmakers on Capitol Hill are following suit.
“The fact that it is early 2024 and we are seeing the executive branch with a proactive, agile response to a rapidly changing tech landscape is noteworthy,” Marci Harris, the cofounder and executive director of POPVOX Foundation, told Nextgov/FCW. “As the executive branch takes proactive steps in AI safety, security, and equitable access, we also appreciate that Congress is increasing its own capacity to understand and incorporate these new technologies into its work to keep pace with the opportunities and challenges that these new technologies bring.”
Other industry voices have pointed out that the narrative that AI technologies are not subject to any regulatory oversight is inaccurate. Miriam Vogel, the president and CEO of EqualAI, an advocacy organization that works to prevent systemic bias from permeating AI systems, noted that innovations in AI technology remain subject to existing legal frameworks.
“A dangerous and prevalent myth in AI discourse is that this field is entirely unregulated,” Vogel said in a statement to Nextgov/FCW. “However, operating under this misperception could lead to liability and harm for both companies and individuals.”
Vogel added that AI and machine learning tools offer a “revolutionary impact” and, like earlier technological innovations, should be carefully developed to avoid societal risk, but they remain subject to legal enforcement in areas like civil rights, privacy and intellectual property.