House bill targets AI-generated comments in rulemaking
The legislation aims to ensure that public comments on pending regulations come from real people.
The House on Monday passed a bill that would task federal agencies with managing computer-generated comments in rulemaking proceedings. The Comment Integrity and Management Act of 2024 cleared the chamber without opposition on a voice vote.
The bill, sponsored by Rep. Clay Higgins, R-La., doesn't seek to block comments generated by large language models such as ChatGPT and Google's Gemini. Rather, it puts a legislative framework around existing efforts to identify and manage the flow of computer-generated comments to proceedings on Regulations.gov and elsewhere.
"The cornerstone of this bill is its commitment to ensuring that every comment submitted by electronic means comes from a real person, not an automated program," Higgins said on the House floor on Monday. "By requiring human verification, we are taking a significant step towards preserving the authenticity of public input."
Even before publicly available generative AI applications emerged, policymakers were grappling with how to treat bulk submissions in federal agency rulemaking proceedings.
"The basic issue is that it has gotten easier for people to post comments online in a rulemaking," Rep. Jamie Raskin, D-Md., said on the House floor on Monday in support of the bill. "That is a really good thing because it means that the process of implementing regulations is more accessible, more transparent, more open, and more participatory, but a number of the agencies have found, I think, what Members of Congress have found. Sometimes you get the same paragraph 100 times, 1,000 times, or 3,000 times."
In one instance, the Federal Communications Commission's docket on the repeal of net neutrality policy in 2017 generated more than 22 million comments. More than 8.5 million of those were posted via a campaign led by broadband providers who wanted to see an end to the policy, according to a probe by New York's attorney general. Another 7 million comments were traced to a single computer science student, who submitted them under computer-generated names and addresses.
The Biden administration is also concerned about the issue. Its 2023 executive order on Modernizing Regulatory Review charges the head of the Office of Information and Regulatory Affairs with considering "guidance or tools to modernize the notice-and-comment process, including through technological changes…[including] guidance or tools to address mass comments, computer-generated comments (such as those generated through artificial intelligence), and falsely attributed comments."
One new wrinkle posed by generative AI, according to Mark Febrizio, a senior policy analyst at the George Washington University Regulatory Studies Center, is whether bot-generated comments can escape detection by tools designed to identify spam comments. In a 2023 paper, Febrizio argued that existing checks on the Regulations.gov platform could thwart efforts to flood proceedings with AI-generated comments.
These checks include a CAPTCHA interface to verify that individual posters are human and, more significantly, a registration system used to authorize and track bulk comment submissions.
"Even with an unlimited supply of AI-generated content, a malicious user would quickly hit a bottleneck when trying to submit those comments on agency rules," Febrizio wrote.
Under the bill, any agency that runs its own rulemaking docket would have to adopt policies on AI-generated comments similar to the one governing Regulations.gov. Specifically, such policies would direct agencies to publish a single representative version of mass comments, rather than listing each one separately in the docket, and to publicly disclose the number of computer-generated submissions in a rulemaking proceeding.
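For illustration only, here is a minimal sketch of how a docket system might collapse identical mass comments into one representative entry with a public duplicate count. The comment record layout and the normalization rule are hypothetical assumptions for this example; neither the bill nor Regulations.gov specifies an implementation.

```python
import hashlib
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivially edited duplicates match."""
    return re.sub(r"\s+", " ", text.strip().lower())

def dedupe_mass_comments(comments: list[dict]) -> list[dict]:
    """Group comments with identical normalized text, keeping one
    representative per group plus a count of duplicates.

    Each comment is a dict with 'id' and 'text' keys -- a hypothetical
    record layout, not the Regulations.gov schema.
    """
    groups: dict[str, list[dict]] = defaultdict(list)
    for comment in comments:
        key = hashlib.sha256(normalize(comment["text"]).encode()).hexdigest()
        groups[key].append(comment)

    docket_entries = []
    for duplicates in groups.values():
        representative = duplicates[0]
        docket_entries.append({
            "id": representative["id"],
            "text": representative["text"],
            # The count is what an agency could disclose publicly.
            "duplicate_count": len(duplicates),
        })
    return docket_entries

if __name__ == "__main__":
    sample = [
        {"id": 1, "text": "Please preserve net neutrality."},
        {"id": 2, "text": "please  preserve net neutrality. "},
        {"id": 3, "text": "I support the proposed rule."},
    ]
    for entry in dedupe_mass_comments(sample):
        print(entry["duplicate_count"], entry["text"])
```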
The Office of Management and Budget is charged with issuing guidance for agencies to follow when implementing commenting policies and using "new technology to offer new opportunities for public participation in the rulemaking process." The bill also tasks the Government Accountability Office with reporting to Congress on how to identify computer-generated content, how prevalent such comments are, and what actual impact they have on rulemaking.