WASHINGTON, D.C. – Bipartisan legislation authored by U.S. Senators Gary Peters (D-MI), Mike Braun (R-IN), and James Lankford (R-OK) to require transparency around the federal government’s use of artificial intelligence (AI) has advanced in the Senate. The bill would require federal agencies to notify individuals when they are subject to decisions made by certain AI and other automated systems, and to establish an appeals process that ensures human review of AI-generated decisions that may negatively affect individuals. The legislation came after Peters convened a hearing with subject matter experts on the government’s use of AI and how lawmakers can increase transparency around these technologies. The bill was advanced by the Senate Homeland Security and Governmental Affairs Committee, where Peters serves as Chair, and now moves to the full Senate for consideration.
“As more federal agencies use artificial intelligence to better serve the public, taxpayers deserve to know when these systems are involved in making decisions that directly impact their lives,” said Senator Peters. “This bipartisan bill will improve transparency around the federal government’s use of this technology, and ensure people have an opportunity to get the answers they deserve about certain decisions made by AI systems.”
“No American should have to wonder if critical decisions in the government are being made by artificial intelligence (AI). The federal government needs to be proactive and transparent with AI utilization and ensure that decisions aren’t being made without humans in the driver’s seat,” said Senator Braun.
“Artificial intelligence is a powerful tool to improve the efficiency and effectiveness of our federal government, but we must always keep security, privacy, and any unintended consequences in mind before we turn any process or decision over to a computer,” said Senator Lankford. “The federal government can and should thoughtfully integrate new technology to help improve customer service for Americans. But agencies should be transparent about when, where, and how we are interacting with AI to ensure continuous oversight and accountability for how these tools impact Americans.”
The federal government is already using AI to interact with and make decisions about the public, and use of these systems is only expected to grow. While AI systems can help improve government efficiency, they can also pose risks if deployed improperly. For example, a recent study found that the Internal Revenue Service used an automated system that was more likely to recommend Black taxpayers for audits than white taxpayers. People who unknowingly interact with AI are often confused or frustrated by how or why these systems make certain determinations. The senators’ bipartisan legislation will increase transparency around the government’s AI use and provide increased opportunities for the public to appeal decisions that may be inaccurate or biased.
The Transparent Automated Governance Act (TAG Act) requires the Director of the Office of Management and Budget to issue guidance to agencies on implementing transparency practices relating to the use of AI and other automated systems. The guidance would require agencies to notify individuals when a critical decision about them is made using an augmented critical decision process. Finally, it would instruct agencies to establish human review appeals processes for individuals who receive an adverse critical decision made using an augmented critical decision process.
Peters and Braun’s bill to create an AI training program for federal supervisors and management officials also recently advanced in the Senate.
###