WASHINGTON, D.C. – U.S. Senators Gary Peters (D-MI), Chairman of the Homeland Security and Governmental Affairs Committee, Mike Braun (R-IN), and James Lankford (R-OK) introduced bipartisan legislation to require transparency in the federal government’s use of artificial intelligence (AI). The bill would require federal agencies to notify individuals when they are interacting with, or subject to decisions made using, certain AI or other automated systems. It also directs agencies to establish an appeals process to ensure human review of AI-generated decisions that may negatively affect individuals. The legislation comes after Peters convened a hearing last month with outside experts on the federal government’s use of AI and how lawmakers can increase transparency around these technologies.
“Artificial intelligence is already transforming how federal agencies are serving the public, but government must be more transparent with the public about when and how they are using these emerging technologies,” said Senator Peters. “This bipartisan bill will ensure taxpayers know when they are interacting with certain federal AI systems and establishes a process for people to get answers about why these systems are making certain decisions.”
“No American should have to wonder if they are talking to an actual person or artificial intelligence when interacting with the government. The federal government needs to be proactive and transparent with AI utilization and ensure that decisions aren’t being made without humans in the driver’s seat,” said Senator Braun.
“Artificial intelligence is a powerful tool to improve the efficiency and effectiveness of our federal government, but we must always keep security, privacy, and any unintended consequences in mind before we turn any process or decision over to a computer,” said Senator Lankford. “The federal government can and should thoughtfully integrate new technology to help improve customer service for Americans. But agencies should be transparent about when, where, and how we are interacting with AI to ensure continuous oversight and accountability for how these tools impact Americans.”
The federal government is already using AI to interact with and make decisions about the public, and its use of these systems is only expected to grow. While AI systems can help improve government efficiency, they can also pose risks if deployed improperly. People who unknowingly interact with AI are often confused or frustrated by how or why these systems reach certain determinations. The senators’ bipartisan legislation will increase transparency around the government’s use of AI and give the public more opportunities to appeal decisions that may be inaccurate or biased.
The Transparent Automated Governance Act (TAG Act) requires the Director of the Office of Management and Budget to issue guidance to agencies on implementing transparency practices for the use of AI and other automated systems. This guidance would direct agencies to notify individuals when they are interacting with certain automated systems. It would also require agencies to notify individuals when a critical decision is made about them using an augmented critical decision process. Finally, the guidance would instruct agencies to establish human review appeals processes for individuals who receive an adverse critical decision made using an augmented critical decision process.
Peters and Braun’s bill to create an AI training program for federal supervisors and management officials also recently advanced in the Senate.
###