
New requirements for security, accountability in contractor AI work proposed

Eric Schmidt, executive chairman of Alphabet Inc., Google’s parent company, speaks during a National Security Commission on Artificial Intelligence (NSCAI) conference November 5, 2019, in Washington, DC. A Senate bill could create new security and accountability requirements for contractors that develop or manage AI systems for the government.

Leaders on the Senate Homeland Security and Governmental Affairs Committee have introduced legislation that would put new constraints on the way contractors manage and protect federal artificial intelligence applications.

The GOOD AI Act, introduced by HSGAC Chair Gary Peters, D-Mich., and ranking Republican Rob Portman, R-Ohio, would compel the director of the Office of Management and Budget to form an “Artificial Intelligence Hygiene Working Group” focused on updating federal contracting standards for AI projects and incorporating the latest research on best practices.

The group would be given a specific mandate to align future contracting language with the AI in Government Act, which required the creation of an AI Center of Excellence at the General Services Administration and directed OMB to issue guidance to federal agencies on current and future AI projects.

The new bill from Peters and Portman would give OMB a year to align contracting rules with recently passed laws like the AI in Government Act; ensure contractors secure the training data, algorithms and other components of their AI systems; and “address the ownership and security of data and other information created, used, processed, stored, maintained, disseminated, disclosed, or disposed of by a contractor or subcontractor on behalf of the federal government.”

“While artificial intelligence applications have the potential to strengthen our national security, we must ensure data collected by this technology is secure, used appropriately, and does not compromise the privacy and rights of Americans,” Peters said in a statement. “This bipartisan bill will help ensure that federal contractors are using artificial intelligence properly and for the benefit of the country — and that the information collected through these technologies is not misused.”

The bill would cover software or applications that rely in whole or in part on machine learning and other forms of artificial intelligence, systems designed to research or develop AI capabilities and any system where such capabilities are “integrated into another system or agency business process, operational activity, or technology system.”

The director of OMB would be empowered to select representatives from other interagency councils to sit on the working group, but would also have to incorporate input from a variety of sources: an executive order released late last year on promoting trustworthy AI in government, a government-commissioned AI report released earlier this year, and privacy, security and civil liberties stakeholders inside and outside of government. The working group would sunset after 10 years.

The legislation seeks to harmonize federal contracting rules and language with a number of positions and policies the U.S. government has adopted over the past year around the responsible and secure use of artificial intelligence.

A report released in April by the National Security Commission on Artificial Intelligence urged U.S. policymakers to align AI systems and uses by federal agencies and law enforcement with “American values and the rule of law,” including mechanisms for due process, individual privacy and nondiscrimination. It also called for AI systems to be engineered to be reliable, transparent and easy to understand or interpret, and to allow for auditing.

Crucially, it specified that in order to be effective, these principles must apply to work being carried out by federal contractors.

“These recommended practices should apply both to systems that are developed by departments and agencies, as well as those that are acquired,” the authors recommended. “Systems acquired (including commercial, off-the-shelf systems or those acquired through contractors) should be subjected to the same rigorous standards and practices — whether in the acquisitions or acceptance processes.”

An executive order issued in late 2020 took many of the same positions and called for agencies to “ensure the safety, security, and resiliency of their AI applications, including resilience when confronted with systematic vulnerabilities, adversarial manipulation, and other malicious exploitation.”

Derek B. Johnson

Derek is a senior editor and reporter at SC Media, where he has spent the past three years providing award-winning coverage of cybersecurity news across the public and private sectors. Prior to that, he was a senior reporter covering cybersecurity policy at Federal Computer Week. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
