Part routine 90-day regulatory update, part campaign rhetoric, the Biden administration on Monday reported on the progress of the president’s executive order (EO) on artificial intelligence (AI), with much of the focus on the security of AI models and on national security.
The EO was first announced on Oct. 30. Here are four important takeaways from Monday’s announcement:
AI developers must now report test results to the government
The Biden EO on AI used Defense Production Act authorities to compel AI developers to report AI safety test results to the Department of Commerce. Those companies must now share this information for the most powerful AI systems, and must also report on large computing clusters capable of training them.
Cloud providers must report ‘malign’ activity
If finalized as proposed by the Department of Commerce under the original EO, a new rule would require cloud providers to alert the government when foreign clients train the most powerful AI models that could potentially be used for “malign” activity.
Leading federal agencies took steps to ensure AI safety in critical infrastructure
Nine federal agencies, including the Defense Department, Transportation Department, Treasury Department, and the Department of Health and Human Services, submitted risk assessments to the Homeland Security Department. The reports aim to help ensure the United States stays ahead of the curve in integrating AI safety into critical infrastructure.
The ‘AI Talent Surge’ is under way
The AI and Tech Talent Task Force created by the Biden EO on AI has launched an aggressive effort to recruit AI talent. For example, the Office of Personnel Management has granted federal agencies flexible hiring authorities to bring on AI specialists. And organizations such as the Presidential Innovation Fellows, U.S. Digital Corps, and the U.S. Digital Service have scaled up their AI hiring.
Over the last 90 days, the administration has also launched its EducateAI initiative, which seeks to offer educational opportunities to students from the K-12 through undergraduate levels, and to advance the National Science Foundation’s work to meet the nation’s AI workforce needs.
Industry reaction to AI executive order
“It’s good to see a focus on developing homegrown talent by creating opportunities in K-12,” said Morgan Wright, chief security advisor at SentinelOne.
Wright pointed out that in a previous article for SC Media, he wrote that the best time to start closing the knowledge gap was two decades ago, and that the next best time is today.
“Providing regulatory clarity for AI is another good objective, one that has eluded the federal government to date,” added Wright.
While AI companies and cloud providers have their marching orders, other industry pros still had some questions about the language used in the EO.
John Allison, director of public sector at Checkmarx, said he wants to see details of what the government will do with the additional information the industry submits.
“I’d also like to have a definition of what foreign AI activity qualifies under ‘malign’ such that the AI providers can accurately generate the reports,” said Allison. “Like most things with AI, changes are happening at light speed and security and compliance is playing catch-up to the technology. I don’t think anyone would argue that if AI is not developed in a safe and secure manner, the consequences may be catastrophic. We are already seeing AI being used by bad actors, and there’s no reason to think that will ever stop.”
Craig Burland, chief information security officer at Inversion6, said that while we’re still a long way from control of AI being “real,” the administration has put another stepping stone on the path. Unsurprisingly, Burland said, the government started with a focus on national security and critical infrastructure, areas it can directly influence without inviting an avalanche of litigation.
“However, the testing requirement is limited to models that pose a ‘serious risk to national security, national economic security, or national public health and safety,’ narrowing who will be subject to the new rule,” explained Burland. “This is a fairly subjective scope that could see agents in dark suits and sunglasses appear in the lobby, or miss the next OpenAI altogether. AI continues to be a double-edged sword, promising benefits in innovation, design, and efficiency, but bringing with it an alarming potential for misuse and chaos.”
Mona Ghadiri, senior director of product management at BlueVoyant, added that having the National Institute of Standards and Technology (NIST) serve as the framework leader for AI safety testing makes a lot of sense because the agency already develops these kinds of frameworks for cybersecurity. There is also a lot that can be leveraged from testing practices used to meet other government requirements, such as automotive crash testing.
“My hope is we get to the point where every ‘car’ has windshield wipers and a seatbelt,” said Ghadiri. “AI is not like that yet. The interesting piece will be how groups will be certified to test their own AI — or become certified testers — and what the actual assessment from an external third party will look like in terms of length of time. Introducing these types of third-party assessments can slow down development and inhibit rapid prototyping, but we do really need it.”
Gal Ringel, co-founder and CEO at Mine, said the executive order was a meaningful step forward, especially since comprehensive AI legislation from Congress is likely not on the near horizon.
Over the next few months, the focus of AI governance should be on forming transparent working relationships with the tech companies behind the most powerful generative AI models, Ringel continued, particularly since the capability threshold that triggers these safety tests and controls is so high.
“The EU’s AI Act is not expected to formally pass until at least May, so there’s no rush to immediately institute risk assessment or data protection requirements on generative AI yet, although that time will come,” said Ringel. “As we are still in the early days of this technological shift, making sure the government can establish a working rapport with Big Tech on this issue and laying the groundwork for how these safety tests will unfold may not be a glamorous goal for the next few months, but it’s a critical one. The government and Big Tech never aligned on data privacy issues until it was too late and the government’s hand was forced by broad public support, so there cannot be a repeat of that failure or the consequences could be immeasurably more damaging when it comes to AI.”