SAN FRANCISCO – Current and former U.S. government agency leaders stressed the importance of public and private guardrails on AI and voiced concern that geopolitical strife is increasingly creating existential cybersecurity threats to U.S. critical infrastructure.
At a Tuesday keynote at RSA Conference, Department of Homeland Security Secretary Alejandro Mayorkas said grappling with the impact of AI is a growing priority. To that end, he said this week marked the first meeting of the DHS AI Safety and Security Advisory Board, which includes the CEOs of OpenAI, Microsoft, Alphabet and Nvidia. Central to the goal of the blue-ribbon board, announced last month, is partnering with the government to understand the impact AI is having on the defense of U.S. critical infrastructure.
Mayorkas was short on details of the inaugural meeting, which was closed to the public, but said there was "very robust discussion" around "what the definition of 'safe' is" when it comes to AI use, and around how to handle the "dual use" of AI by both defenders and adversaries.
He added that agenda items included laying out the first principles that would ground the board's work and defining the roles and responsibilities each voice at the table would have.
Humane Intelligence CEO and Co-founder Rumman Chowdhury, who is also a U.S. Science Envoy and formerly served as director of META (ML Ethics, Transparency, and Accountability) at Twitter, now X, joined Mayorkas on stage as part of the panel discussion. She addressed concerns that the AI Safety and Security Advisory Board appeared to reflect only the largest AI stakeholders.
Chowdhury emphasized that the board is “more than just heavy hitters in tech.” Mayorkas stressed that it’s necessary to include the voices of a range of tech companies tasked with handling and protecting critical data and assets. Mayorkas said the board also includes prominent academics and civil rights leaders, with civilians comprising nearly half of the board.
The DHS AI Safety and Security Advisory Board will continue to meet quarterly, but “converse daily,” Mayorkas said. The DHS secretary also took the opportunity to appeal to the cybersecurity professionals in the RSAC audience to consider bringing their skills to the public sector in the future.
CISA director, past director stress ‘secure by design’ imperative
Continuing the theme of private and public efforts to secure AI, a second Tuesday keynote panel featured Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly and former Director Chris Krebs, who now serves as the chief intelligence and public policy officer at SentinelOne.
The talk, moderated by Washington Post Digital Threats Reporter Joseph Menn, was titled “A World On Fire: Playing Defense in a Digitized World…and Winning,” and covered topics including ransomware, China’s attacks on U.S. critical infrastructure, AI and CISA’s “Secure by Design” initiatives.
Easterly highlighted financially motivated cybercrime, including ransomware, and China-backed threat actor activity as two of the threats expected to have a growing impact in the coming years. The current CISA director noted that some estimates indicate global cybercrime may cost businesses as much as $10 trillion by next year.
Meanwhile, China nation-state threat actors such as Volt Typhoon were recently observed shifting strategy from espionage to “burrowing into our critical infrastructure,” poised to weaken U.S. defenses in the event of future conflict, Easterly said.
Easterly testified about Chinese cyberattacks on critical infrastructure last week in front of the House Appropriations Subcommittee on Homeland Security, calling these attacks the most serious threat to the nation she has seen in her more than three-decade-long career.
The threats of ransomware and critical infrastructure attacks share a common solution in the adoption of secure by design principles, Easterly said, as both ransomware gangs and nation-state threat actors are constantly on the lookout for security vulnerabilities to exploit for initial access into systems.
One of the current challenges in this regard is that the secure by design pledge promoted by CISA is voluntary – there is currently a lack of policy enforcement to drive software manufacturers to prioritize security more heavily when designing their products. Krebs said a voluntary sense of responsibility from businesses is only one of four "levers" that will ultimately motivate businesses to adopt secure by design principles.
The other three levers identified by Krebs are civil litigation, regulatory action and legislation. Krebs admitted that the European Union, rather than the United States, is currently "setting the agenda" for crucial security initiatives, pulling those levers by passing laws like the AI Act instead of relying on voluntary responsibility alone.
On the topic of AI, Krebs said he expects to see "waves" of AI "combat" between attackers and defenders, but believes the defenders are poised to come out on top based on the AI innovation currently coming out of the private sector. While cybersecurity companies are getting ahead of the curve in developing AI-powered security tools, Microsoft's recent report on the use of large language models like ChatGPT by nation-state threat actors showed only basic dabbling in AI capabilities for tasks such as research, social engineering and translation.
Easterly said AI has the potential to become one of the “most powerful weapons of this century,” with the hope that defenders will be able to leverage its power effectively and responsibly.