The White House released a new executive order (EO) this week that seeks to increase federal oversight of rapidly expanding AI systems, promote the safety and security of AI development, and reduce the risks these systems pose to consumers and national security.
The EO has arrived at a critical time, as artificial “general” intelligence has become a reality faster than many expected. Many people were surprised this year by the transformational power of ChatGPT, but the AI advances coming in the year ahead promise to be exponentially more powerful. The implications of these intelligent systems are world-changing, in good ways and bad, and the government needs to act fast if it hopes to manage those impacts effectively.
The release of the EO stands as an important step in the right direction, setting us on a path to harness the enormous potential of AI to make our lives better while keeping security and safety top of mind. It introduces several components that are certain to improve the way we create and interact with AI, but it also leaves a few gaps and areas for continued development as organizations across the public and private sectors put the order’s guidelines into action.
The EO’s advantages
Enhanced AI safety and security is perhaps the greatest and most obvious benefit the executive order brings to the table, but several additional components beneath it create other kinds of positive impact.
Where the EO falls short
The industry will likely find it challenging to determine the right level of transparency around red-teaming. The EO sets more rigorous standards for red-team testing before the public release of AI models, and, most notably, it stipulates that developers share their safety test results with the U.S. government. While developers may agree to share how they are tackling vulnerabilities, few will want to proactively disclose what those vulnerabilities are, for fear of exposing their organizations to risk or scrutiny.
While the EO effectively covers ways to promote safe AI development, it’s missing a component around protections against adversarial AI. As an email security vendor, we have a front-row seat to how attackers have already weaponized generative AI to scale the volume of their attacks, and government agencies are highly attractive targets given their access to sensitive data and control over critical infrastructure. The EO does little to acknowledge the risks posed by bad actors using these tools, or to prevent them from doing so.
By the same token, there are untapped opportunities to proactively use AI for good. The EO largely focuses on minimizing the risk of “bad AI,” but there’s enormous potential for “good AI” to help in this fight. As cybersecurity defenses become increasingly AI-enabled, government bodies should consider ways to nurture the development of offensive AI.
At the end of the day, the EO is a set of guidelines, rather than a permanent law, which makes it difficult to effectively enforce. While it’s helpful for steering the AI industry in the right direction, we’ll still need to develop a practical implementation framework.
Additionally, we have to watch that we don’t over-regulate. Too much regulation could slow the pace of AI innovation, particularly for AI start-ups that may not have the capital to meet extensive testing and regulatory requirements the way the AI giants can. Over-regulation could stifle grassroots innovation in AI and continue to give adversaries the upper hand.
It will be interesting to see what tangible impacts the EO has as federal agencies begin to act on its guidelines. While the EO is only a first step, we can bet it’s a step in the right direction overall. There has never been a more critical time to rally the entire technology ecosystem around building stronger safety, security, and trust in AI systems.
Mike Britton, chief information security officer, Abnormal Security