Social media is buzzing with the news that more than 1,000 tech and AI luminaries signed a petition calling for the industry to observe a six-month moratorium on the training of artificial intelligence (AI) systems more powerful than OpenAI's GPT-4.
The signers are a diverse list that includes tech titan Elon Musk, tech legend Steve Wozniak, and former Democratic presidential candidate and futurist Andrew Yang.
In a Twitter post, Yang highlighted a conversation on his podcast with New York Times technology reporter Kevin Roose, author of "Futureproof," in which Roose shared a survey that asked whether AI automation would displace millions of jobs by 2030 (roughly 75% to 80% said "yes") and whether AI would replace the respondents' own jobs by 2030 (only 20% said "yes").
“We have this idea that these tools are amazing and powerful and creative and disruptive and will change the entire economy in the next 10 years, but not my job,” said Roose. “I’m special. I’m unique. I’m creative. I’m human. I’m untouchable. So I do think there’s a lot of wishful thinking and almost hubris around this. I’ve made peace with my own eventual obsolescence, I just hope that the robot overlords will be merciful when they put me out of a job.”
Some security pros saw this moment as mostly media hype and talked about the need for the industry to develop a more level-headed approach to AI development — and even get government involved.
'The cat is out of the bag,' 'the genie is out of the bottle'
“Anything we do to stop things in the AI space is probably just noise,” said Andrew Barratt, vice president at Coalfire. “It’s also impossible to do this globally in a coordinated fashion. AI will be the productivity enabler of the next couple of generations. The danger will be watching it replace search engines and then become monetized by advertisers who ‘intelligently’ place their products into the answers.”
Barratt said the spike in fear was triggered largely by the recent attention given to ChatGPT. Rather than pause, he said, the industry should encourage knowledge workers around the world to look at how they can best use the increasingly consumer-friendly AI tools to boost productivity.
“Those who don’t will be left behind,” said Barratt.
Dan Shiebler, head of machine learning at Abnormal Security, said he found it interesting how diverse the signers and their motivations are. For example, Shiebler said Musk has been quite vocal in his belief that AGI (computers figuring out how to make themselves better and therefore exploding in capability) is an imminent danger, while AI skeptics such as Gary Marcus are clearly coming to the letter from a different angle. Marcus recently tweeted that, all cynicism about Musk aside, “he appears to me to be genuinely worried.”
“Personally, I don’t think this letter will achieve much,” said Shiebler. “The cat is out of the bag on these large language models. The limiting factor in generating them is money and time, and both of these will fall rapidly. We need to prepare businesses to use these models safely and securely, not try to stop the clock on their development.”
John Bambenek, principal threat hunter at Netenrich, added that while it’s doubtful that anyone will pause anything, there’s a growing awareness that consideration of the ethical implications of AI projects has lagged far behind the speed of development.
“I think it’s good to reassess what we are doing, and the profound impacts it will have, as we have already seen some spectacular fails when it comes to thoughtless AI/ML deployments,” Bambenek said.
In a perfect world, it's a great idea to slow down and get all our ducks in a row, said Kevin Bocek, vice president of ecosystem and community at Venafi. Bocek said the rush to adopt and develop AI carries serious implications that can and will have lasting ramifications.
“But in practice, it’s a silly and unworkable idea,” said Bocek. “That genie is well and truly out of the bottle. It’s like saying we should not use encryption because criminals might use it; all the while criminals are using it. You will never get a global agreement to put the brakes on AI — even if you did, people may agree publicly, but development wouldn’t cease. Countries will naturally continue to develop their advantage. So those that did pause would only find themselves being left behind, giving countries like China the advantage. We may also miss out on all the advantages that AI can deliver to society.”
These AI tools are powerful because they can understand context in both the question and the information, explained Baber Amin, COO at Veridium. Amin said we need to lobby our elected representatives to set up oversight at the state and federal level for the responsible use and deployment of AI-based technology.
“Self-governance should not be an option as we have already seen things go awry with Microsoft’s chat bot,” said Amin. “Responsible AI usage in the context of search engines should focus on transparency, fairness, user privacy, accuracy, accessibility, and social responsibility."
Marc Rotenberg, founder of the Center for AI and Digital Policy, said in a LinkedIn post that one reason he signed the letter was its recommendation that AI developers work with policymakers to accelerate the development of robust AI governance systems.
Rotenberg said those systems should include many of the recommendations from the Center for AI and Digital Policy:
- Respect the Universal Guidelines for AI.
- Implement the OECD/G20 AI Principles.
- Implement the UNESCO Recommendation on AI Ethics.
- Finalize and implement the EU AI Act.
- Finalize and implement the Council of Europe AI Treaty.
- Legislate and implement the OSTP AI Bill of Rights.
“The AI research community needs to be aware that there are already robust governance frameworks for AI,” said Rotenberg. “There’s a lot of work ahead for implementation. Support from the AI community will help.”