
Changes to controversial California AI safety bill fail to satisfy critics


A highly controversial California AI safety bill passed the state’s Appropriations Committee on Thursday, but despite several amendments designed to appease concerned voices in the tech industry, some critics said the changes don’t go far enough to prevent the bill from stifling AI innovation, startups and open-source projects.

California Senate Bill 1047 (SB 1047) would put the onus on AI developers to prevent AI systems from causing mass casualties — for example, through the AI-driven development of biological or nuclear weapons — or major cybersecurity events causing more than $500 million in damage.

The bill would only apply to AI models trained using more than 10^26 floating-point operations (FLOP) of computing power at a cost of at least $100 million, and it imposes various requirements, including security testing and audits, security incident reporting, and the implementation of a “kill switch.”
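To make the coverage thresholds concrete, the following minimal Python sketch shows how the two conditions described above might combine; the function name, inputs and exact comparisons are illustrative assumptions, not language from the bill itself.

# Hypothetical illustration of SB 1047's "covered model" thresholds as
# described in this article; names and comparison details are assumptions.
COMPUTE_THRESHOLD_FLOP = 10**26   # total floating-point operations used in training
COST_THRESHOLD_USD = 100_000_000  # minimum training cost in dollars

def is_covered_model(training_flop: float, training_cost_usd: float) -> bool:
    # A model is covered only if it meets both the compute and cost thresholds.
    return (training_flop > COMPUTE_THRESHOLD_FLOP
            and training_cost_usd >= COST_THRESHOLD_USD)

print(is_covered_model(3e26, 150_000_000))  # True: exceeds both thresholds
print(is_covered_model(5e24, 20_000_000))   # False: a smaller, startup-scale run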

The bill would also entitle the State of California to sue developers whose covered models cause disaster incidents, i.e., the aforementioned mass casualty events or cyberattacks.

“In my opinion, this is a complete piece of theater and a naked attempt to grab the zeitgeist of a passing moment. And overall, seems to be just another useless bit of tech regulation that penalizes small players and incentivizes tech giants while accomplishing next to nothing,” Dane Grace, technical solutions manager at cybersecurity risk management company Brinqa, told SC Media.

Previously, the bill had even more stringent legal standards and penalties, potentially subjecting developers to perjury charges for submitting misleading safety test certifications and to lawsuits for failing to uphold certain security practices, before a security event had even occurred.

The new version of the bill also alters its language to require “reasonable care,” rather than “reasonable assurance,” that AI models do not pose a catastrophic risk. The changes were made, in part, at the request of AI company Anthropic, according to bill sponsor Sen. Scott Wiener, D-San Francisco.

The computing power and cost thresholds for “covered models” have not quelled fears that the bill’s requirements could put an undue burden on startups. One letter to Wiener from a legal representative of the venture capital firm Andreessen Horowitz (a16z) noted that many tech startups already receive that level of funding for their AI training.

Another major concern is the burden placed on developers of open-source models, who could potentially be held responsible for harms arising from downstream use of their models by others. The bill could disincentivize developers from releasing open-source models, removing a valuable tool for startups, academics and other innovators from the AI development ecosystem, noted a letter from eight U.S. Congress members to California Gov. Gavin Newsom dated Aug. 15.

“In general, this kind of regulation primarily target smaller, younger entrepreneurs. I am convinced that this will raise the bar significantly for new players in the AI space. They are partially shutting the door for a more ethical player to come through, especially in the rise of synthetic, open source LLMS,” Grace opined.

Despite the amendments added to the version of the bill that passed the Appropriations Committee Thursday, critics still said the bill is too vague, will harm AI innovation, and does not properly address the potential risks posed by AI systems.

For example, the letter from the U.S. Congress members noted that the bill focuses on the hypothetical scenario of mass casualty events caused by AI rather than on more immediate and demonstrable concerns, such as the dissemination of misinformation and nonconsensual deepfakes. It also pointed out that the current lack of standardized security frameworks for AI models makes the bill’s requirements “premature.”

Martin Casado, general partner at a16z, argued in a comment on X that the bill amendments “are window dressing” and “don’t address the real issues or criticisms of the bill.”

Grace also was not impressed with the changes made to the bill.

“The bill’s requirement for developers to exercise ‘reasonable care’ to ensure AI models do not pose significant risks, as opposed to the previous standard of ‘reasonable assurance,’ raises additional concerns. Who evaluates this? A government board? It took them 20+ years to figure out that Facebook and Google were vacuuming everything we put online,” Grace said.

With SB 1047 having passed the Appropriations Committee, it will now move to the California Assembly, where it will be eligible for a vote on Aug. 20.
