
California AI Catastrophe Bill Clears Committee

New Version Aims to Ensure AI Safety While Keeping Its Builders Happy
The California State Capitol Building in an undated file photo (Image: Shutterstock)

California state lawmakers watered down a bill aimed at preventing artificial intelligence disasters after hearing criticism from industry and federal representatives.

AI firm Anthropic, along with federal Democratic Bay Area lawmakers such as Reps. Zoe Lofgren and Nancy Pelosi, opposed state Sen. Scott Wiener's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act over the past several weeks, arguing that enacting the bill could make California hostile to innovation and force companies to incorporate elsewhere in the country.


An amended version approved Thursday by the California Assembly Appropriations Committee no longer contains language allowing the state attorney general to sue AI companies over negligent safety practices before a catastrophic event. Compromise language instead allows the state's top law enforcement official to seek a halt to dangerous operations; the attorney general could still sue after a catastrophic event occurs.

"We accepted a number of very reasonable amendments proposed ... we've addressed the core concerns," Wiener said of Thursday's revision of the bill.

"While the amendments do not reflect 100 percent of the changes requested by Anthropic - a world leader on both innovation and safety - we accepted a number of very reasonable amendments proposed, and I believe we've addressed the core concerns expressed by Anthropic and many others in the industry," he said.

The new version removes a requirement that AI companies submit safety test results under penalty of perjury. Model developers would need only to take "reasonable care" to ensure that their products do not pose catastrophic risks, instead of providing the "reasonable assurance" the previous version mandated. Developers who fine-tune open-source models would face liability only if they spent at least $10 million fine-tuning the underlying model.

California is home to 35 of the top 50 AI companies in the world, according to Gov. Gavin Newsom's executive order that calls for deeper research into AI development use cases and risks (see: California Executive Order Hopes to Ensure 'Trustworthy AI').

Lofgren, whose district includes parts of San Jose, told Wiener earlier this month she was "very concerned" because the legislation did not appear to have "any clear benefit for the public." "There is a real risk that companies will decide to incorporate in other jurisdictions or simply not release models in California," she said.

Pelosi, the former House Speaker whose district encompasses most of San Francisco, earlier described the bill as "more harmful than helpful" and said that "the view of many of us in Congress is that SB 1047 is well-intentioned but ill-informed." California "must have legislation" that allows "small entrepreneurs and academia - not big tech - to dominate," she said.

Wiener said he "strongly" disagreed with Pelosi's statement, adding that the bill requires "only the largest AI developers to do what each one of them has repeatedly committed to do: Perform basic safety testing on massively powerful AI models."

Co-authored by Democratic Sens. Richard Roth, Susan Rubio and Henry Stern, the legislation received support from AI pioneers Geoffrey Hinton, emeritus professor of computer science at the University of Toronto and former AI chief at Google, and Yoshua Bengio, professor of computer science at the University of Montreal.

"Forty years ago when I was training the first version of the AI algorithms behind tools like ChatGPT, no one - including myself - would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously," Hinton said, adding that the legislation "takes a very sensible approach to balance those concerns."

The bill still faces opposition after being amended in committee. Martin Casado, a general partner at venture capital firm Andreessen Horowitz, called the amendments "window dressing" that doesn't address the bill's "real issues or criticisms." Separately, eight Democratic members of the U.S. House of Representatives from California wrote to Newsom urging him to veto the bill. The legislation "would not be good for our state, for the start-up community, for scientific development, or even for protection against possible harm associated with AI development," the letter says.

The legislation must now pass a final vote in the California Assembly. If it does, it will return to the Senate for a concurrence vote.


About the Author

Rashmi Ramesh


Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



