
California won’t require big tech firms to test safety of AI after Newsom kills bill

California governor Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill after the tech industry raised objections. Newsom said that requiring companies to stress test large AI models before releasing them could drive AI businesses from the state and hinder innovation.

“California is home to 32 of the world’s 50 leading AI companies,” the governor said in a statement accompanying the veto. “The bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, officially known as SB 1047, would have targeted companies developing generative AI – which can respond to prompts with fully formed text, images or audio, as well as run repetitive tasks with minimal intervention. Companies building models costing more than $100m would have been required to implement “kill switches” for their AI as well as publish plans for the testing and mitigation of extreme risks.

Newsom said he had asked leading generative AI experts from the US AI Safety Institute to help California “develop workable guardrails” that focus “on developing an empirical, science-based trajectory analysis”. He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use.

Despite the veto, the governor said: “We cannot afford to wait for a major catastrophe to occur before taking action to protect the public … Safety protocols must be adopted.”

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote. “For these reasons, I cannot sign this bill.”

SB 1047, written by Democratic state senator Scott Wiener of San Francisco, included a number of protections and oversight measures: companies would have had to be able to shut a model down completely in an emergency and use it only for its stated purpose, and employees looking to disclose issues with an AI system would have received whistleblower protections.

In response to Newsom’s veto, Wiener said: “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers … At the same time, the debate around SB 1047 has dramatically advanced the issue of AI safety on the international stage.”

AI companies and groups allied with Silicon Valley praised Newsom’s veto. Venture capitalist Marc Andreessen wrote: “Thank you @gavinnewsom for vetoing SB1047 – for siding with California dynamism, economic growth, and freedom to compute, over safetyism, doomerism, and decline.” Meta’s chief AI scientist Yann LeCun had likewise called the bill “extremely regressive”. California representatives Nancy Pelosi and Ro Khanna, both Democrats, had voiced their opposition to it in the weeks leading up to the governor’s decision.

Breaking with much of the tech industry, Tesla CEO and X owner Elon Musk had offered measured support for the legislation, tweeting in August that “California should probably pass the SB 1047 AI safety bill,” though he said coming out in favor of it was a “tough call”. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk,” he added.

Critics of AI’s rapid growth decried the governor’s decision. Daniel Colson, the founder of the AI thinktank the AI Policy Institute, called Newsom’s veto “reckless” and “out of step with the people he’s tasked with governing”.

SB 1047 had come under intense criticism from organizations that said it would harm the open-source community. The Mozilla Foundation, the non-profit that owns the developer of the Firefox browser, had previously urged Newsom to veto the bill.

“Today, we see parallels to the early Internet in the AI ecosystem, which has also become increasingly closed and consolidated in the hands of a few large tech companies,” the foundation wrote in an earlier statement. “We are concerned that SB 1047 would further this trend, harming the open-source community and making AI less safe – not more.”

The bill received support from a roster of Hollywood artists, who urged Newsom to sign it in a letter published earlier this month.

Actor Mark Ruffalo wrote in a statement: “Is this bill perfect? Nothing is. Does it set the right tone of regulating an industry that has the possibility of doing great harm as well as good? It does. It will protect society and set the groundwork for a safe AI expansion into our lives.”
