On Wednesday, California lawmakers approved a contentious AI safety bill, which now requires a final procedural vote. Afterward, the decision will rest with Governor Gavin Newsom, who has until September 30 to either sign the bill into law or veto it.
“SB 1047 — our AI safety bill — just passed off the Assembly floor. I’m proud of the diverse coalition behind this bill — a coalition that deeply believes in both innovation & safety. AI has so much promise to make the world a better place. It’s exciting. Thank you, colleagues.”
— Senator Scott Wiener (@Scott_Wiener) August 28, 2024
California’s Senate Bill 1047, a proposed AI regulation, has sparked intense debate in Silicon Valley, drawing both praise and criticism from tech leaders, lawmakers, and AI experts.
xAI chief Elon Musk has also voiced support, emphasizing the need for regulation to prevent AI misuse. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” he posted on X.
Musk said that for more than 20 years, he has supported AI regulation, drawing parallels to how society regulates any technology or product that could potentially pose a risk to the public.
People have spent years calling Elon Musk the real-life Iron Man, but now that he's backing AI regulations, just like Iron Man did with the Sokovia Accords in the Civil War movie, people are freaking out. pic.twitter.com/8PiNo9wIaq
— Mukul. (@pathuglife) August 28, 2024
The bill, introduced by State Senator Scott Wiener, seeks to implement strict safety measures for large-scale AI models, aiming to prevent potential catastrophes, but critics argue it could hinder innovation.
The legislation requires developers of significant AI models—those costing over $100 million to train—to perform comprehensive safety testing before releasing them to the public.
It also mandates an “emergency stop” feature to shut down AI systems in critical situations and obligates developers to report any safety incidents to California’s Attorney General within 72 hours. A new state agency, the Frontier Model Division, would oversee compliance, with penalties of up to $30 million for repeated violations.
Proponents, including AI luminaries Geoffrey Hinton and Yoshua Bengio, believe the bill is crucial to addressing AI risks comparable to pandemics or nuclear threats, potentially setting a national standard for AI safety.
Conversely, the bill has faced strong opposition from major tech companies like Google, OpenAI, and Meta, who argue it could stifle innovation and drive talent out of California. Critics, such as U.S. Representative Nancy Pelosi and AI expert Fei-Fei Li, caution that the bill’s requirements could disproportionately burden smaller companies and slow technological advancement. They advocate for federal regulation to avoid inconsistencies across states.
OpenAI has publicly opposed SB 1047, stating that it poses a threat to AI’s growth and could push entrepreneurs and engineers to relocate.
The bill earlier cleared the California Appropriations Committee with amendments before passing the Assembly floor. If signed into law by Governor Gavin Newsom, it would be the first AI regulation of its kind in the U.S., potentially shaping AI governance nationwide.
As California, a key hub for AI innovation, seeks to balance technological advancement with safety, the fate of SB 1047 could have significant implications for the global tech industry and the future of AI regulation.