SACRAMENTO – California Governor Gavin Newsom has vetoed a bill aimed at regulating powerful artificial intelligence models, following pushback from tech giants and critics who argued the law went too far.
The bill had drawn a barrage of criticism, including from members of the US Congress in Newsom’s Democratic Party, who argued that the threat of punitive measures against developers in a nascent field would throttle innovation.
In a statement on Sunday, Newsom acknowledged that SB-1047 was “well-intentioned” but expressed concern that the bill was too “stringent” and unfairly focused on “the most expensive and large-scale models.”
“The bill applies stringent standards to even the most basic functions — so long as a large system deploys it,” the governor noted.
He added, “smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
The bill’s sponsor, Democratic state Senator Scott Wiener of San Francisco, lamented the “setback,” saying it left AI safety in the hands of the tech giants racing to release the technology.
Wiener had hoped the bill would set rules for AI giants in Silicon Valley’s home state, filling a void left by Washington, where a politically divided Congress struggles to pass legislation.
“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” Wiener wrote on X.
The California state bill would have required developers of large “frontier” AI models to take precautions such as pre-deployment testing, simulating hacker attacks, installing cybersecurity safeguards, and providing protection for whistleblowers.
To secure the legislation’s passage, lawmakers made several changes, including replacing criminal penalties for violations with civil penalties such as fines.
However, opposition remained, including from influential figures like Democratic Congresswoman Nancy Pelosi.
OpenAI, the creator of ChatGPT, also opposed the bill, preferring national rules instead of a patchwork of AI regulations across the 50 US states.
At least 40 states have introduced bills this year to regulate AI, and a half dozen have adopted resolutions or enacted legislation aimed at the technology, according to the National Conference of State Legislatures.
The bill had gained reluctant support from Elon Musk, who argued that AI’s risk to the public justifies regulation, as well as leading AI researchers like Geoffrey Hinton and Yoshua Bengio.
Dan Hendrycks, director of the Center for AI Safety, said that although the veto was “disappointing,” the debate around the bill “has begun moving the conversation about AI safety into the mainstream, where it belongs.”
He added on X that the bill has “revealed that some industry calls for responsible AI are nothing more than PR aircover for their business and investment strategies.”