Advocates say it is a modest law setting “clear, predictable, common-sense safety standards” for artificial intelligence. Opponents say it is a dangerous and arrogant step that will “stifle innovation.”
In any event, SB 1047 — California state Sen. Scott Wiener’s proposal to regulate advanced AI models offered by companies doing business in the state — has now passed the California State Assembly by a margin of 48 to 16. Back in May, it passed the Senate by 32 to 1. Once the Senate agrees to the assembly’s changes to the bill, which it is expected to do shortly, the measure goes to Gov. Gavin Newsom’s desk.
The bill, which would hold AI companies liable for catastrophic harms their “frontier” models may cause, is backed by a wide array of AI safety groups, as well as luminaries in the field like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have warned of the technology’s potential to pose massive, even existential dangers to humankind. It got a surprise last-minute endorsement from Elon Musk, who among his other ventures runs the AI firm xAI.
Lined up against SB 1047 is nearly all of the tech industry, including OpenAI, Facebook, the powerful investors Y Combinator and Andreessen Horowitz, and some academic researchers who fear it threatens open source AI models. Anthropic, another AI heavyweight, lobbied to water down the bill. After many of its proposed amendments were adopted in August, the company said the bill’s “benefits likely outweigh its costs.”
Despite the industry backlash, the bill seems to be popular with Californians, though all surveys on it have been funded by interested parties. A recent poll by the pro-bill AI Policy Institute found 70 percent of residents in favor, with even higher approval ratings among Californians working in tech. The California Chamber of Commerce commissioned a poll finding a plurality of Californians opposed, but the poll’s wording was slanted, to say the least, describing the bill as requiring developers to “pay tens of millions of dollars in fines if they don’t implement orders from state bureaucrats.” The AI Policy Institute’s poll presented both pro and con arguments; the Chamber of Commerce’s poll only bothered with a “con” argument.
The wide, bipartisan margins by which the bill passed the Assembly and Senate, and the public’s general support (when not asked in a biased way), might suggest that Gov. Newsom is likely to sign. But it’s not so simple. Andreessen Horowitz, the $43 billion venture capital giant, has hired Newsom’s close friend and Democratic operative Jason Kinney to lobby against the bill, and a number of powerful Democrats, including eight members of the US House from California and former Speaker Nancy Pelosi, have urged a veto, echoing talking points from the tech industry.
So there’s a strong chance that Newsom will veto the bill, keeping California — the center of the AI industry — from becoming the first state with robust AI liability rules. At stake is not just AI safety in California, but also in the US and potentially the world.
To have attracted all of this intense lobbying, one might think that SB 1047 is an aggressive, heavy-handed bill — but, especially after several rounds of revisions in the State Assembly, the actual bill does fairly little.
It would offer whistleblower protections to tech workers, along with a process for people who have confidential information about risky behavior at an AI lab to take their complaint to the state Attorney General without fear of prosecution. It also requires AI companies that spend more than $100 million to train an AI model to develop safety plans. (The extraordinarily high ceiling for this requirement to kick in is meant to protect California’s startup industry, which objected that the compliance burden would be too high for small companies.)
So what about this bill would possibly prompt months of hysteria, intense lobbying from the California business community, and unprecedented intervention by California’s federal representatives? Part of the answer is that the bill used to be stronger. The initial version of the law set the threshold for compliance at the use of a certain amount of computing power, rather than a $100 million training cost, meaning that over time, more companies would have become subject to the law as computing continues to get cheaper. It would also have established a state agency called the “Frontier Models Division” to review safety plans; the industry objected to what it perceived as a power grab.
Another part of the answer is that a lot of people were falsely told the bill does more. One prominent critic inaccurately claimed that AI developers could be guilty of a felony regardless of whether they were involved in a harmful incident, when the bill only had provisions for criminal liability in the event that the developer knowingly lied under oath. (Those provisions were subsequently removed anyway.) Rep. Zoe Lofgren of the House Science, Space, and Technology Committee wrote a letter in opposition falsely claiming that the bill requires adherence to guidance that doesn’t exist yet.
But the standards do exist (you can read them in full here), and the bill does not require firms to adhere to them. It says only that “a developer shall consider industry best practices and applicable guidance” from the US Artificial Intelligence Safety Institute, National Institute of Standards and Technology, the Government Operations Agency, and other reputable organizations.
A lot of the discussion of SB 1047 unfortunately centered around straightforwardly incorrect claims like these, in many cases propounded by people who should have known better.
SB 1047 is premised on the idea that near-future AI systems might be extraordinarily powerful, that they accordingly might be dangerous, and that some oversight is required. That core proposition is extraordinarily controversial among AI researchers. Nothing exemplifies the split more than the three men frequently called the “godfathers of machine learning,” Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. Bengio — a Future Perfect 2023 honoree — and Hinton have both in the last few years become convinced that the technology they created may kill us all and argued for regulation and oversight. Hinton stepped down from Google in 2023 to speak openly about his fears.
LeCun, who is chief AI scientist at Meta, has taken the opposite tack, declaring that such worries are nonsensical science fiction and that any regulation would strangle innovation. Where Bengio and Hinton find themselves supporting the bill, LeCun opposes it, especially the idea that AI companies should face liability if AI is used in a mass casualty event.
In this sense, SB 1047 is the center of a symbolic tug-of-war: Does government take AI safety concerns seriously, or not? The actual text of the bill may be limited, but to the extent that it suggests government is listening to the half of experts that think that AI might be extraordinarily dangerous, the implications are big.
It’s that sentiment that has likely driven some of the fiercest lobbying against the bill by venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z has been working relentlessly to kill the bill, and some of the highly unusual outreach to federal legislators to demand they oppose a state bill. More mundane politics likely plays a role, too: Politico reported that Pelosi opposed the bill because she’s trying to court tech VCs for her daughter, who is likely to run against Scott Wiener for a House of Representatives seat.
It might seem strange that legislation in just one US state has so many people wringing their hands. But remember: California is not just any state. It’s where several of the world’s leading AI companies are based.
And what happens there is especially important because, at the federal level, lawmakers have been dragging out the process of regulating AI. Between Washington’s hesitation and the looming election, it’s falling to states to pass new laws. The California bill, if Newsom gives it the green light, would be one big piece of that puzzle, setting the direction for the US more broadly.
The rest of the world is watching, too. “Countries around the world are looking at these drafts for ideas that can influence their decisions on AI laws,” Victoria Espinel, the chief executive of the Business Software Alliance, a lobbying group representing major software companies, told the New York Times in June.
Even China — often invoked as the boogeyman in American conversations about AI development (because “we don’t want to lose an arms race with China”) — is showing signs of caring about safety, not just wanting to run ahead. Bills like SB 1047 could telegraph to others that Americans also care about safety.
Frankly, it’s refreshing to see legislators wise up to the tech world’s favorite gambit: claiming that it can regulate itself. That claim may have held sway in the era of social media, but it’s become increasingly untenable. We need to regulate Big Tech. That means not just carrots, but sticks, too.
Newsom has the opportunity to do something historic. And if he doesn’t? Well, he’ll face some sticks of his own. The AI Policy Institute’s poll shows that 60 percent of voters are prepared to blame him for future AI-related incidents if he vetoes SB 1047. In fact, they’d punish him at the ballot box if he runs for higher office: 40 percent of California voters say they would be less likely to vote for Newsom in a future presidential primary election if he vetoes the bill.