Today Governor Newsom vetoed a bill purporting to regulate AI models, one that passed the California Assembly with flying colours. Even if you don't live in California and probably don't care about its electoral politics, it still matters, because of three facts:
The bill would've applied to all large models used in California - i.e., to almost everyone
It went through a lot of amendments and became more moderate along the way, but would still have created a lot of restrictions
It drew furious opposition and support of a kind that's usually hard to see, with opponents united around "why" and "where's the evidence"
His veto statement says the following:
By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.
Adaptability is critical as we race to regulate a technology still in its infancy. This will require a delicate balance. While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.
Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public.
In other words, he doesn't worry about premature regulation (which regulator would!), but he does worry about regulating general capabilities rather than usage.
Rather than litigate whether SB 1047 was a good bill or not, which plenty of others have done, including me, I wanted to look at what comes next. There's a great analysis here. To look at one part of Gov. Newsom's statement alongside the veto:
The Governor has asked the world's leading experts on GenAI to help California develop workable guardrails for deploying GenAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks. The Governor will continue to work with the Legislature on this critical matter during its next session.
Building on the partnership created after the Governor’s 2023 executive order, California will work with the “godmother of AI,” Dr. Fei-Fei Li, as well as Tino Cuéllar, member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley, on this critical project.
It's of course better to work with Dr. Fei-Fei Li than not, and it's a good initiative insofar as it's about learning what there is to learn, but the real impetus is in the paragraph before.
It's clear that this wasn't a one-time war with a winner and a loser, but something that's likely to recur in the coming years.
So it stands to reason that, barring the implosion of AI as a field, we will see more regulations crop up. Better-crafted ones, perhaps more specific ones, but more nonetheless.
Generally speaking, with any of these regulations there ought to be a view on what they're good for: a view on their utility. SB 1047 ended up with a diluted version of several such objectives as it evolved. For instance:
Existential risk - like if a model gets sufficiently capable and self-improves to ASI and kills us all through means unknown
Large scale risk from an agent - like an autonomous hack or bioweapon or similar causing untold devastation (later anchored to $500m)
Use by a malicious actor - like if someone used a model to do something horrible, like perform a major hack on the grid or develop a bioweapon
The bill started at the top of this list and moved slowly towards the bottom as people asked questions, mainly because there is no evidence at all for the top two risks. There are speculations, and there are some small indications that could go either way on whether these can happen (like the stories about OpenAI's o1 apparently going out of distribution regarding its test parameters), but today's models are very far from being able to do this.
Many of the proponents argued that this bill is only minimally restrictive, so it should be passed anyway so we can continue building on it.
But that's not how we should regulate! We shouldn't ask for minimally invasive bills that only apply to some companies if we still aren't clear on what benefits they will have, especially when multiple people argue they have very real flaws!
Will they always remain so? I doubt it; after all, we want AI to be useful! You can't ask it to rewrite a 240k-line C++ codebase in Python, for instance, without it also having the ability to do a lot of damage. But just as you couldn't hack the power grid before you had a power grid or computers, the benefit you get from the technology really, really matters.
Will AI models be able to do much more, reach AGI and beyond, in the short or immediate future? I don't know. Nobody does. If you are a large lab you might say yes, because you believe the scaling hypothesis: that these models will get much smarter and more reliable as they get bigger, very soon. This is what Sam Altman wrote in his essay last week, even though they all admit that nobody actually knows if this is true.
You might therefore say these things, and they might even be true.
So the question is: what should we optimise for? Well, just as with any other technology, #3 above is what most regulations should target. In fact, it's what Governor Newsom has targeted.
Over the past 30 days, Governor Newsom signed 17 bills covering the deployment and regulation of GenAI technology, the most comprehensive legislative package in the nation on this emerging industry — cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation. California has led the world in GenAI innovation while working toward common-sense regulations for the industry and bringing GenAI tools to state workers, students, and educators.
Without saying all of this is good, it is at least perfectly sufficient if you believe that the conditions for knowing whether #1 and #2 are real are going to remain murky. As it is, the blowback from safety-washing the models, to stop them from showing certain images or text or any other output, is only making them more annoying to use, without any actual benefit from a societal safety point of view.1
This is, to put it mildly, just fine. We do this for everything.
To go beyond this and regulate preemptively, we have to believe two things:
We know, to a rough degree of accuracy, what will happen when the models get bigger. Climate change papers are a good example of the level of rigour and specificity needed. (And I'm not blind to the fact that, despite the reams of work and excellent scholarship, they are still heavily disputed.)
We think the AI model creators are explicitly hiding things, either capabilities or flaws in their models, that could conceivably cause enormous damage to society. The arguments that AI companies resemble lead companies or cigarette companies or oil companies are in this vein. The difference is that those cases had science on at least one side, which would be good to have here.
Okay, so considering all this, what should the objectives of any bill be? What are the things we actually know and should focus on, so that we can make sensible rules?
I think we should optimise for human flourishing. We need to keep the spigot of innovation open as much as we can, if only because as AI gets better it is genuinely able to help mathematicians, clinicians, semiconductor manufacturers, drug discovery, and materials science, and to upskill us as a species. This isn't a fait accompli, but it's clearly happening. The potential benefits are enormous, so to cut that off would be to fill up the invisible graveyard.
And so, I venture the following principles:
Considering AI is likely to become a big part of our lives, solve the user problem. I would argue much of this is already regulated, but fine, it makes sense to add more specific rules here, especially in high-risk situations. If an AI model is being used to develop a nuclear reactor, you had better show the output is safe.
Understand the technology better! For evaluations and testing and red-teaming, yes, but also to figure out how good it is. Study how it can cut red tape for us. How could it make living in the modern world less confusing? Can it file our taxes? Where's the CBO equivalent to figure out its benefits and harms? Where's the equivalent of the FTC's Bureau of Economics? Where's the BEA?
Most importantly, be minimally restrictive. For things we don't or can't know about the future, let's not preemptively create stringent rules that constrain our actions. Don't add too much bureaucracy, don't add red tape, don't add boxes to be checked, don't add overseers and agencies and funded non-profits until you know what to do with them! Let the market actually do its job of finding the most important technologies, implementing them, and showing us their effects, so we can understand them better.
These, you'll note, have nothing to do with model size, or how many GPUs a model was trained on, or whether we implicitly believe it will recursively self-improve so fast we're all caught flat-footed. They are explicitly focused on finding rationale and evidence, essential if we are to treat the problem with the gravity it requires.
There are plenty of other issues that need well-thought-out regulation too, like the issue of copyright and training models on artists' works.
Regulatory attention is like a supertanker. Most lawmakers are already poised to regulate more, in the aftermath of what they see as the social media debacle and with the world turning more protectionist. You have to be careful where you point it. And, like Chekhov's gun, only bring it out if you plan on using it!
I think it’s important to understand that even if you’re on the side of “no regulation”, or at least “no regulation for a while”, you can’t stop policymakers from getting excited or scared. We should give them a way to deal with it, to learn as they go along and be useful instead of fearful. What’s above is one way to do that.
This is among the main reasons why I am also sanguine about OpenAI turning into a regular corporation. We have collectively spent decades trying to figure out the best ways to align an amoral superintelligence (the corporation), and we came up with Delaware law. It's a miracle that works well in our capitalist system. It's not perfect, but it's a damn sight better than almost anything else we've tried. I am happy that OpenAI will join those ranks, rather than be controlled by a few non-profit directors acting on behalf of all humanity.