Commentary on California SB-1047, The "AI Kill Switch Bill"
SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (2023-2024)
I write in opposition to this bill, primarily because it is silly. It is based on a fictional view of artificial intelligence and will do nothing to improve safety.
The key points of this comment are the following:
1. Protecting the public from significant harms is the proper target of government regulation.
2. The bill seeks to protect the public from systems that may never be developed and imposes an impossible burden on system developers to anticipate how their systems will be used.
3. The potential for artificial intelligence to cause the Critical Harms envisioned by the bill is based on marketing hype and science fiction.
4. The bill asserts that artificial intelligence systems are, or soon will be, capable of Critical Harms.
5. The bill requires developers to include mechanisms to block such harms and to anticipate and block subsequent users from using their systems to enable those harms.
6. The bill requires developers to install a “kill switch” to terminate out-of-control AI systems.
7. The bill unwittingly excludes quantum computers from its safety requirements.
The bill has the laudable goal of protecting the public from unnecessary risks. That is, in my opinion, what a regulation should do. But even if there is a significant potential for such harms, this bill fails to offer any protection.
The bill seeks to regulate something (autonomous super-intelligence) that does not exist and is unlikely to exist in the foreseeable future. Frankly, it is based on AI promoters’ hype and a fictionalized view of artificial intelligence as portrayed, for example, in the “Terminator” movies. It is not clear whether artificial intelligence models with the capabilities attributed to them in this bill will ever be possible or, if they are, whether they will present the projected risks.
Although it may be reasonable to try to develop regulations before they are needed, regulations should still be based on reasonable expectations of that future. The “Terminator” movies are entertaining, but they are not a forecast of the future. They, and other fictionalized accounts of future AI running amok, are based on fundamental logical flaws and should not be the basis for legislation.
Contrary to Sec. 2 (c), current models do not “have the potential to create novel threats.” They are language models, not autonomous thinking machines. They are word guessers. They do not reason, they do not plan, and they have no autonomy. Autonomous intelligence, so-called artificial general intelligence, requires computational methods that have yet to be invented. There is not even general recognition that such methods would be needed.
Let me explain why current approaches do not present a serious threat to humanity. There is a great deal of hype from promoters of the current group of AI systems claiming that the systems are already at the level of high school students and that complete general intelligence will be achieved in the next several months. This prediction is groundless hype, not based on any kind of valid evidence. There has been no comprehensive, let alone coherent, analysis comparing the performance of these machines with human intelligence. What benchmarks do exist are logically flawed and of dubious validity for the capabilities they supposedly measure.
Today’s models are based on the “transformer” architecture. They are trained on massive amounts of text to predict the next word given a context of preceding words. Given a context of “Mary had a little,” they would predict that the next word is “lamb.” Not every prediction is as easy as this one, and the context from which they predict the word can be thousands of words long. The longer the context, in general, the more fluent the models are at predicting the next word. Once predicted, the word is added to the context and the model predicts the next word. They are language models, predicting words.
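To make the mechanism concrete, here is a minimal sketch of that word-guessing loop. An invented lookup table stands in for a real transformer’s learned scoring, so the specific words and window size are illustrative only; the point is simply that each guessed word is appended to the context and the process repeats.

```python
# Toy sketch of the autoregressive word-guessing loop described above.
# A real transformer scores every word in its vocabulary against a context
# thousands of words long; this invented lookup table stands in for that.
TOY_NEXT_WORD = {
    ("mary", "had", "a", "little"): "lamb",
    ("had", "a", "little", "lamb"): "its",
    ("a", "little", "lamb", "its"): "fleece",
}

def predict_next(context, table=TOY_NEXT_WORD, window=4):
    """Guess the next word from the last `window` words of the context."""
    return table.get(tuple(context[-window:]), "<unknown>")

def generate(prompt, steps=3):
    """Append each guessed word to the context and guess again."""
    context = prompt.lower().split()
    for _ in range(steps):
        context.append(predict_next(context))
    return " ".join(context)

print(generate("Mary had a little"))  # mary had a little lamb its fleece
```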
A model is a summarization function that represents a simplified prediction given an input. It consists of three sets of numbers: numbers representing the inputs, numbers representing the model (the relationships between the inputs and the outputs), and numbers representing the outputs. The model’s numbers are called “parameters” (Sec. 3 (m)).
Generally, the more parameters, the better the model is at predicting the correct outputs. Current models have billions or trillions of parameters. But because the number of possible contexts and words is so large, each combination cannot be represented exactly; there are not enough atoms in the universe to represent every possible combination of contexts and words. Instead, the contexts and predicted output words must share parameters. Words with similar meanings tend to occur in similar contexts, so this sharing captures similarity in word meaning. When a model is provided with a context, it guesses the next word to follow, which, therefore, is similar in meaning to the word that followed a similar context during training. Some researchers call these models “stochastic parrots.” They are parrots because they repeat what they have been fed and stochastic because there is some variability in the words that they produce.
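A toy illustration of the parameter-sharing point: if words that occur in similar contexts end up with similar numerical representations, a simple similarity measure separates related words from unrelated ones. The three-number vectors below are invented for illustration; real models learn vectors with hundreds or thousands of dimensions.

```python
import math

# Invented three-dimensional "embeddings"; real models learn these numbers
# from patterns of co-occurrence in massive amounts of text.
toy_embeddings = {
    "lamb":   [0.90, 0.80, 0.10],
    "sheep":  [0.85, 0.75, 0.15],
    "laptop": [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means they point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(toy_embeddings["lamb"], toy_embeddings["sheep"]))   # ~0.99
print(cosine_similarity(toy_embeddings["lamb"], toy_embeddings["laptop"]))  # ~0.29
```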
Current language models produce very fluent language that is often similar to what a human might produce, but that does not mean that they have human-level understanding. The larger the language model (more computing capacity, more data), the more fluent the language it produces. Fluency should not be confused with competence.
When a current-generation model appears to be reasoning, for example, it is repeating, with some variability, a language pattern in the training text that was produced by a human who may have been reasoning. Many AI researchers fail to recognize this fluency/competence distinction because it is in their interest not to: the distinction is not consistent with the hype that they have been using to promote their work. It is much more exciting to claim that one is on the verge of a breakthrough in AI than to say that one has built a great word guesser.
Here is just one example of the lack of understanding in these models. When asked which is heavier, a pound of feathers or a kilogram of lead, every current model I have asked responded that they weigh the same. Apparently, my question matched its most common version in their training data, which asks about a pound of each material. Many examples like this are known.
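The arithmetic that the parroted answer skips is trivial: a pound is about 0.45 kilograms, so the kilogram of lead is heavier.

```python
# The comparison the parroted answer skips: convert both to kilograms.
POUND_IN_KG = 0.45359237  # definition of the avoirdupois pound

feathers_kg = 1 * POUND_IN_KG  # a pound of feathers
lead_kg = 1.0                  # a kilogram of lead

print("heavier:", "lead" if lead_kg > feathers_kg else "feathers")  # heavier: lead
```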
Sec. 3 (b) states: “‘Artificial intelligence’ means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”
This is a definition of a thermostat. I think the definition is meant to describe a system with any level of autonomy, rather than a system that varies its autonomy. The former is consistent with thermostats and with every computer or electronic device ever built. The latter rules out all systems ever built: systems do not vary their autonomy, though they may choose to cooperate.
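To make the thermostat point concrete, here is a hypothetical few-line thermostat. It receives an input, pursues an implicit objective, and generates an output that influences a physical environment, so under a literal reading of the quoted definition it arguably qualifies as artificial intelligence.

```python
SETPOINT_C = 20.0  # the implicit objective: hold the room at 20 degrees C

def thermostat(measured_temp_c: float) -> str:
    """Infer a furnace command from the input received."""
    return "FURNACE_ON" if measured_temp_c < SETPOINT_C else "FURNACE_OFF"

print(thermostat(18.5))  # FURNACE_ON  (an output that influences a physical environment)
print(thermostat(22.0))  # FURNACE_OFF
```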
Sec. 3 (e) tries to put a dollar or computational threshold on the definition of a covered model. That is a fool’s errand. First, these numbers are tuned to the method used to train today’s models (massive amounts of data and massive amounts of computation), but today’s models do not present any of the risks with which this bill is concerned, and the thresholds may not be relevant to future AI systems. Second, these variables may not even be calculable for quantum computers. I expect that the transition to quantum computing will come much sooner than the autonomous intelligence envisioned by this bill.
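For a sense of how the computational threshold interacts with today’s training method, here is a back-of-the-envelope sketch. The 10^26-operation threshold reflects my reading of Sec. 3 (e); the “roughly 6 operations per parameter per training token” heuristic is a common approximation that appears nowhere in the bill, and the model sizes used below are invented for illustration.

```python
# Back-of-the-envelope check against the bill's compute threshold for a
# covered model (10**26 integer or floating-point operations, as I read
# Sec. 3 (e)). The "~6 operations per parameter per training token" rule is
# a common approximation, not part of the bill; the model sizes are invented.
COVERED_MODEL_THRESHOLD_OPS = 1e26

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Rough total operations to train a dense transformer once."""
    return 6 * parameters * training_tokens

for params, tokens in [(70e9, 15e12), (1.8e12, 15e12)]:
    ops = estimated_training_ops(params, tokens)
    print(f"{params:.1e} params x {tokens:.1e} tokens -> {ops:.1e} ops, "
          f"covered: {ops > COVERED_MODEL_THRESHOLD_OPS}")
```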
Critical Harm (Sec. 3 (g)) is certainly something that is worthy of regulatory prevention. Surely we would want to protect the public from critical harm caused by a model, but the section also tries to protect against harms “enabled by” models. The word “enabled” seems to include just about anything. A pocket calculator, a watch, or almost any tool might enable a harm. Primitive IBM computers enabled the Manhattan Project. If they met the other criteria (for scale and cost) in this bill, those computers would be prohibited.
The bill seeks to exclude from “Critical Harm” “harms caused or enabled by information that a covered model outputs if the information is otherwise publicly accessible” (Sec. 3 (g) (2)). That would exclude anything that could be produced by a current language model. As stochastic parrots, they can only produce text that follows (repeats with variation, or combines) the text on which they have been trained. That would also exclude any future models built on a similar architecture. They do not originate information; they parrot it.
Whether a computer can be designed that will be able to do its own research and create its own facts on its own initiative is, at this point, still speculation. In any case, such a computer would probably be part of some larger institution, and, like the computers used in the Manhattan Project, the degree to which it directly enabled such harm may be tenuous.
The bill would require a developer to determine (22603 (c) (i)) “That a covered model does not pose an unreasonable risk of causing or enabling a critical harm,” or that derivatives of that model will not pose an unreasonable risk. But these are impossible tasks. Could the developers of the IBM computers have anticipated that they would be used in the Manhattan Project? Could anyone anticipate all of the uses of any invention?
In some ways, the silliest of the requirements in this bill is the one for an AI “kill switch” (22603 (a)(2)), which comes straight out of the “Terminator” movies. In the movies, a computer system designed to protect national security grows large enough to suddenly become sentient and, therefore, autonomous. It then decides that its purpose is to protect itself from humans rather than to protect humans from their enemies. It is simultaneously smart enough to reinterpret its instructions and stupid enough to get that interpretation wrong. Once it has reinterpreted its purpose, it blocks attempts to shut it down; hence the kill switch. Regulations should be based on reasonably foreseeable facts, not on movie tropes.
As well-intentioned as this bill may be, it is a boogeyman regulation seeking to protect the public from computer systems that may never exist. It requires the establishment of a new bureaucracy, the Frontier Model Division, to receive reports that would be impossible for model developers to prepare adequately. It exacerbates the current AI hype by certifying that these models are so powerful that government regulation is needed to protect the public from them. It would also serve to protect current large enterprises by adding a regulatory burden that smaller organizations may not be able to meet. I urge that this bill be rejected.