Lawsuit seeks halt to OpenAI artificial intelligence

A California law firm recently filed a class action complaint against OpenAI and Microsoft (and 20 unnamed co-conspirators) over their use of large language models. The lawyers allege fifteen counts of wrongdoing against OpenAI and Microsoft and argue that the companies must be stopped from employing their AI models until specific remedies are obtained. The complaint is long on innuendo and speculation and short on facts.

The complaint alleges that the large language models steal content from World Wide Web users, including children, though it also admits that the information is “technically public.” The complaint alleges that the companies are negligent for releasing on an unsuspecting world an artificial intelligence that is likely to cause human extinction: models that no one knows how to control, or even how they work. The complaint alleges that the models are capable of defamation because they produce false, negative information about people. More generally, it argues, the models are a threat to privacy. The complaint also alleges that the models contain individual profiles of each person, including data collected surreptitiously by tracking those individuals.

Among the remedies for their complaint, the plaintiffs seek:

· A temporary freeze on commercial access to and commercial development of the Products (the large language models)

· Appointment of an independent council to proactively approve uses of the system

· Accountability protocols to ensure that the models follow a “code of human-like ethical principles and guidelines and respect for human values and rights”

· “Technological safety measures … that will prevent the technology from surpassing human intelligence and harming others”

· “Establishment of a monetary fund … to compensate class members for Defendants’ past and ongoing misconduct”

If successful, the requested freeze would codify in a court order, and expand on, the earlier petition from the Future of Life Institute to halt the development of any AI model more powerful than GPT-4. It also echoes the concerns of the Center for AI Safety, which recently posted a statement that AI should be treated as a risk to human existence akin to pandemics and nuclear war. As I have said elsewhere, I find these concerns to be ludicrous.

Language models model language. They do not have any deeper cognitive processes. They do not model human intelligence. The language patterns that they produce are statistical abstractions of the patterns in the files that they were fed. When the models appear to reason, that apparent reasoning is a product of the language patterns that they have aggregated. They have no mechanism for reasoning. They may be more articulate than some humans, but they fall flat on other measures, once the statistical language patterns are controlled for. Very small changes in the wording of a prompt can make profound differences in what they appear to know, which would be unexpected if they were reasoning and expected if they merely respond according to learned language patterns.

Large language models are not capable of behaving ethically or unethically, because they have no mechanism for ethics. They do not intend to help or harm people because they have no intentions. They have no concept of individual persons; it is all just words and word patterns. The word “Ruth” has no special status different from the word “bat”; these words simply appear in different contexts. The model may generate sentences containing that word which a human would interpret as positive or negative, true or false, but for the language model it is just a language pattern; it says nothing about the individual.

The models generate language according to the statistical properties they have learned; they have no means to determine truth. To borrow Plato’s allegory of the cave, they do not even see the shadows on the cave wall; they have only a description of the shadows. A large language model may “talk” its users to death, but that is the only autonomous danger it presents.

An infinite number of monkeys typing on an infinite number of keyboards will eventually produce every copyrighted text that has ever been written. Large language models hurry this process along. A language model records the aggregated probability of each word given each context, so, rather than drawing words from a uniform distribution, language models draw their words from a nonuniform distribution. Some words are more likely in a given context than others. The words most likely to be produced are those that occurred most often in similar contexts in the training data, but they are still selected probabilistically.
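
To make the nonuniform-distribution point concrete, here is a minimal sketch in Python. The context, candidate words, and probabilities are invented for illustration; a real model computes the distribution from learned parameters rather than a lookup table.

```python
import random

# A language model maps a context to a nonuniform probability
# distribution over possible next words.  The figures below are
# hypothetical, chosen only to illustrate the sampling step.
next_word_probs = {
    ("the", "cave"): {"wall": 0.55, "entrance": 0.25, "floor": 0.15, "bat": 0.05},
}

def sample_next_word(context):
    """Draw the next word from the conditional distribution P(word | context)."""
    dist = next_word_probs[context]
    r, cumulative = random.random(), 0.0
    for word, p in dist.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding at the tail

print(sample_next_word(("the", "cave")))  # usually "wall", occasionally "bat"
```

Run repeatedly, the most probable continuation appears most often, but low-probability words still occur, which is why the same prompt can yield different outputs.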

A language model represents the aggregated probability of each word given its context; that is the definition of a language model. Given enough samples, it too will eventually produce every text on which it has been trained, along with others on which it has not been trained. The models do not, however, contain any explicit representation of the training text, only the aggregated statistical patterns derived from it.
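
A toy bigram model makes the aggregation point concrete. This is a deliberate simplification: real large language models estimate these statistics with gradient descent over billions of network weights rather than by counting, but the consequence is the same. The example texts are invented.

```python
from collections import Counter, defaultdict

def train_bigram(documents):
    """Pool bigram counts across documents; the documents themselves are
    discarded, and only the aggregated statistics remain."""
    counts = defaultdict(Counter)
    for doc in documents:
        words = doc.lower().split()
        for prev, word in zip(words, words[1:]):
            counts[prev][word] += 1
    # Normalize the pooled counts into conditional probabilities.
    return {prev: {w: n / sum(c.values()) for w, n in c.items()}
            for prev, c in counts.items()}

# Two invented training texts.  After training, neither text exists as a
# record; their word statistics have been merged.
model = train_bigram(["the cat sat on the mat", "the cat ate the fish"])
print(model["cat"])  # {'sat': 0.5, 'ate': 0.5} -- an aggregate of both texts
```

Once training is finished, only the pooled distribution remains; neither source sentence survives as a retrievable record.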

Artificial intelligence may someday surpass human intelligence, but the current models will not be the means by which this is achieved. They seem to be intelligent in the same way that an actor may seem to be a mathematical genius when he plays the role in a movie. Sounding intelligent is different from being intelligent. The language models are capable of solving only one problem: stochastically producing language. Because they have been exposed to a substantial portion of everything that has been written (at least in English), the learned language patterns can be useful for solving a broad spectrum of problems, provided that a human has been clever enough to cast the problem in a form suitable to the statistical patterns that make up the model, a process called “prompt engineering.”
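
As a sketch of what that casting looks like, the fragment below reframes sentiment classification as text completion. The function name, the example reviews, and the `complete` parameter are all hypothetical placeholders for whatever completion interface is available, not any specific vendor’s API.

```python
def classify_sentiment(review: str, complete) -> str:
    """Cast a classification problem as a completion problem.  The model only
    ever continues text, so the prompt is written to make the desired label
    the most likely continuation."""
    prompt = (
        "Decide whether each review is positive or negative.\n"
        "Review: The food was wonderful. Sentiment: positive\n"
        "Review: Service was slow and rude. Sentiment: negative\n"
        f"Review: {review} Sentiment:"
    )
    return complete(prompt).strip()

# `complete` would be supplied by the caller, for example a wrapper around
# whatever hosted completion endpoint is in use.
```

The model is still only producing likely continuations of text; the cleverness that makes the output look like classification lives in the prompt.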

This complaint can be viewed as an attempt to have the court impose regulations on OpenAI and Microsoft. It conflates the operations of two companies whose products demonstrate limited capability with the speculated products of every AI company, now and in the future. It illustrates what happens when regulations are based on a profound misunderstanding of how these systems work and what their capabilities are. The plaintiffs in this case have fallen prey to the marketing hype surrounding OpenAI models, and seek to regulate the models that the marketing imagines rather than the more modest models that exist. For example, the plaintiffs argue that the companies do not respect “established privacy rights to be ‘forgotten.’” As they further note, “Defendants cannot effectively extract individuals’ information from the Products once the AI is trained on such information.” But they fail to recognize that the reason the information cannot be extracted is that there is no explicit record of it. Personal information, like every other kind of text, exists only in aggregate, not as individual records. Each added text contributes to the probability distribution but is not directly stored within the model.

The regulations sought in this complaint have likewise fallen prey to the marketing hype surrounding OpenAI models. Regulations aimed at the safety of AI models that are already safe would protect the public only from risks that do not amount to much. The models can be used for unsafe purposes, such as fraud, but that calls for regulation of human conduct, not of the models. A teaspoon can be a deadly weapon; it is difficult to see how a language model could be.

The complaint overestimates the risks of current AI models and of any models that could practically be constructed in the foreseeable future. It seeks to stop these companies from doing things that they are not doing and that might be done by other companies in some imagined future. It overestimates the degree to which individual information (specific texts or specific persons) is stored within the model. It assumes cognitive capabilities that the models do not have and cannot acquire with current architectures. I will leave it to lawyers to decide whether using public information constitutes theft, but it is very clear that many of the risks posed in this complaint are based on fantasy and hype, not on reality.

Maybe someday we will have artificial general intelligence and maybe someday it will be a risk to humans, but today is not the day.
