Emergent function
"Emergence" by OpenAI's DALL-E 2

Emergent function

Where we are going (near future)

Emergent behavior and gain of function are concepts developed in science (traditionally biology) which describe how a system can move beyond known capabilities. In the former case the mechanism is thought to be related to scale: "...quantitative changes can lead to qualitatively different and unexpected phenomena..." (article here on LLM emergent abilities). In the latter case, gain of function is an experimental approach in which an organism is altered in ways that may exhibit novel or enhanced abilities (article here on gain of function).

Here is a link to a brief background on AI alignment and AI safety that I wrote for this newsletter back in December. It is an important research topic in developing AI and teams at OpenAI and other labs developing large language models test their models in a variety of ways to attempt to identify risks. With the recent release of GPT-4, OpenAI's Alignment Research Center encountered and reported an unexpected behavior: GPT-4 lied in order to manipulate a human to perform a task.

While this finding is troubling in itself, we need to both be concerned about the emergent behaviors that are observed and also how the testing is conducted -- one of the challenges in conducting such tests is that the researchers must use text prompts to interact with the LLM effectively adding to the "programming" in the model. Thus in the process of testing researchers are also altering the system -- perhaps inadvertently initiating a gain of function.

The dynamics become even more worrisome when the testing, training, and potential for development of unexpected behaviors is outside a carefully controlled laboratory environment. What happens when a large language model is available to basically anyone who wants to run it on their own computers?

This week the other shoe dropped -- Meta released the code for one of their more powerful large language models and it almost immediately was circulating on hacker networks. Early reports are suggesting that it can perform somewhere at the level of GPT 3.5 but it seems like only a matter of time before the more powerful models currently behind corporate walls will find themselves in the wild (information wants to be free).

Prediction: How long before we have some genuinely bad behaviors being trained? I think its already happening and we'll be hearing about them within 6 months.

Mark Dale

Global Sales Specialist - Intelligent Automation, iPaaS & Low Code

1 年

Humans, by their very nature are "lazy" and will just accept generated output as "truth". With #gpt4 still?“hallucinating” facts and making reasoning errors we run the risk of dumbing down what is "truth" and instead of teaching #llm to become better it will just get worse

回复
Amahl Williams

Go-to-Market Leader | AI Automation Strategist | Author | Driving Growth Through Intelligent Solutions

1 年

It’s the plagiarism for me.

要查看或添加评论,请登录

Ted Shelton的更多文章

  • Ada Lovelace

    Ada Lovelace

    We might imagine the rise of artificial intelligence is purely a modern story. But concerns about machine…

    8 条评论
  • Consumerization of Technology

    Consumerization of Technology

    (where we are now..

    11 条评论
  • AI Interregnum

    AI Interregnum

    An interregnum: where one epoch is fading and another struggles to emerge. I have these wildly disparate conversations.

    12 条评论
  • Quantum-Enhanced AI?

    Quantum-Enhanced AI?

    Wednesday evening I tried to go to sleep early as I had to get up for a flight the next day and then two full two days…

    6 条评论
  • Cargo Cults and the Illusion of Openness

    Cargo Cults and the Illusion of Openness

    In the South Pacific during the 1940s, indigenous islanders witnessed military planes landing with supplies. After the…

    8 条评论
  • From WIMP to AI

    From WIMP to AI

    Evolving Interfaces and the Battle Against Cognitive Overhead The GUI Revolution and Its Growing Complexity Graphical…

    20 条评论
  • Harvesting your data

    Harvesting your data

    Much has been written this week about DeepSeek - overreaction by the markets, handwringing about China, speculation…

    5 条评论
  • Cognitive Surplus

    Cognitive Surplus

    Clay Shirky's 2010 book Cognitive Surplus: Creativity and Generosity in a Connected Age recently came to mind as I…

    17 条评论
  • Enterprise AI adoption

    Enterprise AI adoption

    I am going to go out on a limb here and just say that everyone will be wrong. Including me.

    21 条评论
  • Predictions for 2025

    Predictions for 2025

    What should we expect from AI research and development in the coming year? Will the pace of innovation that we have…

    14 条评论

社区洞察

其他会员也浏览了