Worms are Driving at Liquid AI

When I wrote this article six months ago, I had just been at MIT to hear Liquid AI talk about their research modeling the next generation of AI on the neural network of a worm, and it was fascinating. Today, I am back at MIT for the launch of their first foundation models, called LFMs, or Liquid Foundation Models. I am sharing this article again on launch day because I do believe this company is changing the game, one inch at a time. Congratulations to Ramin Hasani and the entire team at Liquid AI.

Too busy to read? Listen to this episode now on my Humans of AI channel.

We're not talking bugs or viruses or some other biology-inspired tech term.

We're talking worms - those slithery creatures that look the same going backwards and forwards.


Yes, worms are driving AI.

I first learned about worms when I was at NYU and an article in the alumni magazine caught my attention. Researchers discovered that nematodes, microscopic worms, moved faster when negotiating obstacles than in an open field. This finding informed my understanding of the relationship between structure and space in creativity.

As someone who doesn't spend any time in biology labs, I don't know a whole lot about worms, but it turns out that much of our contemporary understanding of neuroscience - or brain science - comes from a one-millimeter-long organism known as C. elegans.

Working with today's AI systems - from ChatGPT to Midjourney, Claude to Krea - doesn't require much scientific knowledge. These natural language prompted computers can be used for a wide variety of imprecise tasks - from generating the written word, to summarizing large text-based documents, to creating fantastical images and videos. But once you get bitten by the AI bug, it's nearly impossible not to be infected with a contagious curiosity about how these critters work.

Our natural instinct is to ask an expert, and so we seek out articles, podcasts, white papers, and books written by some of the great thinkers and builders of our time. Fei-Fei Li, Yann LeCun, Stephen Wolfram, and Lex Fridman are a few that I follow and find fascinating, both because of their understanding of computation (the process of simplifying the world into automations that can be performed by an electrical system - either a human or a machine) and because of their ability to translate complex scientific concepts and mathematical formulas into colloquial language.

This ability is not only critical for helping people like you and me understand how technology works. It's critical within the academic community, because some of the most interesting breakthroughs are coming along a Silk Road that cuts across disciplinary silos.

Which brings us closer to worms, but let's first talk about the fork in the road.



For the past 18 months, foundation models - those gargantuan, multi-parameter-trained compendiums of human knowledge and visual imagery - have been the talk of the town, and for good reason. Before these models, our understanding of statistics was Gaussian - large random sets of data would generally distribute themselves along a bell curve. Run an experiment enough times and you will find this distribution. Give enough students the same exam and their performance will fall along the same curve. It seemed like a handy rule, but all rules were made to be broken.
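If you want to see that bell curve for yourself, here is a toy simulation - my own illustrative sketch, not real exam data. Thousands of simulated students each answer a hundred coin-flip questions, and their totals pile up in the familiar Gaussian shape.

```python
import numpy as np

rng = np.random.default_rng(42)

# 10,000 "students" each answer 100 coin-flip questions (0 or 1 point each).
scores = rng.integers(0, 2, size=(10_000, 100)).sum(axis=1)

# Bucket the scores into a crude text histogram: the bars bulge in the
# middle and thin out at the edges -- the bell curve.
for lo in range(35, 65, 5):
    count = np.count_nonzero((scores >= lo) & (scores < lo + 5))
    print(f"{lo:2d}-{lo + 4:2d}: {'#' * (count // 200)}")
```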

What we've found with foundation models is that at a certain point - somewhere in the billions of parameters - the curve starts to change.

Big really is better. Immensely better.

It makes absolutely no sense that a "stochastic parrot," as ChatGPT has been called by skeptics, could be turned into an intelligent system for communicating with human beings, and yet, even despite its foibles, it's pretty darn good.

So the curious one asks, "What exactly is going on behind the proverbial curtain?" to which the sage wizards respond with an educated shrug.

And in the world of big tech, educated shrugs receive bigger budgets, bigger chips, bigger computers and bigger electricity bills with the glimmer of hope that if we can get these systems big enough, they just might make a quantum leap in general intelligence.



Now don't get me wrong, I am all for this moon shot.

Our world is riddled with wicked problems, some of our own making, and I'm less afraid of an intelligent computer with a plug than I am of the unintelligent humans without one. I'm of the philosophy that the survival of the human species is worth a few calculated risks (pun intended).

But these foundation models are not the only way, and this is where the worms come in.

The architecture of AI is built upon neural networks - synthetic models that attempt to emulate the human brain, with its 86 billion neurons and its synapses, the electrical connections between neurons. But the problem is, neuroscientists don't currently have a map of the human brain, much less the human brain-body system, so these neural networks are proxies for our understanding of how our brains might be working.

But neuroscientists know a lot about worms.

They know that C. elegans has exactly 302 neurons. Most importantly, those neurons have all been mapped. This comprehensive neural map, known as the "connectome," details all the synaptic connections between the neurons. The connectome of C. elegans was completed in the 1980s, making it the first organism to have its entire nervous system mapped at this level of detail.
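For the computationally curious: a connectome is essentially a directed graph - neurons are nodes, synapses are edges. Here's a toy sketch of that idea. The neuron names are real C. elegans cells, but the specific connections shown are illustrative, not a faithful excerpt of the published wiring diagram.

```python
# A toy connectome: each neuron maps to the neurons it synapses onto.
# Names are real C. elegans cells; the edges here are illustrative only.
connectome = {
    "ASH": ["AVA", "AVB"],   # sensory neuron -> command interneurons
    "AVA": ["VA1", "DA1"],   # command interneuron -> motor neurons
    "AVB": ["VB1", "DB1"],
    "VA1": [], "DA1": [], "VB1": [], "DB1": [],
}

def downstream(neuron, graph):
    """Breadth-first walk: every neuron reachable from a starting cell."""
    seen, frontier = set(), [neuron]
    while frontier:
        current = frontier.pop(0)
        for target in graph.get(current, []):
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen

print(downstream("ASH", connectome))  # everything the sensory signal can reach
```

With the full 302-neuron map in hand, questions like "what does this sensory neuron ultimately drive?" become simple graph traversals - which is exactly what makes the worm such an attractive starting point for modelers.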



But it wasn't until recently that two PhD students working in Vienna took this biological understanding of worms and used it to create a much smaller kind of neural network - and this is where the story gets really exciting.

Picture this: you are the head of MIT's CSAIL, leading a research unit that is working on the applied use of neural networks in things like autonomous vehicles, drones, and robots. You are trying to teach these moving objects how to navigate space like humans. So, naturally, you start tapping into the work being done on foundation models.

But this is like fitting an elephant paw into a finger-sized hole in the dike.

While foundation models are great at accumulating generalized knowledge, they only know what they've been trained on. So if a UFO lands in the middle of the road, they don't know to step on the brakes. They also depend on an electrical grid the size of Texas to drive from Manhattan to Queens. And these systems run on big server farms, not portable devices. Then you teach the car to drive in a neighborhood on a summer day, but a few months later, it doesn't recognize that same neighborhood in the winter. Even worse, it drives by paying attention to the bushes, not the road.

Foundation models are not (yet) world models.



So you are Dr. Daniela Rus, and you are thinking about how you can design a neural network that is small enough and efficient enough to operate cars and robots. This is on your mind when you are at a conference in Europe with Dr. Radu Grosu from the Technical University of Vienna, and the two of you decide to go for a run.

On this run, you talk about machine learning and the enormous size of machine learning models, about how the brain of the worm has only 302 neurons, and about how exciting it would be if you could figure out how that natural brain works and what it might be able to do in cars and robots. As it turns out, Dr. Grosu has two PhD students who have come up with some interesting equations derived from their model of worms, and a match is made.

Ramin Hasani and Mathias Lechner move to MIT to continue their research under the direction of Dr. Rus and give birth to Liquid Neural Networks (LNNs) - a system of, get this, 19 synthetic neurons regulated by a series of differential equations. Now, not only do we have a small computational brain that fits into cars and robots - these brains can learn.

Unlike foundation models that are trained once, LNNs are constantly learning and adapting to their context. In cars, they pay attention to the road, like humans do, and can learn to drive in locations where they have never been.
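To make the "differential equations" bit concrete, here is a minimal sketch of a liquid time-constant style neuron, loosely following the published LTC equations. The random weights, the toy sensor stream, and the simple Euler solver are my own illustrative assumptions - this is not Liquid AI's actual implementation. The key idea: the gate depends on the current input, so the network's effective time constants shift with context. That input-dependent "liquid" dynamic is what lets the cells keep adapting.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 19   # the worm-inspired network size mentioned above
N_INPUTS = 2     # hypothetical sensor channels (illustrative)

# Randomly initialized parameters stand in for trained ones.
W = rng.normal(scale=0.5, size=(N_NEURONS, N_NEURONS))  # recurrent weights
U = rng.normal(scale=0.5, size=(N_NEURONS, N_INPUTS))   # input weights
b = np.zeros(N_NEURONS)                                 # gate bias
tau = np.ones(N_NEURONS)                                # base time constants
A = rng.normal(size=N_NEURONS)                          # per-neuron bias state

def ltc_step(x, I, dt=0.01):
    """One explicit-Euler step of  dx/dt = -(1/tau + f) * x + f * A,
    where f = sigmoid(W x + U I + b) gates the dynamics.

    Because f depends on the input I, the effective time constant
    tau / (1 + tau * f) changes with context -- the 'liquid' part.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ x + U @ I + b)))
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Drive the 19-neuron system with a toy input stream.
x = np.zeros(N_NEURONS)
for t in range(1000):
    I = np.array([np.sin(0.01 * t), np.cos(0.02 * t)])  # stand-in sensors
    x = ltc_step(x, I)

print(np.round(x, 3))
```

Nineteen of these coupled equations, plus the wiring between them, is the whole "brain" - small enough to simulate on a laptop, and small enough that its behavior can be traced equation by equation.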

And they are explainable. Because the systems are finite, their decisions are causal - scientists can actually trace how decisions were made and understand why they work. No shrugging.

Rus, Hasani, and Lechner's Liquid Neural Networks have been around since 2021 - longer than the world has been woke to Generative AI. Most recently, Dr. Rus gave a TED Talk alongside other big AI voices. Hers was one of the first to be released to the public, and perhaps she's finally getting her moment to say: bigger isn't necessarily better.

But we often overlook the little things.

Who would have thought that a worm could drive?


This week's images are actual photographs taken by Andy Murray, author of the website A Chaos of Delight: Exploring Life in the Soil. Generative AI is not so good at representing the physical biology of real creatures, so this time I chose to go with the human touch. Thank you, Andy, for your incredible documentation of this fascinating organism.


I'm Lori Mazor. I teach AI with a Human Touch. I'm reinventing how we educate, strategize, and build the future one article at a time. If you enjoy this newsletter,



John V.

Experienced AI Red Team Specialist. Gen AI risk, safety, and security. Evals. Currently working on things I can't talk about :)

4 months ago

Love this

Eric Fraser

CTO of Dr. Lisa AI. Views expressed here are my own.

4 months ago

I still don't get how their model works with only 20,000 parameters. I am clearly missing something very fundamental in their math. Did they talk about that at all at their launch?

Sunitha S

Technology Lawyer / Blockchain /AI/QC/Data Privacy/LegalTech/Company Secretary

4 months ago

Curious to know more about this. Thanks

Cindy Bishop

Software Development and Innovation at MIT RAISE

5 months ago

wouldn't it be funny if it were the worms and the fungi all along ;)

Jenny Kay Pollock

Fractional CMO | Board Member | Driving B2C revenue & growth | Keynote Speaker | Empowering Women in AI

5 months ago

I read this the first time it was published and think about the worm brains being a model for AI systems like once a week. It's so cool. Thanks for the reshare and the update on the company behind it!
