Worms are Driving AI
Photograph by Andy Murray, A Chaos of Delight

Too busy to read? Listen to this episode now on my Humans of AI channel.

We're not talking bugs or viruses or some other biology-inspired tech term.

We're talking worms - those slithery creatures that look the same going backwards and forwards.

Yes, worms are driving AI.

I first learned about worms when I was at NYU and an article in the alumni magazine caught my attention. Researchers discovered that nematodes, microscopic worms, moved faster when negotiating obstacles than in an open field. This finding informed my understanding of the relationship between structure and space in creativity.

As someone who doesn't spend any time in biology labs, I don't know a whole lot about worms, but it turns out that much of our contemporary understanding of neuroscience - or brain science - comes from a one-millimeter-long organism known as C. elegans.

Working with today's AI systems - from ChatGPT to Midjourney, Claude to Krea - doesn't require much scientific knowledge. These natural-language-prompted computers can be used for a wide variety of imprecise tasks - from generating the written word, to summarizing large text-based documents, to creating fantastical images and videos. But once you get bit by the AI bug, it's nearly impossible not to be infected with a contagious curiosity about how these critters work.

Our natural instinct is to ask an expert, and so we seek out articles, podcasts, white papers and books written by some of the great thinkers and builders of our time. Fei-Fei Li, Yann LeCun, Stephen Wolfram, and Lex Fridman are a few that I follow and find fascinating, both because of their understanding of computation (the process of simplifying the world into automations that can be performed by an electrical system - either a human or a machine) and because of their ability to translate complex scientific concepts and mathematical formulas into colloquial language.

This ability is not only critical for helping people like you and me understand how technology works. It's critical within the academic community, because some of the most interesting breakthroughs are coming along a Silk Road that cuts across disciplinary silos.

Which brings us closer to worms, but let's first talk about the fork.


Photograph by Andy Murray, A Chaos of Delight


For the past 18 months, foundation models - those gargantuan, multi-parameter-trained compendiums of human knowledge and visual imagery - have been the talk of the town, and for good reason. Before these models, our understanding of statistics was Gaussian: large random sets of data would generally distribute themselves along a bell curve. Run an experiment enough times, and you will find this distribution. Give enough students the same exam, and their performance will fall along this same curve. It seemed like a handy rule, but all rules were made to be broken.
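You can watch that bell curve appear for yourself. A quick sketch (my illustration, not from the article): sum up many small random contributions - here, ten dice per trial - and the totals pile up around the middle, Gaussian-style.

```python
import random
from collections import Counter

random.seed(0)  # reproducible illustration

# Each trial sums 10 dice: many independent contributions,
# so the totals cluster near the middle - the bell curve.
trials = [sum(random.randint(1, 6) for _ in range(10)) for _ in range(10_000)]
counts = Counter(trials)

mean = sum(trials) / len(trials)
print(round(mean, 1))  # expected near 35 (10 dice x 3.5 each)
```

Plot `counts` and you get the familiar hump: extreme totals like 10 or 60 are vanishingly rare, middling totals dominate.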

What we've found with foundation models is that at a certain point - around billions of parameters - the curve starts to change.

Big really is better. Immensely better.

It makes absolutely no sense that a "stochastic parrot," as ChatGPT has been called by skeptics, could be turned into an intelligent system for communicating with human beings, and yet even despite its foibles, it's pretty darn good.

So the curious one asks, "What exactly is going on behind the proverbial curtain?" to which the sage wizards respond with an educated shrug.

And in the world of big tech, educated shrugs receive bigger budgets, bigger chips, bigger computers and bigger electricity bills with the glimmer of hope that if we can get these systems big enough, they just might make a quantum leap in general intelligence.


Photograph by Andy Murray, A Chaos of Delight


Now don't get me wrong, I am all for this moon shot.

Our world is riddled with wicked problems, some of our own making, and I'm less afraid of an intelligent computer with a plug than I am of the unintelligent humans without one. I'm of the philosophy that survival of the human species is worth taking calculated risks (pun intended).

But these foundation models are not the only way, and this is where the worms come in.

The architecture of AI is built upon neural networks - synthetic models that attempt to emulate the human brain, with its 86 billion neurons and its synapses, the electrical connections between neurons. But the problem is, neuroscientists don't currently have a map of the human brain, much less the human brain-body system, so these neural networks are proxies for our understanding of how our brains might be working.
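To make "synthetic neuron" concrete: each one is just a weighted sum of its inputs squashed through a nonlinearity. A minimal sketch (the weights here are arbitrary numbers I made up for illustration):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs,
    squashed by a sigmoid 'activation' - a crude stand-in for
    a biological neuron deciding whether to fire."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: output between 0 and 1

# Two inputs, illustrative weights
out = neuron([0.5, -1.0], [0.8, 0.3], bias=0.1)
print(round(out, 2))  # ~0.55
```

A network is nothing more than layers of these wired together - billions of them in a foundation model, which is exactly why the question "what is it actually doing?" gets hard.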

But neuroscientists know a lot about worms.

They know that C. elegans has exactly 302 neurons. Most importantly, those neurons have all been mapped. This comprehensive neural map, known as the "connectome," details all the synaptic connections between the neurons. The connectome for C. elegans was completed in the 1980s, making it the first organism to have its entire nervous system mapped at this level of detail.
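Conceptually, a connectome is just a directed graph: which neuron synapses onto which. A toy sketch - the neuron names and wiring below are invented for illustration, not the real C. elegans map:

```python
# Toy 'connectome': each neuron maps to the neurons it synapses onto.
# Names and wiring are illustrative, not real C. elegans anatomy.
connectome = {
    "sensory_1": ["inter_1"],
    "sensory_2": ["inter_1", "inter_2"],
    "inter_1":   ["motor_1"],
    "inter_2":   ["motor_1", "motor_2"],
    "motor_1":   [],
    "motor_2":   [],
}

def downstream(start, graph):
    """All neurons reachable from `start` via synapses (breadth-first)."""
    seen, queue = set(), [start]
    while queue:
        node = queue.pop(0)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(downstream("sensory_2", connectome)))
```

With only 302 nodes, questions like "which motor neurons can this sensory neuron influence?" are fully answerable - precisely the completeness the human brain lacks.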


Photograph by Andy Murray, A Chaos of Delight


But it wasn't until recently that two PhD students working in Vienna took this biological understanding of worms and used it to create a much smaller neural network - and this is where the story gets really exciting.

Picture this: you are the head of MIT's CSAIL, where you are leading a research unit working on the applied use of neural networks in things like autonomous vehicles, drones, and robots. You are trying to teach these moving objects how to navigate space like humans. So, you naturally start tapping into the work being done on foundation models.

But this is like fitting an elephant paw into a finger-sized hole in the dike.

While foundation models are great at accumulating generalized knowledge, they only know what they've been trained on. So if a UFO lands in the middle of the road, they don't know to step on the brakes. They are also dependent upon an electrical grid the size of Texas to drive from Manhattan to Queens. And these systems run on big server farms, not portable devices. Then you teach the car to drive in a neighborhood on a summer day, but a few months later, it doesn't recognize that same neighborhood in the winter. Even worse, it drives by paying attention to the bushes, not the road.

Foundation models are not (yet) world models.


Photograph by Andy Murray, A Chaos of Delight


So you are Dr. Daniela Rus, and you are thinking about how you can design a neural network that is small enough and efficient enough to operate cars and robots. This is on your mind when you are at a conference in Europe with Dr. Radu Grosu from the Technical University of Vienna, and the two of you decide to go for a run.

On this run, you talk about machine learning and the enormous size of machine learning models, about how the brain of the worm has only 302 neurons, and about how exciting it would be if you could figure out how that natural brain works and what it might be able to do in cars and robots. As it turns out, Dr. Grosu has two PhD students who have come up with some interesting equations derived from their model of worms, and a match is made.

Ramin Hasani and Mathias Lechner move to MIT to continue their research under the direction of Dr. Rus and give birth to Liquid Neural Networks (LNNs): a system of, get this, 19 synthetic neurons regulated by a series of differential equations. Now, not only do we have a small computational brain that fits into cars and robots - these brains can learn.
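For the curious, the "differential equations" part can be sketched very roughly. In a liquid time-constant neuron, the state decays toward rest, but the input modulates how fast it moves - the time constant is "liquid." Here's a hand-wavy single-neuron sketch, heavily simplified from the published formulation, with all constants invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ltc_step(x, inp, dt=0.01, tau=1.0, w=2.0, b=0.0, A=1.0):
    """One Euler step of a simplified liquid-time-constant neuron:
        dx/dt = -x/tau + f(inp) * (A - x)
    The input-dependent gate f(inp) changes the effective time
    constant - the 'liquid' part. Constants are illustrative."""
    f = sigmoid(w * inp + b)
    dxdt = -x / tau + f * (A - x)
    return x + dt * dxdt

# Drive the neuron with a constant input and watch it settle.
x = 0.0
for _ in range(1000):
    x = ltc_step(x, inp=1.0)
print(round(x, 3))
```

Because the dynamics are a small, explicit system of equations rather than billions of opaque weights, you can actually reason about why the state ends up where it does - which previews the explainability point below.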

Unlike foundation models that are trained once, LNNs are constantly learning and adapting to their context. In cars, they pay attention to the road, like humans do, and can learn to drive in locations where they have never been.

And they are explainable. Because the systems are finite, their decisions can be traced causally - scientists can actually see how decisions were made and understand why they work. No shrugging.

Rus, Hasani, and Lechner's Liquid Neural Networks have been around since 2021 - longer than the world has been awake to Generative AI. Most recently, Dr. Rus gave a TED Talk alongside other big AI voices. Hers was one of the first to be released to the public, and perhaps she's finally getting her moment to say: bigger isn't necessarily better.

But we often overlook the little things.

Who would have thought that a worm could drive?


This week's images are actual photographs taken by Andy Murray, author of the website A Chaos of Delight: Exploring Life in the Soil. Generative AI is not so good at representing the physical biology of real creatures, so this time I chose to go with the human touch. Thank you, Andy, for your incredible documentation of this fascinating organism.



I'm Lori Mazor. I teach AI with a Human Touch. I'm reinventing how we educate, strategize, and build the future, one article at a time.


