Inducing a Stroke in ChatGPT… Could AI help Unlock the Mysteries of the Human Brain?
Sam Glassenberg
Level Ex CEO | Advancing medicine through videogame technology and design
One of the fascinating things about language models like ChatGPT is how they encode information.
You can delve into this in a prior article, where we explore the internals of ChatGPT.
Two important takeaways from that post:
#1 and #2 result in a highly-optimized representation of human language. It wouldn't surprise me if our evolution as a species has resulted in a similar architecture.
My grandmother, of blessed memory, had a stroke in her early twenties (I understand it was caused by a blood transfusion of the wrong blood type). It left her paralyzed on her right side, and it took her years to relearn how to speak and write - skills she never recovered completely.
A Thought Exercise
Here's a thought exercise… one that I'd like to get around to in the coming weeks or months (unless a researcher wants to take the baton from me - I'll gladly hand it off).
What would happen if we took ChatGPT's fully trained neural network of 175 billion connection weights, and just zeroed out a whole bunch of those values? 1% of them? 10%? 50%?
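As a rough illustration of what zeroing out 1%, 10%, or 50% of the weights means in practice, here's a minimal NumPy sketch that lesions a toy weight matrix at each of those fractions. The matrix size and the `lesion_random` helper are stand-ins for illustration only; a real GPT-3-scale model has 175 billion weights spread across many layers.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def lesion_random(weights, fraction):
    """Zero out a random fraction of a weight matrix, simulating diffuse damage."""
    damaged = weights.copy()
    mask = rng.random(damaged.shape) < fraction
    damaged[mask] = 0.0
    return damaged

# Toy stand-in for one layer's weight matrix.
layer = rng.standard_normal((8, 8))
for frac in (0.01, 0.10, 0.50):
    lesioned = lesion_random(layer, frac)
    zeroed = np.count_nonzero(lesioned == 0.0)
    print(f"{frac:.0%} lesion -> {zeroed}/{layer.size} weights zeroed")
```

Interestingly, the machine-learning community already does something similar on purpose - "pruning" weights to shrink models - and networks often tolerate surprising amounts of it, which is itself a suggestive parallel to the brain's resilience.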
I'm curious whether ChatGPT would behave like a human who just had a stroke - struggling to find certain words, or confidently producing gibberish it believes to be correct.
What other symptoms might it exhibit?
A More Nuanced Approach
Now, just zeroing out a bunch of connections is a naïve approach. In ChatGPT, every neuron in a layer of the ANN is connected to every neuron in the next layer. The human brain doesn't work that way: a biological neuron connects to only a tiny fraction of the others. The human brain also has a lot more 'physical locality' to consider - neurons usually don't connect directly to neurons that are physically far away. ChatGPT achieves something loosely similar with its layers, but I suspect that locality isn't nearly enough.
You'd want to run a simple traversal algorithm that takes a starting point in the network and follows neural connections (using parameter weights as a proxy for connectivity and proximity), zeroing out connections along the way to simulate the 'stroke'. Once in a while, you'd take a random 'jump' to a 'nearby' neuron that isn't directly connected.
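A hedged sketch of what that traversal might look like, using a single toy weight matrix as the 'network'. Everything here is an assumption for illustration: the `lesion_by_traversal` name, the step count, the jump probability, and the ±2 'neighborhood' used for jumps are all made up; weight magnitude stands in for connectivity and proximity, as the paragraph above suggests.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def lesion_by_traversal(weights, n_steps, jump_prob=0.1):
    """Simulate a focal 'stroke': walk the connection graph from a random
    starting neuron, severing the connections we traverse. Stronger weights
    are more likely to be followed (magnitude as a proxy for proximity);
    occasionally jump to a random 'nearby' neuron instead."""
    damaged = weights.copy()
    n_in, n_out = damaged.shape
    src = rng.integers(n_in)  # starting point of the lesion
    for _ in range(n_steps):
        row = np.abs(damaged[src])
        if row.sum() == 0 or rng.random() < jump_prob:
            # Random jump to a neuron 'near' the current one (assumed +/-2 window).
            src = (src + rng.integers(-2, 3)) % n_in
            continue
        # Follow a connection with probability proportional to its strength...
        dst = rng.choice(n_out, p=row / row.sum())
        damaged[src, dst] = 0.0  # ...and sever it - this is the 'stroke' damage.
        src = dst % n_in         # continue the walk from the far end
    return damaged

# Lesion a toy 64x64 layer and count the severed connections.
layer = rng.standard_normal((64, 64))
lesioned = lesion_by_traversal(layer, n_steps=200)
print("connections severed:", np.count_nonzero(lesioned == 0.0))
```

Unlike the uniform-random version, damage here clusters around wherever the walk wanders - closer in spirit to a focal injury than to diffuse, brain-wide noise.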
What do you think would happen?
Might the results of this experiment yield clues to the nature of strokes in humans? The nature of how the human brain stores language?
Could similar experiments on more complex ANNs in the future reveal clues to the nature of other brain conditions - tumors, aneurysms, depression, OCD, etc.?