Musings of a late joiner: From Neuro(bio)logy to Artificial Neural Networks....

For just over six months, I have been engaged with a bunch of rather bright people - each a champion in a chosen focus area, and each covering things like Artificial Neural Networks, LLMs, and Generative AI with ease. And when they run on projects, I am sometimes looked to for clinical inputs in the context of what they do!

This has led me up a learning curve (my upward climb duly assisted by these brighter beings), allowing me time to ponder (as a late joiner) the monumental journey across one of these fantastic interfaces - one where Neuro(bio)logy meets Artificial Neural Networks....leading to the musings that follow.

#1. The kick-off point

In context, I believe that if one discounts the poet's (and philosopher's) version of Man's enduring fascination with the Brain, our march unto Neural Networks (and related themes) probably kicked off way back in 1943, when a complicated publication co-authored by two researchers (McCulloch & Pitts, 1943) began circulating among their peers and colleagues.

However, it is likely that very few of those readers could have foreseen the impact of that publication - a path-breaking, long-ranging piece that would set off a chain of events culminating in the contemporary tsunami of AI (buoyed by a slew of milestones around Neural Networks)!

Now, before we judge those early readers for their incredulity, let's humbly acknowledge that hindsight enjoys 20/20 vision....and the said publication did not call out "Artificial Intelligence" per se..... it just outlined a mathematical model of how a neuron (the basis of all things Neurology) functions!

Further, the publication was carried in a journal devoted to "Mathematical Biophysics" (the Bulletin of Mathematical Biophysics, to be precise) - not exactly the poster child of coffee-table reading!
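(A small aside for the code-curious: here is a minimal Python sketch - entirely my own illustration, with made-up names and values - of the kind of unit McCulloch & Pitts described: binary inputs, fixed weights, and a hard threshold.)

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: binary inputs, fixed weights,
    and a hard threshold. Fires (returns 1) only if the weighted
    sum of inputs reaches the threshold; otherwise stays silent (0)."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# The 1943 paper showed such units can compute logical functions;
# here, a two-input AND gate:
assert mcp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mcp_neuron([1, 0], [1, 1], threshold=2) == 0
```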

#2. The “simple yet powerful”

What's more noteworthy is that the said article was reflective of the frenetic movement of early research in Neuro(bio)logy - a movement that would eventually provide the foundational knowledge on which (today's) Artificial Neural Networks (and their manifestation, AI) would emerge years (decades?) later.

The frenetic movement's best-known milestone? Donald Hebb (1949) - and his equally famous fundamental principle (loosely worded as "Neurons that fire together wire together").

That simple yet powerful principle extrapolates beautifully from Physiology to artificial Neural Networks (and spans Physics and Electricity and Neuroplasticity and Learning & Memory and......well, you get the hint); it remains a critical interconnecting bridge that everyone engaging with AI probably learns about! And while McCulloch, Pitts, and Hebb were the (unwitting?) pioneers - the three musketeers of this wave of AI - it is arguable that Frank Rosenblatt (with his Perceptron, a forerunner of the Optical Character Reader in some ways) is the fourth musketeer, enabling the slow and steady onward movement.
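(For the code-curious again: Hebb's principle compresses into a one-line update rule - strengthen a connection in proportion to how often its two ends are active together. A minimal sketch, with names and numbers of my own choosing, purely for illustration:)

```python
def hebbian_update(w, x, y, lr=0.01):
    """'Fire together, wire together' as arithmetic: nudge the
    weight w upward in proportion to the coincidence of
    presynaptic activity x and postsynaptic activity y."""
    return w + lr * x * y

# Two units that are repeatedly co-active end up strongly coupled:
w = 0.0
for _ in range(100):
    w = hebbian_update(w, x=1.0, y=1.0)
print(w)  # ≈ 1.0 after 100 co-activations
```

(Real Hebbian variants add decay or normalisation so weights don't grow without bound - but the core intuition is just this product.)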

#3. The first refocus (and pivot)

While Rosenblatt gave us the Perceptron, its emergence also indicated how the early wave of research had matured (and refocused) some more, pivoting towards a specific goal: understanding how the Human Brain deciphers visual stimuli.
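(To see what made the Perceptron a step beyond the 1943 unit, here is a hedged sketch - my own toy, not Rosenblatt's actual hardware: the thresholded weighted sum is the same, but the weights are now learned from labelled examples.)

```python
import numpy as np

def train_perceptron(X, labels, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron learning: the same thresholded
    weighted sum as the McCulloch-Pitts unit, but with weights
    adjusted from labelled examples instead of fixed by hand."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, labels):
            pred = 1 if np.dot(w, x) + b >= 0 else 0
            error = target - pred
            w += lr * error * x   # nudge weights toward the target
            b += lr * error
    return w, b

# Learning the OR function from four labelled examples:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, b = train_perceptron(X, np.array([0, 1, 1, 1]))
```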

So intense was this refocusing that it led to a Nobel Prize in 1981 (Hubel and Wiesel), and brought forward the Neocognitron (Fukushima, 1982) - a neural-network-based scanner of visual images that leverages convolution (a mathematical operation) to work just a little bit more like the Human Brain does!
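(Convolution itself fits in a few lines: a small filter slides across the image so the same local feature detector is reused at every position, loosely echoing the localized receptive fields Hubel and Wiesel described. A minimal sketch - stride 1, no padding, example values mine:)

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image (stride 1, no padding),
    producing a feature map: the same local detector applied at
    every position, loosely like a visual receptive field."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny vertical-edge detector responding where dark meets bright:
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]])
kernel = np.array([[-1, 1],
                   [-1, 1]])
print(convolve2d(image, kernel))  # strong response only at the edge
```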

#4. The first profound extrapolation

However, the more interesting observation (and I say this as a CI professional) from such innovations is that even as coders were learning from and experimenting with these milestones (and related artefacts), the smarter ones kept looking to Neuro(bio)logy for hints and inspiration to extrapolate into more transformative inventions.

For example, by noting how (conceptually) a Neuron (the functional basis of neural networks) would benefit from a supporting and directive Astrocyte, innovators extended that inspiration from Neurobiology to transform neural networks and their modulation!

That inspiration - one in a long series of Neuromimetic innovations that would follow incrementally - opened a crucial passageway from visual interpretation (the visual cortex) to speech processing, arming us with better results in context!
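(I won't pretend to reproduce the published neuron-astrocyte architectures here, but a toy sketch conveys the shape of the idea - a slow, astrocyte-like variable tracks recent neuronal activity and feeds back as a gain on the synapses. Every name and number below is my own invention, for illustration only:)

```python
import numpy as np

def modulated_neuron(x, w, gain):
    """A neuron whose effective synaptic strength is scaled by a
    slow, astrocyte-like gain variable (toy illustration only)."""
    return np.tanh(gain * np.dot(w, x))

# The 'astrocyte' variable adapts far more slowly than the neuron
# responds, gently steering excitability toward recent activity:
rng = np.random.default_rng(0)
w, gain, tau = np.array([0.5, -0.3, 0.8]), 1.0, 0.05
for x in rng.normal(size=(50, 3)):
    y = modulated_neuron(x, w, gain)
    gain += tau * (abs(y) - gain)  # slow homeostatic tracking
```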

#5. The challenging canvas

Interestingly, even as Neuro(bio)logy - with its canvas of the Nervous System (and the Brain) - inspires and guides, it throws up challenges too! Importantly, most of these challenges seem to carry an inherent paradox within them! Let me try and outline a few of these:

1. Frugality:

The Brain (and most of the Nervous System) is incredibly frugal about Energy and Effort expenditure - the whole organ runs on something like 20 watts. When one reflects on how far each penny goes - in Neuro(bio)logy, the currency would probably be impulses and ATP (the cell's energy coin) - it boggles the Mind (Ahem!).

The paradox: ATP (to my Mind) is an incredibly intense entity - so much energy, so abundant (relatively speaking), and sometimes recycled with a felicity not seen outside Biology!

So, where do we find the network equivalent of ATP at its most glorious? If Economics were to have a say (and it always does), our problem would be how to make the whole process less expensive without losing sophistication - and this last bit makes it more than a mere scalability/ubiquity problem!

2. "All on None": Neurons (and muscles) tend to fire impulses (and contract) ONLY when confronted by their fullest Action Potential - anything but is meh!

The paradox: The process of aggregation (the summing up of sundry sub-threshold inputs into a full Action Potential) is an analogue theme! So, how does Digital (deemed to be a step ahead of analogue) replicate it - especially when our attempts have proven less refined (so far)? Exploration is underway on neuromorphic chips that can aggregate spikes to yield a comparable result - but how well that works remains to be seen (a toy sketch of the idea follows this list). Side note: This may not be a neuromimetic case but a biomimetic one, because the Action Potential is not confined to the Nervous System (cardiac and muscle cells fire them too).

3. Adaptive Evolution: If one looks closely, there is a clear component of aggregation, analysis, learning, and adaptive evolution that forms the bedrock of Neurology. However, it is not an "eat all you can" setting but highly contextualized and specific.....to state it simply, the impulse from touching a hot stove might cause you to respond by swiftly drawing away your affected hand and maybe even stepping away! However, even though all of that is spontaneously governed by your Sensorineural machinery (and becomes a part of all comparable encounters in future), the response won't be so overpowering as to make you run out of the kitchen altogether!

The paradox: Over time, the stimulus-response pair can be modulated (almost instinctively) - for example, when you choose to cautiously lift and lower a hot bowl of soup from your stove! How do we enable such an instinctive (intuitive?) grading of response....and more importantly, how do we accurately typify stimuli to pre-empt a graded (yet correct and adequate) response? Or, in network terms, how do we address Overfitting in Neural Networks?

Further, what happens when we try to integrate this demand with Immunology (concepts like, say, selective and adaptive antibody proliferation)? While a rudimentary form of that melding already exists (it is how keywords allow spam management), the free-ranging version needed for generative purposes is somewhat difficult to conceive (just yet)!

BTW, before you point to Neuralink, do spare a thought about whether it is a synergistic innovation (or a standalone neural network artefact)!
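(As promised in challenge #2 above, here is a toy spiking-neuron sketch - a classic leaky integrate-and-fire formulation, with parameters chosen arbitrarily for illustration. The membrane potential sums inputs in analogue fashion and leaks over time; the output is a digital, all-or-none spike only when the threshold is crossed:)

```python
def lif_neuron(input_currents, v_threshold=1.0, leak=0.1, dt=1.0):
    """Leaky integrate-and-fire: the membrane potential integrates
    incoming current (analogue summation) and leaks back toward
    rest; a spike is emitted only when the threshold is crossed
    (all-or-none), after which the potential resets."""
    v, spikes = 0.0, []
    for current in input_currents:
        v += dt * (current - leak * v)  # integrate input, minus leak
        if v >= v_threshold:
            spikes.append(1)  # all-or-none spike
            v = 0.0           # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold drip: several steps of analogue summation
# precede each digital spike.
print(lif_neuron([0.3] * 10))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```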

In closing - Food for thought….

Even as we keep looking to Neuro(bio)logy for inspiration and puzzles, our growing capability to look harder and deeper will probably make it progressively harder to stay mindful of how to process and apply all that we come to know - creating an interesting problem: Knowing vs. Learning!

What happens now.....will both acts run together? In parallel? In sequence?

Or will we have to come up with a Probe-Pause-Play protocol?
