NEURO AI Workshop, Tromsø, Norway, June 4, 2023


This article summarizes the keynotes and talks from prominent professors in Norway during the #NORA2023 Annual Conference! First, I would like to thank Høgskolen i Østfold (HiØ), Stefano Nichele, and Monica Kristiansen Holone for giving me the opportunity to attend the Neuro AI workshop at UiT Norges arktiske universitet and the conference of NORA – The Norwegian Artificial Intelligence Research Consortium. It was my first time being part of such a high-quality conference in Norway's beautiful Tromsø. Here I present my technical report on the first day, which was the workshop, covering as much as I could remember. You can always reach me here: https://twitter.com/s4nyam.

A planned and organized schedule of the workshop can be found here. We went directly to the Siva Innovation Centre after reaching Tromsø.

[Image: Siva innovasjonssenter Tromsø]
[Image: UiT Lecture Room]

Anders Dale and Klas Pettersen started the discussion. Anders is a world-renowned scientist in neuroimaging whose techniques and work are widely used in functional magnetic resonance imaging (fMRI) and the study of brain activity. Some of his profiles that I found online are [1], [2], and [3]. On the AI side, he started by talking about the neural connectivity of the brain and the connections that neurons form while solving a task, or even while trying to understand a problem. This led to a discussion of the early development of artificial neurons compared with biological neurons: Marvin Minsky and Seymour Papert's analysis of the perceptron can be extended through non-linearities inspired by biological systems, and intricate dendritic processes point the way toward extending the current perceptron paradigm into a more comprehensive understanding of the complexity observed in real neurons. Further discussion of the computational potential of individual neurons, with reference to the book [4], highlighted the pivotal role of synaptic connections in information processing within the brain, serving as building blocks for the complexity of artificial neurons. With such motivation, the drive to build an artificial neuron that captures the true nature and representation of biological neurons intensifies, for example with Spiking Neural Networks (SNNs). However, it is still an open research question how the plasticity and temporal dynamics of real neurons map onto bio-AI, RNNs, and neuromodulatory responses. The professors from UiB, UiO, and UiS discussed this and concluded that bio-AI and SNNs hold great promise in advancing our understanding of brain function and unlocking new frontiers in artificial general intelligence (AGI).
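The Minsky–Papert limitation mentioned above can be made concrete with a toy example of my own (not from the talk): a single perceptron cannot compute XOR, but adding one hidden layer of thresholded units, i.e. a non-linearity between two linear stages, solves it. The weights below are hand-picked for illustration, not learned.

```python
import numpy as np

def perceptron_output(x, w, b):
    """Single-layer perceptron: linear weighting plus a hard threshold."""
    return int(np.dot(w, x) + b > 0)

def mlp_xor(x):
    """Two thresholded layers compute XOR, which no single perceptron can
    (Minsky & Papert's classic result). The hidden units compute OR and
    AND; the output unit computes OR AND (NOT AND)."""
    h1 = perceptron_output(x, np.array([1.0, 1.0]), -0.5)   # x1 OR x2
    h2 = perceptron_output(x, np.array([1.0, 1.0]), -1.5)   # x1 AND x2
    return perceptron_output(np.array([h1, h2]), np.array([1.0, -1.0]), -0.5)
```

Running `mlp_xor` over the four input pairs reproduces the XOR truth table, something impossible for any single linear threshold unit.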
In summary, Neurons in the brain do not operate in isolation but instead function within complex neuromodulatory systems. These systems involve the release of various neurotransmitters and neuromodulators that dynamically modulate the behavior of neural circuits. Neuromodulators can have profound effects on the temporal characteristics of neural responses, altering the excitability, synaptic plasticity, and overall dynamics of neural networks.
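As a toy illustration of neuromodulated plasticity (my own sketch, not something shown at the workshop), a "three-factor" Hebbian rule scales the usual pre/post correlation update by a neuromodulatory signal, so identical neural activity can produce learning, no learning, or even reversed learning depending on the modulator:

```python
import numpy as np

def hebbian_update(w, pre, post, modulator, lr=0.1):
    """Three-factor Hebbian rule: the weight change is the usual
    pre * post correlation term, gated by a neuromodulatory signal.
    With modulator = 0 no learning occurs; its sign and magnitude
    scale (or reverse) the update."""
    return w + lr * modulator * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])    # presynaptic activity
post = np.array([0.5, 1.0])        # postsynaptic activity
w = np.zeros((2, 3))

w_gated = hebbian_update(w, pre, post, modulator=1.0)   # learning enabled
w_silent = hebbian_update(w, pre, post, modulator=0.0)  # learning gated off
```

The gating factor here stands in for a dopamine-like signal; real neuromodulatory systems are of course far richer, acting on excitability and network dynamics as well as on synaptic weights.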

Then other professors and students presented short presentations as follows:

Presentation 1: Gaute Einevoll AI vs. BI

[Image: Snapshots from the short presentation of AI vs. BI]

One video that I found, which you should also look into, is as follows (The Podcast):

0:00 - Intro

3:25 - Beautiful and messy models

6:34 - In Silico

9:47 - Goals of the human brain project

15:50 - Brain simulation approach

21:35 - Degeneracy in parameters

26:24 - Abstract principles from simulations

32:58 - Models as tools

35:34 - Predicting brain signals

41:45 - LFPs closer to average

53:57 - Plasticity in simulations

56:53 - How detailed should we model neurons?

59:09 - Lessons from predicting signals

1:06:07 - Scaling up

1:10:54 - Simulation as a tool

1:12:35 - Oscillations

1:16:24 - Manifolds and simulations

1:20:22 - Modeling cortex like Hodgkin and Huxley

The references that were mentioned in the slides are:

[5] Billeh, Y. N., Cai, B., Gratiy, S. L., Dai, K., Iyer, R., Gouwens, N. W., ... & Arkhipov, A. (2020). Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex. Neuron, 106(3), 388-403.

[6] Einevoll, G. T., Destexhe, A., Diesmann, M., Grün, S., Jirsa, V., de Kamps, M., ... & Schürmann, F. (2019). The scientific case for brain simulations. Neuron, 102(4), 735-744.

[7] Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011). Principles of computational modeling in neuroscience. Cambridge University Press.

Finally, I remember the prof mentioning the F-I curve, which was new to me, so I noted it down (though of course it was not the only topic). In general, the F-I curve shows that as the input current increases, the firing frequency of the neuron also increases. However, neurons have a threshold below which they do not produce any output spikes. Once the threshold is surpassed, the firing frequency increases rapidly and then saturates at higher current levels. The F-I curve illustrates this relationship by plotting the input current on the x-axis and the firing frequency (or rate) on the y-axis.
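The threshold-then-saturate shape of the F-I curve can be reproduced with a minimal leaky integrate-and-fire simulation (my own sketch; parameter values are illustrative, not from the talk). Below threshold the neuron stays silent, above it the rate rises quickly, and an absolute refractory period caps the rate at high currents:

```python
def lif_firing_rate(current, t_max=1.0, dt=1e-4, tau=0.02, r_m=1e7,
                    v_rest=-0.065, v_thresh=-0.050, v_reset=-0.065,
                    t_ref=0.002):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by a
    constant input current (amperes). The absolute refractory period
    t_ref makes the F-I curve saturate at high currents."""
    v, spikes, refractory = v_rest, 0, 0.0
    for _ in range(int(t_max / dt)):
        if refractory > 0:                 # neuron is silent after a spike
            refractory -= dt
            continue
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) + r_m * current) / tau
        if v >= v_thresh:                  # threshold crossing: spike
            spikes += 1
            v = v_reset
            refractory = t_ref
    return spikes / t_max

# Sweep input currents to trace the F-I curve:
rates = {i_in: lif_firing_rate(i_in) for i_in in (0.5e-9, 2e-9, 20e-9)}
```

Plotting `rates` against the input current gives exactly the described shape: zero below the rheobase current, then a steep rise that flattens toward 1/t_ref.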

Presentation 2: Solve Sæbø Introducing BONSAI – Biologically inspired self-Organizing Networks for Sustainable Artificial Intelligence (NMBU) - It's all spikes!

BONSAI here should not be confused with Microsoft's 7-year-old project

A quick reference to their YouTube channel - https://www.youtube.com/@solvesb8322/videos (I really loved it!)

A very nice presentation including Why, What, and How Bio-Inspired AI?

Why? Manual AI, i.e. deep learning and its components, are impressive breakthroughs, but they come with high cost, long training times, static (manually designed) architectures, weak causality, limited interpretability, poor confidence and uncertainty estimates, catastrophic forgetting in continual learning, and problems with model generalizability and sustainability.

What can we learn from the brain? The brain performs cheap computation, adapts quickly to its environment, learns dynamically and continually, can infer (epistemic) causality from observation and build "world models", and self-assesses uncertainty and confidence in its decisions. The brain still forgets things, but it does not do so catastrophically.

How?

[Image: A snapshot from the presentation]

I was expecting the prof to add meta-learning as a piece of the puzzle, but I could not ask this question during the workshop :P

The prof also advertised three open Ph.D. projects at NMBU - Norwegian University of Life Sciences

Presentation 3: Kai Olav Ellefsen More human-robot brains with inspiration from biology, psychology, and neuroscience (ROBIN)

A concise list of his affiliated videos on YouTube follows; these helped me a lot to understand and recap his presentation as well as his work:

Less forgetful ANNs - YouTube

Costs and Benefits of Learning - YouTube

Evolved Sensitive Periods in Learning - YouTube

Emma Stensby - Co-optimising Robot Morphology and Controller in a Simulated Open-ended Environment - YouTube

Coevolutionary Learning of Neuromodulated Controllers for Multi-Stage and Gamified Tasks: ACSOS 2020 - YouTube

[Image: From the very early slides of the prof]

I missed part of the presentation while welcoming Sachin Gaur from NORA.

However, I quickly noted the references from the presentation as follows:

[8] Ellefsen, K. O., Mouret, J. B., & Clune, J. (2015). Neural modularity helps organisms evolve to learn new skills without forgetting old skills. PLoS Computational Biology, 11(4), e1004128.

[9] PIRC - Predictive and Intuitive Robot Companion

[10] Inspiration from "Thinking, Fast and Slow": a research work of the prof and his students, RADAR: Reactive and Deliberative Adaptive Reasoning - Learning When to Think Fast and When to Think Slow

Prof's webpage

Finally, prof advertised a few Ph.D. and Postdoc positions at https://tinyurl.com/phdpos-pirc.

Presentation 4: Mikkel Elle Lepperød Normative Modelling and Network Reconstruction Techniques for Studying Compositional Generalization in Spatial Cells of the Hippocampal Formation

The references used in the presentation slides were:

[11] Levine, S. S., & Prietula, M. J. (2012). How knowledge transfer impacts performance: A multilevel model of benefits and liabilities. Organization Science, 23(6), 1748-1766.

[12] Li, Y. (2021). Concepts, Properties and an Approach for Compositional Generalization. arXiv preprint arXiv:2102.04225.

[13] On the age-old binding problem in neuroscience: Greff, K., Van Steenkiste, S., & Schmidhuber, J. (2020). On the binding problem in artificial neural networks. arXiv preprint arXiv:2012.05208.

[14] ICLR 2023 paper: Whittington, J. C., Dorrell, W., Ganguli, S., & Behrens, T. E. (2022). Disentangling with Biological Constraints: A Theory of Functional Cell Types. arXiv preprint arXiv:2210.01768.

[15] Nyberg, N., Duvelle, É., Barry, C., & Spiers, H. J. (2022). Spatial goal coding in the hippocampal formation. Neuron.

[16] Leutgeb, S., Leutgeb, J. K., Treves, A., Moser, M. B., & Moser, E. I. (2004). Distinct ensemble codes in hippocampal areas CA3 and CA1. Science, 305(5688), 1295-1298.

[17] Leutgeb, S., Leutgeb, J. K., Barnes, C. A., Moser, E. I., McNaughton, B. L., & Moser, M. B. (2005). Independent codes for spatial and episodic memory in hippocampal neuronal ensembles. Science, 309(5734), 619-623.

[18] Fyhn, M., Hafting, T., Treves, A., Moser, M. B., & Moser, E. I. (2007). Hippocampal remapping and grid realignment in entorhinal cortex. Nature, 446(7132), 190-194.

[19] Gardner, R. J., Hermansen, E., Pachitariu, M., Burak, Y., Baas, N. A., Dunn, B. A., ... & Moser, E. I. (2022). Toroidal topology of population activity in grid cells. Nature, 602(7895), 123-128.

[20] David Marr: Unveiling the Layers of Perception - Exploring the Levels of Description in Cognitive Science

[Image: from the Medium article hyperlinked at reference [20]]

There were some other miscellaneous references which I could not note down.

Presentation 5: Tor Stensola Representational spaces

I am sorry I could not note down any of the slides or presentation notes here. I found three relevant pieces of their research, listed below:

  1. Stensola, T., & Stensola, H. (2022). Understanding Categorical Learning in Neural Circuits Through the Primary Olfactory Cortex. Frontiers in Cellular Neuroscience, 16, 920334.
  2. Stensola, T., & Moser, E. I. (2016). Grid cells and spatial maps in entorhinal cortex and hippocampus. Micro-, meso- and macro-dynamics of the brain, 59-80.
  3. Moser, E. I., Roudi, Y., Witter, M. P., Kentros, C., Bonhoeffer, T., & Moser, M. B. (2014). Grid cells and cortical representation. Nature Reviews Neuroscience, 15(7), 466-481.

After finishing five presentations, Klas suggested a video to watch:

After the presentations, there was a video-conference keynote talk by Tony Zador: Catalyzing next-generation Artificial Intelligence through NeuroAI

[21] Zador, A., Escola, S., Richards, B., Ölveczky, B., Bengio, Y., Boahen, K., ... & Tsao, D. (2023). Catalyzing next-generation artificial intelligence through neuroai. Nature Communications, 14(1), 1597.

A very similar video to his presentation that was delivered during Neuro AI Workshop can be found here:


Logistics

Thank you for reading the article. This was about the Neuro AI workshop, part of the NORA AI Annual Conference in Tromsø. I am a full-time master's student at Høgskolen i Østfold (HiØ), working as a research assistant and summer intern with Stefano Nichele.

I tweet about my work in ALife, Evolution, and Complexity with Stefano Nichele (nichele.eu) on my Twitter handle here https://twitter.com/s4nyam.

[Image: A typical Twitter account of a researcher :P]

Stay tuned for NORA Main track conferences in my upcoming articles!

