Reconstructing X from brain activity: what does it mean?
https://linkinghub.elsevier.com/retrieve/pii/S0960-9822(11)00937-7


Q: So, what is this trending paper about?

A: Scientists recorded brain activity while presenting subjects with some images/clips/words/sounds/whatever, then fed the stimuli and the brain activity into an ML algorithm, and were able to use the resulting model to successfully reconstruct stimuli from the brain data of the participant observing them.
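
For the programmatically inclined, here is a minimal toy sketch of what such a pipeline looks like. This is not the method of any particular paper: the data, the sizes, and the linear model are all made up for illustration.

```python
# Toy sketch of the generic pipeline (not the method of any particular paper):
# simulate "brain responses" as a noisy linear mixture of stimulus features,
# fit a decoder on training trials, then reconstruct features for held-out trials.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features, n_voxels = 200, 20, 500            # hypothetical sizes

stimuli = rng.normal(size=(n_trials, n_features))         # stimulus feature vectors
encoding = rng.normal(size=(n_features, n_voxels))        # unknown "encoding" weights
brain = stimuli @ encoding + rng.normal(scale=2.0, size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    brain, stimuli, test_size=0.25, random_state=0)

decoder = Ridge(alpha=10.0).fit(X_train, y_train)          # brain activity -> stimulus features
reconstructed = decoder.predict(X_test)

# per-trial correlation between reconstructed and true feature vectors
corrs = [np.corrcoef(r, t)[0, 1] for r, t in zip(reconstructed, y_test)]
print(f"mean reconstruction correlation: {np.mean(corrs):.2f}")
```

Real studies replace the simulated responses with fMRI/EEG recordings and the random feature vectors with features extracted from images, sound, or text, but the train-a-decoder-then-predict structure is roughly the same.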

[Image: see https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633]


Q: What does "successfully reconstructed" mean?

A: It means that the similarity metric between the reconstructed and the original stimuli is higher than you would get by random chance.
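
A common way to formalize "better than chance" is a permutation test: compute the similarity for correctly matched (reconstruction, original) pairs and compare it against the similarity you get when the pairing is shuffled. A sketch under simple assumptions (the correlation-based similarity here is just one arbitrary choice of metric, and the toy data is simulated):

```python
# Sketch of a permutation test for "better than chance": similarity of correctly
# matched (reconstruction, original) pairs vs. similarity under a shuffled pairing.
import numpy as np

def mean_pairwise_similarity(recon, orig):
    """Mean Pearson correlation between matched rows of two (n_items, n_dims) arrays."""
    return np.mean([np.corrcoef(r, o)[0, 1] for r, o in zip(recon, orig)])

def permutation_test(recon, orig, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = mean_pairwise_similarity(recon, orig)
    null = np.array([
        mean_pairwise_similarity(recon, orig[rng.permutation(len(orig))])
        for _ in range(n_perm)
    ])
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p_value

# toy usage: "reconstructions" that are just noisy copies of the originals
rng = np.random.default_rng(1)
orig = rng.normal(size=(40, 100))
recon = orig + 1.5 * rng.normal(size=orig.shape)
print(permutation_test(recon, orig))
```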


Q: What is a similarity metric / statistical distance?

A: Basically, any metric quantifying differences that the authors decided to use. It must have certain mathematical properties - for example, distance(a, b) should equal distance(b, a), and distance(a, a) should be zero - but there are many different metrics for different cases. While they all quantify roughly the same thing, they do it from different points of view, and the choice of metric can heavily influence the results. For example, if we quantify the distance between the words "cow", "dog", and "wolf", some metrics will place cow and dog closer because they differ in fewer letters, and some will place dog and wolf closer because they are both canines. The same goes for any other domain.
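
To make the cow/dog/wolf example concrete, here is a toy comparison of two metrics: Levenshtein (edit) distance, which only looks at spelling, versus a distance over hand-made "semantic" feature vectors. The features are invented purely for illustration, not a real embedding.

```python
# Two different "distances" over the same words give opposite answers.
import numpy as np

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return int(dp[-1])

# toy "semantic" features: [is_canine, is_livestock, is_wild] (made up for the example)
semantic = {"cow": np.array([0, 1, 0]), "dog": np.array([1, 0, 0]), "wolf": np.array([1, 0, 1])}

for w in ("dog", "wolf"):
    print(f"cow vs {w}: edit={edit_distance('cow', w)}, "
          f"semantic={np.linalg.norm(semantic['cow'] - semantic[w]):.2f}")
print(f"dog vs wolf: edit={edit_distance('dog', 'wolf')}, "
      f"semantic={np.linalg.norm(semantic['dog'] - semantic['wolf']):.2f}")
```

The edit distance puts cow closer to dog, while the toy semantic distance puts dog closest to wolf: same words, opposite conclusions.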


Q: Does it mean that if the distance is low, then the stuff we reconstructed from the brain activity is identical to what we presented to the subject?

A: It may be better than chance, but it's never zero, and sometimes you need to use your imagination to see the similarity. Results tend to look more impressive when the underlying process is robust, for example in the visual cortex, which is retinotopic (the spatial distribution of activity is directly related to the image being observed). However, don't expect too much, especially if the decoding involves higher-order concepts, like generating a story from the brain data of a person watching a video.
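
Here is a toy illustration of why retinotopic signals are comparatively easy to invert. Everything below is simulated: each fake "voxel" simply responds to one pixel location plus noise, which is a cartoon of retinotopy, not real physiology.

```python
# Toy illustration: near one-to-one pixel-to-voxel mapping makes a simple
# least-squares read-out recover a blurry version of the image. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
side = 8
image = np.zeros((side, side))
image[2:6, 3:5] = 1.0                                    # a simple bar-shaped stimulus

# near-diagonal "receptive fields": voxel i mostly sees pixel i, plus some spillover
receptive_fields = np.eye(side * side) + 0.1 * rng.normal(size=(side * side, side * side))
voxels = receptive_fields @ image.ravel() + 0.2 * rng.normal(size=side * side)

# "reconstruction" = least-squares inversion of the (known, simulated) encoding
recon = np.linalg.lstsq(receptive_fields, voxels, rcond=None)[0].reshape(side, side)
print(np.round(recon, 1))
```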


Q: But the reconstructed pictures/movies in the paper look almost identical!

A: Firstly, they are not: look at the pictures closely. Secondly, they have been cherry-picked. Not necessarily in a bad sense - it's impossible to include every stimulus in the paper. However, you can be sure that many reconstructions in the dataset look less impressive.


Q: So, now you can show the subject any stimulus, even one that was not part of the experiment, and get this better-than-chance reconstruction?

A: Ideally, the reconstruction should be tested on out-of-sample data, but that's not always the case. Also, the dataset usually can't represent every possible class of stimuli: no participant is able to sit through an experiment long enough to look through even a medium-sized computer vision dataset. Life is short, and the MRI machine is billed by the hour. If you come up with a stimulus that is completely different from what the model saw at the learning stage, you may get a bad reconstruction, especially once you eliminate all data leaks. This is basically the same as with any other ML model, but with less data and a worse signal-to-noise ratio. However, it will work in simpler situations where the encoding of stimuli is reliable and straightforward, as in the primary visual cortex.
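
One classic data leak worth guarding against: repeated presentations of the same stimulus landing in both the training and the test set. A sketch of a stimulus-level split that avoids this (toy data, made-up variable names, not any study's actual pipeline):

```python
# Split by stimulus identity (groups) so no stimulus appears in both train and test.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(2)
n_stimuli, n_repeats, n_voxels, n_features = 50, 4, 300, 10

features = rng.normal(size=(n_stimuli, n_features))       # one feature vector per stimulus
stim_ids = np.repeat(np.arange(n_stimuli), n_repeats)     # which stimulus each trial shows
y = features[stim_ids]                                     # per-trial target features
X = (y @ rng.normal(size=(n_features, n_voxels))
     + rng.normal(scale=3.0, size=(len(stim_ids), n_voxels)))  # simulated noisy responses

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=stim_ids))  # no stimulus in both sets

decoder = Ridge(alpha=10.0).fit(X[train_idx], y[train_idx])
print(f"R^2 on trials of completely unseen stimuli: {decoder.score(X[test_idx], y[test_idx]):.2f}")
```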

[Image: see https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3]

Q: Can we use the same model with different people?

A: Rarely - it usually has to be built for every person separately (present stimuli, record data, build a model, then use it for that participant only). However, in a few years we may be able to build between-subject models too, provided studies include more than a few tens of participants.


Q: Will it work if you don't show stimulus to the participants, but rather ask them to think about it, or show it once and ask them to remember later? Can this mental image be reconstructed?

A: No, it cannot - unless you have a very small and imbalanced set of stimuli, or a data leak somewhere, and even then it probably won't work. There have been some famous papers on decoding dreams, but that's not quite the same - it looks more like classification than reconstruction.


Q: Jack Gallant already did it some 15 years ago: why still bother?

A: Firstly, the neural networks are much better now. Also, you can put the output of your model through any modern generative AI tool, and the pictures will look nicer than they did 15 years ago (even though this arguably doesn't add much to our understanding of the brain). Secondly, people are now trying to decode more high-level concepts, like semantics. We still don't fully understand how these things are encoded, so the results are more interesting.


Q: Does it make us closer to the BCI/Neuralink/direct exchange of information between brain and computer?

A: Somewhat - it's better than nothing, I guess. Still, don't expect too much: there is plenty of more directly relevant research, like decoding imagined/attempted speech or handwriting.


Q: Does all this mean that scientists can read minds now?

A: One more article in the mainstream press titled "Neuroscientists read minds with new AI tool", and I swear to God, I'll stop being so polite.


Q: So all this is completely useless, then?

A: Basic research is allowed to be useless, deal with it. That said, some insights can be drawn from this research: for example, which brain areas contribute more to the decoding? It doesn't necessarily mean much, but it does mean that something task-relevant is going on there, which is worth further investigation.
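
A sketch of how such an area-level question is often approached: decode the same target from each region separately and compare cross-validated performance. The data and region names below are simulated and made up; this is an illustration of the idea, not anyone's actual analysis.

```python
# Compare held-out decoding performance across (simulated) brain regions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_features = 120, 5
target = rng.normal(size=(n_trials, n_features))           # e.g. stimulus features per trial

regions = {
    "informative_roi": (target @ rng.normal(size=(n_features, 200))
                        + rng.normal(scale=1.0, size=(n_trials, 200))),
    "noisy_roi": (target @ rng.normal(size=(n_features, 150))
                  + rng.normal(scale=5.0, size=(n_trials, 150))),
    "control_roi": rng.normal(size=(n_trials, 100)),        # carries no task signal
}

for name, voxels in regions.items():
    scores = cross_val_score(RidgeCV(), voxels, target, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")
```

Regions whose signal actually relates to the target decode above chance, while the pure-noise region hovers around zero, which is the intuition behind "something task-relevant is going on there".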




Want to add something to the subject? Let's discuss it in the comments!

Uma Vaidyanathan

Expert in Digital Health and Precision Psychiatry | Presidential Leadership Scholar


Seriously great post! Fantastic job explaining what these kinds of papers are really about and that computers can’t read our brains quite yet…

Alex Rogozea

AI/ML Engineer @ AvatarOS


Insightful. For "Q: Will it work if you don't show stimulus to the participants, but rather ask them to think about it, or show it once and ask them to remember later? Can this mental image be reconstructed?" there's a caveat, for example a new paper on Neural Memory Decoding with EEG Data and Representation Learning: https://arxiv.org/pdf/2307.13181.pdf - more info is needed to accurately respond to that question.
