Seeing Images in Single Cell Data (Pareidolia)
This post will describe a bit of an unusual application for generative AI. To be honest, I’m still not sure if it falls into the bin of something genuinely useful, or just a bit of whimsy with data, but once I had the idea, I really, really wanted to see it implemented.
I was sitting with two collaborators a few weeks back, analyzing single cell data. For those not familiar, a common analysis technique is to assign clusters to the cells in the data set, and then perform a dimensionality-reduction on the data set with UMAP or t-SNE to visualize. You wind up with a scatter plot of different cells, lumped together into blobs of various shapes by similarities in their patterns of gene expression.
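If you haven't run this kind of analysis before, a minimal sketch of the standard pipeline with scanpy looks something like the following; the parameters are just illustrative, and it assumes the counts are already loaded into an AnnData object called `adata`.

```python
# Minimal sketch of the usual clustering + UMAP pipeline with scanpy.
# Assumes `adata` is an AnnData object with raw counts already loaded.
import scanpy as sc

sc.pp.normalize_total(adata, target_sum=1e4)        # depth-normalize each cell
sc.pp.log1p(adata)                                   # log-transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=50)     # kNN graph in PCA space
sc.tl.leiden(adata, key_added="cluster")             # graph-based clustering
sc.tl.umap(adata)                                    # 2-D embedding for plotting
sc.pl.umap(adata, color="cluster")                   # the familiar blob plot
```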
Each blob has a cluster id, and eventually, maybe a name with a bit more biological meaning (“these are the activated fibroblasts!”), but the actual navigation with the collaborator went much more like this:
Collaborator: “Ok, now click on the cluster that looks like a snake. Hmmm…interesting. Ok, now the round blob to the northeast shaped like Australia.”
Me: “This one?”
Collaborator: “No, the other one, which is sort of pear-shaped.”
This experience is familiar to anyone who has described animals in clouds while cloud-gazing with friends, and it turns out to have a name: pareidolia, seeing familiar things in random patterns. Ink blot tests are another case.
I was reading about image generation models a bit after this, and got to thinking: why don’t we just make all this description explicit? Instead of cluster numbers, let’s just agree on what the blobs look like, generate a map of those, and then use it to navigate!
Two things made this scheme possible: first, the availability of stable diffusion models small enough to run on my creaky old personal computer’s GPU, and second, the fact that these models support in-painting. With in-painting, you can mask off areas of an image that you don’t want generated. The usual application is retouching existing images; I used it to generate images that approximate specific shapes.
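I won’t claim this is exactly what’s in the notebook, but with the Hugging Face diffusers library the shape-constrained generation can be sketched roughly like this. The checkpoint name, prompt, and mask file are placeholders, and note that in this API the white regions of the mask are the ones that get generated, while black regions are left alone.

```python
# Sketch of shape-constrained generation via in-painting (diffusers).
# White areas of the mask are regenerated; black areas are kept untouched,
# so painting a cluster's silhouette in white confines generation to that shape.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # placeholder checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

base = Image.new("RGB", (512, 512), "white")           # blank canvas to paint into
mask = Image.open("cluster_3_mask.png").convert("L")   # hypothetical silhouette mask

result = pipe(
    prompt="a curled up sleeping cat, simple illustration",  # whatever the blob suggests
    image=base,
    mask_image=mask,
).images[0]
result.save("cluster_3_cat.png")
```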
So, the workflow in the end is:
1. Cluster the cells and compute the UMAP embedding as usual.
2. Rasterize each cluster’s points into a binary mask image (sketched just below).
3. Run stable diffusion in-painting against each mask, prompting for whatever the blob’s shape suggests.
4. Composite the generated images back onto the UMAP layout to produce the final map.
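For the mask step, here is a minimal sketch of one way to do it, assuming the AnnData object from above with cluster labels in `adata.obs["cluster"]` and UMAP coordinates in `adata.obsm["X_umap"]`: draw a filled dot for each cell in the cluster, so that overlapping dots merge into a solid silhouette.

```python
# Sketch: rasterize one cluster's UMAP points into a binary mask for in-painting.
# Assumes `adata` has UMAP coordinates in .obsm["X_umap"] and labels in .obs["cluster"].
import numpy as np
from PIL import Image, ImageDraw

def cluster_mask(adata, cluster_id, size=512, dot_radius=6):
    xy = adata.obsm["X_umap"]
    # map all UMAP coordinates onto a size x size canvas
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    scaled = (xy - lo) / (hi - lo) * (size - 1)

    img = Image.new("L", (size, size), 0)          # black background = "keep"
    draw = ImageDraw.Draw(img)
    for x, y in scaled[adata.obs["cluster"] == cluster_id]:
        # one filled dot per cell; y is flipped so "north" stays up in the image
        draw.ellipse([x - dot_radius, size - 1 - y - dot_radius,
                      x + dot_radius, size - 1 - y + dot_radius], fill=255)
    return img                                     # white blob = "generate here"

mask = cluster_mask(adata, "3")
mask.save("cluster_3_mask.png")
```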
As someone who has not worked with direct graphics manipulation in a while, I found the last point a bit painful to program, and I spent quite a bit of time on it when I should have been rambling about GPT-4.
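For what it’s worth, the compositing itself can be done fairly compactly with Pillow: paste each generated image onto a shared canvas, using the cluster’s own mask as the alpha channel. A rough sketch, with hypothetical file names:

```python
# Sketch: composite each generated cluster image onto one shared canvas,
# using the cluster's own mask so only the blob-shaped region is pasted.
from PIL import Image

def compose_map(pairs, size=512, background="white"):
    """pairs: iterable of (generated_image_path, mask_path), all size x size."""
    canvas = Image.new("RGB", (size, size), background)
    for img_path, mask_path in pairs:
        img = Image.open(img_path).convert("RGB")
        mask = Image.open(mask_path).convert("L")
        canvas.paste(img, (0, 0), mask)   # the mask acts as the alpha channel
    return canvas

final = compose_map([
    ("cluster_3_cat.png", "cluster_3_mask.png"),      # hypothetical file names
    ("cluster_7_fishes.png", "cluster_7_mask.png"),
])
final.save("pareidolia_map.png")
```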
For the test, I snagged a medium-sized data set from GEO, on cells from a mouse paw, which was uploaded by Morgan G Anderson-Crannage at New York Medical College. This fit my criteria of "not something I'm directly working on for real projects" and "not an insane number of clusters to test things out."
I'd started by generating a typical UMAP to get the clusters:
After going through the process I'd described above, here’s an example of the final product.
And...here's another, less space-filling version. It's probably best to pick and choose the most visually appealing representation of each cluster for the final result.
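One easy way to get options to choose from is to run the same mask through the pipeline several times with different random seeds and keep the best one. A small sketch, reusing the `pipe`, `base`, and `mask` objects from the earlier snippets:

```python
# Sketch: generate several candidate images for one cluster mask by varying
# the random seed, then pick the most appealing one by eye.
import torch

for seed in range(4):
    gen = torch.Generator(device="cuda").manual_seed(seed)
    candidate = pipe(
        prompt="two fishes swimming, simple illustration",
        image=base,
        mask_image=mask,
        generator=gen,
    ).images[0]
    candidate.save(f"cluster_7_candidate_{seed}.png")
```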
Now, if you are navigating a dataset with your collaborator, rather than boring old "cluster 1," you can easily refer to the “cluster of two fishes” or the “curled up cat” and find your place in the data set!
The notebook is available on GitHub. It would be great to make it into a more robust tool; I’d be happy to collaborate a bit if you’d enjoy this as a project. There's no reason this couldn't be applied to other clustering results as well, outside of single cell.
Hopefully, someone will find this useful in their work and…if not, maybe I’ll just make a submission to the annual ISMB art exhibition. :-)