Your Daily AI Research tl;dr | 2022-06-06
What's AI by Louis-François Bouchard
Artificial Intelligence clearly explained to everyone
Welcome to your official daily AI research tl;dr (and news) intended for AI professionals and enthusiasts.
In this newsletter, I share the most exciting papers I find each day, along with a short summary to help you quickly decide whether a paper is worth investigating. I also take this opportunity to share interesting daily news from the field. I hope you enjoy the format of this newsletter, and I would gladly take any feedback you have in the comments to improve it.
Now, let's get started with this iteration!
1️⃣ Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules
For this one, I will let the authors speak through their great conclusion:
This paper introduces "novel continuous-time sequence processing neural networks that learn to use sequences of ODE-based continuous learning rules as elementary programming instructions to manipulate short-term memory in rapidly changing synaptic connections of another network."
"Our novel models outperform the best existing Neural Controlled Differential Equation based models on various time series classification tasks, while also addressing their scalability limitations."
Link to the paper: https://arxiv.org/pdf/2206.01649.pdf
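To make the idea a bit more concrete, here is a tiny, hypothetical sketch of what an ODE-style learning rule acting on fast weights (the "short-term memory" in rapidly changing synaptic connections) could look like: a connection matrix `W` is integrated forward in time under a Hebbian-like outer-product rule. The function names, the Euler integrator, and the specific rule are my own illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

def euler_fast_weight_update(W, key, value, lr, dt, steps):
    """Toy ODE-style learning rule: dW/dt = lr * outer(value, key).

    W is a fast-weight matrix acting as short-term memory; we integrate
    the rule with simple Euler steps of size dt.
    """
    for _ in range(steps):
        W = W + dt * lr * np.outer(value, key)
    return W

def read_memory(W, query):
    """Retrieve a stored value by multiplying the fast weights with a query."""
    return W @ query

# Store the association key -> value in an initially empty memory.
W = euler_fast_weight_update(
    np.zeros((2, 2)),
    key=np.array([1.0, 0.0]),
    value=np.array([0.0, 1.0]),
    lr=1.0, dt=0.1, steps=10,
)
```

In the paper, such update rules are themselves produced by another network, which is what "learning to program" refers to; the sketch above only shows the fast-weight mechanics.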
2️⃣ Compositional Visual Generation with Composable Diffusion Models
The researchers introduce a new approach to the current hottest topic, diffusion models (DALL·E 2, Imagen), by interpreting them as energy-based models. Their method composes multiple diffusion models at inference time to generate images containing all of the concepts described in the input text prompt. This addresses the fact that, while models such as DALL·E 2 are powerful, they often fail to understand the context of the text input and place objects in the image somewhat at random. Here, each diffusion model is responsible for modeling one component of the image. "Images are iteratively refined starting from a Gaussian noise, with a small amount of Gaussian noise added at each iterative step" following the given prompt.
Link to the paper: https://arxiv.org/pdf/2206.01714.pdf
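As a rough illustration of what "composing diffusion models at inference" can mean, one can combine per-concept noise predictions in a classifier-free-guidance style: start from the unconditional estimate and add a weighted correction for each concept. The sketch below uses NumPy arrays as stand-ins for model outputs; the function name and the exact weighting scheme are my own assumptions, not the paper's precise operators.

```python
import numpy as np

def composed_noise_estimate(eps_uncond, eps_conds, weights):
    """Combine per-concept noise estimates into one denoising direction.

    eps_uncond: unconditional noise prediction, shape (d,)
    eps_conds:  list of noise predictions, one per concept prompt
    weights:    guidance weight per concept

    Each concept contributes w * (eps_cond - eps_uncond), a
    conjunction-style composition of the individual models.
    """
    eps = eps_uncond.copy()
    for w, eps_c in zip(weights, eps_conds):
        eps = eps + w * (eps_c - eps_uncond)
    return eps
```

At each denoising step, the composed estimate would replace the single model's prediction, so every concept in the prompt steers the same image.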
Google no longer allows deepfake projects on its Colaboratory (Colab) online computing service.
Google Colab updated its terms to ban the training of AI models for deepfakes, including videos where a subject's face is swapped with another's. This feels like a strange decision by Google, as deepfake and face-swap models are quite popular and are not used only for disinformation or other malicious purposes. Is this a trick to save on compute while also detaching themselves completely from this societal issue, instead of tackling it together with governments?
What do you think of this?
And we are already at the end of this iteration! Please subscribe and share it with your techy friends if you enjoyed it. Once again, let me know how to improve this format; it is something I have wanted to do for quite some time without figuring out the best way. I hope you like the choices made here, and I would be glad to hear from you to make it even better over time.
Thank you for reading, a fellow AI enthusiast and researcher.