2-Min AI Newsletter #15
Asif Razzaq
AI Research Editor | CEO @ Marktechpost | 1 Million Monthly Readers and 52k+ ML Subreddit
Latest AI/ML Research Highlights
Meta AI has created AITemplate (AIT), a unified open-source inference solution with distinct acceleration back ends for AMD and NVIDIA GPUs, to address the difficulty of keeping inference fast and portable across vendors. On a range of popular AI models, including convolutional neural networks, transformers, and diffusers, it delivers performance nearly identical to that of hardware-native Tensor Core (NVIDIA GPU) and Matrix Core (AMD GPU) implementations.
Imagen Video builds on Google’s Imagen, an image-generating system comparable to OpenAI’s DALL-E 2 and Stable Diffusion. Imagen is what’s known as a “diffusion” model, which generates new data (e.g., videos) by learning how to “destroy” and then “recover” many existing data samples. As it is fed these samples, the model gets better at recovering the data it previously destroyed, and that learned recovery process is what lets it create new works.
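The “destroy and recover” idea can be sketched numerically. In a standard diffusion setup, the forward process gradually mixes noise into a sample according to a variance schedule; a network is then trained to predict that noise so the original can be recovered. The sketch below shows only the closed-form forward step and an idealized recovery (assuming a perfect noise prediction); the schedule values are illustrative, not Imagen Video’s actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear variance schedule (real models tune these values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative signal-retention factor

def q_sample(x0, t, eps):
    """Forward ('destroy') step: noise x0 to timestep t in closed form."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def recover(xt, t, eps_pred):
    """Invert the forward step given a (here: perfect) noise prediction."""
    return (xt - np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alpha_bar[t])

x0 = rng.normal(size=1000)        # stand-in for image/video pixels
eps = rng.normal(size=1000)       # the noise used to destroy the sample

x_early = q_sample(x0, 10, eps)   # mostly signal
x_late = q_sample(x0, T - 1, eps) # almost pure noise: alpha_bar[-1] is tiny

x0_hat = recover(x_late, T - 1, eps)
print(np.allclose(x0_hat, x0))    # True
```

In a trained model, `eps` at recovery time is not known but predicted by a neural network, and sampling runs the recovery step-by-step from pure noise, which is how novel samples emerge.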
Researchers at Amazon have released a dataset for complex, multilingual question answering. Mintaka is a sizable, complex, naturally occurring, multilingual question-answer dataset, with 20,000 questions collected in English and professionally translated into eight languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish. Entities mentioned in the question and answer text are linked to Wikidata IDs.
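To make the entity-linking idea concrete, here is a toy record shaped the way such a dataset might be organized. The field names and structure are illustrative only, not Mintaka’s exact schema:

```python
import json

# Hypothetical record illustrating question text, a translation, and an
# answer grounded to a Wikidata entity ID (field names are made up).
record = json.loads("""
{
  "question": "Who directed Jurassic Park?",
  "translations": {"de": "Wer f\\u00fchrte Regie bei Jurassic Park?"},
  "answer": {"label": "Steven Spielberg", "wikidata_id": "Q8877"}
}
""")

# Linking answers to Wikidata IDs lets QA systems ground predictions in
# a knowledge base instead of matching raw strings.
print(record["answer"]["wikidata_id"])
```

Grounding to IDs rather than strings is what makes evaluation language-independent: the same Q-identifier is the correct answer regardless of which of the eight translations was asked.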
A team of scientists based in Sweden and the UK has developed a synthetic screening method that uses stopped-flow chemistry and machine learning to accelerate drug discovery through diversity-oriented synthesis.
Podcast.ai generated a fake audio conversation between Joe Rogan and Steve Jobs, using artificial voices and language-model transcripts trained on the two men’s old public speeches and keynotes.
A new study by IBM and MIT researchers investigates methods for training small models directly on devices through the co-design of algorithms and systems. Tiny on-device training faces two distinct hurdles: the model deployed on edge devices is quantized, and the limited memory and processing power of microcontrollers rule out full back-propagation.
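One common way around the full-back-propagation constraint is to freeze most of the network and update only a small set of parameters, so intermediate activations for the frozen layers never need to be stored. The sketch below illustrates that memory-saving idea with a frozen int8 “backbone” and a trainable linear head; it is a minimal illustration of the general technique, not the paper’s actual algorithm, and all sizes and scales are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "quantized" backbone: int8 weights plus a dequantization scale,
# standing in for a deployed feature extractor (illustrative values).
W_feat_q = rng.integers(-127, 128, size=(16, 8)).astype(np.int8)
SCALE = 0.01

def features(x):
    # Dequantize on the fly; this layer never receives gradient updates,
    # so none of its activations must be kept for backprop.
    return np.maximum(x @ (W_feat_q.astype(np.float32) * SCALE), 0.0)

W_head = np.zeros((8, 1), dtype=np.float32)  # the only trainable weights

def mse(x, y):
    return float(np.mean((features(x) @ W_head - y) ** 2))

def train_step(x, y, lr=0.01):
    """Least-squares gradient step on the head only."""
    global W_head
    h = features(x)
    grad = h.T @ (h @ W_head - y) / len(x)
    W_head = W_head - lr * grad

x = rng.normal(size=(32, 16)).astype(np.float32)
target_v = rng.normal(size=(8, 1)).astype(np.float32)
y = features(x) @ target_v  # synthetic labels for the demo

loss_before = mse(x, y)
for _ in range(200):
    train_step(x, y)
loss_after = mse(x, y)
```

Because only `W_head` is updated, the peak training memory is roughly the head’s gradients plus one batch of features, which is the kind of budget a microcontroller can afford.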