Top ML Papers of the Week
This issue highlights the top ML Papers of the Week (Feb 27 - Mar 5).
1). Language Is Not All You Need - introduces Kosmos-1, a multimodal large language model that achieves strong performance on language understanding, OCR-free NLP, perception-language tasks, visual QA, and more. (Paper, Tweet)
2). Comparing Brain Activations and Language Models - finds that human brain activity is best explained by the activations of modern language models enhanced with long-range and hierarchical predictions. (Paper, Tweet)
3). EvoPrompting - combines evolutionary prompt engineering with soft prompt-tuning to find high-performing models; it builds on few-shot prompting and uses an evolutionary search to iteratively improve the in-context examples (a toy sketch of this loop appears after the list). (Paper, Tweet)
4). Consistency Models - a new family of generative models that achieve high sample quality without adversarial training. (Paper, Tweet)
5). D5 - a new task for goal-driven discovery of corpus-level differences, described in natural language; applications include discovering insights from commercial reviews and error patterns in NLP systems. (Code, Paper, Tweet)
6). Reconstructing Images from Human Brain Activity with Diffusion Models - proposes an approach for high-resolution image reconstruction with latent diffusion models from human brain activity. (Project, Tweet)
7). Grounded Decoding - a scalable approach to planning with LLMs in embodied settings through grounding functions; GD is found to be a general, flexible, and expressive approach to embodied tasks (a toy illustration of the decoding idea follows the list). (Paper, Tweet)
8). Voltron - a framework for language-driven representation learning from human videos and captions for robotics. (Paper, Models, Evaluation, Tweet)
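
For readers curious about the mechanics behind item 3, here is a minimal, hypothetical sketch of an evolutionary search over in-context examples. It is not the EvoPrompting implementation: evaluate_prompt and llm_crossover_mutate are placeholder stand-ins for a real fitness metric and an LLM call.

```python
import random

def evaluate_prompt(examples):
    """Placeholder fitness: stands in for a real downstream metric
    (e.g., performance of models produced from this prompt)."""
    return random.random()

def llm_crossover_mutate(parent_a, parent_b):
    """Placeholder child generator: stands in for asking an LLM to combine
    and rewrite the parents' in-context examples."""
    pool = parent_a + parent_b
    return random.sample(pool, k=min(3, len(pool)))

def evolve(population, generations=5, n_survivors=4):
    """Simple evolutionary loop: keep the best prompts, breed the rest."""
    for _ in range(generations):
        ranked = sorted(population, key=evaluate_prompt, reverse=True)
        survivors = ranked[:n_survivors]
        children = [
            llm_crossover_mutate(random.choice(survivors), random.choice(survivors))
            for _ in range(len(population) - n_survivors)
        ]
        population = survivors + children
    return max(population, key=evaluate_prompt)

seed_population = [[f"example {i}-a", f"example {i}-b", f"example {i}-c"] for i in range(8)]
print(evolve(seed_population))
```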
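
And for item 7, here is a toy, hypothetical illustration of the grounded-decoding idea: at each step, candidate tokens are scored jointly by a language-model probability and a grounding function reflecting what is feasible in the environment. The vocabulary, probabilities, and feasibility scores below are made up for illustration and do not reproduce the paper's models or grounding functions.

```python
import math

# Toy action vocabulary with made-up scores, purely for illustration.
VOCAB = ["pick", "place", "open", "fly"]
LM_PRIOR = {"pick": 0.4, "place": 0.3, "open": 0.2, "fly": 0.1}      # stand-in LLM probabilities
FEASIBILITY = {"pick": 0.9, "place": 0.8, "open": 0.5, "fly": 0.01}  # stand-in grounding scores

def lm_logprob(token, prefix):
    """Stand-in for an LLM's next-token log-probability given the plan so far."""
    return math.log(LM_PRIOR[token])

def grounding_logprob(token):
    """Stand-in for a grounding function, e.g. an affordance model scoring
    whether the action is currently feasible in the scene."""
    return math.log(FEASIBILITY[token])

def grounded_step(prefix):
    """Pick the token that is both likely under the LM and grounded in the environment."""
    return max(VOCAB, key=lambda t: lm_logprob(t, prefix) + grounding_logprob(t))

print(grounded_step([]))  # -> "pick": likely under the LM and feasible in the scene
```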