Top ML Papers of the Week
Welcome to the Top ML Papers of the Week (August 5 - August 11).
1). SAM 2 - an open unified model for real-time, promptable object segmentation in images and videos; it can be applied to unseen visual content without custom adaptation; to enable accurate mask prediction in videos, a memory mechanism is introduced to store information on the object and previous interactions; the memory module also allows real-time processing of arbitrarily long videos; SAM 2 significantly outperforms previous approaches on interactive video segmentation across 17 zero-shot video datasets while requiring three times fewer human-in-the-loop interactions. (paper | tweet)
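To make the memory idea concrete, here is a minimal NumPy sketch of a streaming memory bank that conditions each frame's prediction on stored features of past frames and interactions. It is an illustrative toy, not the actual SAM 2 architecture; all names and shapes are assumptions.

```python
# Illustrative sketch of a streaming memory bank for video segmentation,
# loosely inspired by the paper's description; NOT the actual SAM 2 code.
from collections import deque

import numpy as np


class MemoryBank:
    """Stores features of recent frames/interactions under a fixed budget,
    so arbitrarily long videos can be processed in constant memory."""

    def __init__(self, capacity: int = 8):
        self.entries = deque(maxlen=capacity)  # oldest entries are evicted

    def write(self, frame_feat: np.ndarray, mask_feat: np.ndarray) -> None:
        self.entries.append((frame_feat, mask_feat))

    def read(self, query_feat: np.ndarray) -> np.ndarray:
        """Cross-attend the current frame's features over stored memories."""
        if not self.entries:
            return np.zeros_like(query_feat)
        keys = np.stack([f for f, _ in self.entries])    # (N, D)
        values = np.stack([m for _, m in self.entries])  # (N, D)
        scores = keys @ query_feat / np.sqrt(query_feat.size)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ values  # memory-conditioned feature


# Per-frame loop: condition each prediction on the memory, then update it.
bank = MemoryBank(capacity=8)
for frame_feat in np.random.randn(100, 64):  # stand-in for encoder output
    context = bank.read(frame_feat)
    mask_feat = frame_feat + context         # stand-in for the mask decoder
    bank.write(frame_feat, mask_feat)
```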
2). Structured Generation Limits Reasoning - investigates whether structured generation can impact an LLM's reasoning and domain-knowledge comprehension capabilities; observes a significant decline in an LLM's reasoning abilities when format restrictions are applied, compared to free-form responses; this degradation is further amplified when stricter format constraints are applied to reasoning tasks. (paper | tweet)
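The paper's comparison boils down to evaluating the same model with and without a format constraint. The sketch below illustrates that setup; `query_llm` is a hypothetical stand-in for any chat-completion call, and the answer check is deliberately crude.

```python
# A minimal sketch of the comparison described above: score the same model
# under a free-form prompt and under a strict JSON constraint.

FREE_FORM = "Answer the question. Think step by step, then state the final answer."
STRICT_JSON = (
    "Answer the question. Respond ONLY with JSON of the form "
    '{"answer": <number>} and nothing else.'
)

def accuracy(questions, answers, system_prompt, query_llm):
    correct = 0
    for q, gold in zip(questions, answers):
        reply = query_llm(system_prompt, q)
        correct += str(gold) in reply  # crude containment check for the gold answer
    return correct / len(questions)

# The paper's observation corresponds to:
#   accuracy(qs, golds, FREE_FORM, query_llm) > accuracy(qs, golds, STRICT_JSON, query_llm)
# on reasoning-heavy benchmarks.
```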
3). From LLMs to LLM-based Agents for Software Engineering - a survey paper on current practices and solutions for LLM-based agents in software engineering; covers important topics such as requirements engineering, code generation, test generation, and autonomous decision making; it also includes benchmarks, metrics, and models used in different software engineering applications. (paper | tweet)
4). Transformer Explainer - presents an open-source interactive tool for learning about the inner workings of a Transformer model; it runs a GPT-2 instance locally in the user's browser and allows experimenting with custom inputs. (paper | tweet)
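For reference, the core computation such a tool visualizes is scaled dot-product attention with a causal mask. Below is a minimal NumPy sketch of a single attention head; it is not the tool's actual implementation.

```python
# Scaled dot-product attention for one head: softmax(QK^T / sqrt(d)) V,
# with a causal mask so each position attends only to earlier positions.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                                        # (T, T)
    scores = np.tril(scores) + np.triu(np.full_like(scores, -1e9), k=1)  # causal mask
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)                       # row-wise softmax
    return weights @ V

T, d = 5, 16  # sequence length, head dimension
Q, K, V = (np.random.randn(T, d) for _ in range(3))
print(attention(Q, K, V).shape)  # (5, 16)
```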
5). Enhancing LLMs for RAG - introduces RAG Foundry, an open-source framework for augmenting LLMs for RAG use cases; it supports data creation, training, inference, and evaluation; one useful application is creating data-augmented datasets for tuning and evaluating LLMs in RAG settings. (paper | tweet)
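As a rough picture of what a data-augmented RAG example looks like, here is a minimal sketch that folds retrieved passages into a training prompt; the field names and prompt layout are illustrative assumptions, not the framework's actual schema.

```python
# A minimal sketch of building a data-augmented RAG training example,
# in the spirit of the framework described above; field names are illustrative.
import json

def make_rag_example(question, retrieved_docs, gold_answer):
    """Fold retrieved context into the prompt so the tuned model learns
    to ground its answer in the provided passages."""
    context = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(retrieved_docs))
    return {
        "prompt": f"Context:\n{context}\n\nQuestion: {question}\nAnswer:",
        "completion": gold_answer,
    }

example = make_rag_example(
    "What does the memory module in SAM 2 enable?",
    ["SAM 2 introduces a memory mechanism ...", "The module allows real-time ..."],
    "Real-time processing of arbitrarily long videos.",
)
print(json.dumps(example, indent=2))
```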
Sponsor message
DAIR.AI presents a live cohort-based course, Prompt Engineering for LLMs, where you can learn about advanced prompting techniques, RAG, tool use in LLMs, agents, and other approaches that improve the capabilities, performance, and reliability of LLMs. Use promo code MAVENAI20 for a 20% discount.
6). Synthesizing Text-to-SQL Data from Weak and Strong LLMs - proposes integrating synthetic data to build SENSE, a highly specialized SoTA text-to-SQL model; synthetic data from strong models enhances data diversity, while valuable erroneous data from weaker models is combined with an executor so the model can learn from execution feedback; preference learning is used to instruction-tune LLMs on both correct and incorrect samples; SENSE achieves state-of-the-art results on the SPIDER and BIRD benchmarks, bridging the performance gap between open-source models and methods that use closed-source models. (paper | tweet)
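The executor-driven step can be pictured as turning execution feedback into preference pairs. Below is a minimal sketch in that spirit; `run_sql`, the pair format, and the filtering rule are assumptions, not the paper's actual pipeline.

```python
# A minimal sketch of building (chosen, rejected) pairs from execution
# feedback, loosely in the spirit of SENSE; illustrative only.
import sqlite3

def run_sql(db_path, sql):
    """Execute a candidate query; return (ok, rows-or-error-message)."""
    try:
        with sqlite3.connect(db_path) as conn:
            return True, conn.execute(sql).fetchall()
    except sqlite3.Error as e:
        return False, str(e)

def build_preference_pair(db_path, question, strong_sql, weak_sql, gold_rows):
    """Keep a pair only when the executor separates the two candidates."""
    ok_s, rows_s = run_sql(db_path, strong_sql)
    ok_w, rows_w = run_sql(db_path, weak_sql)
    if ok_s and rows_s == gold_rows and (not ok_w or rows_w != gold_rows):
        return {"prompt": question, "chosen": strong_sql, "rejected": weak_sql}
    return None  # executor could not distinguish the two candidates
```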
7). Conversational Prompt Engineering - proposes an approach that helps users create personalized prompts by articulating their preferred outputs through interaction; it involves two stages: 1) the model shapes an initial instruction based on user-provided unlabeled data, and 2) the model shares outputs and the user provides feedback to refine the outputs and the instruction; this iterative process yields a personalized few-shot prompt that performs better on the desired task. (paper | tweet)
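Here is a minimal sketch of that two-stage loop; `model` with its `draft_instruction`/`generate_output`/`revise` methods and `ask_user` are hypothetical stand-ins, not the paper's actual interface.

```python
# A minimal sketch of the two-stage, feedback-driven loop described above.

def conversational_prompt_engineering(unlabeled_data, ask_user, model, max_rounds=5):
    # Stage 1: the model drafts an initial instruction from unlabeled user data.
    instruction = model.draft_instruction(unlabeled_data)
    shots = []
    for example in unlabeled_data[:max_rounds]:
        # Stage 2: share the output, collect user feedback, and refine both
        # the output and the instruction until the user is satisfied.
        output = model.generate_output(instruction, example)
        feedback = ask_user(example, output)  # e.g. "shorter", "use bullet points"
        while feedback:
            instruction, output = model.revise(instruction, example, output, feedback)
            feedback = ask_user(example, output)
        shots.append((example, output))  # approved pairs become few-shot examples
    return instruction, shots  # together: a personalized few-shot prompt
```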
8). Self-Taught Evaluators - an approach for improving model-based evaluators using synthetic training data only; it first generates contrasting outputs (good and bad model responses) and trains an LLM-as-a-Judge to produce reasoning traces and final judgments; the self-improvement scheme iteratively repeats training on the judge's improved predictions; the authors claim it outperforms LLM judges such as GPT-4 and matches top-performing reward models trained on labeled examples; it improves a strong LLM (Llama-3-70B-Instruct) from 75.4 to 88.3 (88.7 with majority vote) on RewardBench. (paper | tweet)
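The iterative scheme can be summarized in a few lines; in this sketch, `llm`, `judge`, and `finetune` are hypothetical stand-ins, not the paper's released code.

```python
# A minimal sketch of the self-improvement loop described above.

def self_taught_evaluator(judge, llm, prompts, finetune, iterations=3):
    for _ in range(iterations):
        training_data = []
        for prompt in prompts:
            good = llm.respond(prompt)              # preferred response
            bad = llm.respond_to_corrupted(prompt)  # contrasting (worse) response
            trace = judge.judge(prompt, good, bad)  # reasoning trace + verdict
            if trace.verdict == "first_wins":       # keep only correct judgments
                training_data.append((prompt, good, bad, trace))
        # Retrain the judge on its own filtered predictions, then repeat.
        judge = finetune(judge, training_data)
    return judge
```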
9). RAGEval - proposes a simple framework for automatically generating evaluation datasets that assess the knowledge usage of different LLMs across scenarios; it defines a schema from seed documents and then generates diverse documents, from which question-answer pairs are derived; the QA pairs are grounded in both the articles and the configurations. (paper | tweet)
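The schema-to-documents-to-QA flow can be sketched as a short pipeline; the function names below are hypothetical stand-ins for model calls, not RAGEval's actual API.

```python
# A minimal sketch of the schema -> documents -> QA pipeline described above.

def rageval_pipeline(seed_documents, model, n_docs=10):
    # 1) Distill the seed documents into a schema of entities/relations.
    schema = model.extract_schema(seed_documents)
    dataset = []
    for _ in range(n_docs):
        # 2) Sample a configuration (names, values) consistent with the schema.
        config = model.sample_configuration(schema)
        # 3) Generate a diverse document instantiating that configuration.
        doc = model.generate_document(schema, config)
        # 4) Derive QA pairs grounded in both the article and the configuration.
        dataset.extend(model.generate_qa_pairs(doc, config))
    return dataset
```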
10). Survey of Mamba - provides a systematic review of existing Mamba-based models across domains and tasks; specifically, it focuses on advancements in Mamba-based models, techniques for adapting Mamba to diverse data, applications where Mamba excels, and promising research directions. (paper | tweet)
Reach out to [email protected] if you would like to partner with us on promotion. Our newsletters are read by over 75K AI researchers, engineers, and developers.