Top ML Papers of the Week
Welcome to the Top ML Papers of the Week (September 16 - September 22).
1). Moshi - introduces a speech-text foundation model and full-duplex spoken dialogue framework; the system comprises several components: Helium, a 7B-parameter text LLM; Mimi, a semantic-acoustic neural audio codec with state-of-the-art audio quality; and a hierarchical multi-stream architecture that can generate arbitrary conversations in a speech-to-speech manner. (paper | tweet )
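Moshi's exact architecture is in the paper; as a rough, illustrative sketch only (all names and dimensions below are assumptions, not Moshi's code), the multi-stream idea is that each decoding step emits a text token plus one token per audio codec stream:

```python
# Illustrative multi-stream prediction head: one text head plus one head per
# audio codec stream, all reading the same hidden state at each time step.
import torch
import torch.nn as nn

class MultiStreamHead(nn.Module):
    def __init__(self, d_model=1024, text_vocab=32000, audio_vocab=2048, n_audio_streams=8):
        super().__init__()
        self.text_head = nn.Linear(d_model, text_vocab)  # "inner monologue" text tokens
        self.audio_heads = nn.ModuleList(
            [nn.Linear(d_model, audio_vocab) for _ in range(n_audio_streams)]
        )

    def forward(self, hidden):  # hidden: (batch, d_model) from the temporal transformer
        text_logits = self.text_head(hidden)
        audio_logits = [head(hidden) for head in self.audio_heads]
        return text_logits, audio_logits

head = MultiStreamHead()
h = torch.randn(2, 1024)
text_logits, audio_logits = head(h)
print(text_logits.shape, len(audio_logits), audio_logits[0].shape)
```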
2). Training LLMs to Self-Correct via RL - develops a multi-turn online reinforcement learning approach to improve an LLM's ability to self-correct; it is trained entirely on self-generated data; SFT is shown to be ineffective at learning self-correction and suffers from a distribution mismatch between training data and model responses; proposes a two-stage approach that first optimizes correction behavior and then uses a reward bonus to amplify self-correction during training; when applied to Gemini 1.0 Pro and 1.5 Flash models, it achieves state-of-the-art self-correction performance, improving the base models' self-correction by 15.6% and 9.1% on the MATH and HumanEval benchmarks, respectively. (paper | tweet )
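To make the reward-bonus idea concrete, here is a minimal sketch of that kind of shaping (not the paper's exact objective): the second attempt's correctness is the base reward, and a bonus term pays for improvement over the first attempt, so the policy is pushed toward genuinely fixing its mistakes rather than just repeating a correct first answer.

```python
# Minimal sketch of a self-correction reward with a progress bonus.
def self_correction_reward(first_correct: bool, second_correct: bool, bonus_weight: float = 0.5) -> float:
    r1, r2 = float(first_correct), float(second_correct)
    # Base reward: correctness of the final (corrected) answer.
    # Bonus: progress between attempts (positive if a wrong answer was fixed,
    # negative if a correct answer was broken).
    return r2 + bonus_weight * (r2 - r1)

print(self_correction_reward(False, True))   # 1.5  -> fixed a wrong answer
print(self_correction_reward(True, False))   # -0.5 -> broke a correct answer
print(self_correction_reward(True, True))    # 1.0  -> stayed correct
```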
3). Qwen2.5 Coder - a series of code LLMs with 1.5B and 7B parameters; built upon the Qwen2.5 architecture and continually pretrained on 5.5 trillion tokens; achieves state-of-the-art performance across more than 10 benchmarks, with strong capabilities in code generation, completion, reasoning, and repair. (paper | tweet )
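For readers who want to try it, a usage sketch with Hugging Face transformers follows; the hub id below is an assumption about where the released checkpoint is published, so adjust it to the actual name.

```python
# Usage sketch for an instruction-tuned code model via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed hub id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```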
4). Diagram of Thought (DoT) - enhances the reasoning capabilities of LLMs with mathematical rigor; DoT models iterative reasoning in an LLM as the construction of a directed acyclic graph (DAG); it integrates propositions, critiques, refinements, and verification into a unified DAG structure; this allows DoT to capture complex logical deduction beyond linear or tree-based approaches. (paper | tweet )
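A toy illustration (not the paper's code) of such a reasoning DAG, with nodes tagged as propositions, critiques, refinements, or verifications, and parent edges recording what each node builds on:

```python
# Toy reasoning DAG: each node responds to the nodes listed in `parents`.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    role: str          # "proposition" | "critique" | "refinement" | "verification"
    text: str
    parents: list = field(default_factory=list)

class ReasoningDAG:
    def __init__(self):
        self.nodes = {}

    def add(self, role, text, parents=()):
        node = Node(len(self.nodes), role, text, list(parents))
        self.nodes[node.id] = node
        return node.id

dag = ReasoningDAG()
p0 = dag.add("proposition", "Claim: the sum of two odd numbers is odd.")
c0 = dag.add("critique", "Counterexample: 3 + 5 = 8, which is even.", parents=[p0])
r0 = dag.add("refinement", "Revised claim: the sum of two odd numbers is even.", parents=[p0, c0])
v0 = dag.add("verification", "(2a+1) + (2b+1) = 2(a+b+1), hence even.", parents=[r0])
```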
5). Agents in Software Engineering - provides a comprehensive overview of frameworks for LLM-based agents in software engineering. (paper | tweet )
Sponsor message
DAIR.AI is excited to introduce a new catalog of self-paced courses in prompt engineering and LLMs. Join the academy to learn how to build effectively with AI.
Use code PROMPTING20 to get an extra 20% discount. Only valid to the first 500 enrollments.
6). To CoT or not to CoT? - investigates which kinds of tasks benefit the most from chain-of-thought (CoT) prompting; after a meta-analysis of 100+ papers and several evaluations, it finds that CoT produces strong performance benefits primarily on tasks involving math and logic; they find that most of the CoT gain comes from improving symbolic execution, yet CoT still underperforms a dedicated symbolic solver. (paper | tweet )
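One way to read that finding in code: let the model do CoT-style planning, but hand the symbolic execution to a solver. The snippet below is a schematic sketch with a stubbed llm() standing in for any chat-completion call (the stub and its canned output are assumptions for illustration).

```python
# Plan with the model, execute with a symbolic solver (SymPy).
import sympy as sp

def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call; returns a canned expression here."""
    return "2**10 + 3*7"

def solve_with_cot_only(question: str) -> str:
    # Pure chain-of-thought: the model both plans and carries out the arithmetic.
    return llm(f"{question}\nLet's think step by step, then state the final answer.")

def solve_with_plan_and_solver(question: str):
    # CoT-style planning by the model, symbolic execution delegated to SymPy.
    expr_text = llm(f"{question}\nReturn only a SymPy-parsable expression to evaluate.")
    return sp.sympify(expr_text)

print(solve_with_plan_and_solver("What is 2^10 plus 3 times 7?"))  # 1045
```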
7). A Comprehensive Evaluation of Quantized Instruction-Tuned LLMs - evaluates the performance of instruction-tuned LLMs across various quantization methods on models ranging from 7B to 405B; the key findings are 1) quantizing a larger LLM to a similar size as a smaller FP16 LLM generally performs better across most benchmarks, 2) performance varies significantly with different quantization methods, model size, and bit-width, with weight-only methods often yielding better results in larger models, and 3) task difficulty does not significantly impact accuracy degradation due to quantization. (paper | tweet )
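As a concrete picture of the weight-only family highlighted in finding (2), here is a minimal, illustrative symmetric 4-bit per-channel quantizer for a weight matrix, with activations kept in floating point; real methods such as GPTQ or AWQ are considerably more sophisticated.

```python
# Weight-only quantization sketch: int4-range weights + per-channel FP scales.
import torch

def quantize_weight_only(w, bits=4):
    qmax = 2 ** (bits - 1) - 1                        # e.g. 7 for 4-bit symmetric
    scale = w.abs().amax(dim=1, keepdim=True) / qmax  # per-output-channel scale
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q.to(torch.int8), scale                    # store integer weights + scales

def dequantize(q, scale):
    return q.float() * scale                          # reconstructed at matmul time

w = torch.randn(8, 16)
q, s = quantize_weight_only(w)
print((w - dequantize(q, s)).abs().max())             # per-element quantization error
```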
8). Iteration of Thought - proposes the Iteration of Thought (IoT) framework to enhance LLM responses and reasoning capabilities with adaptive reasoning paths; it leverages an inner dialogue agent, acting as a guide, to dynamically adjust reasoning paths, which allows adaptive cross-path exploration and enhances response accuracy; it differs from CoT and ToT (both rigid processes) in that its prompt generation is a dynamic process that adapts as reasoning unfolds. (paper | tweet )
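A schematic sketch of that loop, with stub functions standing in for the LLM and the inner dialogue agent (both stubs are placeholders for illustration, not the paper's implementation):

```python
# Inner-dialogue-guided refinement loop.
def llm(prompt: str) -> str:
    """Stand-in for a chat-completion call."""
    return "draft answer"

def inner_dialogue_agent(question: str, answer: str, step: int) -> str:
    """Stand-in for the guiding agent; stops after two refinements here."""
    return "DONE" if step >= 2 else f"Re-examine step {step} of your reasoning."

def iteration_of_thought(question: str, max_iters: int = 5) -> str:
    answer = llm(question)
    for step in range(1, max_iters + 1):
        guidance = inner_dialogue_agent(question, answer, step)
        if guidance == "DONE":                       # agent decides the answer is good enough
            break
        answer = llm(f"{question}\nPrevious answer: {answer}\nGuidance: {guidance}\nRevise.")
    return answer

print(iteration_of_thought("Prove that the product of two even numbers is even."))
```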
9). Schrödinger's Memory - uses the Universal Approximation Theorem (UAT) to explain the memory mechanism of LLMs; it also proposes a new approach to evaluate LLM performance by comparing the memory capacities of different models; the Transformer architecture functions as a dynamic UAT-style fitting model with a strong ability to adaptively fit inputs, which enables LLMs to recall entire content from minimal input information. (paper | tweet )
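For reference, the classical single-hidden-layer form of the Universal Approximation Theorem that the argument leans on can be written as:

```latex
% For a continuous f on a compact domain, a suitable activation \sigma, and any
% \varepsilon > 0, there exist N, c_i, b_i \in \mathbb{R}, w_i \in \mathbb{R}^d with
\left| f(x) - \sum_{i=1}^{N} c_i \,\sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon
\quad \text{for all } x \text{ in the compact domain.}
```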
10). Math Jailbreaking Prompts - uses GPT-4o to generate mathematically encoded prompts that serve as an effective jailbreaking technique; shows an average attack success rate of 73.6% across 13 state-of-the-art LLMs; this highlights the inability of existing safety training mechanisms to generalize to mathematically encoded inputs. (paper | tweet )
Reach out to [email protected] if you would like to partner and promote with us. Our newsletters are read by over 85K AI Researchers, Engineers, and Developers.