Top ML Papers of the Week

Welcome to the Top ML Papers of the Week (June 3 - June 9).

1). NLLB - proposes a massive multilingual model that leverages transfer learning across 200 languages; it’s based on a Sparsely Gated Mixture-of-Experts architecture and trained on data mined via an approach tailored to low-resource languages; evaluated on over 40K translation directions, it achieves an average 44% improvement in translation quality. (paper | tweet )
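The Sparsely Gated Mixture-of-Experts routing mentioned above can be sketched as a top-k gate that sends each token to only a few experts; a minimal NumPy sketch where all weights are random placeholders, not anything from NLLB:

```python
import numpy as np

def topk_gate(x, W_gate, k=2):
    """Sparse top-k gating: route a token to its k highest-scoring experts.

    x: (d,) token representation; W_gate: (d, n_experts) placeholder gating
    weights. Returns the chosen expert indices and softmax-normalized weights
    over just those experts (all other experts get zero weight).
    """
    logits = x @ W_gate                       # (n_experts,) routing scores
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    return top, w / w.sum()

rng = np.random.default_rng(0)
d, n_experts = 8, 4
idx, weights = topk_gate(rng.normal(size=d), rng.normal(size=(d, n_experts)), k=2)
```

Because only k experts run per token, total parameters can grow with the number of experts while per-token compute stays roughly constant.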


2). Extracting Concepts from GPT-4 - proposes a new scalable method based on sparse autoencoders to extract around 16 million interpretable features from GPT-4; the method demonstrates predictable scaling and is more efficient than previous techniques. (paper | tweet )
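A sparse autoencoder of the kind described above can be sketched in a few lines; this toy forward pass (random placeholder weights and sizes, not the paper's actual setup) shows the reconstruction-plus-L1 objective that pushes activations into a small number of interpretable features:

```python
import numpy as np

def sae_forward(a, W_enc, b_enc, W_dec, b_dec, l1=1e-3):
    """One forward pass of a toy sparse autoencoder over model activations a.

    Encodes a (d,) activation vector into an overcomplete feature space with
    a ReLU, then reconstructs it; the loss trades reconstruction error
    against an L1 sparsity penalty on the features.
    """
    f = np.maximum(0.0, a @ W_enc + b_enc)        # sparse features (n_feat,)
    a_hat = f @ W_dec + b_dec                     # reconstruction (d,)
    loss = np.mean((a - a_hat) ** 2) + l1 * np.abs(f).sum()
    return f, a_hat, loss

rng = np.random.default_rng(0)
d, n_feat = 16, 64                                # overcomplete: n_feat > d
a = rng.normal(size=d)
f, a_hat, loss = sae_forward(
    a,
    rng.normal(size=(d, n_feat)) * 0.1, np.zeros(n_feat),
    rng.normal(size=(n_feat, d)) * 0.1, np.zeros(d),
)
```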


3). Mamba-2 - a new architecture that combines state space models (SSMs) and structured attention; it uses 8x larger states and trains 50% faster; the new state space duality layer is more efficient and scalable compared to the approach used in Mamba; it also improves results on tasks that require large state capacity. (paper | tweet )
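In their simplest diagonal form, the state space models behind Mamba-2 reduce to a linear recurrence over a hidden state; a minimal sketch with toy parameters (this is the generic SSM scan, not the paper's state space duality layer):

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a diagonal linear state space model over a scalar input sequence.

    h_t = A * h_{t-1} + B * u_t ;   y_t = C . h_t
    A, B, C: (n,) diagonal parameters; u: (T,) inputs. The state h is the
    "large state" that Mamba-style models scale up for more capacity.
    """
    h = np.zeros_like(A)
    ys = []
    for u_t in u:
        h = A * h + B * u_t       # decay old state, inject new input
        ys.append(float(C @ h))   # read out the state
    return np.array(ys)

# impulse response: feed a single 1 followed by zeros
A = np.full(4, 0.9)
B = np.ones(4)
C = np.ones(4) / 4
y = ssm_scan(A, B, C, np.array([1.0, 0.0, 0.0]))
```

With decay 0.9 the impulse response decays geometrically (1.0, 0.9, 0.81, …), showing how the state carries information across time steps.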



Sponsor message

Prolific is a platform that connects AI researchers with a pool of 150k+ active participants and domain specialists.

Through Prolific, AI researchers collect rich, reliable data that reflects the breadth of humanity, easily and within a matter of hours, giving them the insights to train models in the race to AGI.

Getting Started



4). MatMul-free LLMs - proposes an implementation that eliminates matrix multiplication operations from LLMs while maintaining performance at billion-parameter scales; the performance gap between full-precision Transformers and the MatMul-free models narrows as the model size increases; claims that by using an optimized kernel during inference, memory consumption is reduced by more than 10x. (paper | tweet )
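The core trick of removing matrix multiplication can be illustrated with ternary weights: when every weight is in {-1, 0, +1}, a matrix-vector product collapses into pure addition and subtraction. A sketch of that idea (a BitNet-style constraint, not the paper's exact implementation):

```python
import numpy as np

def ternary_matvec(W_ternary, x):
    """Matrix-vector product with ternary weights using only add/subtract.

    With weights constrained to {-1, 0, +1}, each output element is a signed
    sum of selected inputs: no multiplications are needed, which is what
    enables hardware-friendly, MatMul-free layers.
    """
    out = np.zeros(W_ternary.shape[0])
    for i, row in enumerate(W_ternary):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

rng = np.random.default_rng(1)
W = rng.integers(-1, 2, size=(4, 8)).astype(float)  # ternary weight matrix
x = rng.normal(size=8)
y = ternary_matvec(W, x)
```

The result matches `W @ x` exactly, but the inner loop never multiplies two numbers.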


5). Buffer of Thoughts - presents a thought-augmented reasoning approach to enhance the accuracy, efficiency, and robustness of LLM-based reasoning; it leverages a meta-buffer containing high-level thoughts (thought templates) distilled from problem-solving processes; the relevant thought template is then retrieved and instantiated with task-specific reasoning structures for the thought-augmented reasoning process; it demonstrates SOTA performance on 10 challenging tasks while requiring 12% of the cost of multi-query prompting methods like Tree-of-Thoughts. (paper | tweet )
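The retrieve-and-instantiate step above can be sketched with a toy meta-buffer; here simple keyword overlap stands in for the paper's retrieval mechanism, and both templates below are invented for illustration:

```python
def retrieve_template(problem, meta_buffer):
    """Pick the thought template whose keywords best match the problem.

    meta_buffer maps a template name to a (keywords, template) pair; scoring
    by keyword overlap is a crude stand-in for similarity-based retrieval
    over distilled high-level thoughts.
    """
    words = set(problem.lower().split())
    best = max(meta_buffer.items(), key=lambda kv: len(words & kv[1][0]))
    return best[0], best[1][1]

# hypothetical meta-buffer with two distilled thought templates
meta_buffer = {
    "arithmetic": ({"sum", "add", "multiply", "number"},
                   "Extract the quantities, then compute step by step."),
    "sorting": ({"sort", "order", "rank", "list"},
                "Define the comparison key, then order the items."),
}
name, template = retrieve_template(
    "Sort this list of names in alphabetical order", meta_buffer)
```

The retrieved template would then be instantiated with the task's specifics and prepended to the reasoning prompt, which is what lets one retrieval replace many exploratory queries.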


6). SaySelf - a training framework to teach LLMs to express more accurate fine-grained confidence estimates and self-reflective rationales; it performs supervised finetuning on a dataset that contains summaries of the differences between multiple reasoning chains; reinforcement learning is then applied to calibrate confidence estimates, encouraging the LLM to produce accurate, high-confidence predictions and penalizing overconfidence in erroneous outputs. (paper | tweet )
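The calibration objective can be illustrated with a toy reward that pays out the stated confidence on correct answers and subtracts it on wrong ones; this is an illustrative stand-in, not the paper's exact reward function:

```python
def confidence_reward(is_correct, confidence):
    """Toy calibration reward: confident correct answers earn +confidence,
    confident mistakes cost -confidence, so the expected-reward-maximizing
    policy is to state high confidence only when likely correct."""
    return confidence if is_correct else -confidence

# hypothetical batch of (was the answer correct?, stated confidence) pairs
batch = [(True, 0.9), (True, 0.6), (False, 0.8), (False, 0.2)]
avg_reward = sum(confidence_reward(c, p) for c, p in batch) / len(batch)
```

Under such a reward, RL pushes the model's verbalized confidence toward its actual accuracy, since miscalibrated confidence directly reduces expected reward.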


7). The Geometry of Concepts in LLMs - studies the geometry of categorical concepts and how the hierarchical relations between them are encoded in LLMs; finds that simple categorical concepts are represented as simplices by the LLMs and complex concepts are represented as polytopes constructed from direct sums of simplices, which reflect the hierarchical structure. (paper | tweet )
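The simplex claim can be made concrete: centering the k one-hot vectors at their mean yields the vertices of a regular (k-1)-simplex, all equidistant from one another. A quick NumPy check of that geometric fact (the construction here is generic, not extracted from any particular model):

```python
import numpy as np

def simplex_vertices(k):
    """Vertices of a regular (k-1)-simplex embedded in R^k: the k one-hot
    vectors shifted so their centroid sits at the origin. A categorical
    concept with k values is claimed to be represented this way, up to an
    affine map."""
    return np.eye(k) - 1.0 / k

V = simplex_vertices(3)
# in a regular simplex every pair of vertices is the same distance apart
d01 = np.linalg.norm(V[0] - V[1])
d12 = np.linalg.norm(V[1] - V[2])
d02 = np.linalg.norm(V[0] - V[2])
```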


8). Aligning LLMs with Demonstrated Feedback - proposes a method to align LLMs to a specific setting via a very small number of demonstrations as feedback; it aligns LLM outputs to a user’s demonstrated behaviors and can learn fine-grained style and task alignment across domains; outperforms few-shot prompting, SFT, and self-play methods on the tested benchmarks. (paper | tweet )
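The data construction behind demonstration-based alignment can be sketched as building preference pairs that rank every user demonstration above every model-generated sample, suitable for a DPO-style objective; the strings below are invented placeholders:

```python
def build_preference_pairs(demos, model_samples):
    """Sketch of demonstration-as-feedback data construction: each pair
    (preferred, rejected) treats a user demonstration as preferred over a
    model sample, so even a handful of demos yields many training pairs."""
    return [(d, s) for d in demos for s in model_samples]

demos = ["concise answer A", "concise answer B"]          # user-written
samples = ["verbose answer 1", "verbose answer 2",        # model-generated
           "verbose answer 3"]
pairs = build_preference_pairs(demos, samples)
```

With 2 demonstrations and 3 samples this already produces 6 preference pairs, which is how a very small amount of demonstrated feedback can drive fine-grained style alignment.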


9). Towards Scalable Automated Alignment of LLMs - provides an overview of methods used for alignment of LLMs; explores the 4 following directions: 1) aligning through inductive bias, 2) aligning through behavior imitation, 3) aligning through model feedback, and 4) aligning through environment feedback. (paper | tweet )


10). AgentGym - a new framework featuring various environments and tasks for broad, real-time, and concurrent agent exploration; builds a generally capable LLM-based agent with self-evolution abilities and explores its potential beyond previously seen data across tasks and environments. (paper | tweet )
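The observe/act loop that such agent environments share can be sketched with a toy environment; the class below is invented for illustration and is not part of AgentGym's API:

```python
class CountdownEnv:
    """Toy environment in the gym-style loop that agent frameworks
    generalize: the agent must count a number down to exactly zero."""

    def __init__(self, start=3):
        self.state = start

    def observe(self):
        return self.state

    def step(self, action):
        """Apply an action, return (new state, reward, done)."""
        self.state -= action
        done = self.state <= 0
        reward = 1.0 if self.state == 0 else 0.0
        return self.state, reward, done

env = CountdownEnv(start=3)
total, done = 0.0, False
while not done:
    obs = env.observe()
    state, reward, done = env.step(1)   # trivial policy: always decrement by 1
    total += reward
```

A framework like AgentGym swaps in many such environments behind a shared interface, so one LLM-based agent can be trained and evaluated across tasks concurrently.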


Reach out to [email protected] if you would like to advertise with us. Our newsletter is read by over 60K AI Researchers, Engineers, and Developers.
