Top ML Papers of the Week
Welcome to the Top ML Papers of the Week (October 21 - 27).
1). Agentic Information Retrieval - provides an introduction to agentic information retrieval, a paradigm shaped by the capabilities of LLM agents; discusses cutting-edge applications of agentic information retrieval and the challenges that remain. (paper | tweet )
2). Aya Expanse - a family of open-weight foundation models for multilingual capabilities; releases an 8B and a 32B parameter model, along with one of the largest multilingual dataset collections to date, with 513 million examples; the release also includes Aya-101, which the authors claim is the most comprehensive multilingual model to date, covering 101 languages; Aya Expanse 32B outperforms Gemma 2 27B, Mixtral 8x22B, and Llama 3.1 70B, a model 2x its size. (paper | tweet )
3). A Theoretical Understanding of CoT - finds that including both correct and incorrect reasoning paths in demonstrations improves the accuracy of intermediate steps and the final CoT answers; the proposed method, Coherent CoT, significantly improves performance on several benchmarks; on the Tracking Shuffled Objects dataset, Gemini Pro shows a 6.60% improvement (from 58.20% to 64.80%), and on Penguins in a Table, DeepSeek 67B demonstrates an increase of 6.17% (from 73.97% to 80.14%). (paper | tweet )
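To make the idea concrete, here is a minimal sketch of a contrastive demonstration: a few-shot prompt that pairs a correct reasoning path with a labeled incorrect one. The demonstration text and format are illustrative, not the paper's exact prompts.

```python
# Minimal sketch (not the paper's exact prompt format): a few-shot CoT
# demonstration that shows the model both a valid reasoning path and a
# flawed one, with the flaw explicitly called out.

DEMO = """Question: A shelf holds 3 boxes with 4 books each. How many books?
Correct reasoning: 3 boxes x 4 books = 12 books. Answer: 12.
Incorrect reasoning: 3 boxes + 4 books = 7 books. Answer: 7.
(This is wrong: the quantities should be multiplied, not added.)
"""

def build_prompt(question: str) -> str:
    """Prepend the contrastive demonstration to a new question."""
    return f"{DEMO}\nQuestion: {question}\nCorrect reasoning:"

print(build_prompt("A train makes 5 trips carrying 20 passengers each. Total?"))
```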
Editor Message
We’re excited to launch AI Agents Weekly, our new weekly series to help AI researchers and developers keep track of all the top signals and developments in AI Agents. The weekly series complements our Top ML Papers of the Week newsletter and dives deeper into AI Agents research, industry tips, and developments.
4). A Survey on Data Synthesis and Augmentation for LLMs - provides a comprehensive summary of data generation techniques in the lifecycle of LLMs; includes discussions on data preparation, pre-training, fine-tuning, instruction-tuning, preference alignment, and applications. (paper | tweet )
5). LongRAG - enhances RAG's understanding of long-context knowledge, spanning both global information and factual details; consists of four key components, a hybrid retriever, an LLM-augmented information extractor, a CoT-guided filter, and an LLM-augmented generator, that together enable the RAG system to mine global long-context information and effectively identify factual details; LongRAG outperforms long-context LLMs (up by 6.94%), advanced RAG (up by 6.16%), and Vanilla RAG (up by 17.25%). (paper | tweet )
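The four-stage pipeline is easy to picture as code. The sketch below is a simplified skeleton of that flow under our own assumptions: `call_llm`, the prompts, and the keyword-overlap retriever are illustrative placeholders, not the paper's implementation.

```python
# Simplified skeleton of a LongRAG-style pipeline:
# hybrid retriever -> LLM-augmented extractor -> CoT-guided filter -> generator.

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; replace with your client."""
    return "yes"  # dummy output so the sketch runs end to end

def hybrid_retrieve(query: str, corpus: list[str]) -> list[str]:
    # The real hybrid retriever mixes dense and sparse (e.g., BM25) hits;
    # here we simply keep chunks sharing a word with the query.
    words = set(query.lower().split())
    return [c for c in corpus if words & set(c.lower().split())]

def extract_global_info(query: str, chunks: list[str]) -> str:
    """LLM-augmented extractor: distill global, long-context information."""
    return call_llm(f"Summarize what these passages say about '{query}':\n"
                    + "\n".join(chunks))

def cot_filter(query: str, chunks: list[str]) -> list[str]:
    """CoT-guided filter: keep chunks the LLM judges factually relevant."""
    return [c for c in chunks
            if "yes" in call_llm(f"Think step by step: does this passage "
                                 f"help answer '{query}'?\n{c}").lower()]

def longrag_answer(query: str, corpus: list[str]) -> str:
    chunks = hybrid_retrieve(query, corpus)
    global_view = extract_global_info(query, chunks)
    details = cot_filter(query, chunks)
    return call_llm(f"Question: {query}\nGlobal context: {global_view}\n"
                    f"Details:\n" + "\n".join(details))
```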
6). Evaluating Feature Steering in LLMs - evaluates feature steering in LLMs using an experiment that artificially dials various features up and down to analyze changes in model outputs; focuses on 29 features related to social biases and studies whether feature steering can help mitigate them; among its findings, it reports that feature steering sometimes leads to off-target effects and that a neutrality feature can help decrease social biases across 9 social dimensions without negatively affecting text quality. (paper | tweet )
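Mechanically, this kind of steering usually means adding a scaled feature direction to the model's hidden activations. The sketch below assumes you already have a feature direction (e.g., from a sparse autoencoder); the "neutrality" vector here is a random placeholder, and the paper's exact features and steering ranges differ.

```python
# Minimal sketch of activation-level feature steering: dial a feature up
# (strength > 0) or down (strength < 0) by adding a scaled unit-norm
# direction to the hidden states.
import numpy as np

def steer(activations: np.ndarray, feature_dir: np.ndarray,
          strength: float) -> np.ndarray:
    direction = feature_dir / np.linalg.norm(feature_dir)
    return activations + strength * direction

hidden = np.random.randn(16, 512)         # toy batch of hidden states
neutrality = np.random.randn(512)         # hypothetical "neutrality" feature
steered = steer(hidden, neutrality, 5.0)  # dial the feature up
```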
7). Granite 3.0 - presents lightweight foundation models ranging from 400 million to 8B parameters; supports coding, RAG, reasoning, and function calling, focusing on enterprise use cases, including on-premise and on-device settings; demonstrates strong performance across academic benchmarks for language understanding, reasoning, coding, function calling, and safety. (paper | tweet )
8). LLMs Reflect the Ideology of their Creators - finds that LLMs exhibit diverse ideological stances that reflect the worldviews of their creators; finds consistent normative differences between how the same LLM responds in Chinese compared to English; identifies normative disagreements between Western and non-Western LLMs about prominent actors in geopolitical conflicts. (paper | tweet )
9). Scalable Watermarking for LLMs - proposes SynthID-Text, a text-watermarking scheme that preserves text quality in LLMs, enables high detection accuracy, and minimizes latency overhead; the watermark adjusts the probability scores behind a model’s word choices so that the final pattern of chosen words encodes a detectable signal, and it integrates with speculative sampling to keep generation fast; the authors test the feasibility and scalability of the approach by assessing feedback on nearly 10 million Gemini responses. (paper | tweet )
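SynthID-Text's own scheme (tournament sampling) is more involved, so as a simpler stand-in the sketch below shows the hashed "green list" idea from Kirchenbauer et al.: seed a PRNG from the previous token, bias a pseudorandom subset of the vocabulary at sampling time, and detect by counting how often generated tokens land on their context's green list. All constants here are illustrative.

```python
# Simplified logit-bias watermark (green-list style), not SynthID-Text's
# tournament sampling: bias a context-keyed subset of the vocabulary, then
# detect by measuring how often tokens fall on that subset.
import numpy as np

VOCAB, GREEN_FRAC, BIAS = 1000, 0.5, 2.0

def green_list(prev_token: int) -> np.ndarray:
    rng = np.random.default_rng(prev_token)          # keyed by context
    return rng.random(VOCAB) < GREEN_FRAC            # boolean mask

def watermarked_sample(logits: np.ndarray, prev_token: int) -> int:
    biased = logits + BIAS * green_list(prev_token)  # nudge green tokens up
    probs = np.exp(biased - biased.max())
    return int(np.random.default_rng().choice(VOCAB, p=probs / probs.sum()))

def detect(tokens: list[int]) -> float:
    """Green-token rate; ~GREEN_FRAC for unwatermarked text, higher if watermarked."""
    hits = [green_list(p)[t] for p, t in zip(tokens, tokens[1:])]
    return float(np.mean(hits))
```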
10). Reasoning Patterns of OpenAI’s o1 Model - compared with other test-time compute methods, o1 achieves the best performance across most datasets; the authors observe that the most commonly used reasoning patterns in o1 are divide-and-conquer and self-refinement; o1 uses different reasoning patterns for different tasks: for commonsense reasoning tasks, it tends to use context identification and to emphasize constraints, while for math and coding tasks, it mainly relies on method reuse and divide-and-conquer. (paper | tweet )