Top ML Papers of the Week

Welcome to The Top ML Papers of the Week (April 8 - April 14).

1). Leave No Context Behind - proposes Infini-attention, a new attention technique that integrates a compressive memory module into a vanilla dot-product attention layer; the goal is to enable Transformer LLMs to effectively process infinitely long inputs with a bounded memory footprint and computation; it combines masked local attention and long-term linear attention in a single Transformer block, which allows the Infini-Transformer model to efficiently handle both long- and short-range contextual dependencies; outperforms baseline models on long-context language modeling with a 114x memory compression ratio. (paper | tweet)
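
To make the mechanism concrete, here is a minimal single-head NumPy sketch of one segment update. It is an illustrative simplification under stated assumptions, not the paper's implementation: the `elu_plus_one` feature map, the scalar gate `beta`, and the function names are stand-ins for the learned, per-head components described in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def elu_plus_one(x):
    # non-negative feature map (ELU(x) + 1), used for the linear-attention memory
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_attention_segment(Q, K, V, M, z, beta):
    """Simplified single-head Infini-attention over one input segment.

    Q, K, V : (seg_len, d) query/key/value projections for the current segment
    M       : (d, d) compressive memory carried over from earlier segments
    z       : (d,)   normalization term for the memory
    beta    : scalar gate mixing memory retrieval and local attention
    """
    seg_len, d = Q.shape
    # 1) masked (causal) local dot-product attention within the segment
    scores = Q @ K.T / np.sqrt(d)
    causal_mask = np.triu(np.ones((seg_len, seg_len), dtype=bool), k=1)
    scores = np.where(causal_mask, -1e9, scores)
    A_local = softmax(scores) @ V
    # 2) long-range retrieval from the compressive memory (linear attention)
    sigma_Q = elu_plus_one(Q)
    A_mem = (sigma_Q @ M) / ((sigma_Q @ z)[:, None] + 1e-6)  # epsilon guards empty memory
    # 3) update the memory with this segment's keys/values (bounded footprint)
    sigma_K = elu_plus_one(K)
    M = M + sigma_K.T @ V
    z = z + sigma_K.sum(axis=0)
    # 4) gate between the long-term memory output and the local attention output
    g = 1.0 / (1.0 + np.exp(-beta))
    return g * A_mem + (1.0 - g) * A_local, M, z
```

Calling this segment by segment while threading `M` and `z` forward is what keeps memory and compute bounded regardless of total input length.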


2). OpenEQA - proposes an open-vocabulary benchmark dataset to measure the capabilities of AI models to perform embodied question answering (EQA); it contains 1600 human-generated questions composed from 180 real-world environments; also provides an LLM-powered evaluation protocol for the task and shows that models like GPT-4V are significantly behind human-level performance. (paper | tweet)


3). CodeGemma - a family of open code LLMs based on Gemma; the CodeGemma 7B models excel in mathematical reasoning and match the code capabilities of other open models; the instruction-tuned CodeGemma 7B model is the strongest of the family for Python coding, as assessed via the HumanEval benchmark; results also suggest that it performs best among 7B models on GSM8K; the CodeGemma 2B model achieves SoTA code completion and is designed for fast code infilling and deployment in latency-sensitive settings. (paper | tweet)


4). LM-Guided Chain-of-Thought - applies knowledge distillation to a small LM using rationales generated by a large LM, with the goal of narrowing the gap in reasoning capabilities; at inference, the rationale is generated by the lightweight LM and the answer prediction is then left to the frozen large LM; this resource-efficient approach avoids the need to fine-tune the large model and instead offloads rationale generation to the small language model; the knowledge-distilled LM is further optimized with reinforcement learning using several rationale-oriented and task-oriented reward signals; the LM-guided CoT prompting approach proposed in this paper outperforms both standard prompting and CoT prompting, and self-consistency decoding further enhances performance. (paper | tweet)
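
A minimal sketch of the two-stage inference flow described above, assuming hypothetical `small_lm_generate` and `large_lm_generate` callables that wrap the lightweight rationale model and the frozen large model; the prompt wording is illustrative, not taken from the paper.

```python
def lm_guided_cot(question: str, small_lm_generate, large_lm_generate) -> str:
    """Two-stage LM-guided CoT: the small LM writes the rationale, the large LM answers."""
    # Stage 1: the lightweight (knowledge-distilled) LM produces the rationale
    rationale_prompt = f"Question: {question}\nLet's think step by step:"
    rationale = small_lm_generate(rationale_prompt)

    # Stage 2: the frozen large LM only predicts the final answer, conditioned on
    # the question plus the small model's rationale (no fine-tuning of the large LM)
    answer_prompt = (
        f"Question: {question}\n"
        f"Reasoning: {rationale}\n"
        f"Answer:"
    )
    return large_lm_generate(answer_prompt)
```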


5). Best Practices and Lessons on Synthetic Data - an overview by Google DeepMind on synthetic data research, covering applications, challenges, and future directions; discusses important topics when working with synthetic data such as ensuring quality, factuality, fidelity, unbiasedness, trustworthiness, privacy, and more. (paper | tweet)



Sponsor message

DAIR.AI presents a live cohort-based course, Prompt Engineering for LLMs, that teaches advanced prompting techniques, RAG, tool use in LLMs, and other approaches that improve the capabilities, performance, and reliability of LLMs. Use promo code MAVENAI20 for a 20% discount.



6). Reasoning with Intermediate Revision and Search - presents an approach for general reasoning and search on tasks that can be decomposed into components; the proposed graph-based framework, THOUGHTSCULPT, incorporates iterative self-revision capabilities and allows an LLM to build an interwoven network of thoughts; unlike approaches such as Tree-of-Thoughts that shape the reasoning process as a tree, this new approach incorporates Monte Carlo Tree Search (MCTS) to efficiently navigate the search space; because it supports continuous thought iteration, THOUGHTSCULPT is particularly suitable for tasks such as open-ended generation, multi-step reasoning, and creative ideation. (paper | tweet)
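
The sketch below shows a generic MCTS-over-thoughts loop in the spirit of THOUGHTSCULPT, under simplifying assumptions: `propose_or_revise` and `evaluate` are hypothetical LLM-backed callbacks (the first extends or self-revises a candidate solution, the second scores it in [0, 1]), and the selection/expansion/backpropagation steps follow the textbook UCT recipe rather than the paper's exact procedure.

```python
import math
import random

class Node:
    """A candidate solution (a 'thought') in the search tree."""
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def uct(node, c=1.4):
    # standard UCT score; unvisited nodes are explored first
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

def search(root_state, propose_or_revise, evaluate, iterations=50, width=3):
    root = Node(root_state)
    for _ in range(iterations):
        # selection: follow UCT scores down to a leaf
        node = root
        while node.children:
            node = max(node.children, key=uct)
        # expansion: ask the LLM to extend or self-revise the current thought
        for candidate in propose_or_revise(node.state, k=width):
            node.children.append(Node(candidate, parent=node))
        # evaluation: score one of the newly proposed candidates
        child = random.choice(node.children)
        reward = evaluate(child.state)
        # backpropagation: propagate the reward back up the path
        while child is not None:
            child.visits += 1
            child.value += reward
            child = child.parent
    # return the most-visited child of the root as the selected thought
    return max(root.children, key=lambda n: n.visits).state
```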


7). Overview of Multilingual LLMs - a survey on multilingual LLMs including a thorough review of methods, a taxonomy, emerging frontiers, challenges, and resources to advance research. (paper | tweet)


8). The Physics of Language Models - investigates knowledge capacity scaling laws, evaluating a model's capability via loss or benchmarks to estimate the number of knowledge bits it stores; reports that "Language models can and only can store 2 bits of knowledge per parameter, even when quantized to int8, and such knowledge can be flexibly extracted for downstream applications. Consequently, a 7B model can store 14B bits of knowledge, surpassing the English Wikipedia and textbooks combined based on our estimation." (paper | tweet)
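
A quick back-of-the-envelope check of the arithmetic in that quote (2 bits of knowledge per parameter for a 7B-parameter model):

```python
# back-of-the-envelope check of the 2-bits-per-parameter capacity claim
params = 7e9                     # 7B-parameter model
bits_per_param = 2               # capacity reported by the paper (even at int8)
knowledge_bits = params * bits_per_param
print(f"{knowledge_bits:,.0f} bits (~{knowledge_bits / 8 / 1e9:.2f} GB of knowledge)")
# -> 14,000,000,000 bits (~1.75 GB of knowledge)
```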


9). Aligning LLMs to Quote from Pre-Training Data - proposes techniques to align LLMs to quote memorized information verbatim from their pre-training data; the alignment approach not only generates high-quality quoted verbatim statements but also preserves overall response quality; it leverages a synthetic preference dataset for quoting, built without any human annotation, and aligns the target model to quote using preference optimization. (paper | tweet)
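
As a rough illustration of how one synthetic preference example for quote alignment could be constructed (the helper name and response templates below are assumptions for the sketch, not the paper's dataset-construction code): the "chosen" response quotes a pre-training passage verbatim while the "rejected" one conveys the same content without quoting, and such pairs can then be passed to an off-the-shelf preference-optimization trainer.

```python
def make_preference_pair(question: str, source_passage: str, paraphrase: str) -> dict:
    """Build one synthetic preference example that rewards verbatim quoting."""
    chosen = f'According to the source: "{source_passage}"'   # quotes the passage verbatim
    rejected = paraphrase                                      # same content, no quote
    return {"prompt": question, "chosen": chosen, "rejected": rejected}
```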


10). The Influence Between NLP and Other Fields - aims to quantify the degree of influence between 23 fields of study and NLP; the cross-field engagement of NLP has declined from 0.58 in 1980 to 0.31 in 2022; the study also finds that NLP citations are dominated by CS, which accounts for over 80% of citations, with an emphasis on AI, ML, and information retrieval; overall, NLP is growing more insular, with higher growth in intra-field citations and a decline in multidisciplinary work. (paper | tweet)


Reach out to [email protected] if you would like to partner and promote with us. Our newsletter is read by over 50K AI Researchers, Engineers, and Developers.

Vishal Nautiyal

Deputy Head of AI.Cloud - AWS Business UK&I

5 months ago

Thanks for sharing

Nitin Aggarwal

CEO & Senior Research Scientist at PradhVrddhi Research | Data Science & AI Solutions Provider | Helping Businesses Automate & Innovate

5 months ago

Thanks for making this compilation. I have long been hoping for a curated newsletter like this on advances in LLMs. It will be a great help in planning what to prioritize in my reading.
