Berkeley Artificial Intelligence Research reposted
Congratulations to BAIR alumni Aviral Kumar and Jun-Yan Zhu who have been named Samsung AI Researcher of the Year Awardees! https://lnkd.in/gpiVbt6f
The Berkeley Artificial Intelligence Research (BAIR) Lab brings together UC Berkeley researchers across the areas of computer vision, machine learning, natural language processing, planning, and robotics. BAIR includes over two dozen faculty and more than a hundred graduate students pursuing research on fundamental advances in the above areas as well as cross-cutting themes including multi-modal deep learning, human-compatible AI, and connecting AI with other scientific disciplines and the humanities.
Berkeley, CA 94720, US
Berkeley Artificial Intelligence Research reposted
Can we make Transformers better and more efficient for robot learning? Excited to introduce Body Transformer (BoT), an architecture that leverages robot embodiment in the attention mechanism by treating the body as a graph of sensors and actuators.

Corrective localized actuation is crucial for efficient locomotion and manipulation (e.g., humans use their ankles to correct for imbalance at the feet). Robot policies do not typically exploit such spatial interrelations, mostly reusing architectures developed for NLP or computer vision.

In practice, we separate observations and actions into a graph of nodes representing the sensors and actuators spread across the robot body. Then we use masked attention to ensure that, at each layer of the Body Transformer, a node can attend only to itself and its neighbors. Information propagates throughout the graph over successive layers. Provided there is a sufficient number of layers, this simply guides the learning process without compromising the representation power of the architecture. It makes for a flexible but strong inductive bias!

BoT surpasses MLP and Transformer baselines on both imitation and reinforcement learning. It shows better generalization and strong scaling properties, as well as potential for much more efficient learning (up to 2x fewer FLOPs in the attention mechanism). We deployed BoT sim-to-real on a real robot (Unitree A1), showing its feasibility for real-world deployment!

A lot more details in the paper! Work done at Berkeley Artificial Intelligence Research, with absolutely great collaborators Dun-Ming H., Fangchen Liu, Jongmin Lee, and Pieter Abbeel.
Website: https://lnkd.in/d8a4r2j6
Arxiv: https://lnkd.in/ddfbJjmw
Code: https://lnkd.in/dqjVQx4N
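The masked-attention idea described above can be sketched in a few lines: build an adjacency mask from the sensor/actuator graph and use it to block attention between non-neighboring nodes. This is a minimal single-head NumPy illustration, not the paper's implementation; the function name and shapes are our own assumptions.

```python
import numpy as np

def masked_attention(Q, K, V, adj):
    """Single-head scaled dot-product attention where node i may only
    attend to itself and its graph neighbors.

    Q, K, V: (N, d) arrays of per-node queries, keys, and values.
    adj:     (N, N) 0/1 adjacency matrix, assumed to include self-loops.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Non-neighbor pairs get a large negative score, so their softmax
    # weight underflows to zero.
    scores = np.where(adj.astype(bool), scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Stacking several such layers lets information flow between distant nodes even though each layer only mixes neighbors, which is the "flexible but strong inductive bias" the post refers to.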
Berkeley Artificial Intelligence Research reposted
Congratulations to BAIR faculty and alumni on their awards at CVPR 2024 this week in Seattle. BAIR faculty Angjoo Kanazawa won the Young Researcher Award, which "recognizes one or two researchers within seven years of receiving their Ph.D. who have made distinguished research contributions to computer vision." The R-CNN paper from Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik received the Longuet-Higgins Prize: "Awarded to a paper that has withstood the test of time, the 2024 Longuet-Higgins Prize recognizes the CVPR paper from 2014 with the most impact."
Congratulations to BAIR faculty Alberto Sangiovanni-Vincentelli on his election to the American Academy of Arts and Sciences!
Looking to hire top AI talent? We've compiled a list of the brilliant Berkeley AI Research Ph.D. Graduates of 2024 who are currently on the academic and industry job markets. Check it out here: https://lnkd.in/g-AzpsxK
CITRIS and the Berkeley Space Center are excited to welcome former BAIR graduate and current NASA (National Aeronautics and Space Administration) #Astronaut Woody Hoburg on Feb. 6 at 4 p.m. for a special lecture. Free and open to the public! Bookmark our YouTube channel so you can watch the livestream: https://lnkd.in/gZVjvjjZ
Berkeley Artificial Intelligence Research reposted
How can we make LLM agents work together efficiently on complex tasks at a large scale? Introducing LLMCompiler, a tool that compiles an effective plan for executing multiple tasks in parallel. It helps create scalable LLM applications, identifies tasks for parallel execution, and manages their dependencies. LLMCompiler is compatible with both open-source and OpenAI models, marking a stride toward more efficient and intelligent software systems.

- Up to 1.3x faster than OpenAI's recent Parallel Function Calling
- Up to 9% higher accuracy compared to ReAct
- Up to 6x lower cost due to efficient token usage

See this Twitter post for a quick TL;DR: https://lnkd.in/gxs2R3ur
Link to Paper: https://lnkd.in/gA-4y8RA
Link to Code: https://lnkd.in/gjY6uyiv

Joint work with Sehoon Kim, Suhong Moon, Ryan Tabrizi, Nicholas Lee, Michael Mahoney, and Kurt Keutzer.
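The core scheduling idea (launch every task whose dependencies are already satisfied, in parallel) can be illustrated with a toy executor. This is a hedged standard-library sketch, not LLMCompiler's actual API: `run_plan`, `tasks`, and `deps` are hypothetical names, and the plan is assumed to be acyclic.

```python
import concurrent.futures

def run_plan(tasks, deps):
    """Execute a DAG of tasks, submitting every task whose
    prerequisites have finished as soon as it becomes ready.

    tasks: {name: callable taking a dict of prior results}
    deps:  {name: set of prerequisite task names}
    """
    results, pending = {}, {t: set(d) for t, d in deps.items()}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {}
        while pending or futures:
            # Submit every task whose dependencies are all done.
            ready = [t for t, d in pending.items() if d <= results.keys()]
            for t in ready:
                del pending[t]
                futures[pool.submit(tasks[t], dict(results))] = t
            # Wait for at least one in-flight task to finish.
            done, _ = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
            for f in done:
                results[futures.pop(f)] = f.result()
    return results
```

For example, two independent lookups can run concurrently while a third task that combines their outputs waits for both; this mirrors the dependency management the post describes, with LLM/tool calls swapped for plain Python callables.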
Check out this new language model from Jiantao Jiao's group at BAIR! https://lnkd.in/dUmmfR49
Excited to announce the latest research from my lab at Berkeley Artificial Intelligence Research: #Starling7B and the RLHF dataset #Nectar. Introducing the new (synthetic) RLHF dataset Nectar and the new open model Starling-LM-7B-alpha.

Model & Dataset Highlights:
- Scores 8.09 on MT-Bench, surpassing all existing models except OpenAI's GPT-4 and GPT-4 Turbo.
- 183K chat prompts with 7 responses each in Nectar, yielding 3.8M pairwise comparisons for comprehensive analysis; responses collected from all existing models, with rankings labeled by GPT-4.

We train the reward model using our latest K-wise loss and fine-tune Openchat 3.5 (based on Mistral-7B) with online RL. Check out our blog for more details: starling.cs.berkeley.edu

Available Now:
- On Hugging Face: our dataset Nectar, reward model Starling-RM-7B-alpha, and language model Starling-LM-7B-alpha are ready for use! https://lnkd.in/gVh5XFPH
- LMSYS Chatbot Arena: the model is available for direct chat and anonymous human comparison in Chatbot Arena, hosted by the amazing LMSYS Organization. Try it out here: chat.lmsys.org

Upcoming Releases:
- Detailed code & paper: stay tuned for in-depth insights and methodologies.
- Continuous updates: we will soon release a more stable version.

Follow our journey in advancing AI safety and training techniques. #AI #LanguageModel #Starling7B #LLM University of California, Berkeley UC Berkeley College of Engineering UC Berkeley College of Computing, Data Science, and Society
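For readers curious what a K-wise ranking loss can look like, one standard formulation is a Plackett-Luce negative log-likelihood over K reward scores for the same prompt. The NumPy sketch below is our own illustration under that assumption, not the Starling team's code; the function name and interface are hypothetical.

```python
import numpy as np

def k_wise_loss(rewards):
    """Plackett-Luce negative log-likelihood for reward scores that are
    already ordered from best-ranked response to worst-ranked response.

    At each step the top remaining response should "win" a softmax over
    all responses not yet chosen.
    """
    rewards = np.asarray(rewards, dtype=float)
    loss = 0.0
    for i in range(len(rewards) - 1):
        tail = rewards[i:]
        m = tail.max()
        # Stable log-sum-exp over the remaining candidates.
        logsumexp = m + np.log(np.exp(tail - m).sum())
        loss -= tail[0] - logsumexp  # minus log-softmax of the winner
    return loss
```

Minimizing this loss pushes the reward model to assign higher scores to higher-ranked responses, which is the general purpose a K-wise objective serves in RLHF reward modeling.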