Thoughts on BayLearn 2023
Recently I got an opportunity to attend the BayLearn 2023 conference and present our work on "Zero and Few-shot Techniques for Intent Classification using LLMs". In this post I will briefly talk about some of the highlights from the conference, including keynotes from Professors Fei-Fei Li, Percy Liang, and Chris Ré.
Here is the poster we presented on Zero and Few-shot Techniques for Intent Classification using LLMs. More details on this work are in this ACL 2023 paper.
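To give a flavor of the zero-shot setting, the basic idea is to list the candidate intents in a prompt and ask the LLM to pick one. The sketch below is only a generic illustration, not the specific prompts, models, or techniques from our paper; the intents and the `complete` callable are hypothetical placeholders.

```python
# Minimal sketch of zero-shot intent classification via an LLM prompt.
# The intents, prompt wording, and `complete` function are illustrative
# placeholders, not the setup used in the paper.

INTENTS = ["book_flight", "cancel_flight", "check_baggage_policy"]

def build_prompt(utterance: str, intents: list[str]) -> str:
    # List candidate intents, then ask the model to choose exactly one.
    intent_list = "\n".join(f"- {intent}" for intent in intents)
    return (
        "Classify the user's utterance into exactly one of these intents:\n"
        f"{intent_list}\n\n"
        f"Utterance: {utterance}\n"
        "Intent:"
    )

def classify(utterance: str, complete) -> str:
    # `complete` is any text-completion callable, e.g. a thin wrapper
    # around an LLM API that takes a prompt string and returns a string.
    response = complete(build_prompt(utterance, INTENTS))
    # Return the first known intent mentioned in the response,
    # falling back to "unknown" if none matches.
    for intent in INTENTS:
        if intent in response:
            return intent
    return "unknown"
```

In the few-shot variant, the same prompt would additionally include a handful of labeled utterance/intent pairs before the query.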
The keynotes were among the highlights. In the first keynote, Prof. Fei-Fei Li talked about the exciting progress in computer vision over the last 15 years, especially the last 10 years driven by deep learning techniques.
In the second keynote, Prof. Percy Liang talked about the need for benchmarks in the LLM era, such as HELM and HALIE. He also talked about generative agents in games and using LLMs to perform data science tasks.
In the third keynote, Prof. Chris Ré talked about "Building Blocks for Foundation Model Systems". He discussed how LLMs show emergent behavior on many language tasks and generalize to new tasks (with a few shots), and framed language/data work as "death by 1000 cuts": each individual sub-problem is easy, but the breadth of problems is large, and LLMs can address that breadth in zero/few-shot settings. He also covered making LLM training and inference more efficient through hardware and software innovations like FlashAttention, as well as recent research on alternatives to the attention mechanism (the core of any Transformer) based on signal processing techniques.