Our CEO, Ramin Hasani, and Jim Rowan of Deloitte in conversation with The Wall Street Journal's CIO Journal: “In the coming years, two challenges facing AI will need to be overcome,” says Liquid AI CEO Ramin Hasani. “One is the energy cost. Another is making sure we humans stay in control.” Read the full story here: https://lnkd.in/e5YUrAKp
Liquid AI
Information Services
Cambridge, Massachusetts · 12,132 followers
Build capable and efficient general-purpose AI systems at every scale.
About us
Our mission is to build capable and efficient general-purpose AI systems at every scale.
- Website
- https://liquid.ai
- Industry
- Information Services
- Company size
- 11-50 employees
- Headquarters
- Cambridge, Massachusetts
- Type
- Privately held
- Founded
- 2023
Locations
-
Primary
314 Main St
Cambridge, Massachusetts 02142, US
Liquid AI employees
Updates
-
Liquid AI reposted this
This is the proudest release of my career :) At Liquid AI, we're launching three LLMs (1B, 3B, 40B MoE) with SOTA performance, based on a custom architecture. Minimal memory footprint & efficient inference bring long-context tasks to edge devices for the first time!

Performance
We optimized LFMs to maximize knowledge capacity and multi-step reasoning. As a result, our 1B and 3B models significantly outperform transformer-based models on various benchmarks. And it scales: our 40B MoE (12B activated) is competitive with much bigger dense or MoE models.

Memory footprint
The LFM architecture is also super memory efficient. While the KV cache in transformer-based LLMs explodes with long contexts, we keep it minimal, even with 1M tokens. This unlocks new applications, like document and book analysis with RAG, directly in your browser or on your phone.

Context window
In this preview release, we focused on delivering a best-in-class 32k context window. These results are extremely promising, but we want to expand it to very, very long contexts. Here are our RULER scores (https://lnkd.in/e3xSX3MK) for LFM-3B ↓

LFM architecture
The LFM architecture opens a new design space for foundation models. It is not restricted to language and can be applied to other modalities: audio, time series, images, etc. It can also be optimized for specific platforms, like Apple, AMD, Qualcomm, and Cerebras.

Feedback
Please note that we're a (very) small team and this is only a preview release. Things are not perfect, but we'd love to get your feedback and identify our strengths and weaknesses. We're dedicated to improving and scaling LFMs to finally challenge the GPT architecture.

Open science
We're not open-sourcing these models at the moment, but we want to contribute to the community by openly publishing our findings, methods, and interesting artifacts. We'll start by publishing scientific blog posts about LFMs, leading up to our product launch event on October 23, 2024.

Try LFMs!
You can test LFMs today using the following links:
Liquid AI Playground: https://lnkd.in/dSAnha9k
Lambda: https://lnkd.in/dQFk_vpE
Perplexity: https://lnkd.in/d4uubMj8
If you're interested, find more information in our blog post: https://lnkd.in/enxMjVez
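The post above notes that the 40B MoE model "activates" only 12B parameters per token. As a rough illustration of how sparse routing makes that possible, here is a toy top-k router sketch. The expert count, top-k value, and per-expert sizes below are made-up illustration values, not LFM-40B's actual configuration.

```python
import random

# Toy numbers for illustration only -- not LFM-40B's real config.
N_EXPERTS = 8          # total experts in the MoE layer
TOP_K = 2              # experts routed per token
PARAMS_PER_EXPERT = 5  # stand-in for each expert's parameter count

def route(token_scores, k=TOP_K):
    """Return the indices of the k highest-scoring experts for one token."""
    ranked = sorted(range(len(token_scores)),
                    key=lambda i: token_scores[i], reverse=True)
    return ranked[:k]

# One token's router scores -> only TOP_K experts run for it.
scores = [random.random() for _ in range(N_EXPERTS)]
active = route(scores)
total_params = N_EXPERTS * PARAMS_PER_EXPERT
active_params = len(active) * PARAMS_PER_EXPERT
print(f"experts used: {active} -> {active_params}/{total_params} params active")
```

With top-2 routing over 8 equal-sized experts, each token touches only a quarter of the layer's expert parameters, which is the general mechanism behind a large total-parameter count with a much smaller activated count.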
-
Today we introduce Liquid Foundation Models (LFMs) to the world with the first series of our language LFMs: a 1B, a 3B, and a 40B model.

LFM-1B performs well on many public benchmarks in the 1B category, making it the new state-of-the-art model at this size. This is the first time a non-GPT architecture significantly outperforms transformer-based models.

LFM-3B delivers incredible performance for its size. It places first among 3B-parameter transformers, hybrids, and RNN models, and also outperforms the previous generation of 7B and 13B models. It is on par with Phi-3.5-mini on multiple benchmarks, while being 18.4% smaller. LFM-3B is the ideal choice for mobile and other edge text-based applications.

LFM-40B offers a new balance between model size and output quality. It leverages 12B activated parameters at use. Its performance is comparable to models larger than itself, while its MoE architecture enables higher throughput and deployment on more cost-effective hardware.

LFMs are large neural networks built with computational units deeply rooted in the theory of dynamical systems, signal processing, and numerical linear algebra.

LFMs are memory efficient
LFMs have a reduced memory footprint compared to transformer architectures. This is particularly true for long inputs, where the KV cache in transformer-based LLMs grows linearly with sequence length.

Read the full blog post: https://lnkd.in/dhSZuzSS
Read more on our research: https://lnkd.in/dHwztmfi

Try LFMs today on:
Liquid AI Playground: https://lnkd.in/dSAnha9k
Lambda: https://lnkd.in/dQFk_vpE
Perplexity: https://lnkd.in/d4uubMj8

Get in touch with us: https://lnkd.in/dttAbPgs
Join our team: https://lnkd.in/dwpfpwyt

Liquid Product Launch Event - Oct 23, 2024 - Cambridge, MA
Come join us at MIT Kresge, Cambridge, MA on October 23rd, 2024, to learn more about Liquid as we unveil more products and progress on LFMs and their applications in consumer electronics, finance, healthcare, biotechnology, and more! RSVP: https://lnkd.in/dYhxqFHU
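The claim above is that a transformer's KV cache grows linearly with sequence length. A back-of-the-envelope calculation makes the scale of the problem concrete. The layer, head, and dimension numbers below are illustrative assumptions for a generic transformer, not the specs of any model mentioned in the post.

```python
# Back-of-the-envelope estimate of transformer KV-cache memory.
# The default dimensions are illustrative assumptions for a generic
# mid-sized transformer, not any specific model's configuration.

def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    """The cache stores one key and one value vector per token, per
    layer, per KV head, so memory grows linearly with seq_len."""
    return 2 * seq_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

for tokens in (4_096, 32_768, 1_000_000):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>9} tokens -> {gib:8.2f} GiB")
```

Under these assumed dimensions, a 4k-token context needs about 0.5 GiB of cache while a 1M-token context needs over 100 GiB, which is why long contexts are impractical on edge devices unless the architecture keeps that state small.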
-
We are thrilled to welcome Mikhail Parakhin, the former Head of Microsoft Bing, Windows and Ads, ex-CTO of Yandex, and current advisory board member at Perplexity, to our advisory board. Mikhail will help us grow faster as an enterprise foundation model provider!
-
We are excited to connect with you at #ICLR2024! Meet us at our booth and apply to join our social on Thursday, May 9th, 6-12pm CET. You can RSVP here: https://lnkd.in/d-54_MCu You can learn more about job opportunities at Liquid here: https://lnkd.in/dwpfpwyt
-
Today we are proud to announce our partnership with ITOCHU Techno-Solutions Corporation (CTC), the largest system integrator in Japan, to bring to market sustainable edge AI capabilities powered by Liquid AI’s learning systems technologies! https://lnkd.in/dXW7yaPa As the Japanese Nikkei 225 reaches its highest level since 1989, we are excited to partner with ITOCHU-CTC to further accelerate and drive continued transformation and growth in Japan. https://lnkd.in/dC2b4Dph Special thanks to Ichiro Tsuge, Nagaki Fujioka, Masanori Tanaka, Tomohiro Igarashi and Atsu Aiyama for making this happen! With Joseph Jacks, Ramin Hasani, and Louis Hunt!
-
Liquid AI reposted this
Innovating with AI is a part of all business conversations, which is why it gives me great pleasure to share the news that Capgemini will be collaborating with Liquid AI to build next-generation AI solutions for enterprises across the globe. This collaboration will focus on developing and advancing AI solutions in various domains, including manufacturing, healthcare, and finance. It will also open new applications of advanced AI solutions on the edge. Thoroughly looking forward to seeing the cutting-edge solutions that this collaboration will deliver: https://lnkd.in/exV8vGgJ #GetTheFutureYouWant #AI #EnterpriseTransformations
We are thrilled to announce our collaboration with Capgemini to build next-generation AI solutions for enterprises! For the last few months, we've been working on this together, and now, following Capgemini's participation in Liquid AI's successful $37.6M seed round, we are committed to delivering cutting-edge AI systems, powered by Liquid Neural Networks, to help drive enterprises’ AI transformation. We're looking forward to building with Andy Vickers, Keith Williams, William Rozé, Anne-Laure CADENE, Dany Tello, Lucia Sinapi-Thomas, Mark Roberts, Benjamin Mathiesen, Patrick Chareyre, Paul Nokes, Arnaud de Scorbiac, Mark Knight, Fabio Fusco, and the entire Capgemini and Capgemini Engineering teams! https://lnkd.in/di5QhhTD