DeepMind’s New Gemini and The $1.3 Billion Acquisition

Hey,

Welcome to this week's edition of AlphaSignal, the newsletter for AI experts.

Whether you are a researcher, engineer, developer, or data scientist, our summaries ensure you're always up-to-date with the latest breakthroughs in AI.

Let's get into it!

Lior


On Today’s Email:

  • Top Releases and Announcements
  • DeepMind’s CEO Unveils Gemini
  • Databricks is acquiring MosaicML for $1.3 billion
  • Hugging Face's testimony before the US Congress


RELEASES & ANNOUNCEMENTS

1. MosaicML releases a new Open Source LLM

MPT-30B is a commercially licensed open-source model, boasting superior performance over MPT-7B and even the original GPT-3. It supports an 8k token context window, and demonstrates robust coding abilities, thanks to its pretraining data mixture.

2. Stability AI launches SDXL 0.9

SDXL 0.9 represents a major upgrade to their text-to-image models, enhancing image detail and composition. It enables hyper-realistic AI imagery across diverse industries on consumer-grade GPUs.

3. George Hotz shares his thoughts on GPT-4 Architecture

AI expert George Hotz, co-founder of Comma.ai, shares his take on GPT-4's architecture, claiming it consists of eight models totaling roughly 1.76 trillion parameters in a Mixture of Experts design (a toy routing sketch follows this roundup).

4. AWS is putting $100 million into a center to help companies use generative AI

The investment is widely seen as AWS's bid to keep pace with Microsoft and Google. Despite larger announcements from those companies, AWS CEO Adam Selipsky suggests this is just the start of a longer race.

5. Google’s spreadsheet-generating AI feature is rolling out

Google's Duet AI now has a "Help me organize" feature for Sheets. It uses generative AI to create and suggest customizable table templates, aiding in complex task organization.?


Access an index of billions of pages with a single API call


If your work involves AI, then you know the overwhelming need for new data. Your competitors might be building incredible products, but if they're all using the same datasets to train their models, they're at a disadvantage.

The Brave Search API gives you access to an independent, global search index to train LLMs and power AI applications.

Brave Search is the fastest-growing search engine since Bing, and it's 100% independent from Big Tech. Its index features billions of pages of high-quality data from real humans, and it's constantly refreshed thanks to being the default search engine in the Brave browser.
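
For a sense of what a single call looks like, here is a minimal sketch in Python against Brave's web-search endpoint. The endpoint path, header name, and response fields are assumptions based on the public API docs, so check the current documentation before building on them.

```python
# Minimal sketch of one Brave Search API web-search call (fields assumed from the public docs).
import requests

API_KEY = "YOUR_BRAVE_SEARCH_API_KEY"  # issued after signing up for the API

resp = requests.get(
    "https://api.search.brave.com/res/v1/web/search",
    headers={"Accept": "application/json", "X-Subscription-Token": API_KEY},
    params={"q": "open-source large language models", "count": 5},
    timeout=10,
)
resp.raise_for_status()

# Print the title and URL of each returned web result.
for result in resp.json().get("web", {}).get("results", []):
    print(result["title"], "-", result["url"])
```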

Get started testing the API for free:

Try Brave Search API (FREE)


DeepMind’s CEO Unveils Gemini, the Next Generation AI Algorithm to Surpass ChatGPT


Head of Google DeepMind, Demis Hassabis, has revealed new details about the company's novel AI system, Gemini, which aims to outshine OpenAI's ChatGPT. This potentially groundbreaking model leverages techniques from the company's historic victory with AlphaGo, which defeated a champion Go player back in 2016.

DeepMind's engineers are working on combining large language model technology, similar to what powers ChatGPT, with reinforcement learning techniques derived from AlphaGo. The combination aims to give Gemini new planning and problem-solving capabilities. Hassabis's comments suggest the new model is a major leap in AI development: a multimodal, efficient, and integrative approach intended to pave the way for future breakthroughs. This aligns with Google CEO Sundar Pichai's earlier remarks on Gemini, which highlighted remarkable multimodal capabilities that surpass those of previous models.

Gemini's development remains ongoing and costly, with a timeline spanning months and a price tag potentially reaching hundreds of millions.
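
To make the LLM-plus-RL idea above slightly more concrete, the toy sketch below applies a REINFORCE-style update to a tiny token policy so that sequences scoring well under a reward function become more likely. It is a generic illustration of reinforcement learning on a generative policy, not a description of how DeepMind trains Gemini; the vocabulary, policy, and reward are all made up.

```python
# Toy REINFORCE loop on a tiny token policy; everything here is illustrative.
import torch
import torch.nn as nn

vocab_size, seq_len = 16, 8
policy = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Linear(32, vocab_size))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reward(tokens):
    # Stand-in for a learned reward model: prefer sequences with many even tokens.
    return (tokens % 2 == 0).float().mean()

for step in range(200):
    tok = torch.zeros(1, dtype=torch.long)          # start token
    log_probs, tokens = [], []
    for _ in range(seq_len):                        # sample a sequence from the policy
        dist = torch.distributions.Categorical(logits=policy(tok)[0])
        tok = dist.sample().unsqueeze(0)
        log_probs.append(dist.log_prob(tok[0]))
        tokens.append(tok[0])
    R = reward(torch.stack(tokens))
    loss = -R * torch.stack(log_probs).sum()        # REINFORCE: raise log-prob of rewarded sequences
    opt.zero_grad()
    loss.backward()
    opt.step()
```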

Opinion

Considering DeepMind's profound experience with reinforcement learning (RL), I am strongly convinced that they are in an ideal position to blend this approach with standard large language models (LLMs) to bring unparalleled capabilities to Gemini. Their track record in this domain is undeniable, and I suspect that marrying RL and LLMs could spark novel innovations, as was initially seen with RL from human feedback.

In my opinion, Hassabis and his team might very well leverage insights from diverse AI subfields to augment LLM technology. Their work, spanning robotics to neuroscience, often results in impressive innovations, such as the recently demonstrated algorithm capable of manipulating a variety of robot arms.

READ MORE


Want to promote your company, product, job, or event to 100,000+ AI researchers and engineers? You can reach out here.


Databricks is acquiring MosaicML for a jaw-dropping $1.3 billion



Databricks, a leading data lakehouse vendor, announced it will acquire San Francisco-based AI startup MosaicML in a $1.3 billion deal. The acquisition aligns with Databricks' ambition to provide a unified platform for managing data assets and building secure generative AI models. In a broader context, enterprises across various sectors are exploring the benefits of large language models (LLMs) for a range of use cases.

MosaicML addresses a crucial issue faced by modern enterprises: the challenge of feeding proprietary data into generative AI models. Its platform enables organizations to build, train, and deploy state-of-the-art models on their own data while retaining model ownership and data privacy. Among its offerings is the MPT series of open-source models, which can be fine-tuned for specific needs.

This acquisition is expected to reduce the cost of training and deploying LLMs from millions to thousands of dollars, heralding a significant step towards democratizing AI.
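
To give a sense of the workflow MosaicML targets, here is a minimal, untested sketch of fine-tuning an MPT checkpoint on in-house text with Hugging Face transformers. The Hub ID, toy dataset, and hyperparameters are placeholder assumptions, not MosaicML's recommended recipe.

```python
# Rough sketch: causal-LM fine-tuning of an MPT checkpoint on a couple of in-house documents.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "mosaicml/mpt-7b"  # assumed Hub ID; smaller sibling of MPT-30B

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # MPT's tokenizer lacks a pad token by default
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Stand-in for an enterprise's proprietary text.
texts = ["Internal FAQ: how do we rotate API keys?",
         "Runbook: restoring the billing database from backup."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mpt-finetuned",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```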

Opinion

I see this merger of MosaicML's offerings with Databricks as an exciting phase for the open-source LLM community. Both companies have already made significant contributions in that regard, with Databricks presenting Dolly and MosaicML delivering the MPT series of models. I believe the new partnership has the potential to further fuel these ventures.

Handling vast volumes of data is a prominent challenge when it comes to LLMs, and I believe Databricks, with its expertise in data management, is well-equipped to address this issue. Furthermore, MosaicML brings an impressive team of researchers with considerable experience in training large-scale models.

READ MORE


Hugging Face CEO Advocates Open-Source AI as Key to American Innovation and Interests



Testifying before the full U.S. House Science Committee, Clement Delangue stressed the crucial role of open science and open-source AI in advancing American interests and innovation. This hearing on AI, attended by key players in the field, comes amidst a heated debate about the open dissemination of AI models.

Delangue argued that America's AI leadership can be credited to open-source tools developed within the country. His statement comes in the wake of concerns about the potential misuse of Meta's open-source large language model (LLM) LLaMA.

The Hugging Face CEO stressed how open source helps AI startups grow and provides a counterbalance to the power of big corporations. He praised Hugging Face's approach to openness, which combines clear policies, safety measures, and incentives for the community, framing it as a crucial way to tackle AI-related issues, a viewpoint that aligns with the ongoing conversation on open-source AI.

Opinion

Delangue's stress on how open-source nurtures AI startups and balances big corporations' power resonates with me. It's refreshing to see Hugging Face's commitment to an open approach, which blends clear policies, safety measures, and community incentives, setting an example for the AI industry.

WATCH HERE




