Generative AI News - June 2024
Napoleon Bonaparte a la Bill and Ted’s Excellent Adventure

We’re at it again - your monthly recap of news, product announcements, new applications of developing tech, and other interesting Generative AI action that caught our attention.

Subscribe to our newsletter or join our mailing list to receive a recap straight to your inbox. Convenient? We think so.

Temperatures weren’t the only thing rising in June - it was another month of one piece of hot news after another. Here you have, collected for your convenience, this month in GenAI.

  • Google just dropped Gemma 2, their latest AI models, “a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models,” now available to researchers and developers. They’re not ones to brag (OK, they are, but they can, because they’re Google), and the latest release lives up to it, making these advanced AI tools more accessible and cost-effective for a broader range of applications. The Gemma 2 models come in 9B and 27B sizes, and the 27B one is a beast, running smoothly on a single NVIDIA H100 GPU or TPU host. They’re super flexible, working on everything from gaming laptops to cloud setups, and they play nice with tools like Hugging Face and Google AI Studio (see the loading sketch after this list). Equally important? They’ve got smart safety features to keep things responsible.
  • What else? Well, Google continues to increase its hold on the world and improve its products by leveraging the best of tech. In this case, relying on the PaLM 2 large language model, they’ve added 110 new languages to Google Translate, representing more than 614 million speakers, and opening up translations for around 8% of the world’s population, all part of their recent commitment to build AI models that will support the 1,000 most spoken languages around the world. 唔該 m-goi! That’s “thank you” in Cantonese.
  • Can’t talk about the battle for world domination without mentioning a company that’s growing faster than headlines can keep up with, the impressive tech giant NVIDIA. They’ve teamed up with leading AI platform Hugging Face to simplify deploying generative AI models using NVIDIA NIM. This teamwork that makes the AI dream work lets you deploy LLMs like Llama 3 directly from Hugging Face (a quick query sketch follows below). Faster development that’s more accessible for everyone? That’s a partnership we can get behind.
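
Want to try Gemma 2 yourself? Here’s a minimal loading sketch using the Hugging Face transformers library. The model id below is our assumption of the instruction-tuned 9B checkpoint (swap in the 27B variant if you have the hardware); you’ll also need to accept the license on the Hub and a GPU with enough memory.

```python
# Minimal sketch: loading Gemma 2 (9B, instruction-tuned) via Hugging Face transformers.
# The model id "google/gemma-2-9b-it" is our assumption of the instruction-tuned
# checkpoint; accepting Google's license on the Hub is required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single accelerator
    device_map="auto",           # place layers on the available GPU(s)
)

prompt = "Summarize what makes small open models useful for on-device applications."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```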
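
And on the NVIDIA NIM side of things: a NIM microservice exposes an OpenAI-compatible API once the container is running, so querying a deployed Llama 3 looks roughly like the sketch below. The endpoint URL, API key handling, and model name are placeholders for whatever your own deployment reports - treat this as an illustration, not official NVIDIA code.

```python
# Sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible API. URL, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder NIM endpoint
    api_key="not-needed-for-local",       # local deployments may not require a key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # placeholder model name
    messages=[{"role": "user", "content": "One sentence on why inference microservices help deployment."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```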

  • Not to be outdone or outgunned, Apple integrated new foundation models - the engines behind their personal intelligence system - into all your favorite OSs. Two of these models (a smaller on-device model and a bigger server-based model) were built, trained, adapted, and optimized to perform specialized tasks efficiently, accurately, and responsibly. But efficiency isn’t the only target they’re hitting - they handle everyday tasks like writing and summarizing as well, never forgetting Apple’s core values of privacy and user empowerment. Enabling powerful capabilities across language, images, actions, and personal context? Sounds like this Apple again didn’t fall too far from the tree of knowledge…
  • Companies don’t often make waves - or money and headlines - by taking the longer, safer, more conscientious path, but that’s what Dr. Ilya Sutskever, formerly of OpenAI, is doing. His newest venture, an artificial intelligence company, emphasizes safe development in the field. Leaving no room for confusion, he says, “It’s called Safe Superintelligence. SSI is our mission, our name, and our entire product roadmap because it is our sole focus.” A singular goal of progress without harming humanity? Something we imagine we can get behind… Will they manage to avoid the usual pressures of moving fast and raising funds faster? Time will tell, but we like his style, his clarity, and his focus. And with the experience and reputation of those heading this up, success is likely in the cards. We’re rooting for you, SSI.
  • AI startup Luma invited people to play (for free) with their new video-generating tool Dream Machine. But the hype quickly turned - people pointed out that a character in the short Monster Camp looks so much like Mike Wazowski that he could pick it out of a lineup as himself. Luma CEO Amit Jain laid the blame on the user (customers are NOT always right when it comes to AI, folks, because there are terms of engagement), saying visual content uploaded by the user caused the similarity in the animation. The episode underscores the need for AI products to include moderation and enforcement of terms so that infringement and plagiarism don’t run rampant. It’s a problem we’re likely to keep seeing as long as GenAI tech remains opaque, even as it makes phenomenal strides in what’s possible.

  • We talk about how humans and AI are supple-meant to be, and another scenario where we see this symbio-sis-terhood (sorry, we can’t stop with the puns, but why try?) is in code creation and review. OpenAI’s new LLM Critics are AI models trained to spot bugs in code written by other AI models (see the critic-style sketch after this list). Look at that, even AI has peer reviews! They’ve proven remarkably effective at catching bugs in AI-generated code, finding hundreds in ChatGPT training data that had previously been deemed flawless. But they’re not perfect - they sometimes find imaginary friends, bugs that don’t exist. So once again we see that the combination of humans and AI makes for the best results, and perhaps an interesting crime-solving duo. Content-streaming services, you paying attention? Get that made!
  • Looking for a bit of fun or escapism as you study the amazing capabilities of today’s GenAI? We’re enjoying playing with character.ai, a mind-bending way to spend time at the intersection of technology, knowledge, and imagination. AI-based, it takes s(t)imulating conversation to the next level, letting you chat with historical, fictional, and celebrated figures and characters. When this writer engaged in conversation, Napoleon admitted that invading Russia was a mistake and his greatest regret, Oscar Wilde’s character was a shameless flirt, and Princess Buttercup had a slight accent as she pined for her true love. This is fun for now, but keep an eye on how sophisticated language models continue to help chatbots evolve and blur the line between believable and unbelievable. This may be the closest we get to the historical lineup Bill and Ted managed in their phone booth adventures. Someone get Keanu on the phone.
  • Meta’s new LLM Compiler, a suite of openly available, pre-trained models designed specifically for code optimization tasks, is here to shake up the coding world! Built on the Code Llama model and trained on a massive corpus of compiler intermediate representations and assembly code, it’s the upgrade they knew we needed before we did (how very Meta of them; a loading sketch follows this list). What does this mean for developers? Faster, more efficient code with less hassle, which is the dream. Well, that and world peace.
  • Hopefully you reinforced your windows, because it’s hurricane season, and NVIDIA’s latest language model, the Nemotron-4 340B, might just blow you away. It’s their most powerful language model family yet (trained on trillions of tokens), designed to generate synthetic data for training other AI models, with three model types (Base, Instruct, and Reward) each optimized for a different task. It runs efficiently across multiple GPUs and servers, and it’s openly licensed, so developers can use and modify it (a pipeline sketch follows this list). We love that convenience and control, and stellar accessibility and customizability when it comes to AI training data and evaluation. This should help accelerate the development of AI applications and responsible use of LLMs. We give that an NVIDI-A plus.
  • With the new GenQA project (its dataset lives on Hugging Face), researchers have landed on a method for generating large instruction datasets from a single prompt, with minimal human oversight. They use LLMs to create diverse examples, from simple tasks to complex dialogs, which saves time and produces high-quality datasets that hold up in evaluations and rival those made by humans (a stripped-down sketch follows this list). Saving face with Hugging Face, and saving time.
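
On the LLM Critics item: OpenAI hasn’t shipped the critic models themselves, but the underlying idea - one model reviewing another’s code - is easy to approximate with any chat-completions API. The sketch below is our own toy illustration of that pattern, not OpenAI’s implementation; the model name and prompt wording are assumptions.

```python
# Illustration only: a "critic"-style pass where one LLM reviews code that
# another LLM (or a human) wrote. Our own sketch of the idea, not OpenAI's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

suspect_code = """
def average(values):
    return sum(values) / len(values)   # crashes on an empty list
"""

critique = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable chat model works
    messages=[
        {"role": "system", "content": "You are a code reviewer. List concrete bugs; if you are unsure a bug is real, say so."},
        {"role": "user", "content": f"Review this function for bugs:\n{suspect_code}"},
    ],
)
print(critique.choices[0].message.content)
```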
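
On Meta’s LLM Compiler: the weights are distributed like other Code Llama-family checkpoints, so loading one with transformers looks roughly like this. The repo id and prompt below are assumptions - check the official model card for the exact names, license terms, and expected input format.

```python
# Sketch: loading Meta's LLM Compiler with transformers, the same way you would
# load Code Llama. The repo id is an assumed Hugging Face name; the IR snippet
# is illustrative, and the model card documents the real prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/llm-compiler-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Feed the model a small LLVM IR function and let it continue.
ir_snippet = "define i32 @add(i32 %a, i32 %b) {\n  %c = add i32 %a, %b\n  ret i32 %c\n}"
inputs = tokenizer(ir_snippet, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```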
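
On Nemotron-4 340B: the three-model split maps onto a simple loop - the Instruct model drafts synthetic examples, the Reward model scores them, and only the best make it into the training set. The sketch below shows that shape with stub functions standing in for real model calls; it’s our own illustration, not NVIDIA’s pipeline.

```python
# Shape of a generate-then-score synthetic-data loop: an Instruct model drafts
# candidates, a Reward model scores them, the best survive. The two stub
# functions are placeholders for real model calls.
import random

def instruct_model(prompt: str) -> str:
    # Placeholder for a call to the Instruct model (e.g. via your serving stack).
    return f"Synthetic answer to: {prompt}"

def reward_model(prompt: str, answer: str) -> float:
    # Placeholder for the Reward model, which returns a quality score.
    return random.random()

seed_prompts = ["Explain gradient descent simply.", "Write a haiku about GPUs."]
dataset = []
for prompt in seed_prompts:
    candidates = [instruct_model(prompt) for _ in range(4)]              # draft several candidates
    best = max(candidates, key=lambda ans: reward_model(prompt, ans))    # keep the highest-scoring one
    dataset.append({"prompt": prompt, "response": best})

print(f"Kept {len(dataset)} synthetic examples.")
```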
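
And on GenQA: the core trick is a single “generator” prompt that asks the model to invent a topic and then write an instruction/response pair about it, which is where the diversity comes from. Below is our stripped-down approximation using the OpenAI client; the prompt wording and model name are assumptions, not the project’s exact templates.

```python
# Our stripped-down approximation of a GenQA-style generator prompt: one static
# prompt, many diverse instruction/response pairs. Wording and model name are
# assumptions, not the project's actual templates.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

GENERATOR_PROMPT = (
    "Pick a random, specific topic. Then write one instruction a user might "
    "give about that topic, followed by a high-quality response. "
    "Format as:\nINSTRUCTION: ...\nRESPONSE: ..."
)

examples = []
for _ in range(5):  # scale this number up to build a large dataset
    out = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any capable chat model works
        messages=[{"role": "user", "content": GENERATOR_PROMPT}],
        temperature=1.0,      # high temperature encourages topic diversity
    )
    examples.append(out.choices[0].message.content)

print(f"Generated {len(examples)} instruction/response pairs.")
```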

Look out for our next edition, next month, like clockwork. And don’t forget to sign up so you never miss an update. Looking for more content in the interim? Check out our blog, or explore previous editions, like our May roundup.
