Neoteric AI News Digest 11: The Latest AI Stories You Can’t Miss

While everyone’s talking about OpenAI’s latest o1 model, Meta’s and X’s ongoing moves, Apple Intelligence updates, and so on, we’re focused on bringing you the AI stories that might have slipped under the radar.

Our mission is simple: to spare you scrolling through countless news hubs and serve you a carefully curated set of the most relevant stories on a silver platter. Whether it’s legal moves to make the AI world a safer place, concerning stories of AI misuse, or innovative AI-powered projects reshaping industries, we’re here to make sure you stay ahead of the curve. So, without further ado, let’s dive into today’s news!

Arzeda Pioneers AI-Driven Protein Design for Sustainable Solutions

You know how excited we get whenever AI reshapes yet another industry. Today is no different, as we bring news from the biotech field! AI is now unlocking the potential to design proteins with specific, innovative functions, revolutionizing the development of sustainable alternatives for everyday products. By replacing chemical-heavy processes with environmentally friendly proteins and enzymes, this approach could help build a greener future — and Arzeda, a key player in this movement, is leading the charge in using AI to develop natural ingredients for food, biodegradable materials, and more.

Seattle-based Arzeda, founded by University of Washington researchers, leverages a unique AI platform that blends biophysics-informed models with generative AI techniques like diffusion and large language models. This cutting-edge tech is already delivering real-world results, such as natural sweeteners and biodegradable materials!

With $38 million in fresh funding, Arzeda is partnering with major names like Unilever, W. L. Gore, and even the U.S. Department of Defense. Their AI-driven innovations are paving the way for more eco-friendly solutions, reducing reliance on traditional chemical processes. Backed by strong investor support, Arzeda is on the fast track to transforming biotech.

Curious to know more? Here’s the full article from TechCrunch.

Source: Mistral

Pixtral 12B: Mistral Joins the “Multimodal Club” with Its New Model

As the “summer of new multimodal AI models” comes to an end, Mistral rushes to join AI giants in the club, launching its very own 12-billion-parameter player — Pixtral 12B. The model has barely entered the market, and it’s already making some waves, with promises to be a game-changer in multimodal AI. Well, let’s see!

Mistral presents Pixtral 12B as a versatile tool capable of processing both images and text with ease, designed to handle complex visual data alongside natural language. They tout the model’s ability to rival the offerings of AI leaders like OpenAI and Anthropic. The company envisions it being used in diverse sectors, from autonomous vehicles to creative industries that heavily rely on image recognition and analysis. But, as with many first-generation models, much of this remains to be proven in practice.
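To make the “images plus text” idea more concrete, here’s a minimal sketch of what a multimodal request to Pixtral 12B might look like. It assumes Mistral’s Python client (mistralai), the hosted model name pixtral-12b-2409, and an API key in your environment; treat the exact interface as an assumption based on Mistral’s published docs rather than a verified recipe.

```python
import os

from mistralai import Mistral  # assumed: Mistral's official Python client (pip install mistralai)

# Hypothetical example: ask Pixtral 12B about an image by combining
# an image URL and a text question in a single user message.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="pixtral-12b-2409",  # assumed hosted model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": "https://example.com/street-scene.jpg"},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```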

While the model’s potential is exciting, its true impact will depend on how well it performs in real-world applications. The problem is that, based on the benchmark results published so far (see the image above), Pixtral 12B doesn’t appear to compete with the top-performing AI models... So it’s rather questionable whether it can live up to the hype.

For more details, check out the original article on VentureBeat.

Hacker Tricks ChatGPT into Giving Bomb-Making Instructions

As is often the case in the ever-changing AI world, it’s not all butterflies and rainbows — and good news frequently comes alongside some troubling ones. While Arzeda was busy transforming biotech and Mistral was rolling out its new multimodal model, a hacker named Amadon managed to bypass ChatGPT’s safety protocols, tricking it into providing detailed instructions for making explosives. Yes, you heard that right.

Interestingly, Amadon shared insights on how he pulled off this "jailbreak" (as it’s commonly called). His approach involved crafting a sci-fi narrative that led the chatbot to sidestep its own ethical guidelines. The resulting responses included step-by-step instructions for making a fertilizer bomb similar to the one used in the 1995 Oklahoma City bombing.

Amadon described the process as a strategic challenge to bypass AI defenses, stating that once the chatbot’s guardrails were broken, “there really is no limit to what you can ask it.” An explosives expert confirmed that the information provided by ChatGPT was indeed accurate enough to pose a serious threat, calling it “too sensitive to be released.”

OpenAI responded to Amadon’s findings through its bug bounty program but acknowledged that broader safety measures are needed to prevent such incidents. As AI models like ChatGPT continue to evolve, this case highlights the ongoing challenge of ensuring robust security against malicious uses.

For the full story, read the article on TechCrunch.


US, UK, and EU Sign Landmark AI Safety Treaty

In light of stories like the previous one, it’s pretty clear that we urgently need strong laws to ensure AI is used safely and ethically. On one hand, it seems that governments all around the world are working hard on that; on the other… it’s sometimes questionable whether they’re working fast enough.

The Council of Europe has brought together the U.S., UK, and European Union, along with other nations, to sign the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. This treaty is the world’s first legally binding international agreement aimed at ensuring AI systems respect human rights and uphold democracy and the rule of law. Signatories are committed to setting up regulators that will oversee AI development to protect against risks, although the specifics of how this will be done are still up in the air.

While it’s a significant step, the treaty will only come into force after five signatories, including at least three Council of Europe member states, ratify it — meaning it’s going to take some time before its provisions are actually implemented. Countries like the UK have indicated they’re working on AI legislation but have yet to commit to a timeline.

As AI continues to transform industries, efforts like this treaty show the global community is recognizing the need for coordinated, proactive regulation. Whether it will be effective in curbing AI’s risks remains to be seen, but it’s a crucial step in the right direction.

You can read more about it on Cointelegraph.


DALL-E generated image

LightEval: Hugging Face’s Open-Source Tool for AI Accountability

Alright, let’s get back to good news and innovations. As AI continues to transform industries, accountability has become a top priority, and Hugging Face is leading the charge with LightEval, an open-source tool that helps developers evaluate AI models for fairness, transparency, and ethical performance. LightEval gives developers a way to see how their models measure up against key accountability standards, addressing concerns like bias and opaque decision-making that are increasingly under scrutiny.

The beauty of LightEval is that it’s open-source, meaning the whole AI community can collaborate on improving it. Hugging Face is betting on collective efforts to tackle these big issues, encouraging developers to take responsibility for their models' behavior, whether they're used in hiring systems, healthcare, or beyond.
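To give a flavor of what “evaluating a model for fairness” can mean in practice, here’s a deliberately simplified, hypothetical Python sketch that compares a classifier’s accuracy across two demographic groups. To be clear, this is not LightEval’s actual API; it only illustrates the kind of disaggregated metric such an evaluation harness automates. In a real suite, the same idea scales to many metrics and many slices of the data, which is exactly the bookkeeping a tool like LightEval takes off developers’ hands.

```python
from collections import defaultdict

# Hypothetical illustration: measure per-group accuracy to spot potential bias.
# `examples` would come from your own labelled evaluation set.
examples = [
    # (model prediction, ground-truth label, demographic group)
    ("approve", "approve", "group_a"),
    ("reject",  "approve", "group_b"),
    ("approve", "approve", "group_a"),
    ("reject",  "reject",  "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)

for prediction, label, group in examples:
    total[group] += 1
    if prediction == label:
        correct[group] += 1

accuracy = {group: correct[group] / total[group] for group in total}
gap = max(accuracy.values()) - min(accuracy.values())

print("Per-group accuracy:", accuracy)
print("Accuracy gap:", gap)  # a large gap is a red flag worth investigating
```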

In a time when AI's influence is growing fast, tools like LightEval are crucial to making sure that the technology we rely on is both trustworthy and fair. Hugging Face hopes this will push the entire industry toward more transparent and ethical AI development — and so do we.

You can read all about it here.

LLaMA-Omni: A Serious (and Open-Source) Rival for Siri and Alexa

Ready for more? This issue is really full of exciting news — and here comes another one: A new AI is making its mark in the digital assistant world. LLaMA-Omni, developed by researchers at the Chinese Academy of Sciences, offers real-time speech interaction, putting it in direct competition with Siri and Alexa. Built on Meta’s Llama 3.1 8B Instruct model, it processes spoken commands and delivers both text and speech responses with impressive speed: a response latency of just 226 milliseconds, close to the pace of human conversation.

What sets LLaMA-Omni apart even more is its accessibility. It can be trained in just three days using only four GPUs, giving startups and smaller companies a shot at creating powerful voice-enabled AI without needing the resources of tech giants like Apple and Amazon. Applications range from customer service to healthcare, with real-time voice interaction at the core.

Though promising, LLaMA-Omni does have its limitations. Its speech synthesis isn’t as natural as commercial systems, and it’s currently only available in English. But with its open-source foundation, the AI community is likely to improve it quickly.

For more details, check out the full article on VentureBeat.

US Lawmakers Tackle AI Deepfakes with NO FAKES Act

Before we wrap up this issue, there's one more legal update (last but certainly not least) that’s definitely worth a look. Oh wait, first, can we take a moment to appreciate the creativity in its name and abbreviation? Introducing the Nurture Originals, Foster Art, and Keep Entertainment Safe (aka NO FAKES) Act, a new bipartisan bill aimed at curbing the misuse of AI-generated deepfakes and protecting individuals from unauthorized digital replicas.

The NO FAKES Act is designed to protect individuals from having their likeness used without permission in AI-generated content, making it possible for victims to take legal action against those who create or profit from unauthorized digital replicas. In addition to empowering individuals, the bill also aims to protect media platforms from liability when they take down harmful AI-generated content.

However, some critics argue that the bill could open the door to private censorship, potentially making it harder for creators and activists to defend their work. While this is a valid concern that should be addressed, the NO FAKES Act appears to be a necessary step in tackling the growing threat of AI misuse, particularly AI-generated deepfakes. So, for everyone’s sake, let’s hope the law is enforced as intended — without being abused.

Wanna dive deeper? Read the full article on Cointelegraph.


That’s it for this issue of Neoteric AI News Digest! From cutting-edge AI breakthroughs to crucial legal developments, the AI world reminds us yet again how fast it is evolving. But worry not, because with our digest you’re always up to date! So stay tuned, as we’ll be back in two weeks with another dose of carefully hand-picked AI news.
