Neoteric AI News Digest No. 7: New AI Models, Regulatory Waves, and Ethical Concerns

The past two weeks have felt like a new-season launch at a fashion store: it seems we now have AI models in all colors and sizes. But that’s not all we will talk about in this issue. After all, we’re here to bring you the info you might have missed among the “buzzier” news, right?

The pace at which AI companies keep popping up with new launches makes us wonder… do these people ever rest? But jokes aside, it’s really inspiring to see all these advancements, especially when you imagine all the amazing AI-powered software that will follow. Apart from a summary of all the AI model launches, this issue brings some important news about the EU AI Act and AI legislation in the UK, YouTube videos being used to train AI, and more.

It’s Raining AI Models

There have been so many new releases lately that keeping up is becoming a challenge. Not with us around, though! Here’s a brief summary of the main models that hit the market these past two weeks:

GPT-4o mini — released on July 14, it’s a compact version of GPT-4o that retains (though this remains to be seen) the advanced capabilities of its larger counterpart while being optimized for environments with limited computational resources. According to OpenAI, it’s perfect for everyday tasks. You can read all about it here.
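
If you want to kick the tires yourself, the model is served through OpenAI’s standard chat completions API. Here’s a minimal sketch in Python, assuming the openai package (v1+) is installed and an OPENAI_API_KEY is set in your environment; the prompt is just an illustration:

```python
# Minimal sketch: calling GPT-4o mini via OpenAI's chat completions API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # the compact model discussed above
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this week's AI news in one sentence."},
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```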

Llama 3.1 405B — released on July 16, it’s Meta’s most capable model so far. Featuring 405 billion parameters, it enhances natural language processing tasks with improved accuracy and efficiency, making it a robust solution for complex AI applications, especially in reasoning and contextual understanding. Here’s the detailed info about Llama 3.1.
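
Serving the full 405B checkpoint takes serious multi-GPU infrastructure, so for a quick local experiment you’d more realistically reach for one of the smaller Llama 3.1 variants. A minimal sketch with Hugging Face transformers, assuming you’ve been granted access to the gated meta-llama repositories and have a recent transformers release that supports Llama 3.1:

```python
# Minimal sketch: chatting with a Llama 3.1 model via transformers.
# Assumes `transformers` and `torch` are installed and you have access
# to the gated meta-llama repos on the Hugging Face Hub. The 8B variant
# is used for illustration; 405B needs multi-GPU serving infrastructure.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Briefly explain contextual understanding."}]
out = generator(messages, max_new_tokens=120)

# With chat-style input, generated_text holds the whole conversation;
# the last message is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```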

SmolLM by Hugging Face — a family of state-of-the-art small models introduced on July 20, described as “blazingly fast and remarkably powerful” by its creators. It’s available in three sizes: 135M, 360M, and 1.7B parameters, and was created for applications where computational resources are limited, delivering high-quality language processing without the heavy footprint of larger models. You can learn all about it here.
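
Because the smallest variant is only 135M parameters, it should run comfortably on a laptop CPU. A quick sketch with transformers; the checkpoint id below is our assumption based on Hugging Face’s announcement, so check the SmolLM model cards for the exact names:

```python
# Minimal sketch: running the 135M SmolLM variant on CPU with transformers.
# The checkpoint id is an assumption taken from Hugging Face's announcement;
# verify it against the SmolLM model card before relying on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-135M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Small language models are useful because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```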

Apple’s small model was released on July 18. Engineered for optimal performance on Apple hardware, this model enhances functionalities such as voice recognition, text prediction, and personalized user experiences. Which makes us wonder… does this mean Siri will now actually understand the commands we give her? It would be cool if it also finally covered the languages of all the countries Apple ships to. Or is it still too much to ask?

Mistral AI’s Large 2 was presented to the world on July 22. According to TechCrunch, it’s an “answer to Meta and OpenAI’s latest models”, although, just like Llama 3.1, it’s still missing the multimodal capabilities of GPT-4o. What’s quite exciting about it, though, is that it is said to be trained to “acknowledge when it cannot find solutions” or provide an informed answer, which, no matter how basic it sounds, is a pretty significant upgrade — especially when you think of models like GPT-4o making things up without batting an eye. You’ll find all the information on Large 2 here.
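
Large 2 is available through Mistral’s hosted chat completions endpoint. A minimal sketch over plain HTTP, assuming a MISTRAL_API_KEY environment variable; the "mistral-large-latest" alias is our assumption, so check Mistral’s docs for the exact model id:

```python
# Minimal sketch: querying Mistral Large 2 via Mistral's hosted chat
# completions endpoint. Assumes a MISTRAL_API_KEY env var; the
# "mistral-large-latest" alias is an assumption — confirm the exact
# model id in Mistral's API docs.
import os
import requests

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",
        "messages": [
            {"role": "user", "content": "When should a model admit it doesn't know?"}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```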

EU’s AI Act: A New Era for AI Regulation

The EU AI Act, the European Union’s landmark regulation for AI applications, is finally ready and has been published in the bloc’s Official Journal. The law comes into force on August 1, with full application by mid-2026.

The Act sets different rules for AI developers based on how their tech is used and the risks involved. Most AI applications, considered low-risk, won’t face much regulation. However, high-risk uses, like biometric AI in law enforcement or AI in critical infrastructure, come with strict requirements. And some AI practices, like social credit scoring or indiscriminate facial recognition, are outright banned.

What’s next? The bans on prohibited AI practices take effect by early 2025. Developers working on high-risk AI will need to follow new codes of practice starting April 2025. General-purpose AI models, like OpenAI’s GPT, will have to meet transparency requirements by August 2025. Some high-risk AI systems will get a bit more time, up until 2027, to comply.

The EU AI Office will oversee the rollout, creating codes of practice — though there’s concern that industry players might have too much influence over these guidelines.

Want the full scoop? Read the detailed article on TechCrunch.

UK's Wait-and-See Approach to AI Regulations

Unlike the EU, the UK is still in the early stages of defining its approach to AI regulations. The recent King's Speech outlined a tentative commitment to formulating appropriate rules for the most powerful AI models, but stopped short of proposing a dedicated AI bill.

The new Labour government aims to "ensure the safe development and use of AI models" by introducing binding regulations for the most powerful AI systems and banning the creation of sexually explicit deepfakes.

The UK seems to be observing how the EU AI Act unfolds and impacts the market before drafting its own legislation. This cautious approach may allow the UK to tailor its regulations more effectively, but could also mean lagging behind in establishing a robust AI governance framework.

The King's Speech also touched on leveraging AI for economic growth and strengthening product safety frameworks to respond to new tech risks. A proposed Product Safety and Metrology bill aims to update UK product rules to address AI advancements, reflecting similar initiatives in the EU.

In addition to AI regulations, the UK government plans to introduce a Digital Information and Smart Data bill and a Cyber Security and Resilience bill. These initiatives focus on data reforms, digital identity, and enhancing protections against cyber threats.

As the UK navigates its AI regulatory journey, the balance between fostering innovation and ensuring safety will be crucial. The impact of these evolving regulations will be closely watched by industry and policymakers alike.

For more details, read the full article on TechCrunch.

Apple, Anthropic, and Other Companies Used YouTube Videos to Train AI

Last week's news that over 170,000 YouTube videos were included in a dataset used to train AI systems couldn't be better proof that the world desperately needs clear regulations for AI development.

An investigation by Proof News, co-published with Wired, revealed that Apple, Anthropic, Nvidia, and Salesforce used a massive dataset containing subtitles from over 170,000 YouTube videos to train their AI systems. This dataset, known as “YouTube Subtitles,” was scraped from the platform without permission and includes content from more than 48,000 channels, featuring popular creators and major news outlets.

Marques Brownlee, known as MKBHD, expressed his concerns on X, noting that Apple sourced data from multiple companies, one of which scraped YouTube data, including his own videos. “This is going to be an evolving problem for a long time,” he added.

Despite requests for comment from The Verge, YouTube has yet to respond. The platform has previously stated that using creators’ content to train AI systems would violate its terms of service.

The dataset in question is part of a larger collection called The Pile, an open-source project by the nonprofit EleutherAI. This revelation follows last year's controversy over the Books3 dataset, which led to lawsuits from authors whose works were used to train AI systems without permission.

This situation underscores the need for transparency and clear regulations in the AI industry. As much as we love AI, there cannot be space for plagiarism or unauthorized use of any materials published online for AI training purposes. Hopefully, stories like this one are just growing pains of the AI boom, and we’ll soon move past them, enjoying innovations without privacy concerns.

For more details, check out the full article on The Verge.

And if you're interested in the gen AI privacy & security topic, check out this article on our blog: Is Your Data Safe With Generative AI?

Image credit: Nvidia

Nvidia's Blackwell AI Chip Set to Enter Chinese Market

There's been a lot of talk about Nvidia in the past few months, and it seems like nothing's changing on that front. This time, the buzz is about its game-changing Blackwell AI chip, set to make waves in the Chinese market.

Nvidia is creating a version of its newest AI chips, called the "B20," specifically for China, aligning with U.S. export regulations. This new chip, part of the Blackwell series unveiled in March, will be launched with Inspur, a major distribution partner in China, with shipments starting in the second quarter of 2025.

With U.S. export controls tightening, Nvidia's move to introduce the B20 aims to counter challenges from Chinese companies like Huawei and Enflame. Despite a rough start due to pricing issues, Nvidia's sales in China are now rapidly rising. Following the announcement, Nvidia's stock rose by 1.4% to $119.67 pre-market.

For more details, check out the full article on The AI Wired.

***

That wraps up this issue of Neoteric AI News Digest! Don’t hesitate to share your thoughts on today’s topics in the comments section! If you found this edition insightful, don't forget to share it with your network. And as always, stay tuned for more updates in two weeks!

P.S. Looking for a trusted tech partner for your AI-powered software development project? We’ve been building AI projects since 2017. See how we can help you!
