Neoteric AI News Digest No. 9: On the Path Towards Safety & Reliability

Welcome to the next edition of Neoteric AI News Digest! While summer slowly comes to an end, the industry is focusing on the growing need for thoughtful regulations and ways to improve AI safety.

In this issue, we’ll dive into the latest AI Hallucination Index and explore the current state of AI’s reliability. We’ll also discuss why AI advancements might be slowing down (and what that means for the industry), and share updates about the EU AI Act. Last but not least, we’ll take a look at new studies on AI risk management by PwC and MIT. Would you believe that while 73% of organizations are embracing gen AI, only 11% have fully integrated responsible AI practices?

Alright, without further ado, let’s dive in and unwrap the news of the past two weeks!


The 2024 AI Hallucination Index: AI Models Still Telling Tall Tales

AI models are getting smarter, but they're still far from perfect. The latest 2024 Hallucination Index puts some of our favorite AI models under the spotlight for an issue that has been around since the early days of generative AI: hallucinations. You might expect that, by now, the models would finally be free of them, but sadly, no. Your trusty AI assistant might still be spouting off fiction instead of facts more often than you’d think.

The Index ranks top models like OpenAI’s GPT-4 and Google’s Gemini based on how frequently they hallucinate. And while there have been improvements, the report shows that hallucinations are still a pretty big deal: apparently, 30% of AI models still produce inaccurate information! Whether it’s providing incorrect facts, making up sources, or just plain inventing details, these slip-ups could have real consequences, especially in high-stakes areas like healthcare.

But it's not all doom and gloom. The accuracy of newer models is improving, though it comes with higher costs and complexity, which points to what the industry should focus on next: balancing performance with efficiency.

For more details, check out the article on The AI Citizen and the full report itself.


Fighting AI Scams with AI: Scamnetic's New Tools Take on Fraudsters

AI has brought us countless amazing innovations, but we all know it’s also opened the door for scammers to get even craftier. With AI-generated scams on the rise, it’s about time we had some reliable tools to protect ourselves.

Scamnetic, a startup founded in 2023, is on a mission to fight AI-powered scams with its own AI tools. Their flagship product, KnowScam, uses machine learning to analyze emails, texts, and social media posts and assign each a risk score indicating how likely it is to be a scam. It helps consumers identify suspicious messages by scanning for red flags like sender details, links, and attachments.
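
To make the idea of red-flag-based risk scoring more concrete, here's a minimal, purely illustrative sketch in Python. It is not Scamnetic's actual implementation; the red-flag patterns, weights, and sender heuristic below are all hypothetical.

```python
import re

# Hypothetical red-flag patterns and weights -- illustrative only, not KnowScam's actual logic.
RED_FLAGS = {
    "suspicious_link": (re.compile(r"https?://\S*(bit\.ly|tinyurl|\.xyz)", re.I), 0.4),
    "urgent_language": (re.compile(r"\b(act now|urgent|immediately|account suspended)\b", re.I), 0.3),
    "payment_request": (re.compile(r"\b(gift card|wire transfer|crypto|bitcoin)\b", re.I), 0.3),
}


def risk_score(message: str, sender: str) -> float:
    """Return a score between 0 and 1 estimating how likely a message is to be a scam."""
    score = 0.0
    for pattern, weight in RED_FLAGS.values():
        if pattern.search(message):
            score += weight
    # A free-mail sender combined with invoice talk is another common red flag (hypothetical rule).
    if sender.lower().endswith(("@gmail.com", "@outlook.com")) and "invoice" in message.lower():
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    msg = "URGENT: your account is suspended, pay now with a gift card: http://bit.ly/claim"
    print(risk_score(msg, "billing@gmail.com"))  # high score for this obviously scammy example
```

In a real product, hand-written rules like these would be just one signal among many, alongside sender reputation, link analysis, and ML classifiers trained on labeled scam data.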

Scamnetic is also rolling out IDeveryone, a tool that lets users verify people they don't know before sending money or personal details, providing an extra layer of protection on dating sites or marketplaces like Facebook. This is part of a broader trend of using AI to defend against the rise of AI-generated scams, such as phishing attempts and even deepfakes.

With a direct-to-consumer service launching soon, Scamnetic aims to empower consumers to fight back against fraudsters armed with AI.

For more details, check out the full article on CNET.


Is LLM Progress Slowing Down?

It’s been nearly two years since the big LLM boom started, and it’s been nothing short of revolutionary. But recent trends suggest that this pace might be hitting a speed bump.

The first year (and especially the first few months) was full of groundbreaking advancements, so much so that it felt almost overwhelming. But subsequent updates and newly released models have focused more on incremental improvements than on radical innovation. Of course, we still see significant developments, but now it’s more about polishing existing technology.

This slowdown in LLM progress could have significant implications for the broader AI landscape.

As LLMs slow down, we might see a shift toward more specialized AI models tailored for specific tasks. Developers could also experiment with new user interfaces (UIs) that offer more structured interactions, moving away from the traditional chatbot format. Open-source LLMs might close the gap with commercial giants if advancements plateau, and we may even witness the emergence of new AI architectures beyond the current transformer models.

When you think about it, this change of dynamics seems quite natural. We went through a revolution when the new technology hit the market and its capabilities expanded quickly. Now it’s time for it to settle into the world, to be carefully polished so it’s as reliable as possible, and to be properly adopted across industries.

I’d dare to say it would be foolish to assume we won’t witness another AI revolution soon. The first wave of the world’s AI transformation has passed, and now we’re in a “settling phase.” But everything we’ve seen over the past two years suggests that a new wave of groundbreaking advancements could be just around the corner.

For a deeper dive into this topic, check out the full article on VentureBeat.

Tech Firms Push for More Time to Meet EU AI Act Deadlines

Remember the EU AI Act? We wrote about it a few weeks ago, in one of the previous issues of Neoteric News Digest. It took effect on August 1, but the first wave of regulations is set to start in February 2025, banning certain AI applications, such as those exploiting individual vulnerabilities or creating unauthorized facial recognition databases.

By August 2025, new requirements will be implemented, focusing on GPAI (general-purpose AI) models, versatile AI systems capable of handling various tasks. However, several major tech organizations, including DOT Europe and The Software Alliance, are asking the European Commission for an extension of the compliance deadlines.

In a letter sent on August 9, the companies expressed concerns that the current timeline, which falls during the summer recess, doesn’t allow for sufficient industry feedback. They argue that the importance of these regulations, especially those affecting GPAI, calls for more time to ensure high-quality responses.

The industry is keen to see rules that are both effective and practical, and argues that this is only achievable if the process isn’t rushed. Their request emphasizes the need for thoughtful, well-considered regulations that can effectively guide the AI ecosystem in the EU without stifling innovation.

You can read more about this topic on Cointelegraph.


Doing AI Right: The Importance of Assessing Risks

Speaking of doing things right and ensuring AI's reliability, a recent PwC study highlights a concerning trend: while 73% of organizations are embracing generative AI, only 58% are assessing the associated risks. This gap is worrying, especially as AI adoption scales across industries.

PwC emphasizes that responsible AI — focused on value, safety, and trust — should be integral to any company’s risk management strategy. Yet, many organizations are still struggling to implement comprehensive risk assessments, with only 11% claiming to have fully integrated such practices.

As AI projects move from small internal teams to large-scale deployments, the need for robust responsible AI strategies becomes more critical. PwC suggests that companies focus on creating clear ownership and accountability for AI practices, embedding AI risk specialists, and ensuring that AI safety is seen as a competitive advantage, not just a checkbox.

For a deeper dive into this topic, read the full article on VentureBeat.

MIT's Guide to AI Safety Challenges

Since we're already on the topic of AI risks, MIT researchers have launched an extensive AI risk repository designed to bridge gaps left by existing frameworks, many of which cover only a fraction of potential risks.

MIT’s project stands out because it doesn’t just highlight risks but also helps researchers, companies, and policymakers understand and address them effectively. Developed with input from academic and industry experts, the repository categorizes over 700 AI risks, ranging from privacy concerns to misinformation.

What makes this repository crucial is its potential to guide the future of AI regulation and development. As AI becomes more integrated into critical systems, understanding its risks is essential for creating robust, responsible AI strategies. MIT researchers are optimistic that this repository will not only help shape AI regulations but also push organizations to adopt more thoughtful, risk-aware practices.

MIT's next step? Using the repository to evaluate how well different AI risks are being addressed in practice. This could reveal overlooked areas and drive more balanced, comprehensive responses to AI's risks.

Sound interesting? You can read more about it on TechCrunch and on the AI Risk Repository website.


And that’s a wrap for the 9th edition of Neoteric AI News Digest! We hope these stories gave you some valuable insights into the evolving world of AI. Feel like sharing this piece with your network? Don’t think twice! Got thoughts or questions? We’re always up for a good discussion — leave a comment or drop us a message.

P.S. Looking for a trusted tech partner for your AI-powered software development project? We’ve been building AI projects since 2017. See how we can help you!
