Neoteric AI News Digest No. 8: The Push for Regulations and Responsible AI

The first August edition of Neoteric AI News Digest was not that easy to put together, as the past 2 weeks were surprisingly full of noteworthy news from the artificial intelligence scene. It seems that this summer, the main theme in the AI world will be new models and regulations — and that’s where we’ll start today.

When it comes to AI policies, the first thing that pops into our mind is: it’s about time. This industry desperately needs some solid, well-considered laws that would allow it to grow without all the unnecessary concerns that keep roaming around it. Especially since this week we have another “AI plagiarism” story on our plate — and we all know it’s probably just the tip of the iceberg. What else do we bring to your attention today? Read on and find out!

AI Giants’ Focus on AI Safety as Regulations Tighten

It's quite interesting that now that the EU AI Act is in force, Big Tech suddenly seems a lot more interested in AI safety. Leading the charge is OpenAI, which has been actively engaging with U.S. lawmakers and regulatory bodies to shape the future of AI governance.

Recently, U.S. lawmakers sent a letter to OpenAI CEO Sam Altman, questioning the company's safety standards and practices. They asked OpenAI to dedicate 20% of its computing power to safety research and give the U.S. government early access to its next AI model for pre-deployment testing. This comes amidst whistleblower reports alleging lax safety standards and retaliation against employees who raised concerns.

In response, OpenAI has pledged to work closely with the U.S. AI Safety Institute, promising early access to its next generative AI model for safety testing. This move aims to counter the narrative that OpenAI has deprioritized AI safety. Additionally, OpenAI has committed to eliminating restrictive non-disparagement clauses and creating a safety commission, although these steps haven't fully satisfied all critics.

Moreover, OpenAI has endorsed several Senate bills that could significantly influence U.S. AI policy. These include the Future of AI Innovation Act, which would formalize the U.S. AI Safety Institute as a federal body, as well as the NSF AI Education Act and the CREATE AI Act, which aim to bolster AI research and education. These endorsements are seen as OpenAI's strategy to secure a favorable position in future regulatory discussions.

For more detailed insights, check out the full article on Cointelegraph and these two on TechCrunch: first one & the second one.

Image credit: Google

Google Unveils New 'Open' AI Models with a Focus on Safety

OpenAI isn't the only one taking action in the AI safety area. Google has just launched a trio of new "open" generative AI models, part of its Gemma 2 family, highlighting safety, transparency, and versatility.

These new models — Gemma 2 2B, ShieldGemma, and Gemma Scope — cater to various applications, but according to Google, they share a common goal: making AI safer and more transparent. Unlike the proprietary Gemini models, the Gemma series is Google's way of building trust within the developer community by being more accessible.

Gemma 2 2B is a lightweight model designed for text generation and analysis, compatible with a range of hardware, from laptops to edge devices. It’s licensed for specific research and commercial uses and can be accessed through platforms like Google’s Vertex AI model library and Kaggle.
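
For those who want to try a model of this size on their own hardware, here's a minimal sketch using the Hugging Face transformers library. The model ID and prompt below are our assumptions for illustration; Google's official distribution channels are Vertex AI and Kaggle, and access may require accepting the Gemma license terms.

```python
# Minimal sketch: running a small open model locally for text generation.
# Assumes the Hugging Face `transformers` library and that the model is
# published under an ID such as "google/gemma-2-2b-it" (license/access may apply).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # assumed ID for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Summarize why open AI models matter:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```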

ShieldGemma is a collection of safety classifiers aimed at detecting and filtering toxic content such as hate speech and harassment. This ensures that both the inputs and outputs of generative models remain safe and appropriate.
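
To picture how such classifiers are meant to be used, here's a minimal sketch of the general pattern: screen both the user's prompt and the model's draft reply before anything is shown. The classifier below is a generic open-source stand-in, not ShieldGemma itself, and the labels and threshold are illustrative assumptions.

```python
# Minimal sketch of the safety-gating pattern ShieldGemma is built for:
# classify both the prompt and the draft answer, and block anything flagged.
# "unitary/toxic-bert" is a generic stand-in classifier, not ShieldGemma.
from transformers import pipeline

safety = pipeline("text-classification", model="unitary/toxic-bert")

def is_safe(text: str, threshold: float = 0.5) -> bool:
    # Treat a high "toxic" score as unsafe; ShieldGemma's own labels differ.
    result = safety(text[:512])[0]
    return not (result["label"].lower() == "toxic" and result["score"] >= threshold)

def guarded_generate(prompt: str, generate_fn) -> str:
    # Gate the input, generate a draft, then gate the output before returning it.
    if not is_safe(prompt):
        return "Sorry, I can't help with that request."
    draft = generate_fn(prompt)
    return draft if is_safe(draft) else "Response withheld by the safety filter."

# Toy usage with a dummy generator standing in for the actual model call.
print(guarded_generate("Write a friendly greeting", lambda p: "Hello there!"))
```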

Gemma Scope allows developers to delve into the inner workings of Gemma 2 models, providing valuable insights into how the model processes information and makes predictions. This transparency helps researchers understand and trust the model’s operations better.

The release of these models aligns with a recent endorsement from the U.S. Commerce Department, which praised open AI models for their accessibility and highlighted the need for monitoring potential risks.

Wanna know more? Check out the full article on TechCrunch.

Image credit: European Commission website

EU Launches a Consultation Process to Shape AI Rules

The EU is taking a proactive step in refining its AI regulations, and what's even better, it's calling on the public to participate. The European Union has launched a consultation process to shape the rules for general-purpose AI models (GPAIs) under the AI Act, which comes into force on August 1. This approach aims to involve a broad range of stakeholders to create a Code of Practice, ensuring the development of “trustworthy” AI systems.

The consultation invites input from GPAI providers like Anthropic, Google, Microsoft, and OpenAI, as well as businesses, civil society representatives, rights holders, and academic experts. The goal is to develop a comprehensive Code of Practice by April 2025, providing ample time for thorough guidance creation.

Divided into three sections, the consultation covers transparency and copyright provisions, risk assessment and mitigation for high-compute models, and the monitoring of Codes of Practice. Responses will help shape the template for summarizing the content used for training GPAIs, ensuring detailed and practical guidance.

In addition, the EU is calling for expressions of interest from eligible stakeholders to participate in virtual meetings and workshops to draft the Code. This process aims to be transparent and inclusive, addressing concerns about the potential exclusion of civil society organizations.

Interested parties can submit their input by September 10, 2024, but the deadline for expressing interest in the drafting process is August 25, 2024.

You can read more about it here. And if you’re interested in this topic, be sure to watch the recording from our webinar, where Matt Kurleto discussed “AI innovations vs. regulations” with Federico Menna, the CEO of EIT Digital.

AI Music Startups Face Legal Challenges Over Copyright Infringement

This story is yet another perfect example of why proper AI regulations are so urgently needed. AI music startups Suno and Udio are facing copyright infringement lawsuits, as the Recording Industry Association of America (RIAA) accuses them of using copyrighted music to train their AI models without permission. (Anyone surprised? We’re not.) The AI-generated songs are said to resemble those of famous artists like Bruce Springsteen and Michael Jackson.

The RIAA, representing major labels like Universal Music Group and Sony Music Entertainment, is demanding damages up to $150,000 per infringed work. They argue that Suno and Udio's activities amount to "unlicensed copying of sound recordings on a massive scale."

Suno and Udio, however, defend their methods, claiming it's fair use. They compare their AI's learning process to how a musician studies existing music to create new songs. Suno even described its model training as similar to "a kid learning to write new rock songs by listening religiously to rock music." They also noted that companies like OpenAI and Google use comparable training techniques.

But the RIAA isn't convinced, insisting that proper consent should have been obtained. They highlight that platforms like YouTube secure licenses for using copyrighted content. "There’s nothing fair about stealing an artist’s life’s work," the RIAA stated.

For more details, check out the full article on The Verge.

X Uses User Data to Train AI Without Notice

Need more reasons? Here you go. X activated a setting allowing it to train Grok AI on user posts and interactions by default... and never bothered to announce it. So, even though the setting can be disabled, nobody even knew it was on in the first place. We'd say such practices are rather far from fair or decent, but what do we know?

The setting, found under data sharing, reads, "Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning." This data may also be shared with X's "service provider xAI." While users can opt out via the desktop version of X, there's currently no option to do so on the mobile app. Additionally, users with multiple accounts must opt out for each one separately.

The training of AI systems on user data without explicit consent has sparked controversy. Companies like Apple, OpenAI, and Meta have faced scrutiny for similar practices. As AI systems require increasing amounts of data to improve, tech companies are seeking new sources, potentially leading to a data shortage in the coming years. This issue is particularly pressing in Europe, where tech restrictions are more stringent.

You can read the whole story on CNET.

To Watermark or Not to Watermark? OpenAI Hesitates Over User Concerns

This one is quite funny when you think about it. OpenAI worked out a way of watermarking AI-generated text, so that content generated by ChatGPT could be identified. Moreover, about a year ago they built a tool for detecting AI-generated text, meaning everything was ready to ensure that AI content is distinguishable from human-written texts.

The watermarking process subtly alters word predictions to create a detectable pattern, found to be 99.9% effective and resistant to tampering. However, it can be bypassed by rewording with other models. OpenAI is also exploring embedding metadata as an alternative, though its effectiveness is still being evaluated.
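
OpenAI hasn't published the details of its scheme, so as a rough illustration of how statistical text watermarking works in general, here's a minimal sketch based on the widely cited "green list" idea from academic research (Kirchenbauer et al., 2023): a hash of the previous token splits the vocabulary in two, generation quietly favors the "green" half, and a detector simply counts how often that bias shows up. All names and parameters here are illustrative.

```python
# Minimal sketch of one published watermarking idea (Kirchenbauer et al., 2023),
# not OpenAI's unreleased scheme: a hash seeded by the previous token assigns each
# candidate token to a "green" or "red" list; watermarked generation favors green
# tokens, and detection measures how far the green fraction exceeds ~50%.
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically assign `token` to the green half of the vocabulary,
    # seeded by the preceding token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half the vocabulary is "green" per context

def green_fraction(tokens: list[str]) -> float:
    # Watermarked text should show noticeably more than ~50% green tokens.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

text = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(text):.2f}")  # around 0.5 for unwatermarked text
```

This also hints at why the bypass mentioned above works: once another model rewords the text, the statistical skew toward "green" tokens is largely erased.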

The thing is, now OpenAI is hesitating whether to implement a watermarking system at all. While some employees support its use, there are concerns it might deter users, with nearly 30% indicating they’d use ChatGPT less if watermarking were introduced.

It feels like that’s where the circle of privacy and safety concerns over AI closes. On one hand, we all want fair AI that doesn’t learn from material it isn’t authorized to use, is free of plagiarism, and so on. Creators want their hard work to be recognized and distinguished from AI-generated content, and professionals want to get paid what they deserve without hearing “it should be cheaper because you can use ChatGPT”. On the other hand, we do recognize the benefits of tools such as ChatGPT, and we use them quite often for a wide range of purposes, from private matters to actual work. But at the same time, even though we openly speak of using it, we don’t necessarily want anyone to know which parts of our work are indeed AI-generated.

OpenAI’s struggle is real, stemming from a serious paradox we’re observing in this new AI-powered reality. So, the question remains: “to watermark, or not to watermark?” And it looks like finding an answer to this one might help solve an even bigger dilemma.

What’s your take on this story? Would you rather have AI-generated text watermarked or not?

Interested in the full story? Read that piece on The Verge.

Will 30% of Gen AI Projects Be Abandoned After PoC By End of 2025?

Last but not least, before we wrap up this issue of Neoteric AI News Digest, let's dive into Gartner's predictions regarding the future of generative AI projects.

The company's latest insights reveal that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025. The reasons? Poor data quality, inadequate risk controls, escalating costs, and unclear business value. Speaking at the Gartner Data & Analytics Summit, VP Analyst Rita Sallam highlighted the financial burdens of developing and deploying GenAI models, with costs ranging from $5 million to $20 million.

A significant challenge lies in justifying the investment in GenAI for productivity enhancement, which often doesn't translate directly into financial benefit. Gartner's research indicates that GenAI projects require a higher tolerance for indirect, future financial investment versus immediate ROI. Early adopters report various business improvements, but the substantial costs and variable impacts pose challenges.
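
To make that trade-off concrete, here's a back-of-the-envelope sketch. The cost range comes from the Gartner figures above; the annual benefit numbers are purely hypothetical and only illustrate how quickly (or slowly) such an investment could pay back.

```python
# Back-of-the-envelope ROI sketch using the $5M-$20M cost range cited by Gartner.
# The annual benefit figures are purely illustrative assumptions, not Gartner data.
def simple_roi(total_cost: float, annual_benefit: float, years: int = 3) -> float:
    """Return ROI over `years` as a fraction of total cost."""
    return (annual_benefit * years - total_cost) / total_cost

for cost in (5_000_000, 20_000_000):          # Gartner's reported cost range
    for benefit in (2_000_000, 8_000_000):    # hypothetical annual business value
        print(f"cost ${cost:,}, benefit ${benefit:,}/yr -> "
              f"3-yr ROI {simple_roi(cost, benefit):.0%}")
```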

Rita Sallam emphasized the importance of analyzing the business value and total costs of GenAI business model innovation to establish direct ROI and future value impact. If business outcomes meet or exceed expectations, it presents an opportunity to scale GenAI innovation across a broader user base or additional business divisions. Conversely, falling short may necessitate exploring alternative innovation scenarios.

We must say that these insights are not surprising to us. Having years of experience as a tech partner for businesses building gen-AI-powered products or implementing AI in their organizations, we know all too well that AI projects often fail. But that's exactly why we always underline:

  • The right approach to AI adoption is key; building a solid and well-thought-through strategy can hugely impact your success;
  • Always start with a PoC, because it lets you validate your ideas at minimal risk.

Here at Neoteric, we can help you navigate the challenges of a generative AI project. We help identify use cases, estimate Total Cost of Ownership (TCO) and Return on Investment (ROI), define success and failure criteria, assess impact and complexity, create a pipeline of PoC projects, and more. So, if you want to maximize your chances of success, you know where to find us!

If you have some ideas in mind, now is a perfect time to discuss them. Until the end of August, we have a great offer on our AI workshops, which are the best first step to kicking off your project.

Be sure to take a look at it: Neoteric AI Workshops, and don’t hesitate to contact us if you have any questions!

For more detailed insights, read the full article on Gartner’s blog.

***

That’s it for the 8th issue of Neoteric AI News Digest. Like always, we encourage you to share your thoughts on the news and pass this article on to your network, if you find it insightful. See you back here in two weeks!

P.S. Looking for a trusted tech partner for your AI-powered software development project? We’ve been building AI projects since 2017. See how we can help you!
