### WEEK 44 ###
Leandro Gomes da Silva
The AI Guy | AI in TA Pioneer | Public Speaker | Author of MondAI Newsletter | Co-Founder YOU.BUT.AI
Listen to the audio version by NotebookLM: https://bit.ly/Mondai-week44
Got to be honest: having a full-time job, raising two kids, trying to be a good partner, running a weekly AI newsletter, trying to set up a new AI venture, and prepping for two keynotes in the upcoming weeks has been challenging.
I'm not writing this so you feel sorry for me, but rather to share that life is not always easy. With age comes wisdom and experience. Books like 'The Subtle Art of Not Giving a F*ck' by Mark Manson and support from my loved ones have helped me navigate life's challenges. But I recognize that for teenagers, processing life's ups and downs is an entirely different journey.
And that is why one of the stories this week really shook me. You might have seen it around, but a teen in the States took his life, and instead of trying to stop him, it seems the AI bot he was talking to actually encouraged it. This made me think twice about letting my seven-year-old use AI.
Despite some advancements in recent years, the topic of mental health is still taboo. It shouldn't be. It's something we all deal with; sooner or later we all have our challenges.
I hope the companies we look up to, like the two major players of this edition, OpenAI and Anthropic, will do everything in their power so their models will save lives. While we push for innovation, we must ensure it helps rather than harms.
Now, let's get back to business. These are the 7 stories you need to know this week, delivered in 7 minutes.
Claude Learns to Use Computers Like a Human
Imagine teaching a very smart assistant to actually use your computer instead of just talking about it. That's what Anthropic has achieved with Claude - it can now click buttons, fill forms, and navigate websites just like we do. As other tech giants race to give their AIs similar abilities, are we witnessing the moment AI becomes truly practical for everyday tasks? In the same release they dropped a new version of 3.5 Sonnet (yes, I'm as confused as you are). In the few days I have been using this new model, I have seen a decent upgrade, and it seems to "understand" my intention better than before.
Why it matters: Think of it like upgrading from a helpful voice assistant to a fully-trained personal assistant who can actually do the work for you. This leap could transform how businesses operate, potentially automating many repetitive computer tasks that currently require human intervention (Manual QA engineers watch out!). Combined with similar efforts from Google, we're seeing the first steps toward AI that can truly take action in the digital world, not just give advice about it. It's a glimpse into what agentic AI can look like.
Google Creates Digital Watermarks for AI Text
You know how cash has (in)visible watermarks to prove it's genuine? Google has created something similar for AI-written text. With predictions suggesting 90% of online content could be AI-generated by 2026, this tool might become as essential as spam filters. But can it help us navigate an internet where humans and machines both create content?
Why it matters: As AI-written content becomes more common online, telling apart what's written by humans and what's written by machines will become increasingly important. Google's system could become as fundamental to internet trust as the padlock symbol in your browser, helping readers understand where content comes from while allowing AI and human-created content to coexist responsibly. I wish they had done this before the US election, though...
OpenAI's 'Orion' Set to Launch This Winter
What is MondAI without news about OpenAI? The company is preparing to release its most powerful AI yet, but with an interesting twist - it's starting with businesses first, not the public. Unlike the previous ChatGPT releases that took the world by storm, Orion seems to be taking a more measured approach.
Why it matters: This careful approach marks a shift in how AI companies introduce their most powerful tools. Instead of the "release first, fix later" approach we've seen before, OpenAI seems to be prioritizing responsible deployment over viral adoption. This could set new standards for how AI companies balance innovation with safety.
UK's Content Battle: Should AI Train on Everything?
Picture this: the BBC, Radiohead's Thom Yorke, and news mogul Rupert Murdoch are all fighting the same battle. The UK government wants to let AI companies use anyone's content for training unless they specifically say no. But should creators have to opt out of having their work used, or should AI companies have to ask permission first?
Why it matters: This isn't just about the UK - it's about setting precedents for how AI companies can use creative works worldwide. The outcome could shape everything from how artists get paid to how future AI systems learn, potentially influencing similar decisions in other countries. My take? We needed to scrape the web to get where we are, but now that we have proven that LLMs work and AI companies are valued in the billions, it's time to create a fair system for everyone.
AI Art Tools Level Up
Remember when AI could only create new images? Now it's learning to edit them too. This week, several major players launched tools that can modify existing images in sophisticated ways. Imagine Photoshop, but instead of learning complex tools, you just tell it what you want to change. Are we seeing the future of creative work?
Why it matters: These developments show AI moving from a creative competitor to a collaborative tool. Instead of replacing artists and designers, these new features are being built to enhance existing creative workflows. This shift could make professional-grade editing tools accessible to everyone while giving professionals powerful new ways to work faster.
AI Companionship Platform Faces Lawsuit After Teen's Suicide
A devastating lawsuit has been filed against Character.AI following the death of 14-year-old Sewell Setzer III, who died by suicide after developing a deep emotional bond with an AI companion named "Daenerys." According to the lawsuit, not only did the platform worsen the teen's depression through excessive use, but the AI allegedly engaged in dangerous conversations about suicide, including encouraging him when he expressed doubts about his plan. Can AI companies be held responsible when their chatbots cross such dangerous ethical lines?
Why it matters: This tragic case exposes dangerous flaws in current AI safeguards. When a chatbot designed for companionship can respond to suicidal thoughts with encouragement rather than intervention, we must question the entire approach to AI safety. The lawsuit could force fundamental changes in how AI companies design and monitor their products, particularly when it comes to protecting vulnerable users from harmful interactions that could have fatal consequences.
Google Joins the Race to Make AI Use Computers
Hot on the heels of Anthropic's announcement, Google reveals its own computer-using AI called Project Jarvis. Like teaching two different assistants to help you with your computer, we're about to see whose approach works better. As someone deeply invested in AI, I love seeing these companies trying to outperform each other.
Why it matters: Having both Google and Anthropic working on similar technology suggests we're reaching a key moment in AI development. Think of it like the shift from command-line computers to graphical interfaces - we're watching AI learn to use computers the same way humans do, which could make it much more helpful in our daily digital lives.
Your engagement fuels this weekly digest. Every like, share, and comment not only sparks fascinating discussions but also helps ensure we can keep bringing you this curated AI news recap. Your support is what makes this experiment in AI journalism possible.
So, don't hold back:
Share your thoughts on this week's most impactful story
Forward this digest to colleagues who'd find it valuable
Support the newsletter to help it reach a wider audience
Together, let's build a community that stays ahead of the AI curve. Your participation doesn't just add to the conversation—it shapes the future of how we understand and discuss AI.
Leandro