Episode #39 - AI Weekly: by Aruna

Welcome back to "AI Weekly" by Aruna - Episode 39 of my AI Newsletter!

I'm Aruna Pattam, your guide through the intricate world of artificial intelligence.

Now, let's delve into the standout developments in AI and Generative AI from the past week, drawing invaluable lessons along the way.

#1: OpenAI’s Shifts in AI Safety

Recent changes at OpenAI have raised questions about the company’s approach to AI safety. The disbandment of key safety teams, including “AGI Readiness,” signals a shift in priorities, but what does this mean for the future of AI governance?

Miles Brundage’s departure and OpenAI’s restructuring reflect ongoing tensions between innovation and safety. As AI advances rapidly, companies face pressure to develop cutting-edge products, sometimes at the expense of comprehensive safety measures.

OpenAI’s former Superalignment and AGI Readiness teams aimed to address the risks of superintelligent AI, but their disbandment raises concerns about the industry’s ability to prioritise long-term safety while racing toward AGI.

Building smarter-than-human AI requires a safety-first mindset. OpenAI’s recent changes highlight the urgent need for transparent, robust safety frameworks.

Read more here:

#2: Biden Administration Issues National Security Memorandum on AI

The Biden administration has taken a significant step by issuing the first-ever National Security Memorandum on artificial intelligence.

The goal: maintain US leadership in AI while ensuring the technology’s safe and ethical use.

The memorandum outlines plans to strengthen the US AI ecosystem by investing in infrastructure, semiconductors, and clean energy. It emphasises the need for "critical guardrails" to ensure AI safety, echoing the 2023 executive order on trustworthy AI.

The document stresses that, while AI can drive innovation across industries, it also poses risks, such as the misuse of deepfakes. The National Economic Council has been tasked with assessing the country’s competitive AI advantage, but it remains uncertain how future administrations will continue this strategy.

As AI evolves, maintaining leadership requires a balance between innovation and regulation.

Click here for more details!

#3: Meta’s Quantized Llama Models: Faster, Smaller, and Ready for Mobile

Meta has launched new, smaller versions of its Llama AI models, making them faster and easier to use on mobile devices. These models can run up to 4 times faster and use 56% less space, while largely preserving their accuracy.

How It Works:

Meta used two techniques to make the models smaller and faster:

Quantization-Aware Training (QAT):

Think of this as training the AI to work well with fewer resources. During training, QAT teaches the AI to be efficient by simulating how it would behave with lower memory and processing power. This way, when the AI is used on devices, it runs quickly without needing a lot of space.

SpinQuant:

This technique comes in after the AI is already trained. It shrinks the AI down even more, without needing the original data that was used for training. SpinQuant is great for making the AI models work well on different devices, even if they have less memory or power.

With these methods, Meta’s Llama models can now work smoothly on many smartphones, allowing users to have fast, smart AI tools directly on their devices.
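Much of the size saving comes from storing weights as low-bit integers instead of 32-bit floats. Here is a minimal sketch of the underlying quantize/dequantize arithmetic — illustrative only; Meta's QAT and SpinQuant pipelines are far more sophisticated than this:

```python
# Symmetric int8 quantization: map float weights to integers in [-127, 127]
# and back. Storing int8 instead of float32 cuts weight storage by 4x.

def quantize(weights, num_bits=8):
    """Return integer codes and the scale needed to reconstruct the floats."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Approximate reconstruction of the original float weights."""
    return [c * scale for c in codes]

weights = [0.42, -1.27, 0.05, 0.9]
codes, scale = quantize(weights)
restored = dequantize(codes, scale)

# Reconstruction error is bounded by half the quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(codes)                      # [42, -127, 5, 90]
print(max_err <= scale / 2 + 1e-9)  # True
```

Quantization-aware training simulates exactly this rounding during training, so the model learns weights that survive the round-trip with minimal accuracy loss.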

Click the link to learn more:

#4: ASIC Flags Governance Gap in Rapid AI Adoption by Financial Services

ASIC’s first review of AI use by financial and credit licensees reveals a potential governance gap as AI adoption accelerates, with around 60% of licensees planning to expand AI use. Currently, AI supports human decision-making and efficiency, but few licensees have policies addressing consumer fairness or bias, and even fewer disclose AI usage to consumers.

ASIC Chair Joe Longo emphasised that strong governance is crucial to avoid risks like misinformation, bias, privacy issues, and consumer harm. Licensees must update governance frameworks proactively, rather than waiting for new regulations, and ensure AI use aligns with existing consumer protection standards.

ASIC will continue monitoring AI’s impact on consumers and financial markets, enforcing actions as necessary to safeguard system integrity.

#5: Cohere for AI: Bridging the Language Gap in AI

Cohere for AI has launched Aya Expanse, a powerful new family of multilingual models designed to bring AI to more languages worldwide. With open-weight models, Aya Expanse aims to make AI accessible, inclusive, and equitable across diverse linguistic communities.

How It Works:

Aya Expanse models, available in sizes like 8B and 32B, are designed to handle multiple language tasks, from text generation to translation. They use advanced transformer architecture, allowing them to capture nuances in languages that are often underrepresented in AI, like Swahili and Bengali.

Unlike many existing models, Aya Expanse was trained on a diverse range of datasets, ensuring it performs well across languages, not just English. By making these models open to the community, Cohere encourages collaboration, enabling developers to build tailored AI solutions without needing extensive resources.

Click here to read the full story!

#6: Google’s Project Jarvis: A Digital Assistant for Your Web Browser

Google is developing Project Jarvis, an advanced AI agent designed to automate tasks directly within web browsers, especially Google Chrome. Think of it as your personal digital assistant, capable of making online tasks easier and faster.

Project Jarvis can navigate webpages, click buttons, and type responses by analysing screenshots of what appears on your screen. This means it can help with everyday activities like online shopping, booking travel, or conducting research without you needing to lift a finger. The AI uses smart commands to handle routine tasks, aiming to save you time and effort.

With Jarvis, Google is stepping up the competition in the AI world, creating a tool that could change how we interact with the internet. If it works as planned, Project Jarvis could bring the ease and efficiency of a digital assistant to your browser, simplifying online tasks.

Follow the link for more details.

#7: Anthropic’s New AI Can Control Your PC

Anthropic has released an upgraded version of its AI model, Claude 3.5 Sonnet, capable of interacting with desktop apps.

Using a new “Computer Use” feature, the AI can handle tasks like typing, clicking, and navigating software, making it a powerful tool for automating everyday computer tasks.

Claude 3.5 Sonnet uses screenshots to understand what’s on your computer screen. Once it has this information, it can move the cursor, click buttons, and type responses just like a person. Developers can control what the AI does by giving specific instructions, such as "fill out this form" or "navigate to this page." It can even connect to the web, opening up possibilities for tasks like browsing websites or using apps.
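The observe-act loop behind this kind of "computer use" can be sketched as follows. This is a toy illustration, not Anthropic's actual API: the model call is stubbed out, and the action names are invented for the example.

```python
# Minimal observe-act loop for a screen-controlling agent.
# In a real system each screenshot would be sent to the model,
# which replies with the next UI action to perform.

def fake_model(screenshot, goal):
    """Stand-in for the model: picks one action based on what it 'sees'."""
    if "form" in screenshot:
        return {"type": "type_text", "text": "Jane Doe"}
    return {"type": "click", "x": 120, "y": 300}

def run_agent(goal, screens):
    """Observe each screen in turn, act, and stop once the goal is met."""
    actions = []
    for screenshot in screens:
        action = fake_model(screenshot, goal)
        actions.append(action)
        if action["type"] == "type_text":
            break  # stop once the form has been filled in
    return actions

log = run_agent("fill out this form", ["login page", "form page"])
print([a["type"] for a in log])  # ['click', 'type_text']
```

The real system closes the same loop — screenshot in, cursor movement or keystrokes out — just with a vision-capable model in place of the stub.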

Safety Measures:

Anthropic has added safeguards to prevent risky actions, like posting on social media or accessing sensitive sites. Screenshots are retained for 30 days to monitor misuse, and safety checks will continue to improve.

Claude 3.5 Sonnet brings more automation to everyday computing, but ensuring safe use will be key to its success.

#8: xAI Launches Grok API: Expanding AI Access

Elon Musk's AI startup, xAI, has launched its Grok API, offering developers access to its generative AI model. Priced at $5 per million input tokens and $15 per million output tokens, the API aims to compete with established players like OpenAI.
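At those rates, per-request cost is simple to estimate; a quick illustration of the arithmetic:

```python
# Grok API pricing quoted above: $5 per million input tokens,
# $15 per million output tokens.

def grok_cost(input_tokens, output_tokens, in_rate=5.0, out_rate=15.0):
    """Cost in dollars for one request at per-million-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 2,000-token prompt with a 500-token reply:
print(round(grok_cost(2_000, 500), 5))  # 0.0175
```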

The "grok-beta" API supports function calling, connecting Grok to tools like databases. Future features may include vision models for analysing text and images.

Despite a few payment issues, the API marks a step toward wider adoption of xAI's technology, already integrated into X (formerly Twitter) for tasks like image generation and unfiltered Q&A.

With plans to leverage data from Musk's ventures like Tesla and SpaceX, xAI aims to lead the AI race.

Regulatory hurdles remain, but Grok’s release signals xAI's push for broader AI accessibility and innovation.

#9: MIT's New AI Technique Makes Robot Training Faster and More Efficient

MIT researchers have developed a groundbreaking method inspired by large language models (LLMs) to train robots more efficiently. This new approach, called Heterogeneous Pretrained Transformers (HPT), combines data from diverse sources, enabling robots to learn various tasks faster and adapt better.

Traditional robot training often relies on task-specific data, making it difficult for robots to adapt to new environments. HPT changes this by using data from multiple domains—like simulation, real-world sensors, and even human demonstrations. The system processes this data using a transformer model (similar to what's used in LLMs), allowing the robot to understand and learn from a wide range of inputs.
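The core idea — modality-specific encoders feeding one shared trunk — can be sketched as follows. This is a toy illustration of the architecture's shape, not MIT's implementation; the stems and the "trunk" here are trivial stand-ins.

```python
# Toy sketch of the HPT idea: each data source gets its own small "stem"
# that maps raw inputs into a shared token space; one shared trunk then
# processes tokens from any source in the same way.

DIM = 4  # shared embedding width

def camera_stem(pixels):
    """Map raw pixel values to one token in the shared space."""
    mean = sum(pixels) / len(pixels)
    return [mean] * DIM

def proprioception_stem(joint_angles):
    """Map joint-angle readings to one shared-space token (pad to DIM)."""
    return (joint_angles + [0.0] * DIM)[:DIM]

def shared_trunk(tokens):
    """Stand-in for the shared transformer: pool tokens into one vector."""
    return [sum(t[i] for t in tokens) / len(tokens) for i in range(DIM)]

tokens = [camera_stem([0.2, 0.4, 0.6]), proprioception_stem([0.1, 0.3])]
out = shared_trunk(tokens)
print(len(out))  # 4 -- same width regardless of input modality
```

The payoff is that heterogeneous data — simulation, sensors, human demos — all arrives at the trunk in one common format, so the trunk can be pretrained once and reused across robots and tasks.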

HPT improves robot performance by over 20% compared to traditional methods and reduces the need for extensive retraining. This makes training faster, cheaper, and more adaptable, allowing robots to handle complex tasks more effectively.

MIT’s HPT could lead to a "universal robot brain," helping robots learn new skills quickly and adapt to various environments, transforming the future of robotics.

#10: AI Is Bringing the Dead ‘Back to Life’ — A Controversial New Technology

AI is pushing the boundaries of technology, allowing people to "bring back" loved ones even after they’ve passed away. Companies like StoryFile and Project December are creating digital replicas that can interact with people, raising questions about grief, ethics, and what it means to be human.

Speaking at the Sydney South by Southwest conference, expert Bryony Cole explained how AI is being used to revive the voices of deceased loved ones. Companies like StoryFile make digital clones by recording people while they’re alive, enabling them to “attend” their own funerals and answer questions from mourners.

Another example includes a South Korean mother who spoke to her deceased child one last time through an AI-generated interaction.

These innovations provide comfort to some, but also raise ethical concerns.

Critics question whether this helps or hinders the grieving process and warn about potential risks to privacy and authenticity.

That wraps up our newsletter for this week.

Feel free to reach out anytime.

Have a great day, and I look forward to our next one in a week!

Thanks for your support!
