
32nd Edition - Last Week in AI - Brain-Computer Interfaces - AI talks to AI

Welcome to the 32nd Edition of Future Forward - the Emerging Tech & AI Newsletter!

This newsletter aims to help you stay up-to-date on the latest trends in emerging technologies. Subscribe to the newsletter today and never miss a beat!

Subscribe to the newsletter here.

Here's what you can expect in each new issue of the Emerging Tech & AI Newsletter:

  • A summary of the top AI news from the past week
  • An introductory primer on an emerging technology (we explore Brain-Computer Interfaces this week)
  • A key topic in AI, examples of how AI is being used, or how it will impact the future (we explore a recent advancement where AI talks to AI this week)


Last Week in AI

The field of AI is experiencing rapid and continuous progress in various areas. Some of the notable advancements and trends from the last week include:

Big Tech in AI:

[Cover image: Big Tech in AI. Logos are the copyright of their respective companies.]

  1. Apple unveils MM1, a family of multimodal AI models.
  2. Google released VLOGGER, which can generate photorealistic talking-avatar videos with full upper-body motion.
  3. Jensen Huang unveiled Nvidia’s next-gen Blackwell GPU architecture and the GB200 Superchip.
  4. YouTube updated its policies to mandate disclosure whenever videos contain realistic content generated by artificial intelligence.
  5. Nvidia announced Project GR00T, a foundation model for humanoid robots.
  6. Google DeepMind introduced TacticAI - an AI assistant for football tactics.
  7. Nvidia announced an Earth climate digital twin.
  8. Microsoft struck a $650 million licensing deal with Inflection AI, bringing the startup's models to its Azure cloud platform. This follows Microsoft's recent hiring of Inflection co-founder Mustafa Suleyman and a large portion of its staff.
  9. Google AI could soon use a person's cough to diagnose disease.
  10. France Fines Google $272M for Training AI on News Articles.
  11. Google Research uses AI to accurately predict riverine flooding.
  12. Meta to launch WhatsApp tipline to detect deepfakes in India.
  13. Amazon and Nvidia extend their collaboration to advance generative AI.
  14. Meta Says its AI Tools Boost Ad Campaigns’ Returns by 32%.

Funding & VC Landscape:

  1. Saudi Arabia reportedly plans to create a $40B fund to invest in AI.
  2. Biden awards Intel $20B in AI chip incentives.
  3. Web3 Startup Tensorplex Labs Raises $3M Seed Funding to Decentralize AI.
  4. Foundry Emerges From Stealth With $80M For Purpose-Built AI Cloud.
  5. Borderless AI raises $27 million in funding backed by Susquehanna and Aglaé Ventures.
  6. Maruti Suzuki India Limited acquires a 6.44% stake in Amglo Labs.
  7. Hippocratic AI secures $53M, bringing valuation to $500M.
  8. Together AI Raises $106 Million in Funding.
  9. Spectral AI secures $30 million equity funding deal.
  10. Urbanic secures $150 million investment to enhance AI design capabilities.
  11. AI inferencing startup NeuReality raises $20M in its latest funding round.
  12. Spectral AI secures a portion of $30M SEPA funding.

Other AI news:

  1. Pipio - Instantly translate videos with AI-powered lip sync.
  2. maisa announced the beta release of KPU, which leverages the power of LLMs while decoupling reasoning from data processing in an open-ended system.
  3. Stability AI released Stable Video 3D: quality novel view synthesis and 3D generation from single images.
  4. Buildbox.ai released Buildbox 4 Alpha, the AI-first game engine where you simply type to create.
  5. GitHub launches code scanning autofix, which uses CodeQL to discover vulnerabilities across a codebase.
  6. Sakana AI released new foundational models using the Evolutionary Model Merge technique.
  7. Stanford gives AI an inner monologue.
  8. Top computer scientists say the future of artificial intelligence is similar to that of Star Trek.
  9. AI's excessive water consumption threatens to drown out its environmental contributions.
  10. Supply Trace can expose cases of forced labour in the clothing industry.
  11. ETH Zurich researchers released a model that could broaden the manipulation skills of four-legged robots.


Brain-Computer Interfaces

Brain-computer interfaces (BCIs), or brain-machine interfaces (BMIs), are essentially devices that create a direct line of communication between your brain and an external system. Imagine controlling a computer cursor or even a robotic limb with your thoughts! That's the potential of BCIs.

Understanding the Tech:

BCIs work in three main steps (a toy end-to-end sketch follows the list):

  1. Signal Detection: BCIs use various methods to capture brain activity. Non-invasive BCIs typically use electroencephalography (EEG), which reads electrical signals from the scalp. Invasive BCIs involve implanting electrodes directly in the brain, providing a clearer signal.
  2. Signal Processing: The captured signals are then analyzed by a computer program that deciphers the underlying patterns. These patterns often correspond to specific thoughts or intentions.
  3. Output Generation: Once the desired action is identified, the BCI translates it into instructions for an external device, like a computer or prosthetic limb.
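To make these steps concrete, here is a minimal, hypothetical sketch of a motor-imagery pipeline in Python. The synthetic "EEG" windows, the single band-power feature, and the linear classifier are all illustrative assumptions; real systems use dedicated acquisition hardware and far more sophisticated decoders.

```python
# Toy BCI pipeline: detect -> process -> output. All data here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 250  # assumed sampling rate (Hz)

def bandpass(window, low=8.0, high=30.0, fs=FS):
    # Steps 1-2: isolate the 8-30 Hz band often used in motor-imagery BCIs.
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, window)

def band_power(window):
    # Step 2: reduce a one-second window to a single log-power feature.
    return np.log(np.mean(bandpass(window) ** 2))

# Synthetic training windows: "rest" (low power) vs. "imagined movement".
rng = np.random.default_rng(0)
rest = rng.normal(0, 1.0, size=(50, FS))
move = rng.normal(0, 2.0, size=(50, FS))
X = np.array([[band_power(w)] for w in np.vstack([rest, move])])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)  # Step 2: decode intention from features

def output_command(window):
    # Step 3: translate the decoded intention into a device command.
    return "MOVE_CURSOR" if clf.predict([[band_power(window)]])[0] else "IDLE"

print(output_command(rng.normal(0, 2.0, size=FS)))  # likely "MOVE_CURSOR"
```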

A Brief History of BCIs

The concept of BCIs has been around for decades. The journey began in 1924 when Hans Berger discovered the electrical activity of the brain, paving the way for EEG. Since then, BCI research has progressed through different stages:

  • Early research focused on understanding brain signals and developing basic communication systems.
  • Later advancements explored using BCIs for restoring motor function in paralyzed individuals.
  • Current research is delving into more complex applications and improving signal processing techniques for greater accuracy.

Recent Advancements in BCI Technology

The field of BCI is rapidly evolving. Here are some exciting highlights:

  • Increased Accuracy: Researchers are developing more sophisticated algorithms to better interpret brain signals, leading to more precise control over external devices.
  • Non-invasive advancements: Non-invasive BCIs using EEG are becoming more powerful, offering a less risky alternative to invasive procedures.
  • Expanding Applications: BCIs are being explored for various applications beyond motor control, including neurorehabilitation, communication for locked-in patients, and even sensory restoration.

Recent Updates From Neuralink

Neuralink focuses on creating a fully implantable BCI system. Their device, called the "Link," is a chip with tiny electrode-laden threads that are surgically implanted into the brain. This invasive approach allows Neuralink's BCI to potentially record signals with higher resolution and accuracy than non-invasive, EEG-based BCIs, which could enable more complex control of and interaction with external devices.

In January 2024, Neuralink achieved a milestone by implanting its device in a human patient for the first time. Neuralink recently released a video of the patient playing chess; more recently, the patient was able to post on X "just by thinking."


AI talks to AI

One of the key ways humans learn and share knowledge is by understanding instructions and then explaining them to others. This has been a challenge for AI, but researchers have made a breakthrough! They created an AI system that can learn new tasks from instructions and then explain them clearly to another AI, allowing the second AI to perform the task.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess.

Today's AI chatbots: good with words, not with actions. Current AI chatbots can understand and respond to language by creating text or images. However, they can't turn those words into actions in the real world, let alone explain those actions to another AI. This research is a step towards overcoming that limitation.

Building a talking-doing AI: The researchers built an AI model with a unique ability: understanding instructions and performing actions based on them. They started with S-Bert, a powerful language model of 300 million neurons pre-trained to grasp language. Then, they "hooked it up" to a smaller network of a few thousand neurons that translates those instructions into actions.
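As a rough illustration of that architecture, the sketch below pairs an off-the-shelf sentence encoder from the sentence-transformers library (a stand-in for S-Bert) with a small action network. The action labels, layer sizes, and untrained head are invented for illustration; this is not the UNIGE model itself.

```python
# Sketch: pre-trained sentence encoder + small action head (all illustrative).
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for S-Bert

ACTIONS = ["point_left", "point_right", "wait"]  # hypothetical task outputs

class ActionHead(nn.Module):
    """A few thousand parameters mapping sentence embeddings to actions."""
    def __init__(self, emb_dim=384, hidden=64, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )
    def forward(self, emb):
        return self.net(emb)

head = ActionHead()

def instruct(sentence: str) -> str:
    # Encode a written instruction, then decode it into an action label.
    emb = torch.tensor(encoder.encode(sentence))
    return ACTIONS[head(emb).argmax().item()]

print(instruct("Point to the stimulus on the left."))  # untrained: arbitrary
```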

Teaching the AI to understand and speak: To give their AI the ability to follow instructions and explain them, the researchers took a two-step training approach, running everything on regular laptops. First, they trained it to mimic Wernicke's area, the brain region that helps us understand language. Then, they trained it to mimic Broca's area, which, influenced by Wernicke's area, allows us to produce speech. Finally, they fed the AI written instructions in English. A toy version of this two-phase recipe is sketched below.
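The following self-contained toy works under loudly labeled assumptions: random vectors stand in for instruction embeddings, a "comprehension" network (the Wernicke analogue) is trained first, then frozen while a "production" network (the Broca analogue) learns to emit instructions the comprehension network can decode back to the right action. In the paper a separate copy of the trained network plays the listener; here the frozen comprehension net fills that role for brevity.

```python
# Toy two-phase training: understand first, then learn to explain.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMB, N_ACT = 32, 3
instr_emb = torch.randn(N_ACT, EMB)      # one fake "instruction" per action
actions = torch.arange(N_ACT)

comprehension = nn.Linear(EMB, N_ACT)    # Wernicke analogue
production = nn.Linear(N_ACT, EMB)       # Broca analogue

# Phase 1: learn to understand (instruction embedding -> action).
opt = torch.optim.Adam(comprehension.parameters(), lr=0.05)
for _ in range(200):
    loss = nn.functional.cross_entropy(comprehension(instr_emb), actions)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: freeze comprehension, learn to explain (action -> instruction
# embedding that the frozen listener decodes back to the same action).
for p in comprehension.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(production.parameters(), lr=0.05)
for _ in range(200):
    produced = production(nn.functional.one_hot(actions, N_ACT).float())
    loss = nn.functional.cross_entropy(comprehension(produced), actions)
    opt.zero_grad(); loss.backward(); opt.step()

# The "speaker" now produces an instruction the "listener" understands.
explained = production(nn.functional.one_hot(torch.tensor([1]), N_ACT).float())
print(comprehension(explained).argmax(dim=-1))  # tensor([1])
```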

What the future holds: Despite the simplicity of the network, it lays the groundwork for significantly more intricate systems. Imagine humanoid robots built with these advanced networks, capable of not only comprehending our instructions but also communicating fluently with each other!

Journal Link - https://www.nature.com/articles/s41593-024-01607-5

AI's Inner Monologue

Another model released last week, Quiet-STaR, also paves the way for more natural interactions with technology. Quiet-STaR is a new AI technique that enhances human-computer interaction by enabling chatbots to simulate human reasoning, considering multiple response options before answering.
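As a heavily simplified, hypothetical illustration of that think-before-answering idea, the sketch below uses GPT-2 to sample a few candidate "thoughts" and keeps the one the model itself scores as most likely. The actual Quiet-STaR method trains token-level rationales with a learned mixing head, which this sketch does not implement.

```python
# Toy think-then-answer loop; a flavor of Quiet-STaR, not the real algorithm.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_with_thoughts(question: str, n_thoughts: int = 4) -> str:
    # Sample several "Thought: ..." continuations for the same question.
    ids = tok(question + "\nThought:", return_tensors="pt").input_ids
    best, best_score = "", float("-inf")
    for _ in range(n_thoughts):
        out = model.generate(ids, do_sample=True, max_new_tokens=30,
                             pad_token_id=tok.eos_token_id)
        with torch.no_grad():  # score: mean per-token log-likelihood
            logp = torch.log_softmax(model(out).logits[:, :-1], dim=-1)
        score = logp.gather(-1, out[:, 1:, None]).mean().item()
        if score > best_score:
            best, best_score = tok.decode(out[0]), score
    return best  # keep the continuation the model found most plausible

print(answer_with_thoughts("Q: What is 6 times 7?"))
```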

Quiet-STaR is the result of a collaboration between AI researchers at Stanford University and Notbad AI Inc. The researchers have shared their work in a preprint, explaining their new method and its success when used with existing chatbots.


The team assessed their algorithm's performance by incorporating it into the open-source Mistral 7B model. Both versions were then subjected to a standardized reasoning benchmark. The enhanced version achieved a notably higher score (47.2%) than the baseline (36.3%), a significant improvement for an existing model.


Disclosure: Some content in the article was written with the help of Google Gemini.

Thanks for reading. See you next week!

Let's explore the future of technology together!

