The Birth of Neuromorphic Vision: Part 2 - Applications

This is the second of a four-part series of short articles about the origins, applications, and future of neuromorphic event-based vision. In the first article, I provided a short overview of the long process behind the development of the Dynamic Vision Sensor (DVS). Here, I describe some of the key milestones on the way to bringing DVS technology to market, and outline some answers to the recurring questions about the technology.

What is DVS?

Unlike a conventional camera, the DVS does not capture frames (although there are also versions that can do this). Instead, its pixels operate independently and asynchronously to detect changes in lighting, which are output as binary up/down events. This happens very quickly, because only what has changed is sent out of the camera. The result is far less output data, delivered much faster (latency of microseconds rather than milliseconds). Because the changes detected by the sensor are always relative to the current local lighting, it also has a very high dynamic range (typically around 120 dB), allowing it to respond well in scenes containing both very bright light and dark shadows. You can also watch a talk about DVS given by Tobi Delbruck at IBM below.
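To make the event model more concrete, here is a minimal sketch of the standard first-order description of a DVS pixel: an ON (+1) or OFF (-1) event is emitted whenever the log intensity at that pixel changes by more than a contrast threshold. The code below (Python with NumPy) is only an illustration of this idea; the threshold value, function names, and the frame-based sampling used here are assumptions for demonstration, not the actual sensor circuit or any vendor's API.

```python
# Minimal sketch of a first-order DVS pixel model (illustrative, not vendor code).
# Each pixel fires an ON (+1) or OFF (-1) event when its log intensity has changed
# by more than a contrast threshold since the last event at that pixel.
import numpy as np

CONTRAST_THRESHOLD = 0.2  # assumed log-intensity step per event (sensor-dependent)


def events_from_frames(frames, timestamps_us, threshold=CONTRAST_THRESHOLD):
    """Convert a stack of grayscale frames into (t_us, x, y, polarity) events.

    A real DVS works asynchronously in continuous time; sampling frames here is
    only a convenient way to illustrate the per-pixel change-detection idea.
    """
    log_ref = np.log(frames[0].astype(np.float64) + 1e-3)  # per-pixel reference level
    events = []
    for frame, t_us in zip(frames[1:], timestamps_us[1:]):
        log_now = np.log(frame.astype(np.float64) + 1e-3)
        diff = log_now - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)  # pixels that crossed the threshold
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t_us, x, y, polarity))
            log_ref[y, x] = log_now[y, x]  # reset the reference only where an event fired
    return events
```

This sketch emits at most one event per pixel per sampled frame; a real sensor emits events continuously and can fire several events for a large brightness step.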

Cool Demos

As described in the previous article, the creation of the first DVS camera unlocked the imaginations of hundreds of people. Many technology demonstrators were created by labs and companies around the world. A small sample is available on the iniVation website here; you can also find an extensive list of resources on this page from the lab of Davide Scaramuzza. One of the first DVS demos, and still one of the coolest, was the RoboGoalie. It was designed to run on rather weak hardware, even for the time:

  • Low-powered netbook with Java software
  • DVS128 camera, i.e. only 128x128 pixels, connected to the netbook via USB 2.0
  • Hobby servo motor also connected via USB

Despite these limitations, the RoboGoalie demonstrated extremely fast reactions (<3 ms), far beyond anything previously possible on such a small computing budget, applied to the viscerally universal subject of football.
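To give a flavour of why such reaction times are possible, here is a minimal sketch of event-driven tracking: because each incoming event can immediately update the estimate of the ball's position, there is no frame to wait for before commanding the servo. This is an illustrative sketch only, not the actual RoboGoalie/jAER implementation; the class name, smoothing constant, and the servo interface in the usage comments are assumptions.

```python
# Illustrative sketch of event-driven object tracking: an exponentially weighted
# running centroid of event coordinates, updated per event with no frame to wait for.
# NOT the actual RoboGoalie/jAER code, just the general flavour of the approach.

class EventCentroidTracker:
    def __init__(self, smoothing=0.05):
        self.smoothing = smoothing  # weight given to each new event (assumed value)
        self.x = None
        self.y = None

    def update(self, event):
        """event = (timestamp_us, x, y, polarity); returns the current position estimate."""
        _, ex, ey, _ = event
        if self.x is None:
            self.x, self.y = float(ex), float(ey)
        else:
            a = self.smoothing
            self.x = (1 - a) * self.x + a * ex
            self.y = (1 - a) * self.y + a * ey
        return self.x, self.y


# Hypothetical usage: feed events as they arrive and steer the goalie arm toward
# the estimated ball position (event_stream and servo are assumed interfaces).
# tracker = EventCentroidTracker()
# for ev in event_stream:
#     ball_x, _ = tracker.update(ev)
#     servo.move_to(ball_x)
```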

From Cool Demos to Industry Milestones

In 2013, I helped to organize the first public industry trade show appearance of the DVS, building on several years of experience within the institute in presenting the technology at research conferences and workshops. For several years, the most common question was "What is that?", followed by "Does it work for X?". The DVS was so fundamentally new that a collective industry-wide effort was required to explore and understand its implications for computer vision. We have provided cameras to hundreds of organizations across a wide range of industries, including robotics, autonomous vehicles, consumer electronics, and aerospace. As of this writing, over 2600 papers on Google Scholar contain the phrase "Dynamic Vision Sensor".

Over the years, interest in neuromorphic vision has ballooned, with many customers and a number of competitors entering the scene. We have progressed steadily to reach a number of industry-first milestones:

  • The original DVS128 camera (and its successor models including the DAVIS and DVXplorer series), together with the open source jAER software (and later our high-performance open-source DV Software), kick-started the whole industry by enabling thousands of developers worldwide to solve problems using DVS.
  • In 2019, our technology partner and investor Samsung launched the world's first consumer electronics product based on DVS - a privacy-aware home IoT device called SmartThings Vision. Samsung also created the first DVS sensor to be qualified for mass production.
  • In 2020, our DVXplorer camera won a CES Best of Innovation award, followed in 2022 by a Red Dot Design Award.
  • In 2021, our customer Western Sydney University sent the first neuromorphic technology into space, together with the Royal Australian Air Force. Later that year, they also sent our first sensor to the International Space Station.
  • In 2022, our partner SynSense announced the availability of development kits for Speck, the first system-on-chip (SoC) combining our DVS with a spiking neural network processor. Tiny in both size and power consumption, it is targeted at smart home appliances and toys.
  • Also in 2022, our customer Machines with Vision achieved the first long-term commercial deployment of DVS in a real-world industrial setting. Its systems incorporating our cameras have accumulated many months of reliable operation under harsh outdoor conditions on high-speed trains in three European countries.
  • In 2023, the first installation of Foveator Track will go live.

Is DVS Useful for My Application?

The big question asked in every industry is when (and where) DVS technology will go fully mainstream in that business. The answer is complex, and depends heavily on both the technology needs and the business environment of each individual niche. However, the key questions in each case can be summarized as follows:

  • Does the application need to be much faster or more energy-efficient at the system level?
  • Does DVS provide a large advantage on either or both of these dimensions?
  • What does it cost to get this advantage? Costs could include extra components, reduced performance in other functionality, etc.
  • What are the limitations of the DVS concept - where does it perform worse than conventional cameras?

The main technical trade-off is summarized in the figure below. To date, neuromorphic vision sensors offer lower resolution (bigger pixels) in exchange for much higher speed and often lower power consumption. However, the sensor characteristics need to be considered together with the whole processing infrastructure: the type of processor used, the data bottlenecks, and so on. For example, the ultra-low power consumption of the Speck vision SoC mentioned earlier is achieved by connecting a DVS directly to a spiking neural network processor, for which the DVS output is naturally suited.

Figure: Comparison of qualitative characteristics of event-based vision sensors with frame-based sensors
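As a rough illustration of the data-rate side of this trade-off, the back-of-envelope sketch below compares the raw output of a conventional frame camera with that of an event camera. All of the numbers (resolution, frame rate, bits per event, event rate) are assumptions chosen for illustration rather than measurements of any particular sensor; the key point is that the event camera's output scales with scene activity rather than with resolution and frame rate.

```python
# Back-of-envelope comparison of raw output data rates, to illustrate the
# system-level trade-off discussed above. All numbers are assumptions for
# illustration, not measurements of any particular sensor.

def frame_camera_rate_mbps(width, height, fps, bits_per_pixel=8):
    """Raw data rate of a conventional frame camera, in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6


def event_camera_rate_mbps(events_per_second, bits_per_event=64):
    """Raw data rate of an event camera; it depends on scene activity, not resolution."""
    return events_per_second * bits_per_event / 1e6


if __name__ == "__main__":
    # Assumed example: VGA frame camera at 30 fps vs. an event camera observing a
    # moderately active scene (~200,000 events per second, 64 bits per event).
    print(f"Frame camera: {frame_camera_rate_mbps(640, 480, 30):.1f} Mbit/s (constant)")
    print(f"Event camera: {event_camera_rate_mbps(2e5):.1f} Mbit/s (scales with motion)")
```

In a very busy scene the event rate can rise until the advantage shrinks, which is one reason the whole processing chain, not just the sensor, has to be considered.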

While the technical considerations are an important part of determining whether DVS is suitable for a given application, the other factors mentioned above matter just as much. In the next article, I will dig into these questions for a number of applications, describe some of the technology and application gaps, and preview some of our future plans.
