Visible Sound

Understanding sound has always been difficult, because it is so intangible. Unlike light, sound waves can't be directly observed by the human eye, which makes their behavior hard to study. That complicates tasks such as noise evaluation, the development of new sound devices, and the improvement of existing acoustic technologies.

NTT has found a way to get past these challenges by making sound waves visible.

First thing: what is sound, anyway? Put simply, sound is pressure fluctuations in the air, traveling in waves. A bit like ripples on a water surface. However, because sound propagation is complex, involving reflection, diffraction, and varying air densities, directly observing these waves has until now been nearly impossible. Traditional methods, like microphone arrays, don't have the spatial resolution needed to capture sound waves in high definition.

That’s where NTT's research on sound comes in. It uses optical sound field imaging combined with deep learning to visualize sound with a clarity we’ve never seen before.

NTT's sound visualization technology uses high-speed cameras, laser light, and AI processing to capture and render sound waves as moving images. It exploits the "acousto-optic" effect: sound waves cause slight variations in the density of the air, which in turn change the speed of light passing through it. By directing a laser beam through the sound field and measuring, via interference, the minute phase shifts this induces in the light, NTT is able to record the fluctuations of light caused by sound waves.
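To get a feel for how subtle the acousto-optic effect is, here is a back-of-the-envelope sketch. All the numbers are textbook values and illustrative assumptions (a green laser, a 10 cm optical path), not NTT's actual setup: a sound pressure perturbs the air density, the density change perturbs the refractive index (via the Gladstone-Dale relation), and the index change shifts the optical phase over the laser's path.

```python
# Back-of-the-envelope estimate of the acousto-optic effect.
# Constants are approximate textbook values, NOT NTT's actual setup.
import math

K = 2.26e-4          # Gladstone-Dale constant for air (m^3/kg), approximate
c = 343.0            # speed of sound in air (m/s)
wavelength = 532e-9  # laser wavelength (m), assumed green laser
path = 0.1           # optical path length through the sound field (m), assumed

p = 1.0  # sound pressure amplitude in Pa (roughly 94 dB SPL)

# Acoustic pressure perturbs air density: delta_rho = p / c^2 (linear acoustics)
delta_rho = p / c**2
# ...which perturbs the refractive index: delta_n = K * delta_rho
delta_n = K * delta_rho
# ...which shifts the optical phase along the path: delta_phi = (2*pi/lambda) * delta_n * L
delta_phi = 2 * math.pi / wavelength * delta_n * path

print(f"refractive-index change: {delta_n:.2e}")        # ~1.92e-09
print(f"optical phase shift:     {delta_phi:.2e} rad")  # ~2.27e-03 rad
```

Even for a fairly loud sound, the refractive-index change is around a billionth, and the resulting phase shift a few thousandths of a radian, which is why interferometric measurement and heavy noise suppression are needed.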

That sounds quite straightforward; it's not! The process involves capturing images at incredibly high speeds, from several thousand to several hundred thousand frames per second, to ensure the sound waves are visualized accurately. And even that's not enough by itself: optical noise can still degrade the quality of the images. This is where NTT's deep learning model plays a crucial role.
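Why such extreme frame rates? To resolve a sound wave in time, the camera must sample faster than twice the highest acoustic frequency, the familiar Nyquist criterion. The small sketch below is a rough lower bound only (it ignores optics and noise), and the helper function is our own illustration, not part of NTT's method:

```python
# Minimum camera frame rate to temporally resolve a sound wave,
# per the Nyquist sampling criterion. A rough lower bound only.
def min_frame_rate(max_freq_hz: float, margin: float = 1.0) -> float:
    """Frames per second needed to sample a tone of max_freq_hz."""
    return 2.0 * max_freq_hz * margin

print(min_frame_rate(1_000))    # 1 kHz tone            -> 2000.0 fps
print(min_frame_rate(20_000))   # top of human hearing  -> 40000.0 fps
print(min_frame_rate(40_000))   # ultrasound            -> 80000.0 fps
```

That is why the camera rates quoted above run from thousands of frames per second for audible sound into the hundreds of thousands once ultrasound and safety margins enter the picture.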

The model, which NTT has trained to consider the physical properties of sound, filters unnecessary noise out of the high-speed camera images. It focuses just on the sound wave components, then produces high-definition visualizations of sound waves. The model's training involved generating artificial images based on sound's physical properties, then letting the model learn to identify the relevant features accurately. The result is not only better resolution but also greater sensitivity, making it possible to capture weak sound waves that were previously undetectable.
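The synthetic-training-data idea described above can be sketched in a few lines: generate clean sound-field frames from a known physical form (here, a simple 2-D plane wave), add artificial optical noise, and keep the (noisy, clean) pairs for a denoiser to learn from. NTT's actual model and pipeline are not detailed in this article, so every function name and parameter below is an illustrative assumption:

```python
# Minimal sketch of physics-based synthetic training data for a
# sound-field denoiser. Illustrative only; not NTT's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_sound_frame(size=64, wavelength_px=12.0, angle_rad=0.3, amp=1.0):
    """A clean 2-D plane wave: the 'ground truth' sound field."""
    y, x = np.mgrid[0:size, 0:size]
    k = 2 * np.pi / wavelength_px  # spatial wavenumber in pixels
    return amp * np.sin(k * (x * np.cos(angle_rad) + y * np.sin(angle_rad)))

def make_training_pair(noise_sigma=0.5, **kwargs):
    """Return (noisy_input, clean_target) for supervised denoising."""
    clean = synthetic_sound_frame(**kwargs)
    noisy = clean + rng.normal(0.0, noise_sigma, clean.shape)
    return noisy, clean

noisy, clean = make_training_pair()
print(noisy.shape, clean.shape)  # (64, 64) (64, 64)
```

Because the clean target is generated from the wave physics itself, a model trained on such pairs learns what "looks like sound" and can suppress everything else, which matches the article's description of filtering by the physical properties of sound.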

In the near future, making sound waves visible could help us to design better devices by allowing engineers to see how sound waves interact with different materials and structures. And we could even see a "digital twin of sound," where all sounds in a given space are digitized and analyzed in real-time, improving various sound-related technologies.

In the real world? Imagine using the technology in urban planning to map and mitigate noise pollution, or in the automotive industry to design quieter and more efficient vehicles. In healthcare, it could improve diagnostic tools that rely on sound waves, such as ultrasound machines, by providing clearer and more detailed images.

NTT's work in sound visualization offers huge potential for innovation: a powerful tool that could transform multiple industries and deepen our understanding of the auditory world. Imagine a future where every sound can be captured and analyzed. A future where sound isn't just heard, but seen and understood in ways we've never thought of before.

NTT—Innovating the Future of Sound

For further information, please see this link:

https://group.ntt/en/newsrelease/2024/06/17/240617c.html

If you have any questions on the content of this article, please contact:

NTT Science and Core Technology Laboratory Group

Public Relations

[email protected]
