What's new with audio technologies? - Part II: microphones and audio processing
Frantz Lohier
CTO (Chief Technology Officer) • product disrupter & technology enthusiast • entrepreneur • inventor • startup mentor • book author
In a previous edition of this newsletter, I focused on speaker technologies. In this edition, I cover microphones and the major sound processing technologies.
How humans perceive sound: 101
At the most basic level, the human ear converts sound waves into electrical signals that your brain processes. Sound waves are characterized by their amplitude, frequency and direction. Your eardrum (or "tympanic membrane") vibrates, the vibrations are conducted through small bones to a fluid-filled hearing organ (the cochlea) that converts them and passes the information along a nerve to your brain. The human ear can detect sounds coming from different directions, and your brain can focus on one sound source among many (the so-called "cocktail party effect"). As you age, your ability to hear high-frequency (high-pitched) sounds diminishes.
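To make those two parameters concrete, here is a minimal Python sketch (my own illustration, assuming NumPy and a 48 kHz sample rate) that synthesizes pure tones whose loudness and pitch are set directly by amplitude and frequency:

```python
import numpy as np

SAMPLE_RATE = 48_000  # samples per second

def pure_tone(frequency_hz: float, amplitude: float, duration_s: float) -> np.ndarray:
    """Synthesize a sine wave: amplitude sets loudness, frequency sets pitch."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * frequency_hz * t)

# Healthy young ears hear roughly 20 Hz to 20 kHz; the upper limit drops with age,
# so the 15 kHz tone below is a common casualty of aging ears.
low_pitch = pure_tone(frequency_hz=200.0, amplitude=0.5, duration_s=1.0)
high_pitch = pure_tone(frequency_hz=15_000.0, amplitude=0.5, duration_s=1.0)
```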
Microphones also convert sound waves into electrical signals. The resulting signal is then processed by a processor running audio algorithms. Let's take a look at these two technologies.
Microphone technologies
There are half a dozen microphone technologies, all based on converting the pressure pattern of sound waves into electricity.
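Whatever the transduction principle, the end result can be modeled the same way. Below is a hedged Python sketch (the 6 mV/Pa sensitivity is an illustrative figure, not a spec from this article) of that pressure-to-voltage mapping, together with the standard dB SPL level calculation:

```python
import numpy as np

SENSITIVITY_V_PER_PA = 0.006   # assumed analog sensitivity: 6 mV per pascal
REFERENCE_PRESSURE_PA = 20e-6  # 20 micropascals, the 0 dB SPL reference

def pressure_to_voltage(pressure_pa: np.ndarray) -> np.ndarray:
    """Map a sound-pressure waveform (Pa) to the microphone's output voltage."""
    return pressure_pa * SENSITIVITY_V_PER_PA

def spl_db(pressure_pa: np.ndarray) -> float:
    """RMS sound pressure level of the waveform, in dB SPL."""
    rms = np.sqrt(np.mean(np.square(pressure_pa)))
    return 20.0 * np.log10(rms / REFERENCE_PRESSURE_PA)

# A 1 Pa RMS tone is about 94 dB SPL and would read ~6 mV RMS at this sensitivity.
```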
The four most widely deployed technologies are:
Audio processing: what's new
Once a microphone outputs an electric signal, it is typically amplified and then processed by a computer or a specialized processor (like a DSP, short for Digital Signal Processor).
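As a concrete illustration of that per-sample processing, here is a minimal Python sketch (my own, not from any particular product) of one classic DSP building block, a one-pole low-pass filter written the way a DSP inner loop would run it:

```python
import math

def one_pole_lowpass(samples: list[float], cutoff_hz: float, sample_rate: int) -> list[float]:
    """Attenuate content above roughly cutoff_hz, one sample at a time."""
    # Coefficient derived from the filter's RC time constant.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    state = 0.0
    output = []
    for x in samples:
        state += alpha * (x - state)  # y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        output.append(state)
    return output
```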
Typical algorithms include:
I would argue that one of the most interesting audio technologies that has still not fully taken off (outside of the computer gaming industry) is audio object encoding, a concept that was much discussed around 2000 during the genesis of the MPEG-4 audio/video encoding standard.
The idea is to pair the recording of sound with spatial information about the location of the sound sources and/or rules for how the sound should be reshaped as objects move within a scene (for example, the speed and direction of a moving sound source). The scene may include a listener moving around as well as surrounding audio sources that are themselves in motion.
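As a sketch of what such an "audio object" might carry, here is a hypothetical Python data structure (the field names are my own illustration, not taken from the MPEG-4 specification): the recorded samples paired with the spatial metadata a renderer can use to reshape the sound at playback time.

```python
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    samples: list[float]                  # mono recording of the source
    position: tuple[float, float, float]  # where the source sits in the scene (meters)
    velocity: tuple[float, float, float]  # how the source moves (m/s), e.g. a passing car
    gain: float = 1.0                     # per-object level the renderer may adjust

@dataclass
class Scene:
    objects: list[AudioObject] = field(default_factory=list)
    listener_position: tuple[float, float, float] = (0.0, 0.0, 0.0)
    listener_yaw_deg: float = 0.0         # the direction the listener is facing
```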
Video games do this frequently now: for example, when wearing a VR headset and turning your head, the sound is reshaped in real time, rebalancing the volume between your two ears to emphasize the directionality of the audio environment and give a spatial feel to the experience.
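That head-turning effect can be approximated with a simple constant-power pan law. The sketch below is an assumption-laden simplification of what a real binaural renderer does, but it shows how energy shifts between the two ears as the listener's yaw changes:

```python
import math

def ear_gains(source_azimuth_deg: float, head_yaw_deg: float) -> tuple[float, float]:
    """Return (left_gain, right_gain) for one source using a constant-power pan."""
    relative = source_azimuth_deg - head_yaw_deg        # source angle vs. where you face
    # Clamp to -90..+90 degrees (full left to full right), map to a 0..90 degree pan angle.
    pan = (max(-90.0, min(90.0, relative)) + 90.0) / 2.0
    left = math.cos(math.radians(pan))
    right = math.sin(math.radians(pan))
    return left, right

# Facing the source: equal gains (~0.707 each). Turn your head 90 degrees to the
# right and the source ends up almost entirely in your left ear.
```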
However, such algorithms are much less common in real-life applications. Imagine, for example, a car cockpit experience that amplifies alert signals across the car's numerous speakers depending on the direction of a safety hazard, or a home theater system that tracks the position of a listener using a camera, wireless signals or other means to deliver an optimal audio rendering experience in your living room.
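Here is a hedged sketch of the cockpit idea: boost an alert on the speakers closest to the hazard's direction. The speaker layout and the weighting rule are illustrative assumptions, not a production safety system.

```python
SPEAKER_AZIMUTHS_DEG = {"front_left": -45.0, "front_right": 45.0,
                        "rear_left": -135.0, "rear_right": 135.0}

def alert_gains(hazard_azimuth_deg: float) -> dict[str, float]:
    """Per-speaker gain: 1.0 toward the hazard, tapering to 0.2 opposite it."""
    gains = {}
    for name, speaker_azimuth in SPEAKER_AZIMUTHS_DEG.items():
        # Smallest angular difference between speaker and hazard, 0..180 degrees.
        diff = abs((speaker_azimuth - hazard_azimuth_deg + 180.0) % 360.0 - 180.0)
        gains[name] = 0.2 + 0.8 * (1.0 - diff / 180.0)
    return gains

# A hazard at +45 degrees (front right) gets full gain on the front-right speaker
# and progressively less on the others.
```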
Conclusion
Acoustic technologies have evolved quietly over the years with the advent of mature MEMS fabrication processes, more capable processors and better algorithms. Nowadays, it isn't rare to find multiple microphones, multiple speakers and a combination of sophisticated algorithms working together, as in the automotive sector, videoconferencing or computer gaming/VR applications. Given all these technologies, it is amusing that some audiophiles still prefer old-fashioned vinyl over mp3 audio files, with vinyl sales growing steadily at a 3% annual rate. The moral of the story: don't be quick to bury old technologies!
#audio #speaker #VR #MPEG-4 #MEMs #microphone #display #activesafety #audioprocessing
I hope you enjoyed this article. As always, feel free to contact me @ [email protected] if you have comments or questions about this article (I am open to providing consulting services). More at www.lohier.com and also in my book. Subscribe here to this newsletter and quickly access former editions.
Thinker, Problem Solver, Solutions Developer, Father & Husband, Student of History
Frantz, that's a great summary! As someone who spent their formative professional years in active noise control, this brought back many memories...and a few nightmares from my past. I am excited to see where technology can take us in recreating the immersive effects in audio as we venture into AR/VR.
Group Electronics Technology Director at Forvia
Nice overview again Frantz! I would just add that depending on the intended application, different performance parameters come into consideration:
- Microphone directivity: in a car, where the microphone is facing the user, unidirectional microphones are preferred over omnidirectional ones; the latter can be used in headphones for noise cancellation or voice recording.
- Microphone dynamic range: it's often difficult to find (at least for a reasonable price) a microphone that provides good sensitivity AND a wide dynamic range, able to withstand powerful sounds without clipping.
- Integration: also often overlooked, proper microphone integration is key to ensuring strong performance, especially in cars, where the parts surrounding a microphone can transmit a lot of noise to it if not well damped.
All of this explains why, even though MEMS microphones offer a lot of interesting characteristics (size, sensitivity, power consumption), they're not ubiquitous and we still see other technologies remaining.