How the Sound of AI Benefits Our Lives

Have you heard of acoustic AI amid all the chatter that surrounds us regarding AI? No, it’s not a way to outperform Ed Sheeran. It is, in fact, an emerging field that combines artificial intelligence (AI), machine learning (ML), acoustic sensor technology, and intelligent audio signal processing, and it could revolutionise many aspects of our daily lives and help build smart cities.

Acoustic analytics is already being used in manufacturing, industrial development, and even healthcare to detect abnormalities, malfunctions, or health issues, enabling proactive maintenance and disease identification in the smart city.

Because video images are comparatively easy for humans to interpret, most mainstream analytics solutions have naturally been video-based, while acoustic analysis has been overlooked. However, with the advent of ML and AI in smart city applications, scientists and engineers are gradually adopting acoustics as a complement to video for condition monitoring and quality control, especially in environments where visual surveillance is difficult. For instance, acoustic solutions have been used to detect leaks in underground water pipes.

ML and AI make it easier to analyse sound data and recognise sound patterns, enabling highly effective acoustic solutions that could enhance the efficiency, safety, and quality of life in human-centric smart cities. While ML techniques are already widely used in acoustic applications, scientists also see potential in unsupervised deep learning models for acoustic analytics.

“AI can enhance the accuracy and efficiency of defect detection and analysis by leveraging advanced machine learning algorithms. By training AI models on large datasets, these tools can learn to recognise complex patterns and identify subtle anomalies in data from acoustics. This can lead to earlier detection of anomalies, reduced false alarms, and more precise localisation of issues, for instance, within the water mains. Additionally, AI has the capability to continuously learn and adapt, allowing for ongoing improvement and optimisation of the analysis process,” says Research Associate Professor Moez Louati of the Department of Civil and Environmental Engineering at the Hong Kong University of Science and Technology, who specialises in the application of acoustic analytics in water supply system monitoring.
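Professor Louati’s patented tool is not public, but the general idea of training on “normal” acoustic data and flagging deviations can be sketched in a few lines. The following is a minimal, illustrative example only, not his method: it learns baseline frequency-band energies from recordings of a healthy pipe and flags a recording whose spectrum deviates strongly (here, a simulated narrowband leak hiss). All function names and parameters are invented for illustration.

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Split the magnitude spectrum into coarse frequency bands
    and return the energy in each band (a simple acoustic feature)."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    return np.array([np.sum(b ** 2) for b in bands])

def fit_baseline(normal_signals):
    """'Train' on healthy-pipe recordings: per-band mean and spread."""
    feats = np.array([band_energies(s) for s in normal_signals])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def is_anomalous(signal, mean, std, threshold=4.0):
    """Flag a recording whose band energies deviate strongly from baseline."""
    z = np.abs((band_energies(signal) - mean) / std)
    return bool(np.max(z) > threshold)

rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr) / sr
# "Healthy" recordings: broadband flow noise
normal = [rng.normal(0, 1, sr) for _ in range(20)]
mean, std = fit_baseline(normal)
# A simulated "leak" adds a strong narrowband hiss around 1.2 kHz
leak = rng.normal(0, 1, sr) + 5 * np.sin(2 * np.pi * 1200 * t)
print(is_anomalous(normal[0], mean, std), is_anomalous(leak, mean, std))
```

A real system would use richer features and a learned model rather than a fixed z-score threshold, but the training-then-deviation structure is the same.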

“AI and ML are not emerging as separate technologies; rather, they complement and enhance existing technologies in the field of acoustic data analysis by facilitating efficient data processing and supporting improved feature recognition,” continues Professor Louati, who has patented an ML acoustics tool for the water mains. “To fully harness the potential of AI and ML, it is essential to foster partnerships and promote multidisciplinary applications.”

Apart from water pipe monitoring, acoustic AI is also contributing to a number of other applications in a smart city. In industrial settings, acoustic AI is changing the way we conduct maintenance. By detecting unusual sounds in machinery that might indicate impending failure, it allows for proactive maintenance scheduling, reducing equipment downtime and extending lifespan. Companies like IBM are at the forefront of developing AI-powered acoustic monitoring systems for industrial equipment.

In manufacturing and quality control, acoustic AI is becoming a game-changer. It can detect anomalies in manufacturing by analysing acoustic signals, allowing for early intervention and correction. This capability is particularly valuable in applications like weld seam inspection in vehicle production, where precision and reliability are paramount. For example, companies like BMW are using acoustic AI to ensure that each vehicle meets the highest standards of safety and performance.

Furthermore, acoustic AI can be integrated into intelligent transportation systems to monitor and manage traffic flow in a smart city. By analysing sound signals from vehicles and traffic environments, acoustic AI could reduce congestion, optimise traffic light timings, and improve overall traffic efficiency. Scientists at several universities are currently studying its potential.

The healthcare sector is also benefiting from acoustic AI in the area of respiratory conditions: analysing cough patterns can help detect potential health issues early or track the progression of respiratory diseases. And if you’re a bad sleeper keen to know more about your sleep patterns, there are AI tools for just that! Sleep.ai is one such application that detects teeth grinding and snoring sounds during sleep, helping users identify potential sleep disorders.

In the realm of consumer electronics, acoustic AI is making significant strides in improving hearing aids. Companies like Sonova, with its Phonak-branded hearing devices, are using acoustic AI to automatically adjust to different sound environments, enhancing speech clarity in noisy settings, and even offering language translation features. Phonak has been using ML for over 20 years to classify acoustic environments, and now with AI it can improve sound recognition with an operating system trained with AI-based ML to constantly scan and analyse the sound environment of everyday situations. Its acoustic AI-based hearing aids are found to improve speech understanding by 20% compared with manually adjusted hearing aids.
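Phonak’s actual classifier is proprietary, but the core task of classifying an acoustic environment can be illustrated with a toy nearest-centroid classifier: extract a couple of simple features from a clip, average them per labelled environment during training, then assign new clips to the nearest environment. This is a rough sketch of the concept only; the features, labels, and function names are all hypothetical.

```python
import numpy as np

def features(signal, sr=16000):
    """Two simple acoustic features: loudness (RMS) and spectral
    centroid (where the spectrum's energy sits, in kHz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([rms, centroid / 1000.0])

def fit_centroids(labelled_clips):
    """'Train' by averaging the features of each labelled environment."""
    return {label: np.mean([features(s) for s in clips], axis=0)
            for label, clips in labelled_clips.items()}

def classify(signal, centroids):
    """Assign a clip to the nearest learned acoustic environment."""
    f = features(signal)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))

rng = np.random.default_rng(1)
sr = 16000
# Toy training clips: faint vs loud broadband noise
quiet = [0.01 * rng.normal(0, 1, sr) for _ in range(5)]
noisy = [1.0 * rng.normal(0, 1, sr) for _ in range(5)]
centroids = fit_centroids({"quiet": quiet, "noisy": noisy})
print(classify(0.02 * rng.normal(0, 1, sr), centroids))  # a faint clip
```

A real hearing aid would use far richer features and a trained deep model, and would run continuously on streaming audio; the classify-then-adapt loop is the part this sketch conveys.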

As we begin to learn about acoustic AI, it’s clear this technology still has a long way to go, and its potential to support urban development is diverse! As Professor Louati says, “Within smart city development, acoustic AI can contribute to three main applications, namely anomaly detection in water mains, structural integrity, and traffic management.”

We hope acoustic AI will play a key role in creating more efficient and sustainable urban environments for human-centric smart cities. The sound of AI could become the soundtrack of our future, promising a harmonious blend of technology and human-centric design.

#HKUST #acousticai #SmartCity #MTRLab

Sources:

  1. A Survey on Artificial Intelligence-Based Acoustic Source Identification
  2. Guideline on Leak Detection on Underground Communal Services of Housing Estates, October 2017, Water Supplies Department, HKSAR
  3. Sound as a New Data Source for Industry 4.0, 3 May 2021, IBM Blog
  4. How AI Is Revolutionising Production, 27 November 2023, BMW Group
  5. Smart City Traffic Management: Acoustic-Based Vehicle Detection Using Stacking-Based Ensemble Deep Learning Approach
  6. Acoustic AI Frameworks: Single-Sound Analysis vs. Continuous Cough Monitoring, 23 June 2024, HYFE
  7. Audio Analysis With Machine Learning: Building AI-Fueled Sound Detection App, 12 May 2022, Data Science
  8. Hearing Aids with Artificial Intelligence (AI): Review of Features, Capabilities and Models that Use AI and Machine Learning, 25 April 2024, Hearing Tracker
