Where have we been? In the lab, refining our process! In any new venture, it's imperative to analyze your successes, failures, and inefficiencies. Much like a sports team scribbling on a chalkboard at halftime to plan out the next half, we have done the same! What does this mean? It means we are back and better than ever! We look forward to the coming weeks as we roll out new episodes with guests like Colby Buddelmeyer (Mark Levinson | Harman Luxury Audio), Alessandro Carrella, PhD, EMBA (MSI-DFAT), Roald Dietzman (FastDMS), Derrick Knight (Trane Technologies), Katharine Murphy Khulusi (Meyer Sound), and Michael Bahtiarian, INCE Bd. Cert. (Acentech), to name just a few. We will also be rolling out mini episodes covering a myriad of experiments we've conducted in the MD Acoustics, LLC lab. These experiments explore the science behind sounds we experience daily, often without thinking twice: the sound of a Harley-Davidson, a water droplet, card shuffling, a pin drop, and much more.
About us
Sound Cave Labs Podcast is a dynamic platform that brings together some of the brightest minds in science, technology, engineering, and mathematics (STEM). Each episode features in-depth conversations with scientists, professors, entrepreneurs, and innovators who are making waves in their fields. We delve into their journeys, their research, and the challenges they have faced, offering listeners a firsthand look at the fascinating world of STEM. We aim to inspire both peers and the next generation of STEM enthusiasts by showcasing diverse voices and perspectives within the community. By highlighting the achievements and experiences of those working in science and technology, the podcast seeks to break down barriers and demystify the path to a career in STEM. Whether you are a professional, a student exploring future career options, or simply someone with a passion for innovation, the SCL podcast provides a resource for learning, inspiration, and connection in the ever-evolving world of STEM.
- Website
-
https://soundcavelabs.com/
- Industry
- Technology, Information and Internet
- Company size
- 11-50 employees
- Type
- Privately held
- Founded
- 2024
Posts
-
Episode 5 | Season 1 is out! https://lnkd.in/gezpRGQZ Be sure to subscribe and like! https://lnkd.in/gScHNiFX J. Taggart Durrant, a PhD student in Aerospace Engineering, is focused on developing new aerospace exploration technologies that can inspire and improve the world. During his undergraduate years, he worked in a research lab supporting NASA - National Aeronautics and Space Administration's Quiet Supersonic Technology (QUESST) mission, measuring rocket launch acoustics and the reentry sonic booms of SpaceX's Falcon 9. He also interned at NASA Langley Research Center, where he used machine learning and sonic boom prediction software to help prepare for the X-59's first flights. Now, at Stanford University, he is entering the field of computational aerospace engineering, all while holding a fellowship at NASA - National Aeronautics and Space Administration. He aims to develop computational methods that will aid in designing the next generation of rockets, planes, spacecraft, and hypersonic systems. He strives to support the next generation of aerospace engineers. In an industry that can seem daunting and overwhelming, he is dedicated to providing knowledge, tools, and connections to those who need them. Check out Tagg's newsletter and podcast here: https://lnkd.in/gCfzD2uD
-
NASA - National Aeronautics and Space Administration's future is looking bright with Aerospace Engineering PhD student J. Taggart Durrant in the mix! In this teaser, we discuss computational aerospace engineering, which focuses on using computational methods, simulations, and numerical techniques to solve complex problems in aerospace design and analysis. It involves tools like computational fluid dynamics (CFD) and finite element analysis (FEA) to simulate fluid flow, structural integrity, and aerodynamics in aircraft and spacecraft. Engineers use these methods to optimize designs, improve efficiency, and reduce reliance on physical prototypes by studying the interactions between disciplines such as aerodynamics, thermodynamics, and structural mechanics. This accelerates innovation and enhances the performance of aerospace systems.
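To give a tiny flavor of what "computational methods" means in practice, here is a toy Python sketch of an explicit finite-difference solver for 1D heat diffusion. It is far simpler than real CFD or FEA codes and is purely our own illustrative example, with made-up grid and material values.

```python
# Toy illustration of a numerical technique underlying CFD/FEA-style tools:
# explicit finite-difference time-marching of 1D heat diffusion, u_t = alpha * u_xx.
# Grid size, diffusivity, time step, and boundary values are arbitrary assumptions.
import numpy as np

nx = 50             # number of grid points
alpha = 1.0e-4      # assumed thermal diffusivity (m^2/s)
dx, dt = 0.01, 0.1  # grid spacing (m) and time step (s); alpha*dt/dx**2 = 0.1 keeps the scheme stable

u = np.zeros(nx)    # initial temperature field
u[0] = 100.0        # fixed hot boundary on the left (right end stays at 0)

for _ in range(2000):  # march the solution forward in time
    # central-difference update of the interior points
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print("Temperature near the hot boundary after marching:", np.round(u[:10], 2))
```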
-
Beauty is all around us. Often, it's the simple things that humble us when we take a quiet moment to respect them. We can all immediately conjure the sound of a water droplet striking the surface of a calm body of water in our "mind's ear" ...but what's the data behind that unmistakable sound? Stay tuned for our full-length deep dive into the science behind the sound of a water droplet!
-
Episode 4 | Season 1 is out! https://lnkd.in/gwUpx6fV Be sure to subscribe and like! https://lnkd.in/gScHNiFX Matt Green is the Head of Acoustics at Logitech, based in Camas, Washington, USA. He has been involved with various innovative audio projects across Logitech, Logitech G, Astro, and Ultimate Ears branded audio products, from proof of concept to mass production. His team has been responsible for developing well-regarded products like the Logitech G Pro X 2 and A50 X gaming headsets and the UE Boom series of portable speakers, utilizing a blend of advanced engineering tools, including anechoic chambers, laser surface scanning, and 3D printing, to improve the quality and reliability of their designs. Matt received a BS in Electrical Engineering and an MS in Acoustics from Brigham Young University. He worked in active noise and hearing aid research before transitioning into acoustical engineering. He joined Logitech in 2012 and later became Sr. Manager of Acoustics in 2016, overseeing a team of engineers specializing in different areas of acoustics and audio technology. Matt currently serves as the Head of Acoustics for Logitech.
-
Gaming is serious business! We learn more about Logitech G's Pro X 2 and A50 X gaming headsets. Logitech G's lead acoustician Matt Green talks to us about his work on some of the most celebrated gaming headsets. The Pro X 2 features graphene drivers that provide superior sound clarity and precise audio imaging, making it a strong choice for competitive gamers who need accurate positional audio. It offers multiple connectivity options (Lightspeed wireless, Bluetooth, and wired) and comes with a detachable boom mic integrated with Blue VO!CE technology for professional-grade voice quality. The headset's memory foam ear pads, aluminum frame, and impressive 50-hour battery life make it comfortable for long gaming sessions. The Astro A50 focuses on delivering an immersive audio experience, featuring Dolby Audio and Astro Audio V2 that provide rich bass and a broad soundstage, ideal for action or open-world games. It connects wirelessly via a base station, which also acts as a charging dock, and has a flip-to-mute mic for convenience, although it lacks the advanced customization of the G Pro X 2. With up to 15 hours of battery life and a lightweight, adjustable design, the A50 is perfect for gamers prioritizing an enveloping sound experience and straightforward functionality.
-
Episode 3 | Season 1 is here! https://lnkd.in/gM8DESvi Be sure to subscribe! https://lnkd.in/gScHNiFX Dr. Ed Garnero is a professor in the School of Earth and Space Exploration at Arizona State University. He is celebrated for his work in seismology, his TEDx Talk... and his scientific approach to building high-end custom bass guitars. Ed Garnero specializes in geophysics and seismology, but his journey has shifted to another passion: lutherie. His "day job," as he calls it, is focused on studying the Earth's interior, from the uppermost mantle down to the core. One of his key contributions is the study of massive structures deep in the Earth, such as "blobs" found at the core-mantle boundary, which are believed to be chemically distinct from the surrounding material and play a role in volcanic processes. As a luthier, Dr. Garnero now brings his decades of scientific research to bear on brilliant instruments. Garnero integrates unique design features like resonance chambers into his instruments, which reduce weight and enhance the tonal qualities of the guitars. He enjoys experimenting with tonal qualities and sonic vibrations, inspired by his scientific background and love of music. His basses are known for their custom craftsmanship, often incorporating rare woods and hardware, and have a distinct aesthetic rooted in both practicality and artistic expression. Garnero Guitars: https://garneroguitars.com Garnero Guitars Instagram: https://lnkd.in/gHYk4Zyx Dr. Garnero's TEDx Talk: https://lnkd.in/gZybtkdB
-
We spoke with Arizona State University professor Ed Garnero about the process of using triangulation to identify an earthquake's location. Here's a summary: To identify an earthquake's location, seismologists use a process called triangulation with data from multiple seismographs. When an earthquake occurs, it generates seismic waves that travel through the Earth. These waves are detected by seismographs, which record the arrival of faster P-waves (primary waves) followed by slower S-waves (secondary waves). By measuring the time difference between the P-waves and S-waves at each seismograph, scientists can calculate the distance from each station to the earthquake's epicenter. With known distances, seismologists draw circles around each seismograph, with the radius equal to the calculated distance. The point where the circles from at least three different seismographs intersect reveals the earthquake's epicenter. This method provides a reliable way to locate earthquakes by using the timing and speed of seismic waves.
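For the curious, here is a minimal Python sketch of the triangulation idea described above. The station coordinates, crustal wave speeds, and S-minus-P time differences are made-up illustrative values (not data from the episode); the script converts each time difference to a distance and then finds the point whose distances to the stations best match those circle radii.

```python
# Minimal sketch of epicenter triangulation from S-minus-P arrival-time differences.
# All numbers below are illustrative assumptions, chosen so the three circles
# nearly intersect at roughly (40 km, 60 km).
import numpy as np
from scipy.optimize import least_squares

VP, VS = 6.0, 3.5  # assumed crustal P- and S-wave speeds (km/s)

def sp_time_to_distance(dt_sp):
    """Convert an S-minus-P arrival-time difference (s) to epicentral distance (km)."""
    return dt_sp / (1.0 / VS - 1.0 / VP)

# Hypothetical station positions (x, y in km) and their observed S-P time differences (s)
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 120.0]])
dt_sp = np.array([8.6, 10.1, 8.6])
radii = sp_time_to_distance(dt_sp)  # one circle radius per station

def residuals(epicenter):
    """Mismatch between each circle radius and the distance to the trial epicenter."""
    return np.linalg.norm(stations - epicenter, axis=1) - radii

# The point where the circles (nearly) intersect minimizes these residuals.
solution = least_squares(residuals, x0=np.array([50.0, 50.0]))
print("Estimated epicenter (km):", np.round(solution.x, 1))
```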
-
With thousands of seismographs all over the Earth's surface, how do we analyze the data? We spoke with Professor Ed Garnero of Arizona State University's School of Earth and Space Exploration to learn more. Here's some of what we learned... Analyzing global seismograph data involves collecting real-time information from seismic networks worldwide, such as the Global Seismographic Network (GSN). Seismometers continuously transmit ground-movement data, which is preprocessed to remove noise and isolate earthquake signals. This data undergoes waveform analysis to identify key seismic phases like P-waves and S-waves, enabling the detection of seismic events. Algorithms help identify these events, and Fourier transforms are applied to analyze the frequency content of the seismic waves. To determine an earthquake's location, triangulation methods are used based on the arrival times of seismic waves at various stations. Magnitude and energy are estimated using scales like the Moment Magnitude Scale (Mw) and seismic moment calculations. Visualization tools, such as seismograms and shake maps, provide insights into wave propagation and ground motion intensity. Aftershock patterns and fault behavior are studied using models to predict further seismic activity and analyze the earthquake's dynamics. Seismic data is often shared globally to improve earthquake monitoring and prediction. This data is integrated with other geophysical data, such as GPS and satellite imagery, to understand the Earth's structure and assess earthquake risks. Advanced technologies like machine learning and cloud computing are increasingly being used to enhance the analysis, detection, and simulation of seismic events.
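As a rough illustration of two of the steps mentioned above (event detection and Fourier analysis), here is a small Python sketch run on a synthetic trace. The sampling rate, trigger threshold, and fake 5 Hz arrival are assumptions for demonstration only, not real GSN data or Dr. Garnero's workflow.

```python
# Illustrative sketch: a simple STA/LTA-style trigger plus an FFT of the waveform.
# The synthetic trace and every threshold below are assumptions for demonstration.
import numpy as np

fs = 100.0                             # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
trace = 0.1 * np.random.randn(t.size)  # background noise
trace[3000:3500] += np.sin(2 * np.pi * 5 * t[3000:3500]) * np.hanning(500)  # fake 5 Hz arrival at ~30 s

def sta_lta(x, n_sta, n_lta):
    """Ratio of short-term to long-term average energy; peaks flag candidate arrivals."""
    energy = x ** 2
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    return sta / (lta + 1e-12)

ratio = sta_lta(trace, n_sta=int(0.5 * fs), n_lta=int(10 * fs))
trigger_index = int(np.argmax(ratio > 4.0))  # first sample exceeding a hypothetical threshold
print("Candidate arrival at t =", t[trigger_index], "s")

# Frequency content of the trace via an FFT, as in the waveform analysis described above
spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
print("Dominant frequency:", freqs[np.argmax(spectrum)], "Hz")
```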
-
Episode 2 | Season 1 is here! https://lnkd.in/gEqWsdC3 Artificial Intelligence or Assistive Intelligence? - DJ Fotsch and Lauren Fotsch DJ and Lauren Fotsch have worked with the likes of Apple, Netflix, Samsung Electronics, and Humane. We covered various topics, from marketing to family life, forensic failure analysis to artificial intelligence. The topic of AI really stuck with us... why do we fear AI? Could we think of it as a benevolent "Assistive Intelligence" as opposed to a malevolent "Artificial Intelligence"? ...and what's the difference? Here's what we came up with... Assistive intelligence: AI systems specifically designed to augment human abilities, providing support in tasks like decision-making, problem-solving, or daily activities without replacing human involvement. It is often collaborative, working alongside humans to enhance productivity and improve accessibility. Artificial intelligence: a broader range of technologies focused on replicating human-like cognitive functions such as learning, reasoning, and self-improvement, potentially functioning independently of human input. While assistive intelligence enhances human capability, general AI aims to create autonomous systems that can perform tasks without human assistance. #AI #STEM