A Simpler Explanation of High-Density Spectral Data
ECG in high-density spectral data, shown in 3D across all pulses: the strong shape components of each heartbeat are visible, with weaker components behind them.


In the previous post, an explanation straight "from the uncertainty principle" would have lost most readers. Let me explain it instead with high-school-level mathematics.

High school students learn about the Fourier transform (FT), the general theoretical basis for modern data processing (including analysis, transformation, combination, generation, and so on). In actual computer calculations, the Fast Fourier Transform (FFT) is used: a specialized algorithm for computing FT data quickly.
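To make the idea concrete, here is a minimal sketch of what an FFT computation looks like in practice. The 50 Hz tone and the 1000 Hz sample rate are arbitrary illustration values, not taken from the article:

```python
import numpy as np

# Build 1 second of a pure 50 Hz sine wave, sampled at 1000 Hz.
fs = 1000                              # sample rate in Hz
t = np.arange(0, 1, 1 / fs)            # 1 second of sample times
signal = np.sin(2 * np.pi * 50 * t)    # a pure 50 Hz tone

# FFT of a real-valued signal, and the frequency of each output bin.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The bin with the largest magnitude is the dominant frequency.
peak_hz = freqs[np.argmax(np.abs(spectrum))]
print(peak_hz)  # prints 50.0
```

The FFT turns a list of samples over time into a list of frequency components, which is the representation everything below builds on.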

Most people are just users: as long as the task gets done and the effect is achieved, they don't care how. Our technology can be understood as a specialized algorithm that extracts more detailed information than the FT.

The FFT, designed for processing speed, can handle large amounts of data in a short time. In the past, when computer processing power was relatively low, the FFT was very important. In this era, though, FT calculations can even be applied to real-time video processing (the FFT is also a basis of video processing).

Nowadays we have processing power to spare, enough to revisit the coarseness of the FFT in data processing. Wavelet processing, which has appeared in recent years, likewise aims to obtain higher data density than the FFT. Our algorithm, however, obtains more than ten times the information density; the precision of the data is on another level.

What changes will this high density of spectral data bring?

FFT data has two key parameters: the time window, and the frequency-domain data, divided into bins, within that time window. For any kind of signal, increasing the density of the frequency-domain data immediately improves the signal-to-noise ratio. As each frequency bin narrows, the signal content that previously acted as background noise within the bin is reduced accordingly, and the relative signal-to-noise ratio is pulled up. In signal-detection scenarios, this can greatly improve sensitivity.
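This bin-narrowing effect can be demonstrated with plain FFTs of different lengths (a hedged illustration of the general principle, not the article's proprietary algorithm): with white noise, a longer window gives narrower bins, so the tone's energy stays concentrated in one bin while each bin collects less noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000       # sample rate in Hz (arbitrary illustration value)
tone_hz = 100   # a weak tone buried in unit-variance white noise

def peak_snr(n_samples):
    """Ratio of the strongest bin's power to the median (noise) bin power."""
    t = np.arange(n_samples) / fs
    x = np.sin(2 * np.pi * tone_hz * t) + rng.normal(0.0, 1.0, n_samples)
    power = np.abs(np.fft.rfft(x)) ** 2 / n_samples
    return power.max() / np.median(power)

coarse = peak_snr(256)    # wide bins: ~3.9 Hz each
fine = peak_snr(4096)     # narrow bins: ~0.24 Hz each
print(coarse, fine)       # the longer window yields a much larger ratio
```

Roughly, the tone's bin power grows with the window length while each noise bin's power stays flat, so the apparent signal-to-noise ratio scales with the number of bins.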

In terms of time windows, higher density means that smaller time windows can be used, which lets us capture relatively sharp, short signals. Conventionally, such short signals would need a large amplitude or energy to rise above the background noise and be discovered at all. Representing these subtle signals can be very important in research applications and can provide new data sources for research.
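The time-window side of the trade-off can be sketched the same way. Here a hypothetical 20 ms burst (an invented test signal, not the article's ECG data) is only pinned down in time when the analysis window is comparably short:

```python
import numpy as np

fs = 1000
x = np.zeros(1000)                  # 1 second of silence
t_burst = np.arange(20) / fs        # a 20 ms burst at 200 Hz
x[500:520] = np.sin(2 * np.pi * 200 * t_burst)

def frame_energies(win):
    """Split the signal into windows of `win` samples; return each
    window's total spectral energy from an FFT over that window."""
    frames = x[: len(x) // win * win].reshape(-1, win)
    return (np.abs(np.fft.rfft(frames, axis=1)) ** 2).sum(axis=1)

short = frame_energies(25)    # 25 ms windows: burst isolated in frame 20
long_ = frame_energies(250)   # 250 ms windows: burst smeared over frame 2
print(np.argmax(short), np.argmax(long_))
```

With the 25 ms window the burst is located to within 25 ms; with the 250 ms window the same energy is only known to lie somewhere inside a window ten times wider, and in real recordings it must also compete with ten times as much background noise.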

We have worked through both scenarios, using this technology to achieve some unique results.

Take the title diagram at the beginning, which shows a series of ECG pulses in high-density spectral data. Did anyone expect to find more information in an ECG beyond the strong pulses? And the analysis is performed on EVERY pulse separately. You can compare it with the conventional spectrum of an ECG below, which carries no time-related information.


Conventional spectrum of an ECG: all time-related information is traded away for high detail in the frequency domain.

