Different technical routes for image sensor to realize HDR

As sensing technology advances, a single sensor can capture ever more data. Image sensors are no exception, yet even today's most advanced image sensors struggle to match the dynamic range the human eye can perceive. To reproduce real scenes as faithfully as possible, the industry continues to push HDR (high dynamic range) technology forward.


Multiple exposure


As noted at the start of this article, capturing a wide dynamic range in a single frame remains difficult for modern sensors. In scenes that mix indoor and outdoor lighting, highlights easily blow out or shadows sink into noise. Traditionally, achieving high dynamic range meant shooting several photos at different exposure levels and compositing them, so that the result preserves complete detail from the darkest shadows to the brightest highlights.

With advances in sensor technology and readout speed, however, many mobile-phone CISs can now perform this multi-frame synthesis on-chip. By varying the shutter time, the sensor captures several frames at different brightness levels and merges them into a single output directly on the image sensor, achieving a wide dynamic range. Manufacturers use different names for this kind of composite HDR, such as Staggered HDR, Sony's DOL-HDR, or ON Semiconductor's eHDR, and the number of exposures used also varies.
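The merge step described above can be sketched in a few lines. This is a simplified illustration, not any manufacturer's actual pipeline: each frame is scaled by its exposure time to estimate scene radiance, and pixels near clipping are weighted out so the short exposure recovers the highlights the long exposure lost.

```python
import numpy as np

def merge_exposures(frames, exposure_times, saturation=0.95):
    """Merge differently exposed frames (linear values in 0..1)
    into one HDR radiance map, down-weighting clipped pixels."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weights = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        # Trust mid-tones; distrust pixels near saturation or the noise floor.
        w = np.clip(1.0 - np.abs(frame - 0.5) * 2.0, 1e-4, None)
        w = np.where(frame >= saturation, 1e-4, w)
        acc += w * (frame / t)   # scale each frame to scene radiance
        weights += w
    return acc / weights

# Toy example: the bright pixel clips in the long exposure
# but is recovered from the short one.
long_exp  = np.array([[0.02, 1.00]])   # 1/30 s frame, right pixel clipped
short_exp = np.array([[0.005, 0.25]])  # 1/120 s frame
hdr = merge_exposures([long_exp, short_exp], [1/30, 1/120])
# hdr -> [[0.6, 30.0]]: the clipped highlight is restored at radiance 30
```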

This multi-frame fusion, however, requires computation: for example, gradient correction based on the image's brightness distribution, local brightness adjustment, and suppression of blown highlights and crushed shadows, yielding an image close to what the human eye sees. Some image sensors complete this entire processing chain in the sensor's own logic die, so it can be applied to video shooting as well. Sensors without built-in HDR synthesis instead depend on a more expensive, more powerful ISP, such as Qualcomm's Spectra 580, and the bandwidth requirement also multiplies with the number of exposures.
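The brightness-compression step can be illustrated with a simple global tone-mapping operator. The actual on-sensor processing is proprietary; this sketch uses a Reinhard-style curve as a stand-in to show how a radiance map spanning thousands-to-one is squeezed into a displayable range without hard clipping.

```python
import numpy as np

def reinhard_tonemap(radiance, key=0.18, eps=1e-6):
    """Compress an HDR radiance map into [0, 1) for display.
    Global Reinhard-style operator: normalize by the log-average
    luminance, then apply L / (1 + L)."""
    log_avg = np.exp(np.mean(np.log(radiance + eps)))
    scaled = key * radiance / log_avg
    return scaled / (1.0 + scaled)

hdr = np.array([[0.01, 0.6, 30.0]])   # radiance spanning ~3000:1
ldr = reinhard_tonemap(hdr)
# Output stays within [0, 1); bright detail is compressed, not clipped.
```

Real pipelines use locally adaptive variants of this idea, adjusting brightness per region rather than with one global curve, which is why the processing cost is nontrivial.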

Although modern image sensors advertise their multiple exposures as nearly simultaneous, they are not truly simultaneous, and sufficiently short exposure times are especially hard to achieve in low light, which is one reason many phones struggle to shoot HDR in dim conditions. The problems multi-exposure really has to overcome, though, are the motion artifacts and color noise introduced by compositing frames captured at slightly different moments.

In automotive image sensors, this class of solution is also prone to another problem: LED flicker. To save energy, many LED light sources use pulsed (PWM) drive, including headlights, taillights, traffic signs, and tunnel lighting. A multi-exposure scheme can easily cause the image sensor to miss the LED's on-phase entirely, which becomes a critical error for ADAS and autonomous driving. Automotive image sensor designs therefore often incorporate LFM (LED flicker mitigation) from the outset.
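A back-of-the-envelope calculation shows why short HDR exposures are the problem. For a PWM-driven LED, an exposure window shorter than the LED's off-interval can land entirely inside it and capture no light at all; the numbers below (90 Hz PWM, 20% duty cycle) are illustrative assumptions, not measured values.

```python
def miss_probability(pwm_freq_hz, duty_cycle, exposure_s):
    """Probability that an exposure window at a random PWM phase
    captures none of the LED's 'on' time."""
    period = 1.0 / pwm_freq_hz
    off_time = (1.0 - duty_cycle) * period
    # The window misses the LED iff it fits wholly inside the off-interval.
    return max(0.0, (off_time - exposure_s) / period)

# 90 Hz PWM at 20% duty cycle: the short frame of a multi-exposure
# HDR stack can miss the LED most of the time, the long frame never.
p_short = miss_probability(90, 0.20, 0.001)   # 1 ms short exposure -> 0.71
p_long  = miss_probability(90, 0.20, 0.010)   # 10 ms exposure      -> 0.0
```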


Multi-channel simultaneous output synthesis


While multiple exposure achieves high dynamic range well enough for still photography, it is difficult to output high-dynamic-range video the same way. In video production, image sensors therefore generally improve dynamic range with a method called dual gain output (DGO).

For example, some of Canon's C-series cinema cameras and ARRI's digital cinema cameras deploy dual-gain architectures on their sensors. This architecture gives each pixel two independent readout paths, each with a different gain, both fed simultaneously into the camera's A/D converters, from which a high-dynamic-range video signal is synthesized. Because both readouts come from the same single exposure, this avoids the artifacts that multi-exposure HDR methods can introduce.
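The dual-gain combination can be sketched as follows. This is an illustrative model, not any vendor's actual implementation: the high-gain path gives a cleaner signal in the shadows but clips early, so the merge refers both readouts to a common scale and falls back to the low-gain path wherever the high-gain path has saturated.

```python
import numpy as np

def merge_dual_gain(low_gain, high_gain, gain_ratio, sat=0.95):
    """Combine two simultaneous readouts of the same exposure:
    high_gain = signal * gain_ratio, clipped at full scale (1.0)."""
    # Refer the high-gain readout back to the low-gain scale.
    high_ref = high_gain / gain_ratio
    # Prefer the cleaner high-gain path wherever it isn't saturated.
    return np.where(high_gain < sat, high_ref, low_gain)

signal = np.array([0.002, 0.05, 0.8])        # scene signal, low-gain scale
gain_ratio = 8.0
high = np.clip(signal * gain_ratio, 0, 1.0)  # high-gain path clips highlights
merged = merge_dual_gain(signal, high, gain_ratio)
# merged reproduces the full-range signal: shadows from the high-gain
# path, highlights from the low-gain path.
```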

DGO is not without drawbacks, however. Its architecture effectively records two video streams at once, which greatly increases power consumption and strains the sensor's readout speed; image sensors operating in DGO mode therefore often struggle to shoot at high frame rates.

The recently released GC13A2 sensor from GalaxyCore uses a similar architecture, on which GalaxyCore applies its patented DAG HDR technology. The GC13A2 can output 12-bit high-dynamic-range images and 4K 30 fps video, and GalaxyCore has optimized it for low power: consumption drops by 50% in three-frame composite still shooting and by about 30% in high-dynamic-range video recording.

As HDR spreads further, its standards mature, and HDR display devices become commonplace, fully embracing HDR technology will become the norm for image sensors. It involves a great deal of image computation and is a focus of the next round of competition among major image sensor manufacturers. At the same time, improving the single-frame dynamic range of the image sensor itself should not be neglected.
