Myths about image sensors: Pixels



Image sensors are becoming increasingly popular, particularly in security, industrial and automotive applications; many cars are now equipped with five or more cameras built around image sensors. However, image sensor technology differs from standard semiconductor technology in several ways, and a number of misconceptions have grown up around it.


Moore's Law and image sensors

Some people assume that the famous "Moore's Law" also applies to image sensors. Gordon Moore (co-founder of Fairchild Semiconductor, which is now part of ON Semiconductor) observed that the number of transistors on an integrated circuit (IC) doubles roughly every two years. Shrinking the transistors is the main way to fit twice as many onto a single device. This trend has continued for decades, although the growth in transistor count has slowed in recent years. The increased transistor density has also reduced the cost per transistor, and as a result many electronic systems have become more and more capable without an increase in price.


Image sensors, however, are different, because some of their key components do not scale as transistors shrink. Specifically, the optical components of an image sensor, such as the photodiodes (which convert incident light into electrical signals), and some of the analogue components (which convert those electrical signals into digital data) cannot be scaled down as easily as digital logic. In a sensor, image capture is primarily an analogue process; digital circuits then assemble the digitised data from each pixel into images that can be stored, displayed or used for artificial-intelligence machine vision.


If the number of pixels doubles every two years while the size of the lens remains the same, the pixels become smaller and each one receives fewer photons. (Think of a bucket on a rainy day; a smaller bucket collects fewer raindrops.) The sensor must therefore perform better in terms of sensitivity per unit area and noise to produce images of the same quality in low-light conditions. Increasing the pixel count when the application does not require it makes little sense; it also forces up bandwidth and storage requirements, making other components of the system more expensive.
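To put rough numbers on the raindrop analogy, the short Python sketch below estimates how many photons a single pixel collects as the resolution grows while the sensor (and lens) area stays fixed. The sensor area and photon flux used here are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope sketch: photons per pixel as resolution increases
# while the optical area stays fixed. All numbers are illustrative only.

SENSOR_AREA_MM2 = 5.6 * 4.2   # assumed sensor area in mm^2 (hypothetical)
PHOTON_FLUX = 1.0e6           # assumed photons per mm^2 per exposure (hypothetical)

def photons_per_pixel(megapixels: float) -> float:
    """Photons collected by one pixel, assuming a 100% fill factor."""
    pixel_area_mm2 = SENSOR_AREA_MM2 / (megapixels * 1e6)
    return PHOTON_FLUX * pixel_area_mm2

for mp in (2, 4, 8, 16):
    print(f"{mp:>2} MP -> {photons_per_pixel(mp):7.2f} photons per pixel")

# Each doubling of the pixel count halves the photons per pixel, which is
# why sensitivity per unit area and noise must improve to keep low-light
# image quality constant.
```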


Pixel size

Pixel size alone is not sufficient to determine pixel performance, and it cannot be assumed that larger pixels necessarily deliver better image quality. Pixel performance under different lighting conditions is important, and a larger pixel does provide a larger area to collect light, but that on its own does not guarantee better images. Several other factors matter just as much, including resolution and the pixel's noise characteristics.


For the same optical area, a sensor with smaller pixels may outperform one with larger pixels if the extra resolution benefits the application more than the smaller individual pixels hurt it. What matters is that each pixel collects enough photons to form a high-quality image, which is why pixel sensitivity (photoelectric conversion efficiency) and the application environment are so important.
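To illustrate why area is not the whole story, here is a minimal per-pixel signal-to-noise sketch. It assumes a simple shot-noise-plus-read-noise model, and the pixel pitch, quantum efficiency and read-noise values are invented for demonstration rather than taken from any real sensor.

```python
import math

# Simplified per-pixel SNR model: SNR = S / sqrt(S + read_noise^2), where S is
# the number of photoelectrons collected. All parameter values are assumptions.

def snr_db(photon_flux, pixel_pitch_um, quantum_eff, read_noise_e):
    """Estimate SNR in dB for one pixel at a given flux (photons per um^2)."""
    signal_e = photon_flux * pixel_pitch_um ** 2 * quantum_eff
    noise_e = math.sqrt(signal_e + read_noise_e ** 2)   # shot noise + read noise
    return 20 * math.log10(signal_e / noise_e)

# A smaller pixel with higher quantum efficiency and lower read noise can come
# close to a larger, older-generation pixel while delivering more resolution.
print("3.0 um, QE 0.55, 3.0 e- read noise:", round(snr_db(50, 3.0, 0.55, 3.0), 1), "dB")
print("2.2 um, QE 0.80, 1.5 e- read noise:", round(snr_db(50, 2.2, 0.80, 1.5), 1), "dB")
```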


When selecting a sensor for an application, pixel size is a consideration. However, its importance is easily overstated; it is only one of many parameters, and several others deserve the same careful consideration. The designer must weigh all the requirements of the target application and then find the right balance of speed, sensitivity and image quality to arrive at the right design solution.


Large and small pixel design

In many applications, extending the dynamic range as far as possible is valuable because it helps render both shadows and highlights correctly in the final image, but this is very challenging for image sensors. Some companies have adopted a technique known as the 'large and small pixel' approach, which tackles the challenge by creating more capacity for the photodiode to collect electrons before it 'saturates'.


In the large and small pixel approach, the sensor area dedicated to a single pixel is divided into two parts: a larger photodiode covers most of the area and a smaller photodiode occupies the rest. The larger photodiode collects more photons and saturates easily in bright conditions; the smaller photodiode can be exposed for longer without saturating because its light-collecting area is smaller. An analogy is collecting raindrops with a bucket and a water bottle: the bucket has a wide opening, so it catches rain efficiently and fills up quickly, while the bottle has a narrow opening and fills much more slowly. Using the larger photodiode in low light and the smaller photodiode in bright light extends the dynamic range.
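A rough dynamic-range calculation shows how splitting a pixel into a large and a small photodiode helps. In the sketch below the large photodiode sets the noise floor and the less sensitive small photodiode sets the bright-end clipping point; the full-well capacities, read noise and sensitivity ratio are purely illustrative assumptions.

```python
import math

# Illustrative dynamic-range estimate for a split ("large + small") pixel.
# All parameter values are assumptions for demonstration only.

def single_pd_dr_db(full_well_e, read_noise_e):
    """Dynamic range of a single photodiode: full well over the noise floor."""
    return 20 * math.log10(full_well_e / read_noise_e)

def split_pixel_dr_db(fw_small_e, sens_ratio, read_noise_large_e):
    """Combined dynamic range: the large photodiode sets the noise floor, the
    small photodiode (sens_ratio = small/large sensitivity) sets the clip point."""
    darkest = read_noise_large_e          # noise floor, in large-PD electrons
    brightest = fw_small_e / sens_ratio   # scene level that fills the small PD,
                                          # in equivalent large-PD electrons
    return 20 * math.log10(brightest / darkest)

print(round(single_pd_dr_db(10_000, 2.0), 1), "dB  (large photodiode alone)")
print(round(split_pixel_dr_db(5_000, 1 / 16, 2.0), 1), "dB  (large + small photodiode)")
```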

Figure 1: Complex semiconductor, from photon to image output

ON Semiconductor solves this problem by adding an area to each pixel into which the extra signal, or charge, can spill. Imagine using a bucket to catch raindrops, with a larger basin alongside to catch the water that overflows from the bucket. The "bucket" signal is easy to read and highly accurate, giving good low-light performance, while the larger basin holds all of the overflow signal, extending the dynamic range. In this way the entire pixel area is used in low light, yet the pixel does not saturate in bright light. Saturation can degrade image quality, for instance by distorting colours and reducing sharpness. ON Semiconductor's Super Exposure technology therefore provides better image quality in high-dynamic-range scenes for both human vision and machine vision applications.
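The bucket-and-basin idea can also be sketched numerically. The toy model below assumes a photodiode full well (the "bucket"), an in-pixel overflow store (the "basin") and a low-noise read path; the capacities and noise figures are made up for illustration and are not parameters of any ON Semiconductor device.

```python
import math

# Toy model of in-pixel charge overflow: charge beyond the photodiode's full
# well spills into an extra storage area instead of being lost.
# Capacities and noise values below are illustrative assumptions only.

PD_FULL_WELL_E = 10_000    # "bucket": photodiode full-well capacity (electrons)
OVERFLOW_CAP_E = 190_000   # "basin": overflow storage capacity (electrons)
READ_NOISE_E = 2.0         # low-noise read path used for the photodiode signal

def collected_charge(incident_e: int) -> int:
    """Usable charge: what stays in the photodiode plus what spilled over."""
    in_pd = min(incident_e, PD_FULL_WELL_E)
    spilled = min(max(incident_e - PD_FULL_WELL_E, 0), OVERFLOW_CAP_E)
    return in_pd + spilled

dr_pd_only = 20 * math.log10(PD_FULL_WELL_E / READ_NOISE_E)
dr_with_overflow = 20 * math.log10((PD_FULL_WELL_E + OVERFLOW_CAP_E) / READ_NOISE_E)

print(f"Photodiode only     : {dr_pd_only:5.1f} dB")
print(f"With overflow store : {dr_with_overflow:5.1f} dB")
print("150 ke- incident ->", collected_charge(150_000), "electrons recovered")
```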

Figure 2: The XGS 16000 is a 16 megapixel CMOS image sensor


The next time you choose an image sensor for your design, remember that "more and bigger is not always better", at least for pixels.
