CCD vs CMOS: What's the Main Difference?

The main difference between CMOS and CCD is the information read-out mode.

CCD and CMOS sensors are shown below:

CMOS

[Image]

CCD

[Image]

As you can see, they look quite similar, which is to be expected, because both are essentially arrays of MOS transistors. The main differences lie in the type of MOS used and in the way the information is read out.

So what is MOS?

MOS (Metal-Oxide-Semiconductor) is a kind of FET (Field-Effect Transistor); the full name is MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor).

What is a FET?

In simple terms, a FET (Field-Effect Transistor) is a semiconductor component: it is a kind of transistor, a kind of switch, one in which an electric field is used to switch a current on and off.

That is to say, an input value (1/0) can determine whether a current is switched on or off (1/0), which is equivalent to an if...then... judgment, and many of these FETs combined can make more complex judgments. This ability allows FETs to form the basis of our information world, including computers and mobile phones. MOS(FET) is the most common type of FET; it uses a layer of insulating material to separate the input (voltage) from the output (current), so that the current can be controlled by changing the voltage. In addition, the IGBT, a power semiconductor device often mentioned in the stock market, is also a type of FET, and the TFT that controls each pixel of a display is likewise built from individual MOS transistors.
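
To make the if...then analogy concrete, here is a minimal Python sketch of an idealized n-channel MOS transistor used as a switch. The threshold voltage and logic levels are illustrative assumptions, not real device parameters:

```python
# Idealized model of an n-channel MOSFET used as a switch:
# if the gate voltage exceeds a threshold, current is allowed to flow.
THRESHOLD_V = 0.7  # illustrative threshold voltage, not a real device spec

def nmos_conducts(gate_voltage):
    """Return True (current flows, logic 1) when the gate voltage is above threshold."""
    return gate_voltage > THRESHOLD_V

# A single transistor behaves like an if...then... judgment on its input:
for v_in in (0.0, 1.0):
    state = "on (1)" if nmos_conducts(v_in) else "off (0)"
    print(f"input {v_in} V -> current {state}")
```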

The construction of a MOS transistor is shown below:

[Image]

The construction of CMOS is shown below:

[Image]

CMOS is the most common kind of MOS circuit. Many chips, including CPUs, are based on CMOS, which realizes a negative-logic judgment by combining basic MOS units: the result is the inverse of the input. A high input voltage gives a low output voltage, and a low input voltage gives a high output voltage.
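
As a minimal sketch of that negative-logic behavior, a CMOS inverter can be modeled as a complementary pair of idealized switches; the supply voltage and threshold below are illustrative assumptions:

```python
# Idealized CMOS inverter: an NMOS pulls the output low when the input is high,
# and a complementary PMOS pulls the output high when the input is low.
VDD = 1.0        # illustrative supply voltage
THRESHOLD = 0.5  # illustrative switching threshold

def cmos_inverter(v_in):
    nmos_on = v_in > THRESHOLD      # NMOS conducts on a high input
    pmos_on = not nmos_on           # PMOS conducts on a low input (complementary)
    return VDD if pmos_on else 0.0  # exactly one of the two sets the output

for v_in in (0.0, VDD):
    print(f"input {v_in} V -> output {cmos_inverter(v_in)} V")
```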

A CCD is also made of MOS structures, as shown below:

[Image]

A CCD is not a common MOS device; its main application is camera imaging. Although there was still a debate a few years ago about whether CCD or CMOS was better, CMOS technology has since taken an essentially absolute advantage whenever more than about a million pixels are needed, and its development prospects are far better than CCD's for the foreseeable future. CCD is therefore on the verge of being phased out. The main reasons are that CMOS saves power, reads out faster, is easier to manufacture, costs less, and has less noise, while the advantages CCD once held, such as greater color depth and higher sensitivity, have disappeared one by one with the development of CMOS technology. CCD is still being trialed in some fields, such as bus and truck monitoring, because of its good wide-range motion and color performance; but when more than a few million pixels are required, CMOS performs better, though at a higher price.

The biggest difference is the information read-out mode.

Beyond the difference in construction, the biggest difference between a CCD and a CMOS sensor is the way the information is read out. The following figure shows the data read-out method of a CCD.

[Image]

By applying a voltage to the pixels, the charge in each pixel can be forced into the adjacent pixel, one step at a time.

[Image]

The outermost row is initially empty; it receives the charge from the adjacent row of pixels, passes that charge along one pixel at a time, converts it into a voltage, and finally goes through analog-to-digital conversion to form digital information. This is essentially a scanning process.
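
As a rough illustration of that scanning process, here is a minimal Python sketch of CCD-style read-out, with made-up charge values and no attempt at physical accuracy:

```python
# Toy model of CCD read-out. A CCD has no per-pixel amplifier: charge packets are
# shifted row by row into an initially empty read-out row, and that row is then
# shifted out one pixel at a time to a single output stage, where each packet is
# converted to a voltage and digitized.
sensor = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [15, 25, 35, 45],
]

def ccd_readout(pixels):
    pixels = [row[:] for row in pixels]           # work on a copy of the charge array
    samples = []
    for _ in range(len(pixels)):
        readout_row = pixels.pop()                # bottom row enters the read-out register
        pixels.insert(0, [0] * len(readout_row))  # remaining rows shift down; top row is empty
        while readout_row:
            samples.append(readout_row.pop())     # serial shift toward the single output node
    return samples

print(ccd_readout(sensor))                        # values come out one at a time, like a scan
```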

[Image]

A CMOS sensor works differently. As shown on the right of the figure above, each pixel has a component that converts the charge into a voltage first, which makes the overall read-out efficiency of CMOS very high. One row is read at a time, each pixel in the row feeds into its own column, and the columns are then output together as a digital signal. This structure is very similar to the Active Matrix explained earlier, so it is also called an APS (Active Pixel Sensor); a CMOS sensor is a kind of APS.
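
For contrast, here is an equally simplified sketch of CMOS-style (active-pixel) read-out: every pixel does its own charge-to-voltage conversion, and a whole row is read out at once. The conversion gain is an arbitrary illustrative value:

```python
# Toy model of CMOS (active-pixel) read-out: every pixel has its own charge-to-voltage
# conversion, one row is selected at a time, and each pixel in that row drives its own
# column line, so a whole row is read in parallel.
sensor = [              # same made-up charge array as in the CCD sketch above
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [15, 25, 35, 45],
]

CONVERSION_GAIN = 0.01  # illustrative volts per unit of charge

def cmos_readout(pixels):
    frame = []
    for row in pixels:                                        # row select: one row at a time
        column_voltages = [q * CONVERSION_GAIN for q in row]  # per-pixel charge -> voltage
        frame.append(column_voltages)                         # all columns read out together
    return frame

for row_voltages in cmos_readout(sensor):
    print(row_voltages)
```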

It can be seen that a CMOS pixel contains much more circuitry than a CCD pixel, and these components are not photosensitive, so the light-sensing area of a CMOS sensor is actually smaller than that of a CCD of the same size. This problem has been largely solved by placing the non-photosensitive components behind the photosensitive layer instead of beside it. This method is called Backside Illumination, and this type of CMOS is usually called BSI (backside-illuminated) CMOS. Another solution is to cover each pixel with a tiny lens that concentrates light onto the photosensitive area.

[Image]

Sensor sizes such as 1/3", 1/2.3", and 4/3" are all diagonal lengths in inches. It is important to note, however, that these numbers are not the actual length of the diagonal of the imaging area, because CMOS/CCD naming continues the earlier CRT camera naming convention. A CRT camera is a device that uses a cathode-ray tube to shoot pictures, roughly the equivalent of an old television used as a camera.

All we need to know is that the actual diagonal of the imaging area is only about two-thirds of the nominal size. For example, for a typical 4/3" sensor the actual diagonal of the imaging area is only about 4/3 × 2/3 = 8/9 ≈ 0.89 inches, and for a 1/3" sensor it is about 1/3 × 2/3 = 2/9 ≈ 0.22 inches. Of course, this is not completely accurate, because the normal procedure runs the other way: take the diagonal of the actual imaging area, multiply it by 3/2 to get a small number, round that number to a fraction with small integers in the numerator and denominator, and use that fraction to name the sensor size. A value like 1/3" is therefore only an approximation. However, this rounding makes it hard to distinguish different products, so many manufacturers still multiply by 3/2 but no longer round to a neat fraction, giving names such as 1/1.5".
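
A minimal sketch of that naming arithmetic, taking the 2/3 rule of thumb from the text as-is (it is only an approximation, not an exact standard):

```python
from fractions import Fraction

# Rule of thumb: the real imaging diagonal is roughly 2/3 of the nominal "type" size.
def approx_imaging_diagonal(nominal_type):
    """'1/3', '1/2.3' or '4/3' (inches) -> approximate real diagonal in inches."""
    num, den = nominal_type.split("/")
    return float(num) / float(den) * 2 / 3

for t in ("4/3", "1/3", "1/2.3"):
    print(f'{t}" sensor type -> about {approx_imaging_diagonal(t):.2f}" real diagonal')

# Going the other way, as the naming convention does: real diagonal x 3/2,
# then round to a fraction with small integers.
real_diagonal = 0.22  # inches, illustrative
nominal = Fraction(real_diagonal * 3 / 2).limit_denominator(10)
print(f'{real_diagonal}" real diagonal -> nominal type about {nominal}"')
```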

However, no matter how big the CCD/CMOS is, it is only a two-dimensional plane, like our retinas, while the world we live in is three-dimensional; the light information our brain receives is missing one dimension. Fortunately, we humans have two eyes, and from the difference between the information the two eyes receive we can compute, in effect in software, the dimension parallel to our line of sight, namely the sense of distance, thus forming a three-dimensional view of the world. A camera, however, has only one eye and cannot form a three-dimensional image. This may be why we say some people are photogenic and some are not: the missing dimension may well be the one that determines how good someone looks. Nowadays, some cameras and mobile phones try to imitate human eyes with parallel lenses, compute and record three-dimensional information, and then reproduce it through 3D display or VR technology, giving people an immersive feeling.

[Image]

As an electronic film, the CCD or CMOS receives photons and forms a corresponding charge at each pixel as a record of the light received at that point.

But this information needs to be quantized before it can be converted into digital information and stored.

This is done by first using a device called a Charge Amplifier to convert the charge into a voltage, and then passing that voltage through a device called an Analog-to-Digital Converter (ADC), which converts the analog information into digital information of a certain precision.
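
A minimal sketch of that charge-to-voltage-to-digital chain; the conversion gain, full-scale voltage, and bit depth below are illustrative assumptions rather than real sensor specifications:

```python
# Sketch of the charge -> voltage -> digital chain described above.
CONVERSION_GAIN = 5e-6  # volts per electron (illustrative)
FULL_SCALE_V = 1.0      # ADC input range in volts (illustrative)
BITS = 12               # ADC precision in bits

def charge_amplifier(electrons):
    """Charge amplifier: turn a packet of charge into a voltage."""
    return electrons * CONVERSION_GAIN

def adc(voltage, bits=BITS):
    """Analog-to-digital converter: quantize a voltage to an integer code."""
    levels = 2 ** bits - 1
    clipped = min(max(voltage, 0.0), FULL_SCALE_V)  # clip to the ADC's input range
    return round(clipped / FULL_SCALE_V * levels)

for electrons in (0, 10_000, 100_000, 300_000):     # made-up charge packets
    print(f"{electrons:>7} e- -> code {adc(charge_amplifier(electrons)):>4} of {2**BITS - 1}")
```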

But what we want to record is not just one color of light, but three colors of light, R/G/B, just as with a display.

Then, just as a display has R/G/B sub-pixels, a CCD/CMOS can also use different sub-pixels to record light at different wavelengths, either by covering each sub-pixel with a Color Filter or by using three separate materials sensitive to different wavelengths.

However, either way, current CCD and CMOS sensors do not use a simple side-by-side R/G/B arrangement; they generally adopt the Bayer arrangement introduced earlier.

[Image]
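
The Bayer arrangement can be sketched as data in a few lines; RGGB ordering is assumed here for illustration (the exact ordering varies by sensor):

```python
import numpy as np

# A 2x2 tile of color filters repeated across the pixel array, so that
# each pixel records only one of the three colors.
BAYER_TILE = np.array([["R", "G"],
                       ["G", "B"]])

def bayer_pattern(rows, cols):
    """Which color filter sits over each pixel of a rows x cols sensor."""
    return np.tile(BAYER_TILE, (rows // 2 + 1, cols // 2 + 1))[:rows, :cols]

print(bayer_pattern(4, 6))
# Half of the filters are green and a quarter each are red and blue -- this is the
# mosaic that demosaicing later has to fill back in.
```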

Another way is to use three imaging devices directly, such as three CCDs or three CMOS sensors, each responsible for one color: a prism first splits the incident light into three beams, which are then projected onto the separate CCD/CMOS sensors to form the information for the three colors.

[Image]

Obviously, this method is more expensive than the single-sensor one. The advantage is that, for the same sensor size, the effective pixel count is much higher than with a single CCD, and the color purity and image sharpness are also much better than with a single Bayer-arranged CCD.

This is partly because light loses a lot of energy when passing through a color filter. Secondly, one of the problems with the Bayer arrangement is that each pixel, as a point on a two-dimensional plane, cannot record all the information at that position, but only the one of R/G/B assigned to that pixel.

The 3-CCD structure, by contrast, has three CCDs, so every pixel position can record full R/G/B information.

At present, the nominal pixel count of most cameras' CCD/CMOS actually counts each single-color point as a pixel, even though such a point is not a complete information point. This is, in effect, a kind of inflated specification; at the very least, it does not correspond to the way pixels are counted in the display industry.

Take the original image below as an example.

[Image]

If we use a Bayer arrangement, we get three partial images: red, green, and blue.

[Image]

Each pixel has only one of the three colors, and the three sets of samples do not overlap.

So if you just put the three pictures together, you get an image like this.

[Image]

This is far from the original image above; doesn't it look like a mosaic? We need to fill in the two missing colors for each pixel, a process called Demosaicing or Debayering.

But how do we fill in the missing color information? The answer is: we guess. It is not a shot in the dark, though; the guess is based on the color information of the surrounding pixels, as shown below.

Since each pixel has only one color and the other two are missing, the color information of the surrounding pixels is used as a reference, and approximate values for the missing channels are calculated by some algorithm and inserted back into the pixel. This is called Interpolation.
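
As a sketch of one simple interpolation scheme, the code below does neighborhood-average (roughly bilinear) demosaicing, assuming an RGGB Bayer mosaic; real cameras use far more sophisticated algorithms:

```python
import numpy as np

# Simple neighborhood-average demosaicing for an RGGB Bayer mosaic: for every pixel,
# each missing color value is estimated from the neighbors (in a 3x3 window) that
# actually recorded that color, while measured values are kept as-is.
def demosaic_bilinear(mosaic):
    """mosaic: 2-D array of raw values under an RGGB filter; returns an H x W x 3 RGB image."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1  # red filter sites
    masks[0::2, 1::2, 1] = 1  # green filter sites (two per 2x2 tile)
    masks[1::2, 0::2, 1] = 1
    masks[1::2, 1::2, 2] = 1  # blue filter sites
    padded_v = np.pad(mosaic, 1)
    padded_m = np.pad(masks, ((1, 1), (1, 1), (0, 0)))
    for c in range(3):
        for y in range(h):
            for x in range(w):
                window_v = padded_v[y:y + 3, x:x + 3]
                window_m = padded_m[y:y + 3, x:x + 3, c]
                rgb[y, x, c] = (window_v * window_m).sum() / max(window_m.sum(), 1)
    rgb = np.where(masks == 1, mosaic[:, :, None], rgb)  # keep the values that were measured
    return rgb

raw = np.arange(16, dtype=float).reshape(4, 4)  # tiny made-up raw frame, just to show the call
print(demosaic_bilinear(raw)[0, 0])             # the R, G, B values guessed for the top-left pixel
```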

After this calculation, each point has all three R/G/B values, and when we finally put the image back together, it looks like the one below.

[Image]

Note that, compared with the original image, the edges in the reconstructed image are clearly less sharp and more blurred. At a sharp edge the color changes abruptly, but interpolation, because it must reference the color values of neighboring pixels, cannot reproduce such an abrupt change, so the result is a little blurrier than the original.

To make the image sharper, you can use the 3-CCD method above, increase the number and density of pixels, or use other innovative recording methods, such as a technology called Foveon X3, first developed by the American company Foveon and later acquired by Japan's Sigma. The technique works much like color film: each pixel is divided vertically into three layers, each sensitive to a different range of wavelengths.

[Image]

Of course, if we really used 25 primary colors, the cost for the whole industry would increase greatly.

Therefore, I personally think it is more efficient and accurate to simulate the characteristics of the human L/M/S cones at the recording stage, record as much information as possible, and finally transmit that same information to the human eye again.

In fact, current CCD and CMOS sensors are developed along these lines: instead of simply recording R/G/B wavelengths, each pixel records information about long/medium/short wavelengths, that is, L/M/S. For example, the spectral response diagram of the Nikon D700 camera below actually looks more like the response of the human eye.
