
Technology Evolution and Debugging of CMOS Image Sensor
Published: 2012-8-8    Author: Wang Xingjun
Foreword: The Complementary Metal-Oxide Semiconductor (CMOS) image sensor has been widely used in mobile phones, notebook computers, digital cameras, video game consoles, toys, medical equipment, automobiles, security equipment and industrial equipment; its market share keeps growing thanks to its low power consumption, low cost, high integration level, high yield and other advantages. This paper discusses the technology evolution of the CMOS image sensor, compares it with the CCD, and describes its application in the security field.
 
Keywords: CMOS, CCD, HD IP camera
 
1. Technology evolution of CMOS image sensor
The CMOS image sensor emerged in the early 1990s; its imaging performance at that time was barely satisfactory, so it was not used on a large scale. These early devices are customarily called "passive pixel sensor" architectures. In the mid-1990s, NASA's Jet Propulsion Laboratory proposed the "active pixel sensor" architecture, which truly enabled large-scale commercial application of the CMOS image sensor.
 
Unlike the passive pixel architecture, the active architecture adds a signal amplifier and associated noise control circuitry to each sensing pixel and then reads out the amplified signal line by line. This solves the problem that the passive pixel architecture, which transmits the unamplified original signal, is easily corrupted by noise, and improves image quality substantially.
 
At present, development of the CMOS image sensor focuses on higher sensitivity, stronger noise suppression and smaller size; relevant technologies continue to be created and refined along these directions.
 
1.1 Back-illuminated structure:
In the conventional front-illuminated structure, as shown in the figure, the incident light must pass through the metal wiring and transistors on the surface of the silicon substrate before reaching the photodiode in the pixel; part of the light is reflected, which obstructs the light path through the on-chip lens and reduces sensitivity. Sony therefore modified the front-illuminated structure into a back-illuminated one, moving the metal wiring and transistors to the other side of the silicon substrate, so that the incident light reaches the photodiode directly through the microlens and color filter. This greatly increases the amount of light entering each pixel and improves the sensitivity of the sensor. Large international companies such as Sony, Canon and Nikon have begun to use back-illuminated CMOS sensors in consumer electronics.
 
Fig. 1 Comparison between front-illuminated structure and back-illuminated structure
 
1.2 On-chip noise reduction technology:
The pixel circuit generally adopts 3 transistors; although the useful signal is amplified, the noise is amplified with it. Canon adopts 4 transistors, as shown in the figure. Mismatch among the amplifiers of different pixels produces fixed pattern noise; this is why noise can appear at the same pixel even when different subjects are shot at different times of day. To remove such noise, Canon's second-generation on-chip noise reduction circuit first reads the fixed pattern noise amount and then subtracts it, delivering a clean optical signal.
 
Fig. 2 On-chip noise reduction technology
 
1.3 Dark current suppression technology:
Dark current is produced by crystal defects or leakage current in the CMOS device; the accumulated pixel charge increases during long exposures or as temperature rises, causing the CMOS to produce noise. For this reason, Canon adopts a "buried photodiode" architecture to reduce the probability of such noise.
 
 
1.4 Improvement of process technology:
The process capability of CMOS improves continuously with the rapid development of large-scale integrated circuits, and the integration level of CMOS sensors keeps rising as processes move from micron, to submicron, to deep submicron. In 2011, STMicroelectronics and IBM announced a technology collaboration agreement to jointly develop 32 nm and 22 nm CMOS process technology, marking the arrival of the nanometer-process era for CMOS.
 
2. Technology comparison between CMOS image sensor and CCD
 
Based on their essential characteristics, the CMOS image sensor has the following main advantages relative to the CCD:
 
High image signal read-out rate and fast response. The increasingly high SoC integration of CMOS sensors and the improvement of large-scale integrated circuit manufacturing allow most of the signal processing circuitry to be integrated on the same chip; the signal transmission distance is shortened, and both parasitic parameters and transmission delay improve substantially, so a CMOS sensor can achieve high-speed signal read-out more easily. At present the signal read-out rate of a CCD does not exceed 70 Mpixels/s, while that of a CMOS sensor can generally reach 1,000 Mpixels/s.
 
Low power consumption. A CCD requires many driving voltages (from -7.5 V to 15 V) to drive charge transfer, so its power consumption is difficult to control. A CMOS sensor benefits from the large-scale integrated circuit process: it needs only a single, mostly low-voltage supply, and its power consumption is only 1/8 to 1/10 that of a CCD.
 
Free of vertical smear. Vertical smear means a vertical bright band appears in the picture when shooting very bright luminous objects such as lamps or the sun. In a traditional CCD, if the electric signal produced by the illumination exceeds the capacity of the photodiode, the charge overflows into the vertical register, producing the vertical smear. Because of this intrinsic mechanism the phenomenon cannot be eliminated thoroughly; it does not appear on a CMOS image sensor, however, because of the different imaging structure of CMOS.
 
High integration level. The CMOS image sensor can easily integrate other functional blocks on the same chip because it uses the large-scale integrated circuit process. Mainstream CMOS sensors now generally integrate the video signal processing module and still picture signal processing circuit internally; the built-in image processing algorithms carry out image preprocessing including defect correction, FPN noise elimination, color interpolation, sharpening, aperture correction and Gamma correction, and finally output the image signal in raw RGB or BT.656/601 format. This is difficult to realize on a CCD. The advantage of high integration enables the CMOS image sensor to be widely used in small hand-held mobile devices.
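One of the preprocessing stages listed above, Gamma correction, can be sketched as a precomputed lookup table that maps every raw pixel code to its corrected value. The gamma value and the 8-bit pixel range below are illustrative assumptions, not taken from any particular sensor's pipeline:

```python
def build_gamma_lut(gamma=2.2, bits=8):
    """Precompute a corrected output value for every possible input code."""
    max_code = (1 << bits) - 1
    return [round(((code / max_code) ** (1.0 / gamma)) * max_code)
            for code in range(max_code + 1)]

def apply_gamma(pixels, lut):
    """Map each raw pixel code through the lookup table."""
    return [lut[p] for p in pixels]

lut = build_gamma_lut()
corrected = apply_gamma([0, 32, 128, 255], lut)  # dark codes lifted, extremes kept
```

A table-driven design like this is how hardware pipelines typically implement Gamma: one memory read per pixel instead of a power computation.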
However, CMOS also has intrinsic disadvantages compared to the CCD sensor, such as lower sensitivity and larger fixed pattern noise. In fact, the CCD sensor still holds absolute advantages in high-end applications such as single-lens reflex cameras, machine vision, onboard cameras and medical imaging equipment; but the application of CMOS sensors in these fields will broaden with technological progress and the improvement of integrated circuit manufacturing processes.
 
3. Application of CMOS image sensor in HD IP camera
3.1 Industry status
Most traditional analog cameras adopt the CCD image sensor, but with the rapid development of the HD IP camera the CMOS sensor is capturing this market by means of its low power consumption, high integration level, low cost and other advantages. It is estimated that most IP cameras on the market currently use a CMOS sensor as the front-end image acquisition device. Major CMOS sensor suppliers in the security field include Aptina (Micron), OmniVision, Sony, Panasonic, Samsung and Pixelplus. As the acknowledged leader in the CCD field, Sony has begun to emphasize CMOS sensor development in recent years and successfully developed a back-illuminated CMOS device with a signal-to-noise ratio improvement of +8 dB. In 2009, OmniVision launched a wide-dynamic-range, low-illumination CMOS sensor with 1080p/30-frame resolution, which propelled the IP camera into the real-time 1080p era.
 
The following figure is the functional block diagram of the Aptina (Micron) sensor that the author most often uses as an example:
 
Fig. 3 Functional block diagram of Sensor
 
Sensor debugging is mainly divided into two parts: early initialization and later image quality debugging. The purpose of initialization is to make the sensor work normally and stably and produce images; later image debugging further adjusts the images toward an ideal state. Sensor initialization mainly includes power-on, external control signal configuration and internal register configuration; image quality debugging covers definition (sharpness), white balance, chromaticity, picture uniformity, gray scale reproduction and low-illumination performance.
 
3.2 Sensor initialization
 
3.2.1 Power on
Power-on means supplying the corresponding voltage to the power supply pins, including the core voltage and interface voltage of the sensor; typical voltage values for an Aptina sensor are 3.3 V, 2.8 V and 1.8 V. Special attention shall be paid to the power-on sequence of the various voltages; otherwise the driver may load abnormally and the system may work unsteadily. One cause of such abnormality is that the crystal oscillator has not yet reached a steady state after starting to oscillate while the sensor core has already begun to work, so the system misbehaves because it receives no accurate, steady reference clock signal. Some sensors are insensitive to power-on sequencing, and all rails can then be powered simultaneously.
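The ordering constraint above can be sketched as a small bring-up routine: enable each rail in turn, start the clock, and only release reset once the oscillator has had time to stabilize. The rail names, settle delays and the three callbacks are hypothetical stand-ins for board-specific GPIO/regulator control:

```python
import time

def power_on_sequence(enable_rail, start_clock, release_reset):
    """Bring up the sensor rails in order, then the clock, then reset."""
    for rail, settle_ms in (("1.8V_core", 2), ("2.8V_analog", 2), ("3.3V_io", 2)):
        enable_rail(rail)
        time.sleep(settle_ms / 1000.0)   # let each rail settle before the next
    start_clock()
    time.sleep(0.010)                    # wait for the oscillator to stabilize
    release_reset()                      # only now let the sensor core run
```

The key point is that reset release comes strictly last, so the core never sees an unstable reference clock.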
 
3.2.2 External control signal configuration
 
 
This mainly includes configuration of the reference clock signal, the line/field synchronization signals and the hardware Reset signal. If these signals are abnormal, images may not display normally, as shown in the figure below:
 
Fig. 4 Serious color cast of image caused by abnormal control signal configuration
 
 
3.2.3 Internal register configuration
Driver loading of the sensor means writing the data required for system initialization into the sensor through a bus interface (such as I2C or SPI), which is realized mainly through the control registers. The internal registers of the sensor are shown in the following figure:
 
Fig. 5 Schematic diagram of internal register of Sensor
 
The Core Registers relate to the core control of the sensor, while the Image Flow Processor mainly holds the algorithm control registers. The Image Flow Processor includes two groups of registers, the Color Pipeline Registers and the Camera Control Registers; the former controls the output data and the latter concentrates the major image control algorithms, such as AE, AWB, image defect correction and the Camera Control Sequencer.
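Driver loading as described above amounts to replaying a table of register writes over the bus. The sketch below shows the idea for a sensor with 16-bit register addresses and values; the `bus` object and the addresses in the table are hypothetical, and real values come from the vendor's register reference:

```python
# Hypothetical initialization table of (address, value) pairs.
INIT_TABLE = [
    (0x001A, 0x0003),  # example: assert soft reset (illustrative address/value)
    (0x001A, 0x0000),  # example: release soft reset
    (0x3012, 0x0040),  # example: coarse integration time
]

def load_init_table(bus, table):
    """Write each (address, value) pair; a real driver would also read back to verify."""
    for addr, value in table:
        # split the 16-bit address and value into bytes, MSB first
        payload = [addr >> 8, addr & 0xFF, value >> 8, value & 0xFF]
        bus.write(payload)
```

Keeping the init sequence as data rather than code makes it easy to swap in a vendor-supplied register list without touching the driver logic.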
 
3.3 Debugging of image quality
 
 
Debugging of image definition (sharpness) is done using an ISO 12233 test chart:
 
Fig. 6 ISO12233 target
 
During debugging, make the entire image outline fit the outer edge of the chart, adjust the focal length, and read the maximum number of lines at the center, the numbers of horizontal and vertical lines, and the number of lines at the edge. The camera center is the position of maximum definition, and definition decreases toward the edge, so edge definition is an important indicator for judging image edge distortion.
 
3.3.4 White balance
 
White balance is one of the important parts of image debugging. When the camera shoots a white object, the output voltages of the three primary colors R, G and B must be equal so that standard white reappears on the screen; this condition is called the camera's white balance. In practice, the three primary color signals output by the camera depend not only on the camera's own spectral response but also on the spectral power distribution of the light source illuminating the object, that is, the color temperature of the light source. If the amplitudes of the three primary color signals are consistent when a white object is shot under a 6,500 K source, then after the source is replaced by one with a color temperature of 3,200 K the red voltage will rise and the blue voltage will fall. At that point, adjusting the gains of the red and blue channels to equalize the output voltages makes white reappear; this adjustment is called white balance adjustment. At present, most cameras have an automatic white balance (AWB) algorithm that meets the requirements of general scenes; if AWB cannot meet the requirements, the relevant registers must be configured manually to finish the adjustment, which is manual white balance (MWB).
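The gain adjustment described above can be sketched with a simple gray-world rule: scale the R and B channels so their averages match the G channel again. The measured channel averages below are illustrative numbers for a warm (low color temperature) source, not real sensor data:

```python
def white_balance_gains(r_avg, g_avg, b_avg):
    """Return (r_gain, b_gain) that equalize the R and B averages against G."""
    return g_avg / r_avg, g_avg / b_avg

# Under a 3,200 K source the red average rises and the blue average falls:
r_gain, b_gain = white_balance_gains(r_avg=180.0, g_avg=128.0, b_avg=90.0)
# red is attenuated (gain < 1), blue is boosted (gain > 1)
```

Real AWB algorithms are more elaborate (they must first decide which pixels count as "white"), but the final correction they apply is exactly this pair of channel gains.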
 
3.3.5 Picture uniformity
The problem of picture uniformity results from the lens: because the exposure of the central pixels of the image is always more sufficient than that of the edge, the case shown in the figure below will appear:
 
Fig. 7 Abnormal image uniformity
 
The problem of image uniformity can be solved by tuning the Lens Shading value. The debugged image is shown in the figure below:
 
Fig. 8 Adjusted image uniformity
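Lens shading compensation of the kind just described can be sketched as a per-pixel gain that grows with distance from the image center, brightening the under-exposed edges. The quadratic falloff model and its strength coefficient are illustrative assumptions, not a vendor calibration:

```python
def shading_gain(x, y, width, height, strength=0.5):
    """Gain >= 1.0, equal to 1.0 at the center and largest at the corners."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    r2 = (x - cx) ** 2 + (y - cy) ** 2          # squared distance from center
    r2_max = cx ** 2 + cy ** 2                   # squared distance to a corner
    return 1.0 + strength * (r2 / r2_max)

def correct_shading(frame):
    """Apply the per-pixel gain to a 2-D list of 8-bit pixel values."""
    h, w = len(frame), len(frame[0])
    return [[min(255, round(frame[y][x] * shading_gain(x, y, w, h)))
             for x in range(w)] for y in range(h)]
```

Production pipelines store a calibrated gain grid per color channel instead of a single analytic curve, but the correction step itself is this multiply-and-clamp.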
 
3.3.6 Image debugging under the environment of low illumination
There are mainly two means of controlling image brightness at low illumination: one is the AE target and the other is increasing the gain value. In general, using the AE target (longer exposure) to increase brightness restores image color best, but this method is limited by the frame rate; once the frame rate reaches its limit, brightness must be increased by adjusting the gain instead. The gain value is positively correlated with image noise, and noise increases visibly if the gain is too large, so this method is used sparingly.
 
Power frequency interference is caused by the flicker of indoor fluorescent lamps and appears in the image as rolling, water-ripple-like bands. In China the mains frequency is 50 Hz and the voltage curve is sinusoidal, so the energy curve is the absolute value of the voltage curve; that is, the lamp flickers at 100 Hz. Unlike the CCD, the CMOS sensor uses line (rolling) exposure, so the exposure time must be an integral multiple of 1/100 s to avoid power frequency interference. Aptina's sensors provide a register and corresponding API for suppressing power frequency flicker, which can eliminate the interference.
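The anti-flicker rule above reduces to snapping the requested exposure to a whole number of flicker periods (1/100 s under 50 Hz mains, 1/120 s under 60 Hz). A minimal sketch, with the function name and argument convention as our own assumptions:

```python
def flicker_free_exposure(requested_s, mains_hz=50):
    """Round an exposure time to a whole number of flicker periods (at least one)."""
    period = 1.0 / (2 * mains_hz)        # lamp flashes at twice the mains frequency
    periods = max(1, round(requested_s / period))
    return periods * period

snapped = flicker_free_exposure(0.033)   # snaps to 3 flicker periods (about 0.03 s)
```

Sensor anti-flicker registers implement the same constraint in hardware, expressed in units of line times rather than seconds.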

 

Conclusion: The CMOS image sensor has won most of the HD IP camera market by virtue of its low power consumption, high integration level, low cost and rapidly improving imaging quality. It will accomplish a great deal in the security field as the analog monitoring era transitions to the digital monitoring era and large-scale integrated circuit technology continues its rapid development.
