What aspects of good image quality should I look for?
Unlike traditional analog cameras, digital network cameras have the processing power not only to capture and present images, but also to digitally manage and compress them for network transport. Image quality can vary considerably and depends on the choice of optics and image sensor, the available processing power and the sophistication of the algorithms in the processing chip. To summarise, look specifically at:
The type of image sensor
There are two types: CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor). CCD sensors are produced using a technology developed specifically for the camera industry, while CMOS sensors are made with the same technology used for computer chips.
Low light capabilities
A camera might deliver reasonable image quality in bright light conditions, but it may be unsuitable for typical indoor situations.
Lens replacement
A high quality lens can deliver better images. Most professional level cameras use a so-called C or CS mount, and some lenses feature auto-iris control for improving the dynamic range.
Image resolution
Higher resolution means more detail. Network cameras now deploy megapixel sensors that capture far more detail than analog CCTV cameras, which are bound to the resolutions of TV standards.
Backlight compensation
While a camera's automatic exposure control tries to render the lightness of an image as the human eye would see the scene, it can easily be fooled. Backlight compensation strives to ignore small areas of high illumination, as if they were not present at all. With backlight compensation, an image containing a small, intense light source, such as a flashlight pointed at the camera, would have the same exposure whether or not that light source was present. The resulting image enables a person in the scene to be seen and identified. Without backlight compensation, the image would be too dark and identification would be impossible.
The ability to correctly capture moving objects
Another key feature to look for is progressive scan, which means images do not suffer from the 'saw' (jagged-edge) effect that hampers interlaced video. Interlacing is used in TVs and traditional analog CCTV cameras to increase the image frequency of moving pictures.
Additional image enhancements
The functions that drive these reside in the chip that handles image processing, and they affect colour, sharpness, exposure and white balance.
File size and bandwidth requirements
Digital cameras use image compression, and there is a trade-off between high-quality images and compressed images that require much less bandwidth. The JPEG standard is used to achieve the highest possible quality, while MPEG is optimised for lower bandwidth requirements.
Progressive scan vs. interlace video
Today, two different techniques are available to render the video: interlaced scanning and progressive scanning. Which technique is selected will depend on the application and purpose of the video system, and particularly whether the system is required to capture moving objects and to allow viewing of details within a moving image.
Interlaced scanning
Interlaced scan-based images use techniques developed for cathode ray tube (CRT)-based TV monitor displays, made up of 576 visible horizontal lines across a standard TV screen. Interlacing divides these into odd and even lines and then alternately refreshes them, at 50 fields (25 full frames) per second in PAL and 60 fields (30 full frames) per second in NTSC. The slight delay between odd and even line refreshes creates some distortion or 'jaggedness', because only half the lines keep up with the moving image while the other half waits to be refreshed.
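The odd/even split described above can be sketched in a few lines of Python. This is a toy illustration only; the line numbering and field names are assumptions for the example, not part of any video standard's API.

```python
# Illustration: how interlacing splits one frame's scan lines into two fields.
lines = list(range(1, 577))     # scan lines 1..576 of a PAL-style frame

odd_field = lines[0::2]         # lines 1, 3, 5, ... refreshed in one pass
even_field = lines[1::2]        # lines 2, 4, 6, ... refreshed in the next pass

# Each field carries only half the vertical detail of the full frame,
# which is why a moving object looks jagged between refreshes.
assert len(odd_field) == len(even_field) == 288
assert odd_field[0] == 1 and even_field[0] == 2
```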
Interlaced scanning has served the analog camera, television and VHS video world very well for many years, and is still the most suitable for certain applications. However, now that display technology is changing with the advent of liquid crystal display (LCD), thin film transistor (TFT)-based monitors, DVDs and digital cameras, an alternative method of bringing the image to the screen, known as progressive scanning, has been created.
Progressive scanning
Progressive scanning, as opposed to interlaced, scans the entire picture line by line in perfect order (1, 2, 3, 4, 5 and so on), so captured images are not split into separate fields as they are in interlaced scanning. Computer monitors do not need interlacing to show the picture on the screen, and there is virtually no 'flickering' effect. In a surveillance application this can be critical for viewing detail within a moving image, such as a person running away. A high-quality monitor is, however, required to get the best out of this type of scan.
Compression standards
Without effective compression, most local area networks (LANs) transporting video data would grind to a halt within minutes. Digital video is always compressed in order to speed up transmission and to save space on hard disks. That is why selection of the right compression format is a crucial consideration.
Image and video compression can be done either in a lossless or lossy approach. In lossless compression, each and every pixel is kept unchanged resulting in an identical image after decompression. The downside is that the compression ratio, ie the data reduction, is very limited. A well-known lossless compression format is GIF (graphics interchange format). Since the compression ratio is so limited, these formats are impractical for use in network video solutions where large amounts of images need to be stored and transmitted. Therefore, several lossy compression methods and standards have been developed. The fundamental idea is to reduce things that appear invisible to the human eye and by doing so, tremendously increase the compression ratio.
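The lossless case can be demonstrated with Python's standard zlib module, which here stands in for a lossless image format such as GIF. Note the two properties the text describes: decompression recovers the data exactly, and on complex (noise-like) content the compression ratio is very limited.

```python
import os
import zlib

# Lossless compression: the decompressed data is bit-for-bit identical to
# the original, but the achievable ratio depends heavily on the content.
noisy = os.urandom(100_000)                 # stands in for a detailed, noisy image
compressed = zlib.compress(noisy, level=9)  # maximum compression effort

assert zlib.decompress(compressed) == noisy   # nothing is lost

# Noise barely compresses: the ratio stays close to 1.
ratio = len(noisy) / len(compressed)
print(f"lossless ratio on noise-like data: {ratio:.2f}")
```

This limited ratio on detailed content is exactly why lossy methods, which discard information the eye cannot see, dominate in network video.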
Compression methods also involve two different approaches to compression standards: still image compression and video compression.
Still image compression standards
All still image compression standards are focused only on one single picture at a time. The most well known and widespread standard is JPEG.
JPEG
This is short for Joint Photographic Experts Group, a good and very popular standard for still images that is supported by many modern programs. With JPEG, decompression and viewing can be done from standard Web browsers.
JPEG compression can be done at different user-defined compression levels, which determine how much an image is to be compressed. The compression level selected is directly related to the image quality requested.
Besides the compression level, the image itself also has an impact on the resulting compression ratio. For example, a white wall may produce a relatively small image file (and a higher compression ratio), while the same compression level applied on a very complex and patterned scene will produce a larger file size, with a lower compression ratio.
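The effect of scene content on file size can be shown with a small sketch. Python's zlib is used here as a convenient stand-in for a JPEG encoder (the mechanism differs, but the content dependence is the same); the byte strings simulating the two scenes are illustrative assumptions.

```python
import os
import zlib

# Same compression settings, very different scenes:
white_wall = b"\xff" * 100_000       # flat scene: one repeated pixel value
complex_scene = os.urandom(100_000)  # stand-in for a busy, patterned scene

wall_size = len(zlib.compress(white_wall, level=6))
busy_size = len(zlib.compress(complex_scene, level=6))

# The flat scene yields a far smaller file (higher compression ratio)
# than the complex scene at identical settings.
assert wall_size < busy_size
```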
JPEG2000
Another still image compression standard is JPEG2000, which was developed by the same group that developed JPEG. It is mainly targeted at medical applications and still image photography. At low compression ratios it performs similarly to JPEG, but at really high compression ratios it performs slightly better. The downside is that support for JPEG2000 in Web browsers and image display and processing applications is still very limited.
Video compression standards
Motion JPEG
Motion JPEG offers video as a sequence of individual JPEG images, and is the most commonly used standard in network video systems. A network camera, like a digital still picture camera, captures individual images and compresses them into JPEG format. The network camera can capture and compress, for example, 30 such individual images per second (30 fps, frames per second), and then make them available as a continuous flow of images over a network to a viewing station. At a frame rate of about 16 fps and above, the viewer perceives full motion video. As each individual image is a complete JPEG-compressed image, all frames have the same guaranteed quality, determined by the compression level chosen for the network camera or video server.
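Because every Motion JPEG frame is a complete JPEG, the bandwidth requirement is roughly frame size times frame rate. A back-of-envelope calculation, where the 25 kB average frame size is an illustrative assumption rather than a specification:

```python
# Rough Motion JPEG bandwidth: every frame is a full JPEG image,
# so bandwidth scales linearly with frame size and frame rate.
def mjpeg_bandwidth_mbps(frame_bytes: int, fps: int) -> float:
    """Approximate stream bandwidth in megabits per second."""
    return frame_bytes * 8 * fps / 1_000_000

# e.g. ~25 kB frames at 30 fps:
print(mjpeg_bandwidth_mbps(25_000, 30))  # -> 6.0 Mbit/s
```

This linear scaling is the key trade-off against MPEG, which avoids resending unchanged image content.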
H.263
The H.263 compression technique targets a fixed bit rate video transmission. The downside of having a fixed bit rate is that when an object moves, the quality of the image decreases. H.263 was originally designed for video conferencing applications and not for surveillance where details are more crucial than fixed bit rate.
MPEG
One of the best-known audio and video streaming techniques is the standard called MPEG (initiated by the Moving Picture Experts Group in the late 1980s). This section focuses on the video part of the MPEG standards.
MPEG’s basic principle is to compare consecutive images to be transmitted over the network. The first compressed image is used as a reference frame, and for the following images only the parts that differ from the reference are sent. The network viewing station then reconstructs each image from the reference frame and the ‘difference data’.
Despite higher complexity, applying MPEG video compression leads to lower data volumes being transmitted across the network than is the case with Motion JPEG.
Advanced video coding (AVC)
The two groups behind H.263 (the ITU-T) and MPEG (the Moving Picture Experts Group) recently joined forces to develop the next-generation video compression standard. H.264, MPEG-4 Part 10 and AVC all refer to this new standard. It is expected that within the next few years Advanced Video Coding will replace the currently used H.263 and MPEG-4.
Conclusion
For most applications, Motion JPEG is a natural choice, balancing the needs for low bit rate and video quality. MPEG-4 has the advantage of saving disk space and transmission bandwidth but raises the demands on the viewing station. In a pure storing or viewing application, MPEG-4 might be the preferred choice. But if there are also needs for some analysis of what really happened, Motion JPEG is preferred.
Resolution
Resolution in an analog or digital world is similar, but there are some important differences in how it is defined. In analog video, the image consists of lines, or TV lines, since analog video technology is derived from the television industry. In a digital system, the picture is made up of pixels (picture elements). The resolution of digital cameras is measured by the number of effective pixels on the image sensor chip.
NTSC and PAL resolutions
In North America and Japan, the NTSC standard (National Television System Committee) is the predominant analog video standard, while in Europe the PAL standard (phase alternation by line) is used. Both standards originate from the television industry. NTSC has a resolution of 480 horizontal lines, and a frame rate of 30 fps. PAL has a higher resolution with 576 horizontal lines, but a lower frame rate of 25 fps. The total amount of information per second is the same in both standards.
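The claim that both standards carry the same amount of information per second follows from simple arithmetic: lines per frame multiplied by frames per second.

```python
# Lines per second for each analog TV standard:
ntsc = 480 * 30   # NTSC: 480 visible lines at 30 fps
pal = 576 * 25    # PAL: 576 visible lines at 25 fps

# Both deliver the same number of scan lines every second.
assert ntsc == pal == 14_400
```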
VGA resolution
With the introduction of network cameras, 100% digital systems can be designed. This renders the limitations of NTSC and PAL irrelevant. Several new resolutions derived from the computer industry have been introduced, providing better flexibility and moreover, they are worldwide standards.
VGA is an abbreviation of video graphics array, a graphics display system for PCs originally developed by IBM. The resolution is defined at 640 x 480 pixels, a very similar size to NTSC and PAL. The VGA resolution is normally better suited for network cameras since the video in most cases will be shown on computer screens, with resolutions in VGA or multiples of VGA. Quarter VGA (QVGA) with a resolution of 320 x 240 pixels is also a commonly used format, very similar in size to CIF. QVGA is sometimes called SIF (standard interchange format) resolution, which can be easily confused with CIF.
Other VGA-based resolutions are XGA (1024 x 768 pixels) and 1280 x 960 pixels, which is four times VGA and provides megapixel resolution.
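The pixel counts behind these relationships are easy to verify: QVGA is exactly a quarter of VGA, and 1280 x 960 is exactly four times VGA and crosses the one-million-pixel ("megapixel") mark.

```python
# Pixel counts for the VGA family of resolutions:
vga = 640 * 480          # 307,200 pixels
qvga = 320 * 240         # quarter VGA
megapixel = 1280 * 960   # four times VGA

assert qvga * 4 == vga
assert megapixel == vga * 4
assert megapixel > 1_000_000   # hence "megapixel resolution"
```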
MPEG resolution
MPEG resolution usually means one of the following resolutions:
* 704 x 480 pixels (TV NTSC)
* 704 x 576 pixels (TV PAL)
* 720 x 480 pixels (DVD-Video NTSC)
* 720 x 576 pixels (DVD-Video PAL)
Day and night functionality
Certain environments or situations restrict the use of artificial light, making infrared (IR) cameras particularly useful. These include low-light surveillance applications, where light conditions are less than optimal, as well as discreet and covert surveillance situations. Infrared-sensitive cameras, which can make use of invisible infrared light, can be applied, for instance, in a residential area late at night without disturbing residents. They are also useful when the surveillance cameras should not be evident.
Light perception
Light is a form of radiation wave energy that exists in a spectrum. The human eye can see, however, only a portion (between wavelengths of ~400–700 nanometres or nm). Below blue, just outside the range humans can see, is ultraviolet light, and above red is infrared light.
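These bands can be expressed as a small classification function. The cut-off values are the article's approximate figures (~400 nm and ~700 nm), not precise physical constants, and the function name is chosen for this example:

```python
# Rough spectral bands from the text: ultraviolet below ~400 nm,
# visible ~400-700 nm, infrared above ~700 nm.
def classify_light(wavelength_nm: float) -> str:
    if wavelength_nm < 400:
        return "ultraviolet"
    if wavelength_nm <= 700:
        return "visible"
    return "infrared"

assert classify_light(550) == "visible"    # green light
assert classify_light(850) == "infrared"   # typical IR illuminator wavelength
```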
Infrared energy (light) is emitted by all objects: humans, animals and grass, for instance. Warmer objects such as people and animals stand out from typically cooler backgrounds. In low light conditions, for example at night, the human eye cannot perceive colour and hue — only black, white and shades of grey.
How does the day and night functionality or IR-cut filter work?
While the human eye can only register light between the blue and red parts of the spectrum, a colour camera’s image sensor can detect more. The sensor can register long-wave infrared radiation and thus ‘see’ infrared light. Allowing infrared to hit the image sensor during daylight, however, distorts colours as humans see them. This is why all colour cameras are equipped with an IR-cut filter, a piece of optical glass placed between the lens and the image sensor, to remove IR light and render the colour images that humans are used to.
As illumination is reduced and the image darkens, the IR-cut filter in a day and night camera can be removed automatically to enable the camera to make use of IR light so that it can ‘see’ even in a very dark environment. To avoid colour distortions, the camera often switches to black and white mode and is thus able to generate high quality black and white images.
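A hypothetical sketch of the switching logic described above: below a low-light threshold the IR-cut filter is removed and the camera drops to black and white; using two thresholds (hysteresis) prevents the camera from flickering between modes at dusk. The threshold values and function name here are illustrative assumptions, not figures from any camera's specification.

```python
# Hypothetical day/night mode controller with hysteresis.
def next_mode(current_mode: str, lux: float,
              night_below: float = 2.0, day_above: float = 10.0) -> str:
    if current_mode == "day" and lux < night_below:
        return "night"          # remove IR-cut filter, switch to B/W
    if current_mode == "night" and lux > day_above:
        return "day"            # reinsert IR-cut filter, back to colour
    return current_mode         # inside the hysteresis band: no change

assert next_mode("day", 1.0) == "night"
assert next_mode("night", 5.0) == "night"   # 5 lux: stays in night mode
assert next_mode("night", 50.0) == "day"
```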
Source: Axis Communications’ Technical Guide to Network Video.
For more information contact Roy Alves, Axis Communications SA, +27 (0)11 548 6780, [email protected], www.axis.com
© Technews Publishing (Pty) Ltd. | All Rights Reserved.