Development of the Television Camera and Receiver

V. K. Zworykin's iconoscope (1923) was the first successful camera tube in wide use. Its operation involved many fundamental principles common to all television image pickup devices. The face of the iconoscope consisted of a thin sheet of mica on which thousands of microscopic globules of a photosensitive silver-cesium compound had been deposited. Backed with a metallic conductor, this expanse of mica became a mosaic of tiny photoelectric cells and capacitors. The differing light intensities at various points of a scene caused the cells of the mosaic to emit varying quantities of electrons, leaving the cells with positive charges proportional to the number of electrons lost. An electron gun, or scanner, passed its beam across the cells; as it did so, each cell's charge was released, causing an electrical signal to appear on the back of the mosaic, which was connected externally to an amplifier. The strength of the signal was proportional to the amount of charge released. The iconoscope provided good resolution but required very high light levels and constant manual correction.
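The charge-storage principle at work here, with light building up charge on each cell and a scanning beam releasing the charges one by one as a signal, can be illustrated with a short sketch. The following Python toy model is a numerical illustration only, not a physical simulation; the scene values, mosaic size, and sensitivity constant are invented for the example.

    # Toy model of the iconoscope's charge-storage principle.
    # Light charges each mosaic cell; a scanning beam then visits
    # the cells row by row, releasing each charge as one sample
    # of the video signal.

    def expose(scene, sensitivity=0.8):
        """Charge each mosaic cell in proportion to the light on it."""
        return [[sensitivity * light for light in row] for row in scene]

    def scan(mosaic):
        """Sweep the mosaic row by row; each released charge is a sample."""
        signal = []
        for row in mosaic:
            for charge in row:
                signal.append(charge)  # signal strength ~ charge released
        return signal

    # A tiny 3x4 "scene": values are relative light intensities.
    scene = [
        [0.1, 0.5, 0.9, 0.5],
        [0.2, 0.8, 1.0, 0.4],
        [0.1, 0.4, 0.6, 0.3],
    ]
    video_signal = scan(expose(scene))
    print(video_signal)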

The orthicon and image-orthicon camera tubes improved on the iconoscope. They used light-sensitive granules deposited on an insulator and scanned them with a low-velocity electron beam. These tubes could operate at lower light levels than the iconoscope required and did not need constant manual adjustment. The vidicon was the first successful television camera tube to use a photoconductive surface to derive a video signal.

Solid-state imaging devices were first demonstrated in the 1960s. Video cameras using semiconductor charge-coupled devices (CCDs) were developed in the 1970s and began replacing tube-based cameras in the mid-1980s. Each picture element (pixel) in a CCD stores a charge that is determined by the illumination incident on it. At the end of the exposure interval, the charge is transferred to a storage register, freeing the CCD for the next exposure; while that exposure proceeds, the charges in the storage register are transferred serially to the output stage. By the mid-1990s, CCD-based television cameras had replaced tube-based cameras, but at the same time development was proceeding on a different solid-state technology, the complementary metal-oxide-semiconductor (CMOS) image sensor. CMOS technology is also used for computer integrated circuits and random-access memory (RAM), and CMOS image sensors are less expensive to manufacture than CCDs. In general a CMOS image sensor operates similarly to a CCD, but additional processing occurs at each pixel, and the pixels transfer their output more quickly and in digital form. Although CMOS-based cameras initially were inferior to CCD-based ones for high-quality uses, steady improvements in CMOS technology led by the 2010s to CMOS sensors replacing CCDs in many television and video cameras. High-end 3CCD and 3CMOS video cameras use three sensors, one each for red, green, and blue, for improved color image quality.
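The expose-transfer-read-out cycle described above can be sketched in code. This is a minimal illustrative model assuming a simple frame-transfer arrangement; the class name ToyCCD and all numeric values are invented for the example, and a real device shifts analog charge packets rather than copying arrays.

    # Toy model of CCD frame-transfer readout. After an exposure,
    # the image area is copied to a storage register, freeing the
    # pixels for the next exposure; the stored charges are then
    # shifted out serially to the output stage.

    import copy

    class ToyCCD:
        def __init__(self, rows, cols):
            self.image_area = [[0.0] * cols for _ in range(rows)]
            self.storage = None

        def expose(self, scene):
            # Each pixel accumulates charge proportional to incident light.
            for r, row in enumerate(scene):
                for c, light in enumerate(row):
                    self.image_area[r][c] += light

        def transfer_to_storage(self):
            # Copy all charges into the storage register and clear the
            # image area so the next exposure can begin immediately.
            self.storage = copy.deepcopy(self.image_area)
            for row in self.image_area:
                for c in range(len(row)):
                    row[c] = 0.0

        def read_out(self):
            # Serial readout: charges leave the storage register one by one.
            for row in self.storage:
                for charge in row:
                    yield charge

    ccd = ToyCCD(2, 3)
    ccd.expose([[0.2, 0.7, 0.9], [0.1, 0.5, 0.8]])
    ccd.transfer_to_storage()
    ccd.expose([[0.3, 0.6, 0.4], [0.2, 0.9, 0.1]])  # next frame accumulates...
    print(list(ccd.read_out()))                      # ...while this one reads out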

In the television receiver, the original image is reconstructed. In television receivers using cathode-ray tubes, this was done essentially by reversing the operation of the video camera. The final 483- or 480-line interlaced image was displayed on the face of the tube, where an electron beam scanned the fluorescent face, or screen, line for line in step with the pickup scanning. The fluorescent deposit on the tube's inside face glowed when struck by the electrons, reproducing the visual image.

In a television set with a liquid crystal display (LCD, also called LED LCD or LED if light-emitting diode backlighting is used), which also recreates the image line by line, control signals are sent to the lines and columns formed by the hundreds of thousands of pixels in the display, and each pixel is connected to a switching device. In high-definition televisions, 720 or 1080 display lines are used, with 1080 now typically standard, and the scanning may be progressive (noninterlaced; 720 and 1080) or interlaced (1080 only), typically at 30 frames per second. Progressive scanning in general produces a picture with less flicker and better reproduction of motion, particularly on larger screens. An ultra-high-definition, or 4K, display uses 2160 lines for the image. Other devices in the receiver extract the crucial synchronization information from the signal, demodulate it (separate the information signal from the carrier wave), and, in the case of a digital signal, demultiplex, decrypt, and decode it.
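The difference between progressive and interlaced scanning comes down to the order in which the display lines of a frame are drawn, which the short sketch below illustrates. The line count is reduced to eight for readability; actual formats use 480, 720, 1080, or 2160 lines.

    # Line-drawing order for the two scanning patterns.

    def progressive_order(lines):
        """Every line of the frame is drawn in sequence, top to bottom."""
        return list(range(lines))

    def interlaced_order(lines):
        """One field carries every other line; the second field then
        fills in the lines between them, completing the frame."""
        first_field = list(range(0, lines, 2))
        second_field = list(range(1, lines, 2))
        return first_field + second_field

    print(progressive_order(8))  # [0, 1, 2, 3, 4, 5, 6, 7]
    print(interlaced_order(8))   # [0, 2, 4, 6, 1, 3, 5, 7]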

The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2024, Columbia University Press. All rights reserved.
