Appendix A. Video Basics

Computer graphics and video differ in a number of ways; understanding the differences can help you produce better results with the VL and your Silicon Graphics video option. This appendix introduces some of the important terms and concepts used in conjunction with video. For more detail about a particular term, see the Glossary included in this guide.

Video differs from computer graphics in these ways:

Interlacing

Most video signals are interlaced: each time the video screen is refreshed, only every other horizontal line is drawn. On the next refresh, the alternate lines are drawn. That is, each frame is composed of two fields.

During one screen refresh, the video monitor draws the first field, which contains all the odd-numbered lines; during the next refresh, it draws the second field, which contains all the even-numbered lines. Therefore, two refresh cycles are required to draw one frame.
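The odd/even split described above can be illustrated with a short sketch (illustrative only; line-numbering conventions vary by standard):

```python
# Split a frame (a list of scan lines) into its two interlaced fields.
# Here "odd field" means lines 1, 3, 5, ... of the frame (1-based).
# Real NTSC fields contain 262.5 lines each because of half lines,
# which a simple list split cannot represent.

def split_into_fields(frame_lines):
    odd_field = frame_lines[0::2]    # lines 1, 3, 5, ...
    even_field = frame_lines[1::2]   # lines 2, 4, 6, ...
    return odd_field, even_field

frame = ["line %d" % n for n in range(1, 526)]   # a 525-line NTSC frame
odd, even = split_into_fields(frame)
print(len(odd), len(even))   # 263 262
```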

The display rate of an interlaced video signal can be measured in two ways: as the field rate (also called the refresh rate), or as the frame rate, which is half the field rate, because each frame contains two fields.
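The relationship between field rate and frame rate is simple arithmetic; a minimal sketch:

```python
# Frame rate is half the field rate, because each frame
# contains two fields.

def frame_rate(field_rate_hz):
    return field_rate_hz / 2.0

print(frame_rate(59.94))  # NTSC: 29.97 frames per second
print(frame_rate(50.0))   # PAL:  25.0 frames per second
```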

Figure A-1 shows a frame and its two fields for NTSC, the broadcast standard used in North America and some other parts of the world, and PAL, the broadcast standard used in much of Europe and elsewhere.

Figure A-1. Fields and Frame


In contrast, the Silicon Graphics workstation monitor is typically noninterlaced: it draws every line each time it refreshes the screen. Refresh rates vary, depending on the type of monitor your Silicon Graphics workstation has. The video output capability of the graphics subsystem for some Silicon Graphics workstation models supports interlaced monitor formats, including component RGB at 525 and 625 lines per frame.

Broadcast Standards

Broadcast standards are ways of encoding video information for broadcast to television receivers. Because these standards also describe the display capabilities of video monitors, they are also called video timing formats or video output formats (VOFs). The three broadcast standards are:

NTSC 

Named after the National Television Systems Committee, which developed it, this standard is used in all of North and South America, except Brazil, and in much of East Asia.

PAL  

(Phase Alternated by Line) This standard is used in western Europe, including the United Kingdom but excluding France, and in parts of Asia and Oceania, including Australia.

SECAM  

(Séquentiel Couleur avec Mémoire) This standard is used in France, eastern Europe, the Near East and Mideast, and parts of Africa and the Caribbean.


Note: NTSC implementations can vary slightly by country; PAL and SECAM implementations can vary considerably.

NTSC employs a total of 525 horizontal lines per frame, with two fields per frame of 262.5 lines each. Each field refreshes at 60 Hz (actually 59.94 Hz). NTSC encodes brightness, color, and synchronizing information in one signal.

PAL employs a total of 625 horizontal lines per frame, with two fields per frame of 312.5 lines each. Each field refreshes at 50 Hz. PAL also encodes brightness, color, and synchronizing information in one signal, but in a different way from NTSC.

SECAM transmits the same number of lines at the same rate as PAL, but transmits the two color difference signals on alternate lines, using frequency modulation of the subcarrier.

These numbers of horizontal lines—525 and 625, respectively—are a shorthand description of what actually happens. For NTSC, the first (odd) field starts with a whole line and ends with a half line; the second (even) field starts with a half line and ends with a whole line. Each NTSC field contains 242.5 active lines and 20 lines of vertical blanking.

Similarly, for PAL, the first (even) field starts with a half line and ends with a whole line; the second (odd) field starts with a whole line and ends with a half line. Each PAL field contains 287.5 active lines and 25 lines of vertical blanking.

In each case, the numbers 525 and 625 refer to transmitted lines; the active video lines are fewer—typically, 486 for NTSC and 576 for PAL. The remaining lines are used for delimiting frame boundaries and for synchronization and other information.
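The line budgets above can be checked arithmetically (all figures taken from the text):

```python
# Per-field line budget for NTSC and PAL, from the figures in the text:
# active lines plus vertical-blanking lines per field, two fields per frame.
standards = {
    # name: (active lines per field, blanking lines per field)
    "NTSC": (242.5, 20),
    "PAL":  (287.5, 25),
}

for name, (active, blanking) in standards.items():
    lines_per_field = active + blanking
    lines_per_frame = lines_per_field * 2
    print(name, lines_per_field, lines_per_frame)
# NTSC: 262.5 lines per field, 525 per frame
# PAL:  312.5 lines per field, 625 per frame
```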

To minimize frame flickering and reduce the bandwidth of the video signal, the active video lines are interlaced, as explained earlier in this appendix.

NTSC and PAL can be recorded digitally; these recording techniques are referred to as D2 525 (digital NTSC) and D2 625 (digital PAL).

Color Encoding

Color-encoding methods are:

  • RGB (component)

  • YUV (component)

  • YIQ (component)

  • YC (separate luminance (Y) and chrominance (C)): YC-358, YC-443,
    and S-Video

  • composite video

RGB

RGB is the color-encoding method used by most graphics computers, as well as some professional-quality video cameras. The three colors red, green, and blue are generated separately; each is carried on a separate wire.

YUV

YUV, a form of which is used by the PAL video standard and by Betacam® and D1 cameras and VCRs, is another component color-encoding method. In this case, brightness, or luminance, is carried on a signal known as Y. Color is carried on the color difference signals, U and V, which are B-Y and R-Y, respectively.

The YUV matrix multiplier derives colors from RGB via the following formula:

Y = .299R + .587G + .114B
CR = R-Y
CB = B-Y

in which Y represents luminance and R-Y and B-Y represent the color difference signals used by this format. In this system, which is sometimes referred to as Y/R-Y/B-Y, R-Y corresponds to CR and V, and B-Y corresponds to CB and U. R-Y and B-Y are obtained by subtracting luminance (Y) from the red (R) and blue (B) camera signals, respectively. CR, CB, V, and U are derived through different normalization methods, depending on the video format used. The U and V signals are sometimes subsampled by a factor of 2 and then carried on the same signal, which is known as 4:2:2.
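The derivation above can be sketched in code as follows. This is an illustrative sketch only: it produces the unnormalized color difference signals R-Y and B-Y, since the scale factors that turn them into U/V or CB/CR depend on the video format, as noted above.

```python
# Derive luminance and unnormalized color-difference signals from RGB.
# R, G, and B are assumed to be in the range 0.0-1.0. Normalization of
# R-Y and B-Y into U/V or CB/CR varies by video format, so it is
# omitted here.

def rgb_to_color_difference(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (Y)
    return y, r - y, b - y                   # Y, R-Y, B-Y

# Pure white: full luminance, and both color difference signals vanish.
y, r_y, b_y = rgb_to_color_difference(1.0, 1.0, 1.0)
print(y, r_y, b_y)
```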

YUV component color encoding can be recorded digitally, according to the CCIR 601 standard; this recording technique is referred to as D1.

YIQ

YIQ color encoding, which is typically used by the NTSC video format, encodes color onto two signals called I and Q (for in-phase and quadrature, respectively). These two signals have different phase modulation in NTSC transmission. Unlike the U and V components of YUV, I and Q are carried on different bandwidths.

The YIQ formula is as follows:

Y = .299 R + .587 G + .114 B (the same as for YUV)
I = .596 R - .275 G - .321 B
Q = .212 R - .523 G + .311 B
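The matrix above translates directly into a small sketch (R, G, and B assumed to be in the range 0.0-1.0):

```python
# RGB to YIQ using the coefficients given in the text.

def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance, same as YUV
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

# Pure red: each output is simply that color's R coefficient.
print(rgb_to_yiq(1.0, 0.0, 0.0))   # Y = 0.299, I = 0.596, Q = 0.212
```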

YC, YC-358, YC-443, or S-Video

YC, a two-wire signal, results when I and Q are combined into one signal, called chrominance (C). Chrominance is a quadrature phase amplitude-modulated signal. In the NTSC broadcast standard, U is the 0-degree modulation and V is at 90 degrees. In the PAL broadcast standard, the V component is modulated at +/- 90 degrees line-to-line for the active picture and +/- 135 degrees for the reference burst.
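Quadrature amplitude modulation of the two color difference signals onto a single chrominance signal can be sketched as follows. This follows the simplified 0-degree/90-degree description above; the exact phase conventions and the subcarrier frequency (the NTSC value is used here for illustration) depend on the broadcast standard.

```python
import math

# Combine two color-difference signals into one chrominance signal by
# quadrature amplitude modulation: U modulates the 0-degree (sine)
# carrier and V the 90-degree (cosine) carrier.

F_SC = 3.579545e6  # Hz; NTSC color subcarrier, for illustration

def chrominance(u, v, t):
    phase = 2.0 * math.pi * F_SC * t
    return u * math.sin(phase) + v * math.cos(phase)

# At t = 0 the sine carrier is zero, so only the V component appears:
print(chrominance(0.3, 0.4, 0.0))   # 0.4
```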

YC-358 is the most common NTSC version of this luminance/chrominance format; YC-443 is the most common PAL version. These formats are also known as S-Video; S-Video is one of the formats used for S-VHS[tm] videotape recorders.

Composite Video

The composite color-encoding schemes combine the brightness and color signals into one signal for broadcast. NTSC and PAL both combine brightness and color but use different methods.

Figure A-2 shows the relationships between color-encoding methods and video formats.

Figure A-2. Relationships Between Color-encoding Methods and Video Formats


Video Signals

The video signal, whatever the broadcast standard being used, carries other information besides video (luminance and chrominance) and audio. For example, horizontal and vertical synchronization information is required, as well as a color phase reference, which is called color sync burst. Figure A-3 shows a composite video signal waveform.

Figure A-3. Composite Video Waveform


Videotape Formats

Videotape recorders are available for analog and digital recording in various formats. They are further classified by performance level or market: consumer, professional, and broadcast. In addition, during postproduction (editing, including addition of graphics), the original footage can be transferred to digital media; digital videotape formats are available for composite and component video formats. There are no official standards for videotape classifications.

Table A-1 summarizes the formats.

Table A-1. Tape Formats and Video Formats

Electronics  Consumer            Professional                Broadcast                   Postproduction

Analog       VHS cassette        U-Matic[tm] (SP)            Type C reel-to-reel,
             (composite)         cassette, 3/4-inch          1-inch (composite)
                                 (composite)

             S-VHS                                           Type B (Europe)
             (YC, composite)                                 (composite)

                                 S-Video (YC-358)            S-Video (YC-358)

             Beta (composite)

             8 mm (composite)    Hi-8mm[tm]
                                 (YC, composite)

             Hi-8mm (YC)         Betacam                     Betacam SP
                                 (component)                 (YUV, YIQ, composite)

                                                             MII[tm]
                                                             (YUV, YIQ, composite)

Digital                                                                                  D1 525 (YUV)
                                                                                         D1 625 (YUV)
                                                                                         D2 525 (NTSC)
                                                                                         D2 625 (PAL)

Although the VL and other software for Silicon Graphics video options do not distinguish between videotape formats, you need to know what kind of connector your video equipment uses. For example, the Galileo board has composite and S-Video connectors.

Most home VCRs use composite connectors. S-Video, on the other hand, carries the color and brightness components of the picture on separate wires; hence, S-Video connectors are also called Y/C connectors. Most S-VHS and Hi-8mm VCRs feature S-Video connectors.


Note: For definitions of video terms, consult the Glossary at the end of this guide.