Chapter 2. Creating Video Programs With the Video Library

VL calls let you perform video teleconferencing on platforms that support it, blend computer-generated graphics with frames from videotape or any video source, and output the input video source to the graphics monitor, to a video device such as a VCR, or both.

This chapter explains

  • opening a connection to the video daemon and setting up a data path

  • setting parameters for data transfer to or from memory

  • displaying video data onscreen

  • transferring video data to and from devices

The chapter concludes with example code illustrating a simple screen application and frame grabs (video to memory, memory to video, and continuous frame capture).

To run VL, you must

The client library is /usr/lib/libvl.so. The VL header files are in /usr/include/dmedia. The main header file, vl.h, contains the definitions of the VL API and its controls. The header file for Indigo2 Video for Indigo2 IMPACT is dev_ev1.h (linked to /usr/include/dmedia/vl_ev1.h).


Note: When building a VL-based program, you must add -lvl to the linking command.
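
For example, a typical compile-and-link line for a single-file program (the file name myapp.c is illustrative) is:

cc -o myapp myapp.c -lvl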


Opening a Connection to the Video Daemon and Setting Up a Data Path

Preliminary procedures required to create the data path are:

  • opening the device

  • specifying nodes on the data path

  • creating and setting up the data path

Each procedure is explained separately.

Opening a Connection to the Video Daemon

The first thing a VL application must do is open the device with vlOpenVideo(). Its function prototype is:

VLServer vlOpenVideo(const char *sName) 

where sName is the name of the server to which to connect; set it to an empty string ("") to connect to the local server. For example:

vlSvr = vlOpenVideo("");
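
Because vlOpenVideo() returns NULL on failure, check the return value before proceeding; a minimal sketch:

VLServer vlSvr;

if (!(vlSvr = vlOpenVideo(""))) {
    fprintf(stderr, "cannot connect to the video daemon\n");
    exit(1);
}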

Specifying Nodes on the Data Path

Use vlGetNode() to specify nodes; this call returns the node's handle. Its function prototype is:

VLNode vlGetNode(VLServer vlSvr, int type, int kind, int number) 

where:

VLNode 

is a handle for the node, used when setting controls or setting up paths

vlSvr 

names the server (as returned by vlOpenVideo())

type 

specifies the type of node:

  • VL_SRC: source

  • VL_DRN: drain

  • VL_DEVICE: device for device-global controls


    Note: If you are using VL_DEVICE, the kind should be set to 0.


  • VL_INTERNAL: internal node, such as the blend node

kind 

specifies the kind of node:

  • VL_VIDEO: connection to a video device; for example, a video tape deck or camera

  • VL_MEM: region of workstation memory

  • VL_SCREEN: workstation screen

  • VL_BLENDER: a blender node


    Note: The use of VL_BLENDER is explained in Chapter 5, "Blending, Keying, and Transitions," later in this guide.


  • VL_ANY: use any available node

number 

is the number of the node in cases of two or more identical nodes, such as two video source nodes

To use the default node kind, use VL_ANY.

nodehandle = vlGetNode(vlSvr, VL_SRC, VL_VIDEO, VL_ANY);

To discover which node the default is, use the control VL_DEFAULT_SOURCE after getting the node handle the normal way. The default video source is maintained by the VL. For example:

vlGetControl(vlSvr, path, VL_ANY, VL_DEFAULT_SOURCE, &ctrlval);
nodehandle = vlGetNode(vlSvr, VL_SRC, VL_VIDEO,
     ctrlval.intVal);

In the first call above, the last argument is a pointer to a VLControlValue structure that receives the value; its intVal member is then passed to vlGetNode() as the node number.
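
Putting these calls together, a path that captures video to memory typically needs a video source node and a memory drain node. A minimal sketch (the variable names src and memdrn are illustrative):

VLNode src, memdrn;

src    = vlGetNode(vlSvr, VL_SRC, VL_VIDEO, VL_ANY);
memdrn = vlGetNode(vlSvr, VL_DRN, VL_MEM, VL_ANY);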

Creating and Setting Up the Data Path

Once nodes are specified, use VL calls to:

  • create the path

  • get the device ID

  • add nodes (optional step)

  • set up the data path

  • specify the path-related events to be captured

Creating the Path

Use vlCreatePath() to create the data path. Its function prototype is:

VLPath vlCreatePath(VLServer vlSvr, VLDev vlDev,
    VLNode src, VLNode drn) 

This code fragment creates a path if the device is unknown:

if ((path = vlCreatePath(vlSvr, VL_ANY, src, drn)) < 0) {
    vlPerror(_progName);
    exit(1);
}

This code fragment creates a path that uses a device specified by parsing a devlist:

if ((path = vlCreatePath(vlSvr, devlist[devicenum].dev, src,
    drn)) < 0) {
    vlPerror(_progName);
    exit(1);
}


Note: If the path contains one or more invalid nodes, vlCreatePath() returns VLBadNode.


Getting the Device ID

If you specify VL_ANY as the device when you create the path, use vlGetDevice() to discover the device ID selected. Its function prototype is:

VLDev vlGetDevice(VLServer vlSvr, VLPath path)

For example:

devicenum = vlGetDevice(vlSvr, path);
deviceName = devlist.devices[devicenum].name;
printf("Device is: %s/n", deviceName);

Adding a Node

For this optional step, use vlAddNode(). Its function prototype is:

int vlAddNode(VLServer vlSvr, VLPath vlPath, VLNodeId node)

where:

vlSvr 

names the server to which the path is connected

vlPath 

is the path as defined with vlCreatePath()

node 

is the node ID

This example fragment adds a source node and a blend node:

vlAddNode(vlSvr, vlPath, src_vid);
vlAddNode(vlSvr, vlPath, blend_node);

Setting Up the Data Path

Use vlSetupPaths() to set up the data path. Its function prototype is:

int vlSetupPaths(VLServer vlSvr, VLPathList paths,
      u_int count, VLUsageType ctrlusage,
      VLUsageType streamusage) 

where:

vlSvr 

names the server to which the path is connected

paths 

specifies a list of paths you are setting up

count 

specifies the number of paths in the path list

ctrlusage 

specifies usage for path controls:

  • VL_SHARE: other paths can set controls on this node; this is the setting that allows other paths, including vcp, to work


    Note: When using VL_SHARE, pay attention to events. If another user has changed a control, a VLControlChanged event occurs.


  • VL_READ_ONLY: controls cannot be set, only read; use this setting, for example, to monitor controls

  • VL_LOCK: prevents other paths from setting controls on this path; controls cannot be used by another path

  • VL_DONE_USING: the resources are no longer required; the application releases this set of paths for other applications to acquire

streamusage 

specifies usage for the data:

  • VL_SHARE: transfers can be preempted by other users; paths contend for ownership


    Note: When using VL_SHARE, pay attention to events. If another user has taken over the device, a VLStreamPreempted event occurs.


  • VL_READ_ONLY: the path cannot perform transfers, but other resources are not locked; set this value to use the path for controls

  • VL_LOCK: prevents other paths that share data transfer resources with this path from transferring; existing paths that share resources with this path will be preempted

  • VL_DONE_USING: the resources are no longer required; the application releases this set of paths for other applications to acquire

This example fragment sets up a path with shared controls and a locked stream:

if (vlSetupPaths(vlSvr, (VLPathList)&path, 1, VL_SHARE,
    VL_LOCK) < 0)
{
    vlPerror(_progName);
    exit(1);
}

Specifying the Path-Related Events to Be Captured

Use vlSelectEvents() to specify the events you want to receive. Its function prototype is:

int vlSelectEvents(VLServer vlSvr, VLPath path, VLEventMask eventmask)

where:

vlSvr 

names the server to which the path is connected.

path 

specifies the data path.

eventmask 

specifies the event mask; Table 2-1 lists the possibilities.

Table 2-1 lists and describes the VL event masks.

Table 2-1. VL Event Masks

Symbol                          Meaning
VLStreamBusyMask                Stream is locked
VLStreamPreemptedMask           Stream was grabbed by another application
VLAdvanceMissedMask             Time was already reached
VLSyncLostMask                  Irregular or interrupted signal
VLSequenceLostMask              Field or frame dropped
VLControlChangedMask            A control has changed
VLControlRangeChangedMask       A control range has changed
VLControlPreemptedMask          Control of a node has been preempted, typically by another
                                user setting VL_LOCK on a path that was previously set with
                                VL_SHARE
VLControlAvailableMask          Access is now available
VLTransferCompleteMask          Transfer of field or frame complete
VLTransferFailedMask            Error; transfer terminated; perform cleanup at this point,
                                including vlEndTransfer()
VLEvenVerticalRetraceMask       Vertical retrace event, even field
VLOddVerticalRetraceMask        Vertical retrace event, odd field
VLFrameVerticalRetraceMask      Frame vertical retrace event
VLDeviceEventMask               Device-specific event, such as a trigger
VLDefaultSourceMask             Default source changed

For example:

vlSelectEvents(vlSvr, path, VLTransferCompleteMask); 

Event masks can be Or'ed; for example:

vlSelectEvents(vlSvr, path, VLTransferCompleteMask |
      VLTransferFailedMask);
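
After selecting events, an application typically retrieves them in its main loop with vlNextEvent(), part of the VL event interface described in Chapter 4. The following is a minimal sketch of reacting to the two masks selected above (it assumes vlNextEvent() blocks until an event arrives):

VLEvent ev;
int done = 0;

while (!done) {
    vlNextEvent(vlSvr, &ev);
    switch (ev.reason) {
    case VLTransferComplete:
        /* a field or frame is ready in the ring buffer */
        break;
    case VLTransferFailed:
        vlEndTransfer(vlSvr, path);   /* error: stop the transfer, then clean up */
        done = 1;
        break;
    }
}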

Setting Parameters for Data Transfer to or From Memory

Transferring data to or from memory requires creating a ring buffer; its size is determined by the size of the frame data you are transferring.

To set frame data size and to convert from one video format to another, apply controls to the nodes. The use of source node controls and drain node controls is explained separately in this section.

Setting Source Node Controls for Data Transfer

Important data transfer controls for source nodes are summarized in
Table 2-2. They should be set in the order in which they appear in the table.

Table 2-2. Data Transfer Controls for Source Nodes

Control          Values                                         Basic Use
VL_MUXSWITCH     See Table 2-3                                  Determines physical input for path
VL_TIMING        Default: timing produced by active signal      Set or get video timing
                 VL_TIMING_525_SQ_PIX
                 VL_TIMING_625_SQ_PIX
                 For Betacam, MII, composite tape formats:
                 Analog: 12.27 MHz, 646 x 486
                 Analog: 14.75 MHz, 768 x 576
VL_SIZE          Coordinates                                    Set or get active unmodified video area
VL_SYNC_SOURCE   Composite 1: set 0
                 Composite 2: set 2

The use of VL_MUXSWITCH and VL_TIMING is explained in further detail in the following sections.

Using VL_MUXSWITCH

Use VL_MUXSWITCH to switch between physical inputs on a single path. Table 2-3 summarizes values to set.

Table 2-3. VL_MUXSWITCH Values

Connector                    Value
Y/C (RCA jacks)              0
Y/C (S-Video connector)      1
Composite input 1            3
Composite input 2            4

For the VL, the default source is the one that is plugged in and selected.

Using VL_TIMING

Timing type expresses the timing of video presented to a source or drain. Table 2-4 summarizes dimensions for VL_TIMING.

Table 2-4. Dimensions for Timing Choices

Timing                               Maximum Width    Maximum Height
VL_TIMING_525_SQ_PIX (12.27 MHz)     640              486
VL_TIMING_625_SQ_PIX (14.75 MHz)     768              576


Setting Drain Node Controls for Data Transfer

Important data transfer controls for drain nodes are summarized in
Table 2-5. They should be set in the order in which they appear in the table.

Table 2-5. Data Transfer Controls for Drain Nodes

VL_FORMAT
    Basic use:      Video format on the physical connector
    Video nodes:    See "Using VL_FORMAT" in this chapter
    Memory nodes:   —
    Screen nodes:   —

VL_TIMING
    Basic use:      Video timing
    Video nodes:    See Table 2-2 for values
    Memory nodes:   Not applicable
    Screen nodes:   Not applicable

VL_CAP_TYPE
    Basic use:      Setting type of field(s) or frame(s) to capture; see "Interlacing" in Appendix A
    Video nodes:    Not applicable
    Memory nodes:   VL_CAPTURE_NONINTERLEAVED, VL_CAPTURE_INTERLEAVED,
                    VL_CAPTURE_EVEN_FIELDS, VL_CAPTURE_ODD_FIELDS, VL_CAPTURE_FIELDS
    Screen nodes:   Not applicable

VL_PACKING
    Basic use:      Pixel packing (conversion) format
    Video nodes:    Not applicable
    Memory nodes:   Changes pixel format of captured data; see Table 2-6 for values
    Screen nodes:   Not applicable

VL_ZOOM
    Basic use:      Decimation or zoom factor (fraction) for screen: 1/1, 1/2, 1/3, 1/4, 1/5,
                    1/6, 1/7, 1/8, 2/1, 4/1
    Video nodes:    Not applicable
    Memory nodes:   Decimation or zoom: 1/1, 1/4, 1/8
    Screen nodes:   Decimation or zoom: resizes data to remain within limits

VL_SIZE
    Basic use:      Clipping size
    Video nodes:    Full size of video; read only
    Memory nodes:   Clipped size
    Screen nodes:   Clipped size

VL_OFFSET
    Basic use:      Position within larger area
    Video nodes:    Position of active region
    Memory nodes:   Offset relative to video offset
    Screen nodes:   Pan within the video

VL_ORIGIN
    Basic use:      Position within video
    Video nodes:    Not applicable
    Memory nodes:   Not applicable
    Screen nodes:   Screen position of first pixel displayed

VL_WINDOW
    Basic use:      Setting window ID for video in a window
    Video nodes:    Not applicable
    Memory nodes:   Not applicable
    Screen nodes:   Window ID

VL_RATE
    Basic use:      Field or frame transfer speed
    Video nodes:    Depends on capture type as specified by VL_CAP_TYPE
    Memory nodes:   Not applicable
    Screen nodes:   Not applicable

These controls are highly interdependent, so the order in which they are set is important. In most cases, the value being set takes precedence over other values that were previously set.


Note: VL_PACKING must be set first. Changes in one parameter may change the values of parameters set earlier; for example, the clipped size may change if VL_PACKING is set after VL_SIZE.

To determine default values, use vlGetControl() to query the values on the video source or drain node before setting controls. The initial offset of the video node is the first active line of video.

Similarly, the initial size value on the video source or drain node is the full size of active video being captured by the hardware, beginning at the default offset. Because some hardware can capture more than the size given by the video node, this value should be treated as a default size.

For all these controls, it pays to track return codes. If the value returned is VLValueOutOfRange, the value set is not what you requested.

To specify the controls, use vlSetControl(), for which the function prototype is:

int vlSetControl(VLServer vlSvr, VLPath vlPath, VLNode node, 
      VLControlType type, VLControlValue * value) 
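
Because these controls interact, check the return code of each vlSetControl() call and, where the exact value matters, read it back with vlGetControl() to see what the device actually accepted. A brief sketch (the drain node drn and the size values are illustrative):

VLControlValue val;

val.xyVal.x = 640;
val.xyVal.y = 486;
if (vlSetControl(vlSvr, path, drn, VL_SIZE, &val))
    vlPerror("setting VL_SIZE");
if (!vlGetControl(vlSvr, path, drn, VL_SIZE, &val))
    printf("size actually set: %d x %d\n", val.xyVal.x, val.xyVal.y);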

The use of VL_FORMAT, VL_PACKING, VL_ZOOM, VL_SIZE, VL_OFFSET, VL_RATE, and VL_CAP_TYPE is explained in more detail in the following sections.

Using VL_FORMAT

To specify video input and output formats of the video signal on the physical connector, use VL_FORMAT. For Indigo2 Video for Indigo2 IMPACT, the native format is YUV 4:2:2; this format is always fastest. The Indigo2 Video option also supports 32-bit RGB.


Note: To convert formats, use VL_PACKING, which is explained in the next section.


Using VL_PACKING

To convert a video format to another format in memory, use the VL_PACKING control. The packing type expresses how the video data is packed in memory at the source or drain.

Packing types are summarized in Table 2-6, which shows the most significant byte (MSB) on the left. An X means don't care; this bit is not used.

Table 2-6. Packing Types and Their Sizes and Formats

Type                            Size          Format (MSB --- LSB)
VL_PACKING_RGB_332_P            8-bit word    BBGGGRRR (four pixels packed into a 32-bit word)
VL_PACKING_RGBA_8               32-bit word   AAAAAAAA BBBBBBBB GGGGGGGG RRRRRRRR
VL_PACKING_RGB_8                24-bit word   XXXXXXXX BBBBBBBB GGGGGGGG RRRRRRRR
VL_PACKING_Y_8_P                8-bit word    YYYYYYYY (four pixels packed into a 32-bit word)
VL_PACKING_YVYU_422_8           32-bit word   UUUUUUUU YYYYYYYY VVVVVVVV YYYYYYYY
(native format)



Note: The packing names follow the naming conventions used by the IRIS Graphics Library; other libraries such as the OpenGL may use different names.

For example:

VLControlValue val;

val.intVal = VL_PACKING_RGB_8;
vlSetControl(vlSvr, path, memdrn, VL_PACKING, &val);

Using VL_ZOOM

VL_ZOOM controls the expansion or decimation of the video image. Values greater than one expand the video; values less than one perform decimation. Figure 2-1 illustrates zooming and decimation.

Figure 2-1. Zoom and Decimation


VL_ZOOM takes a nonzero fraction as its argument; do not use negative values. For example, this fragment captures half-size decimation video to memory:

val.fractVal.numerator = 1;
val.fractVal.denominator = 2;
if (vlSetControl(server, memory_path, memory_drain_node, VL_ZOOM, &val)){
   vlPerror("Unable to set zoom");
   exit(1);
}


Note: For a source, zooming takes place before blending; for a drain, blending takes place before zooming.

This fragment captures half-size decimation video to memory, with clipping to 320 × 243 (NTSC size minus overscan).

val.fractVal.numerator = 1;
val.fractVal.denominator = 2;
if (vlSetControl(server, memory_path, memory_drain_node,
VL_ZOOM, &val))
{
    vlPerror("Unable to set zoom");
    exit(1);
}
val.xyVal.x = 320;
val.xyVal.y = 243;
if (vlSetControl(server, memory_path, memory_drain_node,
VL_SIZE, &val))
{
    vlPerror("Unable to set size");
    exit(1);
}

This fragment captures xsize×ysize video with as much decimation as possible, assuming the size is smaller than the video stream.

if (vlGetControl(server, memory_path, video_source, VL_SIZE, &val))
{
   vlPerror("Unable to get size");
   exit(1);
}
if (val.xyVal.x/xsize < val.xyVal.y/ysize)
   zoom_denom = (val.xyVal.x + xsize - 1)/xsize;
else
   zoom_denom = (val.xyVal.y + ysize - 1)/ysize;
val.fractVal.numerator = 1;
val.fractVal.denominator = zoom_denom;
if (vlSetControl(server, memory_path, memory_drain_node, VL_ZOOM, &val))
{
   /* allow this error to fall through */
   vlPerror("Unable to set zoom");
}
val.xyVal.x = xsize;
val.xyVal.y = ysize;
if (vlSetControl(server, memory_path, memory_drain_node,
VL_SIZE, &val))
{
   vlPerror("Unable to set size");
   exit(1);
}

Using VL_SIZE

VL_SIZE controls how much of the image sent to the drain is used, that is, how much clipping takes place. This control operates on the zoomed image; for example, when the image is zoomed to half size, the limits on the size control change by a factor of 2. Figure 2-2 illustrates clipping.

Figure 2-2. Clipping an Image


For example, to display PAL video in a 320 × 243 space, clip the image to that size, as shown in the following fragment:

VLControlValue value;

value.xyVal.x = 320;
value.xyVal.y = 243;
vlSetControl(vlSvr, path, drn, VL_SIZE, &value); 


Note: Because this control is device-dependent and interacts with other controls, always check the error returns. For example, if offset is set before size and an error is returned, set size before offset.


Using VL_OFFSET

VL_OFFSET puts the upper left corner of the video data at a specific position; it sets the beginning position for the clipping performed by VL_SIZE. The values you enter are relative to the origin.

VL_OFFSET operates on the unzoomed image; it does not change if the zoom factor is changed.

This example places the data ten pixels down and ten pixels in from the left:

VLControlValue value;

value.xyVal.x = 10; 
value.xyVal.y = 10; 
vlSetControl(vlSvr, path, drn, VL_OFFSET, &value); 

To capture the blanking region, set offset to a negative value.
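
For example, this sketch starts the capture within the vertical blanking region (the offset of -10 lines is purely illustrative; the usable range is device-dependent):

VLControlValue value;

value.xyVal.x = 0;
value.xyVal.y = -10;
vlSetControl(vlSvr, path, drn, VL_OFFSET, &value);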

Using VL_RATE and VL_CAP_TYPE

VL_RATE determines the data transfer rate by field or frame, depending on the capture type as specified by VL_CAP_TYPE, as shown in Table 2-7.

Table 2-7. VL_RATE Values (Items per Second)

VL_CAP_TYPE Value                       VL_RATE Value
VL_CAPTURE_NONINTERLEAVED only          NTSC: 10, 12, 20, 24, 30, 36, 40, 48, 50, 60
                                        PAL: 5, 10, 15, 20, 25
VL_CAPTURE_INTERLEAVED,                 NTSC: 5, 6, 10, 12, 15, 18, 20, 24, 25, 30
VL_CAPTURE_EVEN_FIELDS,                 PAL: 10, 20, 30, 40, 50
VL_CAPTURE_ODD_FIELDS, and
VL_CAPTURE_FIELDS
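
For example, this sketch requests 30 items per second on a memory drain node (the node name memdrn is illustrative, and the fragment assumes VL_RATE is carried as a fraction in the VLControlValue union, as the fractVal examples elsewhere in this chapter do for other controls):

val.fractVal.numerator = 30;
val.fractVal.denominator = 1;
if (vlSetControl(vlSvr, path, memdrn, VL_RATE, &val))
    vlPerror("setting VL_RATE");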

Figure 2-3 shows the relationships between the source and drain zoom, size, offset, and origin.

Figure 2-3. Zoom, Size, Offset, and Origin


Displaying Video Data Onscreen

To set up a window for live video, follow these steps, as outlined in the example program simplev2s.c.

  1. Open an X display window; for example:

    if (!(dpy = XOpenDisplay("")))
        exit(1);

  2. Connect to the video daemon; for example:

    if (!(vlSvr = vlOpenVideo("")))
         exit(1);

  3. Create a window to show the video; for example:

    vwin = XCreateSimpleWindow(dpy, RootWindow(dpy, 0), 10,
        10, 640, 486, 0,
        BlackPixel(dpy, DefaultScreen(dpy)),
        BlackPixel(dpy, DefaultScreen(dpy)));
    XMapWindow(dpy, vwin);
    XFlush(dpy);

  4. Create a source node on a video device and a drain node on the screen; for example:

    src = vlGetNode(vlSvr, VL_SRC, VL_VIDEO, VL_ANY);
    drn = vlGetNode(vlSvr, VL_DRN, VL_SCREEN, VL_ANY);

  5. Create a path on the first device that supports it; for example:

    if ((path = vlCreatePath(vlSvr, VL_ANY, src, drn)) < 0)
        exit(1);

  6. Set up the hardware for the path and define the path use; for example:

    vlSetupPaths(vlSvr, (VLPathList)&path, 1, VL_SHARE,
        VL_SHARE); 

  7. Set the X window to be the drain; for example:

    val.intVal = vwin;
    vlSetControl(vlSvr, path, drn, VL_WINDOW, &val);

  8. Get X and VL into the same coordinate system; for example:

    XTranslateCoordinates(dpy, vwin, RootWindow(dpy,
        DefaultScreen(dpy)), 0, 0,&x, &y, &dummyWin);

  9. Set the live video to the same location and size as the window; for example:

    val.xyVal.x = x;
    val.xyVal.y = y;
    vlSetControl(vlSvr, path, drn, VL_ORIGIN, &val);

    XGetGeometry(dpy, vwin, &dummyWin, &x, &y, &w, &h, &bw,
       &d);
    val.xyVal.x = w;
    val.xyVal.y = h;
    vlSetControl(vlSvr, path, drn, VL_SIZE, &val);

  10. Begin the data transfer:

    vlBeginTransfer(vlSvr, path, 0, NULL);

  11. Wait until the user finishes; for example:

    printf("Press return to exit.\n");
    c = getc(stdin);

  12. End the data transfer, clean up, and exit:

    vlEndTransfer(vlSvr, path);
    vlDestroyPath(vlSvr, path);
    vlCloseVideo(vlSvr);

Transferring Video Data to and From Devices

The processes for data transfer are:

  • creating a buffer for the frames (for transfers involving memory)

  • registering the ring buffer with the path (for transfers involving memory)

  • starting data transfer

  • reading data from the buffer (for transfers involving memory)

Each process is explained separately.

Creating a Buffer for the Frames

Once you have specified frame parameters in a transfer involving memory (or have determined to use the defaults), create a buffer for the frames.

Like other libraries in the IRIX digital media development environment, the VL uses ring buffers. Ring buffers provide a way to read and write varying sizes of frames of data. A frame of data consists of the actual frame data and an information structure describing the underlying data, including device-specific information.

When a ring buffer is created, constraints are specified that control the total size of the data segment and the number of information buffers to allocate.

A head and a tail flag are automatically set in a ring buffer so that the latest frame can be accessed. A sector that has not yet been read is locked down; that is, it remains locked until it is read. When the ring buffer is written to and all sectors are occupied, data transfer stops; the sector last written to remains locked down until it is released.

The ring buffer can accommodate data of varying sizes. You can specify a ring buffer at a fixed size, or you can determine the size of the data in the buffer. To determine frame data size, use vlGetTransferSize(). Its function prototype is:

long vlGetTransferSize(VLServer vlSvr, VLPath path)

For example:

transfersize = vlGetTransferSize(vlSvr, path); 

where transfersize is the size of the data in bytes.
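
The transfer size can be used, for example, to decide how many frames a ring buffer should hold within a given memory budget. A small illustrative calculation (the 8 MB budget is arbitrary):

int maxbytes = 8 * 1024 * 1024;              /* illustrative memory budget */
int numFrames = maxbytes / transfersize;     /* whole frames that fit in the budget */

if (numFrames < 1)
    numFrames = 1;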

To create a ring buffer for the frame data, use vlCreateBuffer(). Its function prototype is:

VLBuffer vlCreateBuffer(VLServer vlSvr, VLPath path,
      VLNode node, int numFrames)

where:

VLBuffer 

is the handle of the buffer to be created

vlSvr 

names the server to which the path is connected

path 

specifies the data path

node 

specifies the memory node containing data to transfer to or from the ring buffer

numFrames 

specifies the number of frames in the buffer

For example:

buf = vlCreateBuffer(vlSvr, path, src, 1); 

Registering the Ring Buffer

Use vlRegisterBuffer() to register the ring buffer with the data path. Its function prototype is:

int vlRegisterBuffer(VLServer vlSvr, VLPath path,
     VLNode memnodeid, VLBuffer buffer)

where:

vlSvr 

names the server to which the path is connected

path 

specifies the data path

memnodeid 

specifies the memory node ID

buffer 

specifies the ring buffer handle

For example:

vlRegisterBuffer(vlSvr, path, drn, Buffer);

Starting Data Transfer

To begin data transfer, use vlBeginTransfer(). Its function prototype is:

int vlBeginTransfer(VLServer vlSvr, VLPath path, int count,
      VLTransferDescriptor* xferDesc) 

where:

vlSvr 

names the server to which the path is connected

path 

specifies the data path

count 

specifies the number of transfer descriptors

xferDesc 

specifies a transfer descriptor

Tailor the data transfer by means of transfer descriptors. The transfer descriptors are:

xferDesc.mode 

Transfer method:

  • VL_TRANSFER_MODE_DISCRETE: a specified number of frames are transferred (burst mode)

  • VL_TRANSFER_MODE_CONTINUOUS (default): frames are transferred continuously, beginning immediately or after a trigger event occurs (such as a frame coincidence pulse), and continuing until the transfer is terminated with vlEndTransfer()

  • VL_TRANSFER_MODE_AUTOTRIGGER: frame transfer takes place each time a trigger event occurs; this mode is a repeating version of VL_TRANSFER_MODE_DISCRETE

xferDesc.count 

Number of frames to transfer; if mode is VL_TRANSFER_MODE_CONTINUOUS, this value is ignored.

xferDesc.delay 

Number of frames from the trigger at which data transfer begins.

xferDesc.trigger 

Set of events to trigger on; an event mask. This transfer descriptor is always required. VLTriggerImmediate specifies that transfer begins immediately, with no pause for a trigger event. VLDeviceEvent specifies an external trigger.

This example fragment transfers the entire contents of the buffer immediately.

xferDesc.mode = VL_TRANSFER_MODE_DISCRETE;

xferDesc.count = imageCount;
xferDesc.delay = 0;
xferDesc.trigger = VLTriggerImmediate;
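
The descriptor is then passed to vlBeginTransfer(); for example:

vlBeginTransfer(vlSvr, path, 1, &xferDesc);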

This fragment shows the default descriptor, which is the same as passing in a null for the descriptor pointer. Transfer begins immediately; count is ignored.

xferDesc.mode = VL_TRANSFER_MODE_CONTINUOUS;

xferDesc.count = 0;
xferDesc.delay = 0;
xferDesc.trigger = VLTriggerImmediate;

Reading Data From the Buffer

If your application uses a buffer, use various VL calls for reading frames, getting pointers to active buffers, freeing buffers, and other operations. Table 2-8 lists the buffer-related calls.

Table 2-8. Buffer-Related Calls

Call

Purpose

vlGetNextValid()

Returns a handle on the next valid frame of data

vlGetLatestValid()

Reads only the most current frame in the buffer, discarding the rest

vlPutValid()

Puts a frame into the valid list (memory to video)

vlPutFree()

Puts a valid frame back into the free list (video to memory)

vlGetNextFree()

Gets a free buffer into which to write data (memory to video)

vlBufferDone()

Informs you if the buffer has been vacated

vlBufferReset()

Resets the buffer so that it can be used again

Figure 2-4 illustrates the difference between vlGetNextValid() and vlGetLatestValid(), and their interaction with vlPutFree().

Figure 2-4. vlGetNextValid(), vlGetLatestValid(), and vlPutFree()


Table 2-9 lists the calls that extract information from a buffer.

Table 2-9. Calls for Extracting Data From a Buffer

Call

Purpose

vlGetActiveRegion()

Gets a pointer to the data region of the buffer (video to memory); called after vlGetNextValid() and vlGetLatestValid()

vlGetDMediaInfo()

Gets a pointer to the DMediaInfo structure associated with a frame; this structure contains timestamp and field count information

vlGetImageInfo()

Gets a pointer to the DMImageInfo structure associated with a frame; this structure contains image size information



Caution: None of these calls has count or blocking arguments; the application must be prepared to handle a NULL return from these calls when no data is available.

In summary, for video-to-memory transfer use:

buffer = vlCreateBuffer(vlSvr, path, memnode1, numFrames);
vlRegisterBuffer(vlSvr, path, memnode1, buffer); 
vlBeginTransfer(vlSvr, path, 0, NULL); 
info = vlGetNextValid(vlSvr, buffer);
/* OR vlGetLatestValid(vlSvr, buffer); */
dataptr = vlGetActiveRegion(vlSvr, buffer, info); 

/* use data for application */
…
vlPutFree(vlSvr, buffer); 

For memory-to-video transfer, use:

buffer = vlCreateBuffer(vlSvr, path, memnode1, numFrames);
vlRegisterBuffer(vlSvr, path, memnode1, buffer); 
vlBeginTransfer(vlSvr, path, 0, NULL); 
info = vlGetNextFree(vlSvr, buffer, bufsize); 
/* fill the frame with data */
…
vlPutValid(vlSvr, buffer); 

These calls are explained in separate sections.

Reading the Frames to Memory From the Buffer

Use vlGetNextValid() to get the next valid frame of data in the buffer; call it repeatedly to read all the frames. Its function prototype is:

VLInfoPtr vlGetNextValid(VLServer vlSvr, VLBuffer vlBuffer)

Use vlGetLatestValid() to read only the most current frame in the buffer, discarding the rest. Its function prototype is:

VLInfoPtr vlGetLatestValid(VLServer vlSvr, VLBuffer vlBuffer) 

After extracting the data of interest, return the frame to the free list with vlPutFree() (video to memory). Its function prototype is:

int vlPutFree(VLServer vlSvr, VLBuffer vlBuffer)
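
Putting these calls together, a simple video-to-memory capture loop might look like the following sketch. It polls for data; a real application would typically pace the loop with VL events or sginap():

for (;;) {
    VLInfoPtr info;
    void *dataptr;

    info = vlGetNextValid(vlSvr, buffer);
    if (!info)
        continue;                     /* no frame available yet */
    dataptr = vlGetActiveRegion(vlSvr, buffer, info);
    /* ... process the frame data at dataptr ... */
    vlPutFree(vlSvr, buffer);         /* return the sector to the free list */
}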

Sending Frames From Memory to Video

Use vlGetNextFree() to get a free buffer to which to write data. Its function prototype is:

VLInfoPtr vlGetNextFree(VLServer vlSvr, VLBuffer vlBuffer,
      int size)

After filling the buffer with the data you want to send to video output, use vlPutValid() to put a frame into the valid list for output to video (memory to video). Its function prototype is:

int vlPutValid(VLServer vlSvr, VLBuffer vlBuffer)


Caution: These calls have no count or blocking arguments; the application must be prepared to handle a NULL return when no data is available.


Getting DMediaInfo and Image Data From the Buffer

Use vlGetActiveRegion() to get a pointer to the active buffer. Its function prototype is:

void * vlGetActiveRegion(VLServer vlSvr, VLBuffer vlBuffer,
     VLInfoPtr ptr)

Use vlGetDMediaInfo() to get a pointer to the DMediaInfo structure associated with a frame. This structure contains timestamp and field count information. The function prototype for this call is:

DMediaInfo * vlGetDMediaInfo(VLServer vlSvr,
   VLBuffer vlBuffer, VLInfoPtr ptr)

Use vlGetImageInfo() to get a pointer to the DMImageInfo structure associated with a frame. This structure contains image size information. The function prototype for this call is:

DMImageInfo * vlGetImageInfo(VLServer vlSvr,
   VLBuffer vlBuffer, VLInfoPtr ptr)
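
For example, after vlGetNextValid() returns a VLInfoPtr, the associated information structures can be retrieved as follows (info and buffer are as in the earlier fragments):

DMediaInfo *dmInfo;
DMImageInfo *imgInfo;

dmInfo  = vlGetDMediaInfo(vlSvr, buffer, info);
imgInfo = vlGetImageInfo(vlSvr, buffer, info);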

Ending Data Transfer

To end data transfer, use vlEndTransfer(). Its function prototype is:

int vlEndTransfer(VLServer vlSvr, VLPath path) 

To accomplish the necessary cleanup to exit gracefully, use:

  • for transfer involving memory: vlDeregisterBuffer(), vlDestroyPath(), vlDestroyBuffer()

  • for all transfers: vlCloseVideo()

The function prototype for vlDeregisterBuffer() is:

int vlDeregisterBuffer(VLServer vlSvr, VLPath path,
    VLNode memnodeid, VLBuffer ringbufhandle) 

where:

vlSvr 

is the server handle

path 

is the path handle

memnodeid 

is the memory node ID

ringbufhandle 

is the ring buffer handle

The function prototypes for vlDestroyPath(), vlDestroyBuffer() and vlCloseVideo() are, respectively:

int vlDestroyPath(VLServer vlSvr, VLPath path)
int vlDestroyBuffer(VLServer vlSvr, VLBuffer vlBuffer) 
int vlCloseVideo(VLServer vlSvr)

This example ends a data transfer that used a buffer:

vlEndTransfer(vlSvr, path);
vlDeregisterBuffer(vlSvr, path, memnodeid, buffer);
vlDestroyPath(vlSvr, path);
vlDestroyBuffer(vlSvr, buffer);
vlCloseVideo(vlSvr);

Example Programs

The directory /usr/people/4Dgifts/examples/dmedia/video/vl includes a number of example programs. These programs illustrate how to create simple video applications; for example:

  • a simple screen application: simplev2s.c

    This program shows how to send live video to the screen.

  • a video-to-memory frame grab: simplegrab.c

    This program demonstrates video frame grabbing.

  • a memory-to-video frame output: simplem2v.c

    This program sends a frame to the video output.

  • a continuous frame capture: simpleccapt.c

    This program demonstrates continuous frame capture.


Note: To simplify the code, these examples do not check returns. The programmer should, however, always check returns.

See Chapter 4 for a description of eventex.c and Chapter 5 for descriptions of simpleblend.c and simplewipe.c.

The directory /usr/people/4Dgifts/examples.OpenGL contains three example OpenGL programs:

  • contcapt.c: continuous capture using buffering and sproc

  • mtov.c: uses the Silicon Graphics Movie Library to play a movie on the selected video port

  • vidtomem.c: captures an incoming video stream to memory

Note that these programs differ from the programs with the same names in
/usr/people/4Dgifts/examples/dmedia/video/vl.