Chapter 10. Video Extensions

Chapter 6, “Resource Control Extensions,” discusses a set of GLX extensions that can be used to control resources. This chapter provides information on a second set of GLX extensions, extensions that support video functionality. You learn about

  • “SGI_swap_control—The Swap Control Extension”

  • “SGI_video_sync—The Video Synchronization Extension”

  • “SGIX_swap_barrier—The Swap Barrier Extension”

  • “SGIX_swap_group—The Swap Group Extension”

  • “SGIX_video_resize—The Video Resize Extension”

  • “SGIX_video_source—The Video Source Extension”

SGI_swap_control—The Swap Control Extension

The swap control extension, SGI_swap_control, allows applications to display frames at a regular rate, provided the time required to draw each frame can be bounded. The extension allows an application to set a minimum period for buffer swaps, counted in display retrace periods. (This is similar to the IRIS GL swapinterval().)

To set the buffer swap interval, call glXSwapIntervalSGI(), which has the following prototype:

int glXSwapIntervalSGI( int interval )

Specify the minimum number of retraces between buffer swaps in the interval parameter. For example, a value of 2 means that the color buffer is swapped at most every other display retrace. The new swap interval takes effect on the first execution of glXSwapBuffers() after the execution of glXSwapIntervalSGI().

glXSwapIntervalSGI() affects only buffer swaps for the GLX write drawable for the current context. Note that glXSwapBuffers() may be called with a drawable parameter that is not the current GLX drawable; in this case glXSwapIntervalSGI() has no effect on that buffer swap.

New Functions

glXSwapIntervalSGI

SGI_video_sync—The Video Synchronization Extension

The video synchronization extension, SGI_video_sync, allows an application to synchronize drawing with the vertical retrace of a monitor or, more generically, with the boundary between two video frames. (In the case of an interlaced monitor, the synchronization is actually with the field rate instead.) Using the video synchronization extension, an application can put itself to sleep until a counter corresponding to the number of screen refreshes reaches a desired value. This enables an application to synchronize itself with the start of a new video frame. The application can also query the current value of the counter.

The system maintains a video sync counter (an unsigned 32-bit integer) for each screen in a system. The counter is incremented upon each vertical retrace.

The counter runs as long as the graphics subsystem is running; it is initialized by the /usr/gfx/gfxinit command.


Note: A process can query or sleep on the counter only when a direct context is current; otherwise, an error code is returned. See the reference page for more information.


Using the Video Sync Extension

To use the video sync extension, follow these steps:

  1. Create a rendering context and make it current.

  2. Call glXGetVideoSyncSGI() to obtain the value of the vertical retrace counter.

  3. Call glXWaitVideoSyncSGI() to put the current process to sleep until the vertical retrace counter reaches the specified value:

    int glXWaitVideoSyncSGI( int divisor, int remainder, unsigned int *count )
    

    where

    • glXWaitVideoSyncSGI() puts the calling process to sleep until the value of the vertical retrace counter (count) modulo divisor equals remainder.

    • count is a pointer to the variable that receives the value of the vertical retrace counter when the calling process wakes up.

New Functions

glXGetVideoSyncSGI, glXWaitVideoSyncSGI

SGIX_swap_barrier—The Swap Barrier Extension

The swap barrier extension, SGIX_swap_barrier, allows applications to synchronize the buffer swaps of different swap groups. For information on swap groups, see “SGIX_swap_group—The Swap Group Extension”.

Why Use the Swap Barrier Extension?

The swap barrier extension is useful for synchronizing the buffer swaps of different swap groups, including groups on different machines.

For example, two Onyx InfiniteReality systems may be working together to generate a single visual experience. The first Onyx system may be generating an “out the window view” while the second Onyx system may be generating a sensor display. The swap group extension would work well if the two InfiniteReality graphics pipelines were in the same system, but a swap group cannot span two Onyx systems. Even though the two displays are driven by independent systems, you still want the swaps to be synchronized.

The swap barrier solution requires the user to connect a physical coaxial cable to the “Swap Ready” port of each InfiniteReality pipeline. The multiple pipelines should also be genlocked together (synchronizing their video refresh rates). Genlocking a system means synchronizing it with another video signal serving as a master timing source.

The OpenGL swap barrier functionality requires special hardware support and is currently supported only on InfiniteReality graphics.

Note that most users of the swap barrier extension will likely use the extension through the IRIS Performer API and not call the OpenGL GLX extension directly.

Using the Swap Barrier Extension

A swap group is bound to a swap barrier. The buffer swaps of each swap group using that barrier wait until every swap group using that barrier is ready to swap (where readiness is defined in “Buffer Swap Conditions”). All buffer swaps of all groups using that barrier take place concurrently when every group is ready.

The set of swap groups using the swap barrier includes not only all swap groups on the calling application's system, but also any swap groups set up by other systems that have been cabled together by their graphics pipeline “Swap Ready” ports. This extension extends the set of conditions that must be met before a buffer swap can take place.

Applications call glXBindSwapBarrierSGIX(), which has the following prototype:

void glXBindSwapBarrierSGIX(Display *dpy, GLXDrawable drawable, int barrier)

glXBindSwapBarrierSGIX() binds the swap group that contains drawable to barrier. Subsequent buffer swaps for that group will be subject to this binding until the group is unbound from barrier. If barrier is zero, the group is unbound from its current barrier, if any.

To find out how many swap barriers a graphics pipeline (an X screen) supports, applications call glXQueryMaxSwapBarriersSGIX(), which has the following prototype:

Bool glXQueryMaxSwapBarriersSGIX (Display *dpy, int screen, int *max)

glXQueryMaxSwapBarriersSGIX() returns in max the maximum number of barriers supported by an implementation on screen.

glXQueryMaxSwapBarriersSGIX() returns GL_TRUE if it succeeds and GL_FALSE if it fails. If it fails, max is unchanged.

While the swap barrier extension has the capability to support multiple swap barriers per graphics pipeline, InfiniteReality (the only graphics hardware currently supporting the swap barrier extension) provides only one swap barrier.

Buffer Swap Conditions

Before a buffer swap can take place when a swap barrier is used, some new conditions must be satisfied. The conditions are defined in terms of when a drawable is ready to swap and when a group is ready to swap.

  • Any GLX drawable that is not a window is always ready.

  • When a window is unmapped, it is always ready.

  • When a window is mapped, it is ready when both of the following are true:

    • A buffer swap command has been issued for it.

    • Its swap interval has elapsed.

  • A group is ready when all windows in the group are ready.

  • Before a buffer swap for a window can take place, all of the following must be satisfied:

    • The window is ready.

    • If the window belongs to a group, the group is ready.

    • If the window belongs to a group and that group is bound to a barrier, all groups using that barrier are ready.

Buffer swaps for all windows in a swap group will take place concurrently after the conditions are satisfied for every window in the group.

Buffer swaps for all groups using a barrier will take place concurrently after the conditions are satisfied for every window of every group using the barrier, if and only if the vertical retraces of the screens of all the groups are synchronized (genlocked). If they are not synchronized, there is no guarantee of concurrency between groups.

Both glXBindSwapBarrierSGIX() and glXQueryMaxSwapBarriersSGIX() are part of the X stream.

New Functions

glXBindSwapBarrierSGIX, glXQueryMaxSwapBarriersSGIX

SGIX_swap_group—The Swap Group Extension

The swap group extension, SGIX_swap_group, allows applications to synchronize the buffer swaps of a group of GLX drawables. The application creates a swap group and adds drawables to the swap group. After the group has been established, buffer swaps to members of the swap group will take place concurrently.

In effect, this extension extends the set of conditions that must be met before a buffer swap can take place.

Why Use the Swap Group Extension?

Synchronizing the swapping of multiple drawables ensures that multiple windows (potentially on different screens) swap at exactly the same time.

Consider the following example:

render(left_window);
render(right_window);
glXSwapBuffers(left_window);
glXSwapBuffers(right_window);

The left_window and right_window are on two different screens (different monitors) but are meant to generate a single logical scene (split across the two screens). While the programmer intends for the two swaps to happen simultaneously, the two glXSwapBuffers() calls are distinct requests, and buffer swaps are tied to the monitor's rate of vertical refresh. Most of the time, the two glXSwapBuffers() calls will swap both windows at the next monitor vertical refresh. But because the two glXSwapBuffers() calls are not atomic, it is possible that:

  • The first glXSwapBuffers() call may execute just before a vertical refresh, allowing left_window to swap immediately.

  • The second glXSwapBuffers() call may arrive just after the vertical refresh, forcing right_window to wait a full refresh period (typically 1/60th or 1/72nd of a second).

Someone watching the results in the two windows would very briefly see the new left_window contents, but alongside the old right_window contents. This “stutter” between the two window swaps is always annoying and at times simply unacceptable.

The swap group extension allows applications to “tie together” the swapping of multiple windows.

By joining left_window and right_window into a swap group, IRIX ensures that the windows swap together atomically. This could be done during initialization by calling

glXJoinSwapGroupSGIX(dpy, left_window, right_window);

Subsequent windows can also be added to the swap group. For example, if there was also a middle window, it could be added to the swap group by calling

glXJoinSwapGroupSGIX(dpy, middle_window, right_window);

Swap Group Details

The only routine added by the swap group extension is glXJoinSwapGroupSGIX(), which has the following prototype:

void glXJoinSwapGroupSGIX(Display *dpy, GLXDrawable drawable, 
                         GLXDrawable member) 

Applications can call glXJoinSwapGroupSGIX() to add drawable to the swap group containing member as a member. If drawable is already a member of a different group, it is implicitly removed from that group first. If member is None, drawable is removed from the swap group that it belongs to, if any.

Applications can reference a swap group by naming any drawable in the group; there is no other way to refer to a group.

Before a buffer swap can take place, a set of conditions must be satisfied. Both the drawable and the group must be ready, satisfying the following conditions:

  • GLX drawables, except windows, are always ready to swap.

  • When a window is unmapped, it is always ready.

  • When a window is mapped, it is ready when both of the following are true:

    • A buffer swap command has been issued for it.

    • Its swap interval has elapsed.

A group is ready if all windows in the group are ready.

glXJoinSwapGroupSGIX() is part of the X stream. Note that a swap group is limited to GLX drawables managed by a single X server. If you have to synchronize buffer swaps between monitors on different machines, you need the swap barrier extension (see “SGIX_swap_barrier—The Swap Barrier Extension”).

New Function

glXJoinSwapGroupSGIX

SGIX_video_resize—The Video Resize Extension

The video resize extension, SGIX_video_resize, is an extension to GLX that allows the frame buffer to be dynamically resized to the output resolution of the video channel when glXSwapBuffers is called for the window that is bound to the video channel. The video resize extension can also be used to minify (reduce in size) a frame buffer image for display on a video output channel (such as NTSC or PAL broadcast video). For example, a 1280 x 1024 computer-generated scene could be minified for output to the InfiniteReality NTSC/PAL encoder channel. InfiniteReality performs bilinear filtering of the minified channel for reasonable quality.

As a result, an application can draw into a smaller viewport and spend less time performing pixel fill operations. The reduced size viewport is then magnified up to the video output resolution using the SGIX_video_resize extension.

In addition to the magnify and minify resizing capabilities, the video resize extension allows 2D panning. By overrendering at swap rates and panning at video refresh rates, it is possible to perform video refresh (frame) synchronous updates.

Controlling When the Video Resize Update Occurs

Whether frame synchronous or swap synchronous update is used is set by calling glXChannelRectSyncSGIX(), which has the following prototype:

int glXChannelRectSyncSGIX( Display *dpy, int screen, int channel,
                            GLenum synctype );

The synctype parameter can be either GLX_SYNC_FRAME_SGIX or GLX_SYNC_SWAP_SGIX.

The extension can be used to control fill-rate requirements for real-time visualization applications or to support a larger number of video output channels on a system with limited framebuffer memory.


Note: This extension is an SGIX (experimental) extension. The interface or other aspects of the extension may change. The extension is currently implemented only on InfiniteReality systems.


Using the Video Resize Extension

To use the video resize extension, follow these steps:

  1. Open the display and create a window.

  2. Call glXBindChannelToWindowSGIX() to associate a channel with an X window so that when the X window is destroyed, the channel input area can revert to the default channel resolution.

    The other reason for this binding is that the bound channel updates only when a swap takes place on the associated X window (assuming swap sync updates—see “Controlling When the Video Resize Update Occurs”).

    The function has the following prototype:

    int glXBindChannelToWindowSGIX( Display *display, int screen,
                                    int channel, Window window )
    

    where

    • display specifies the connection to the X server.

    • screen specifies the screen of the X server.

    • channel specifies the video channel number.

    • window specifies the window that is to be bound to channel. Note that InfiniteReality supports multiple output channels (two or eight depending on the Display Generator board type). Each channel can be independently dynamically resized.

  3. Call glXQueryChannelDeltasSGIX() to retrieve the precision constraints for any frame buffer area that is to be resized to match the video resolution. In effect, glXQueryChannelDeltasSGIX() returns the resolution at which one can place and size a video input area.

    The function has the following prototype:

    int glXQueryChannelDeltasSGIX( Display *display, int screen, int channel,
                                  int *dx, int *dy, int *dw, int *dh ) 
    

    where

    • display specifies the connection to the X server.

    • screen specifies the screen of the X server.

    • channel specifies the video channel number.

    • dx, dy, dw, dh are precision deltas for the origin and size of the area specified by glXChannelRectSGIX().

  4. Call XSGIvcQueryChannelInfo() (an interface to the Silicon Graphics X video control X extension) to determine the default size of the channel.

  5. Open an X window, preferably with no borders.

  6. Start a loop in which you perform the following activities:

    • Determine the area that will be drawn, based on performance requirements. If the application is fill limited, make the area smaller. You can make a rough estimate of the fill rate required for a frame by timing the actual rendering time in milliseconds. On InfiniteReality, the SGIX_ir_instrument1 OpenGL extension can be used to query the pipeline performance to better estimate the fill rate.

    • Call glViewport(), providing the width and height, to set the OpenGL viewport (the rectangular region of the screen where the window is drawn). Base this viewport on the information returned by glXQueryChannelDeltasSGIX().

    • Call glXChannelRectSGIX() to set the input video rectangle; the new rectangle takes effect at the next swap or next frame (based on the glXChannelRectSyncSGIX() setting). The coordinates of the input video rectangle are those of the viewport just set up for drawing. This function has the following prototype:

      int glXChannelRectSGIX( Display *display, int screen, int channel,
                              int x, int y, int w, int h )
      

      where

      • display specifies the connection to the X server.

      • screen specifies the screen of the X server.

      • channel specifies the video channel number.

      • x, y, w, h are the origin and size of the area of the window that will be converted to the output resolution of the video channel. (x,y) is relative to the bottom left corner of the channel specified by the current video combination.

    • Draw the scene.

    • Call glXSwapBuffers() for the window in question.

Example

The following example, from the glXChannelRectSGIX reference page, illustrates how to use the extension.

Example 10-1. Video Resize Extension Example


XSGIvcChannelInfo   *pChanInfo = NULL;

... open display and screen ...
glXBindChannelToWindowSGIX( display,screen,channel,window );
glXQueryChannelDeltasSGIX( display,screen,channel, &dx,&dy,&dw,&dh );

XSGIvcQueryChannelInfo( display, screen, channel, &pChanInfo );

X = pChanInfo->source.x;
Y = pChanInfo->source.y;
W = pChanInfo->source.width;
H = pChanInfo->source.height;

... open an X window (preferably with no borders so will not get ...
... moved by window manager) at location X,Y,W,H (X coord system) ...

while( ... )
{
    ...determine area(width,height) that will be drawn based on... 
    ...requirements. Make area smaller if application is fill limited..

    w =  width - ( width % dw );
    h =  height - ( height % dh );

    glViewport( 0,0,w,h );

    glXChannelRectSGIX( display,screen,channel, 0,0,w,h );

    ... draw scene ...

    glXSwapBuffers( display,window );
}

New Functions

glXBindChannelToWindowSGIX, glXChannelRectSGIX, glXChannelRectSyncSGIX, glXQueryChannelDeltasSGIX, glXQueryChannelRectSGIX

SGIX_video_source—The Video Source Extension

The video source extension, SGIX_video_source, lets you source pixel data from a video stream to the OpenGL renderer. The video source extension is available only for system configurations that have direct hardware paths from the video hardware to the graphics accelerator. On other systems, you need to transfer video data to host memory and then call glDrawPixels() or glTex{Sub}Image() to transfer data to the framebuffer, to texture memory, or to a DMPbuffer (see “SGIX_pbuffer—The Pixel Buffer Extension”).

The video source extension introduces a new type of GLXDrawable—GLXVideoSourceSGIX—that is associated with the drain node of a Video Library (VL) path. A GLXVideoSourceSGIX drawable can be used only as the read parameter to glXMakeCurrentReadSGI() to indicate that pixel data should be read from the specified video source instead of the framebuffer.


Note: This extension is an SGIX (experimental) extension. The interface may change, or it may not be supported in future releases.

The remainder of this section presents two examples: Example 10-2 demonstrates the video-to-graphics capability of the Sirius video board using OpenGL. Example 10-3 is a code fragment showing how to use the video source extension to load video into texture memory.

Example 10-2. Use of the Video Source Extension


/*
 * vidtogfx.c
 *  This VL program demonstrates the Sirius Video board video->graphics
 *  ability using OpenGL.
 *  The video arrives as fields of an interlaced format.  It is 
 *  displayed either by interlacing the previous and the current 
 *  field or by pixel-zooming the field in Y by 2.
 */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <vl/vl.h>
#include <vl/dev_sirius.h>
#include <GL/glx.h>
#include "xwindow.h"
#include <X11/keysym.h>

/* Video path variables */
VLServer svr;
VLPath path;
VLNode src;
VLNode drn;
/* Video frame size info */
VLControlValue size;

int F1_is_first;                /* Which field is first */

/* OpenGL/X variables */
Display *dpy;
Window window;
GLXVideoSourceSGIX glxVideoSource;
GLXContext ctx;
GLboolean interlace = GL_FALSE;
GLboolean hasInterlace = GL_FALSE;  /* set in InitGfx() */
/*
 * function prototypes
 */
void usage(char *, int);
void InitGfx(int, char **);
void GrabField(int);
void UpdateTiming(void);
void cleanup(void);
void ProcessVideoEvents(void);
static void loop(void);
int
main(int argc, char **argv)
{
    int         c, insrc = VL_ANY;
    int         device = VL_ANY;
    short       dev, val;
    /* open connection to VL server */

    if (!(svr = vlOpenVideo(""))) {
        printf("couldn't open connection to VL server\n");
        exit(EXIT_FAILURE);
    }

    /* Get the Video input */
    src = vlGetNode(svr, VL_SRC, VL_VIDEO, insrc);
    /* Get the first Graphics output */
    drn = vlGetNode(svr, VL_DRN, VL_GFX, 0);

    /* Create path   */
    path = vlCreatePath(svr, device, src, drn);
    if (path < 0) {
        vlPerror("vlCreatePath");
        exit(EXIT_FAILURE);
    }
    /* Setup path */
    if (vlSetupPaths(svr, (VLPathList)&path, 1, VL_SHARE, 
                           VL_SHARE) < 0) {
        vlPerror("vlSetupPaths");
        exit(EXIT_FAILURE);
    }
    UpdateTiming();
    if (vlSelectEvents(svr, path,VLStreamPreemptedMask |
                            VLControlChangedMask ) < 0) {
            vlPerror("Select Events");
            exit(EXIT_FAILURE);
    }
    /* Open the GL window for gfx transfers */
    InitGfx(argc, argv);
    /* Begin Transfers */
    vlBeginTransfer(svr, path, 0, NULL);
    /* The following sequence grabs each field and displays it in
     * the GL window.
     */
    loop();
}
void
loop()
{
  XEvent event;
  KeySym key;
  XComposeStatus compose;
  GLboolean clearNeeded = GL_FALSE;

  while (GL_TRUE) {
    /* Process X events */
    while(XPending(dpy)) {
      XNextEvent(dpy, &event);
      /* Don't really need to handle expose as video is coming at
       * refresh speed.
       */
      if (event.type == KeyPress) {
        XLookupString(&event.xkey, NULL, 0, &key, NULL);
        switch (key) {
         case XK_Escape:
          exit(EXIT_SUCCESS);
         case XK_i:
          if (hasInterlace) {
            interlace = !interlace;
            if (!interlace) {
              if (!glXMakeCurrentReadSGI(dpy, window,
                                         glxVideoSource, ctx)) {
                fprintf(stderr,
                        "Can't make current to video\n");
                exit(EXIT_FAILURE);
              }
            } else if (!glXMakeCurrent(dpy, window, ctx)) {
              fprintf(stderr,
                      "Can't make window current to context\n");
              exit(EXIT_FAILURE);
            }
            printf("Interlace is %s\n", interlace ? "On" : "Off");
            /* Clear both buffers */
            glClear(GL_COLOR_BUFFER_BIT);
            glXSwapBuffers(dpy, window);
            glClear(GL_COLOR_BUFFER_BIT);
            glXSwapBuffers(dpy, window);
            glRasterPos2f(0, size.xyVal.y - 1);
          } else {
            printf("Graphics interlacing is not supported\n");
          }
          break;
        }
      }
    }
    ProcessVideoEvents();
    GrabField(0);
    glXSwapBuffers(dpy, window);
    GrabField(1);
    glXSwapBuffers(dpy, window);
  }
}

/*
 * Open an X window of appropriate size and create context.
 */
void
InitGfx(int argc, char **argv)
{
  int i;
  XSizeHints hints;
  int visualAttr[] = {GLX_RGBA, GLX_DOUBLEBUFFER, GLX_RED_SIZE, 12,
                      GLX_GREEN_SIZE, 12, GLX_BLUE_SIZE, 12,
                      None};
  const char *extensions;

  /* Set hints so window size is exactly as the video frame size */
  hints.x = 50; hints.y = 0;
  hints.min_aspect.x = hints.max_aspect.x = size.xyVal.x;
  hints.min_aspect.y = hints.max_aspect.y = size.xyVal.y;
  hints.min_width = size.xyVal.x;
  hints.max_width = size.xyVal.x;
  hints.base_width = hints.width = size.xyVal.x;
  hints.min_height = size.xyVal.y;
  hints.max_height = size.xyVal.y;
  hints.base_height = hints.height = size.xyVal.y;
  hints.flags = USSize | PAspect | USPosition | PMinSize | PMaxSize;
  createWindowAndContext(&dpy, &window, &ctx, 50, 0, size.xyVal.x,
                  size.xyVal.y, GL_FALSE, &hints, visualAttr, argv[0]);
    
  /* Verify that MakeCurrentRead and VideoSource are supported */
  ....
  glxVideoSource = glXCreateGLXVideoSourceSGIX(dpy, 0, svr, path,
                                               VL_GFX, drn);
  if (glxVideoSource == NULL) {
    fprintf(stderr, "Can't create glxVideoSource\n");
    exit(EXIT_FAILURE);
  }
  if (!glXMakeCurrentReadSGI(dpy, window, glxVideoSource, ctx)) {
    fprintf(stderr, "Can't make current to video\n");
    exit(EXIT_FAILURE);
  }
  /* Set up the viewport according to the video frame size */
  glLoadIdentity();
  glViewport(0, 0, size.xyVal.x, size.xyVal.y);
  glOrtho(0, size.xyVal.x, 0, size.xyVal.y, -1, 1);
  /* Video is top to bottom */
  glPixelZoom(1, -2);
  glRasterPos2f(0, size.xyVal.y - 1);
  glReadBuffer(GL_FRONT);
  /* Check for interlace extension. */
  hasInterlace = ... /* Interlace is supported or not */
}
/*
 * Grab a field. A parameter of  1 = odd Field, 0 = Even Field.
 * Use the global F1_is_first variable to determine how to
 * interleave the fields.
 */
void
GrabField(int odd_field)
{
  /* copy pixels from front to back buffer */
  if (interlace) {
    /* Restore zoom and transfer mode */
    glRasterPos2i(0, 0);
    glPixelZoom(1, 1);
    glCopyPixels(0, 0, size.xyVal.x, size.xyVal.y, GL_COLOR);

    /* Copy the field from Sirius Video to GFX subsystem */
    if (!glXMakeCurrentReadSGI(dpy, window, glxVideoSource, ctx)) {
      fprintf(stderr, "Can't make current to video\n");
      exit(EXIT_FAILURE);
    }
    if (odd_field) {
      if (F1_is_first) {
        /* F1 dominant, so odd field is first. */
        glRasterPos2f(0, size.xyVal.y - 1);
      } else {
        /* F2 dominant, so even field is first. */
        glRasterPos2f(0, size.xyVal.y - 2);
      }
    } else {
      if (F1_is_first) {
        /* F1 dominant, so odd field is first. */
        glRasterPos2f(0, size.xyVal.y - 2);
      } else {
        /* F2 dominant, so even field is first. */
        glRasterPos2f(0, size.xyVal.y - 1);
      }
    }
#ifdef GL_SGIX_interlace
    if (hasInterlace)
      glEnable(GL_INTERLACE_SGIX);
#endif
    /* video is upside down relative to graphics */
    glPixelZoom(1, -1);
    glCopyPixels(0, 0, size.xyVal.x, size.xyVal.y/2, GL_COLOR);
    if (!glXMakeCurrent(dpy, window, ctx)) {
      fprintf(stderr, "Can't make current to original window\n");
      exit(EXIT_FAILURE);
    }
#ifdef GL_SGIX_interlace
    if (hasInterlace)
      glDisable(GL_INTERLACE_SGIX);
#endif
  } else { 
    /* Not deinterlacing */
    glPixelZoom(1, -2);
    if (!odd_field) {
      if (!F1_is_first) {
        /* F1 dominant, so odd field is first. */
        glRasterPos2f(0, size.xyVal.y - 1);
      } else {
        /* F2 dominant, so even field is first. */
        glRasterPos2f(0, size.xyVal.y - 2);
      }
    } else {
      if (!F1_is_first) {
        /* F1 dominant, so odd field is first. */
        glRasterPos2f(0, size.xyVal.y - 2);
      } else {
        /* F2 dominant, so even field is first. */
        glRasterPos2f(0, size.xyVal.y - 1);
      }
    }
    glCopyPixels(0, 0, size.xyVal.x, size.xyVal.y/2, GL_COLOR);
  }
}

/*
 * Get video timing info.
 */
void
UpdateTiming(void)
{
  int is_525;
  VLControlValue timing, dominance;

  /* Get the timing on selected input node */
  if (vlGetControl(svr, path, src, VL_TIMING, &timing) <0) {
    vlPerror("VlGetControl:TIMING");
    exit(EXIT_FAILURE);
  }
  /* Set the GFX Drain to the same timing as input src */
  if (vlSetControl(svr, path, drn, VL_TIMING, &timing) <0) {
    vlPerror("VlSetControl:TIMING");
    exit(EXIT_FAILURE);
  }
  if (vlGetControl(svr, path, drn, VL_SIZE, &size) <0) {
    vlPerror("VlGetControl");
    exit(EXIT_FAILURE);
  }
  /*
   * Read the video source's field dominance control setting and 
   * timing, then set a variable to indicate which field has the first 
   * line, so that we know how to interleave fields to frames.
   */
  if (vlGetControl(svr, path, src,
                   VL_SIR_FIELD_DOMINANCE, &dominance) < 0) {
    vlPerror("GetControl(VL_SIR_FIELD_DOMINANCE) on video source failed");
    exit(EXIT_FAILURE);
  }

  is_525 = ( (timing.intVal == VL_TIMING_525_SQ_PIX) ||
             (timing.intVal == VL_TIMING_525_CCIR601) );

  switch (dominance.intVal) {
    case SIR_F1_IS_DOMINANT:

      if (is_525) {
        F1_is_first = 0;
      } else {
        F1_is_first = 1;
      }
      break;
    case SIR_F2_IS_DOMINANT:
      if (is_525) {
        F1_is_first = 1;
      } else {
        F1_is_first = 0;
      }
      break;
  }
}

void
cleanup(void)
{
  vlEndTransfer(svr, path);
  vlDestroyPath(svr, path);
  vlCloseVideo(svr);
  exit(EXIT_SUCCESS);
}

void
ProcessVideoEvents(void)
{
  VLEvent ev;

  if (vlCheckEvent(svr, VLControlChangedMask|
                   VLStreamPreemptedMask, &ev) == -1) {
    return;
  }
  switch(ev.reason) {
    case VLStreamPreempted:
      cleanup();
      exit(EXIT_SUCCESS);
    case VLControlChanged:
      switch(ev.vlcontrolchanged.type) {
        case VL_TIMING:
        case VL_SIZE:
        case VL_SIR_FIELD_DOMINANCE:
          UpdateTiming();
          /* change the gl window size */
          XResizeWindow(dpy, window, size.xyVal.x, size.xyVal.y);
          glXWaitX();
          glLoadIdentity();
          glViewport(0, 0, size.xyVal.x, size.xyVal.y );
          glOrtho(0, size.xyVal.x, 0, size.xyVal.y, -1, 1);
          break;
        default:
          break;
      }
      break;
    default:
      break;
  }
}

Example 10-3. Loading Video Into Texture Memory


Display *dpy;
Window win;
GLXContext cx;
VLControlValue size, texctl;
int tex_width, tex_height;
VLServer svr;
VLPath path;
VLNode src, drn;

static void init_video_texturing(void)
{
    GLXVideoSourceSGIX videosource;
    GLenum intfmt;
    int scrn;
    float s_scale, t_scale;

    /* set video drain to texture memory */
    drn = vlGetNode(svr, VL_DRN, VL_TEXTURE, 0);

    /* assume svr, src, and path have been initialized as usual */

    /* get the active video area */
    if (vlGetControl(svr, path, src, VL_SIZE, &size) < 0) {
        vlPerror("vlGetControl");
    }
    /* use a texture size that will hold all of the video area */
    /* for simplicity, this handles only 1024x512 or 1024x1024 */

    tex_width = 1024;
    if (size.xyVal.y > 512) {
        tex_height = 1024;
    } else {
        tex_height = 512;
    }
    /* Set up a texture matrix so that texture coords in 0 to 1    */
    /* range will map to the active video area.  We want           */
    /* s' = s * s_scale                                            */
    /* t' = (1-t) * t_scale  (because video is upside down).       */
    s_scale = size.xyVal.x / (float)tex_width;
    t_scale = size.xyVal.y / (float)tex_height;
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    /* the translate is specified first so it is applied last:
     * t' = (-t_scale * t) + t_scale = (1-t) * t_scale          */
    glTranslatef(0, t_scale, 0);
    glScalef(s_scale, -t_scale, 1);

    /* choose video packing mode */
    texctl.intVal = SIR_TEX_PACK_RGBA_8;
    if (vlSetControl(svr, path, drn, VL_PACKING, &texctl) <0) {
        vlPerror("VlSetControl");
    }
    /* choose internal texture format; must match video packing mode */
    intfmt = GL_RGBA8_EXT;

    glEnable(GL_TEXTURE_2D);
    /* use a non-mipmap minification filter */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    /* use NULL texture image, so no image has to be sent from host */
    glTexImage2D(GL_TEXTURE_2D, 0, intfmt, tex_width, tex_height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    
    if ((videosource = glXCreateGLXVideoSourceSGIX(dpy, scrn, svr,
                                     path, VL_TEXTURE, drn)) == None) {
        fprintf(stderr, "can't create video source\n");
        exit(1);
    }
    glXMakeCurrentReadSGI(dpy, win, videosource, cx);
}

static void draw(void)
{
    /* load video into texture memory */
    glCopyTexSubImage2DEXT(GL_TEXTURE_2D, 0, 0, 0, 0, 0,
                           size.xyVal.x, size.xyVal.y);

    /* draw the video frame */
    glBegin(GL_POLYGON);
    glTexCoord2f(0,0); glVertex2f(0, 0);
    glTexCoord2f(1,0); glVertex2f(size.xyVal.x, 0);
    glTexCoord2f(1,1); glVertex2f(size.xyVal.x, size.xyVal.y);
    glTexCoord2f(0,1); glVertex2f(0, size.xyVal.y);
    glEnd();
    
}

New Functions

glXCreateGLXVideoSourceSGIX, glXDestroyGLXVideoSourceSGIX