live video and performer


Eric Rose (erose++at++ecrc.de)
Mon, 11 Nov 1996 12:33:06 +0100 (MET)


Has anyone out there attempted to integrate video (Sirius using
GLXVideoSourceSGIX) and Performer? I'm working on an RE2.

The main problem is with the interlacing and the frame rate. Ideally
I'd like to have Performer running at 50Hz (We're PAL over here)
doing video interlacing on every field, so that the video comes out
frame-accurate, rather than drawing two interlaced fields together at
once at 25Hz (which means that technically one field is 1/2 frame
behind). Any idea if Performer will go that fast with the overhead of
copying the pixels?

For implementation, I'm putting my own GL code before the pfDraw()
that turns off all of the options that slow down pixel transfers, then
I call glXMakeCurrentReadSGIX() and use glCopyPixels to copy the
(non-interlaced) scan-lines over into the framebuffer (using the
GL_INTERLACE_SGIX extension to write to every other line). However,
the second video field causes problems. If I immediately perform
another glCopyPixels from the video read context (which I do now),
will that wait until the next field is ready (thus incurring a hidden
time overhead inside of the draw loop)? The solution appears to be
able to store the previous frame somewhere (perhaps in a P-buffer?)
and perform one pbuffer copy and one video pixel-blit every frame.
However, writing from a GLXVideoSourceSGIX to a P-buffer using
glXMakeCurrentReadSGIX and glCopyPixels fails (BadMatch).
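For reference, the per-frame blit I describe above looks roughly like this (a sketch only, assuming the GLX_SGIX_video_source, GLX_SGI_make_current_read and GL_SGIX_interlace extensions; `videoSource`, `winDrawable`, `ctx` and the 768x288 field size are placeholders for whatever your setup code produced):

```c
#include <Performer/pf.h>
#include <GL/glx.h>   /* IRIX glx.h with the SGIX extension tokens */

extern GLXVideoSourceSGIX videoSource;  /* from glXCreateGLXVideoSourceSGIX() */
extern GLXDrawable        winDrawable;  /* the channel's window */
extern GLXContext         ctx;

/* pfChannel draw callback, installed with pfChanDrawFunc(). */
static void drawFunc(pfChannel *chan, void *data)
{
    /* Turn off everything that slows down pixel transfers, and keep
     * the Z-buffer untouched by the video blit. */
    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);
    glPixelZoom(1.0f, 1.0f);

    /* Read pixels from the video source, write into the window. */
    glXMakeCurrentReadSGIX(pfGetCurWSConnection(), winDrawable,
                           (GLXDrawable) videoSource, ctx);

    /* GL_INTERLACE_SGIX makes the copy land on every other scan line. */
    glEnable(GL_INTERLACE_SGIX);
    glRasterPos2i(0, 0);
    glCopyPixels(0, 0, 768, 288, GL_COLOR);  /* one (assumed) PAL field */
    glDisable(GL_INTERLACE_SGIX);

    /* Restore the window as the read drawable, then draw the scene. */
    glXMakeCurrentReadSGIX(pfGetCurWSConnection(), winDrawable,
                           winDrawable, ctx);
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    pfDraw();
}
```

It's the second glCopyPixels for the other field, issued right after this one, that I suspect of blocking until the next field arrives.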

There doesn't seem to be a lot of documentation on how the
GLXVideoSourceSGIX deals with interlacing: whether it guarantees that
two reads will produce the even and odd fields, or whether any such
synchronization is there at all. Example programs seem to assume that
the reads extract field pairs on successive reads -- but is there some
sort of time-penalty involved? (Right now my Performer program appears
to be running *much* slower than a simple OpenGL glx-video demo).

I'd appreciate any thoughts on the subject. I would prefer not to
resort to video texturing since I need to write in the video pixels
without changing the Z-buffer (!!). Perhaps there is a way around this
problem, although I'd prefer just to blit the pixels right out on the
screen without going through any possible changes that the texturing
hardware might make to my incoming video stream.

-Eric

-- 
Eric Rose				 	http://www.ecrc.de/staff/erose/
Fraunhofer Projektgruppe fuer Augmented Reality im
Zentrum fuer Grafische Datenverarbeitung (ZGDv) eV     Email:	erose++at++ecrc.de
European Computer-Industry Research Centre (ECRC) GmbH Phone:	+49-89-92699-201
Arabellastrasse 17, D-81925 Munich		       FAX:	+49-89-92699-170
=======================================================================
List Archives, FAQ, FTP:  http://www.sgi.com/Technology/Performer/
            Submissions:  info-performer++at++sgi.com
        Admin. requests:  info-performer-request++at++sgi.com


This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:53:55 PDT

This message has been cleansed for anti-spam protection. Replace '++at++' in any mail addresses with the '@' symbol.