Angus Dorbie (dorbie++at++bitch.reading.sgi.com)
Mon, 3 Feb 1997 12:50:22 +0000
Yes! An O2 would be great at this; try running the video distort demo
if you need convincing. I think you'll have to encode to a
lower-resolution format like PAL, though perhaps you don't need to
if your image generator is an InfiniteReality.
Here's a suggestion:
When performing distortion correction on an iR, you render the primary
channel or channels to the back buffer, read the result into texture
memory, draw the textured distortion mesh to the back buffer, and then
swap to display the final image.
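The render-copy-draw-swap pass above hinges on the warp the mesh applies. As a minimal sketch (in C; the radial warp below is a made-up example for illustration, not Performer's or any real I.G.'s correction function):

```c
#include <math.h>

/* Hypothetical radial (barrel) warp for a dome projector: maps a
 * point in the unit square to its pre-distorted sample position.
 * The real function depends entirely on the projector/dome geometry.
 *
 * Per frame, the iR pass described above would be:
 *   1. render the scene to the back buffer;
 *   2. copy the result into texture memory (glCopyTexSubImage2D);
 *   3. draw the distortion mesh, with texture coords from this warp;
 *   4. swap buffers.
 */
void dome_warp(float u, float v, float *wu, float *wv)
{
    float x = 2.0f * u - 1.0f;       /* centre the coordinates  */
    float y = 2.0f * v - 1.0f;
    float r2 = x * x + y * y;
    float k = 1.0f - 0.15f * r2;     /* small barrel correction */
    *wu = 0.5f * (1.0f + x * k);     /* back to [0,1] range     */
    *wv = 0.5f * (1.0f + y * k);
}
```

At the centre of the image the warp is the identity; towards the edges the sample point is pulled inwards, which is what compensates for the stretch across a curved screen.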
If you use different areas of the framebuffer for the primary
channels and for the distortion mesh, you can send the distorted
and undistorted images to separate video outputs on the display
generator.
So, just to recap what you might want to do:
You create a vof (video output format) which outputs one video signal
from the part of the framebuffer containing the undistorted channel,
and another video signal from the part containing the distorted image.
The first video gets fed to the instructor's station; the second goes
to the projector in the dome.
You render the undistorted image to the area of the framebuffer where
the instructor's video is taken from.
You read this image into texture memory.
You draw the distortion mesh, textured with that image, to the area of
the framebuffer where the dome video is taken from.
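To make the two-region layout concrete, here is a sketch in C. The framebuffer split, sizes, and mesh density are all assumptions for illustration: the left half holds the undistorted (instructor) image and the right half the warped (dome) image, and the mesh vertices land in the dome half while their texture coordinates sample the full instructor image.

```c
#include <stdio.h>

#define FB_W 2048            /* total framebuffer width (assumed)  */
#define FB_H 1024
#define CH_W (FB_W / 2)      /* one channel's width                */
#define GRID 8               /* distortion-mesh resolution         */

typedef struct { float x, y, s, t; } MeshVert;

/* Fill a (GRID+1) x (GRID+1) grid with screen positions inside the
 * dome (right-half) region and texture coordinates sampling the
 * whole copied instructor image.  A real mesh would perturb s/t
 * (or x/y) by the dome warp; this one is the identity for clarity. */
void build_mesh(MeshVert verts[(GRID + 1) * (GRID + 1)])
{
    int i, j;
    for (j = 0; j <= GRID; j++) {
        for (i = 0; i <= GRID; i++) {
            MeshVert *v = &verts[j * (GRID + 1) + i];
            v->x = CH_W + (float)i / GRID * CH_W;  /* right half   */
            v->y = (float)j / GRID * FB_H;
            v->s = (float)i / GRID;                /* full texture */
            v->t = (float)j / GRID;
        }
    }
}
```

The mesh would then be drawn as textured quad strips with the copied image bound; the instructor's vof scans out the left half of the framebuffer, the dome vof the right half.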
The viability of this depends entirely on the type of distortion you
are performing in the I.G.
Cheers,
Angus.
=======================================================================
List Archives, FAQ, FTP: http://www.sgi.com/Technology/Performer/
Submissions: info-performer++at++sgi.com
Admin. requests: info-performer-request++at++sgi.com
This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:54:34 PDT