Re: Inverse Distortion Correction


Angus Dorbie (dorbie++at++bitch.reading.sgi.com)
Mon, 3 Feb 1997 12:50:22 +0000


On Feb 3, 6:53am, WILLIAM_MARINELLI++at++ntsc.navy.mil wrote:
> Subject: Inverse Distortion Correction
>
> I seriously doubt this has anything to do with Performer, but I have
> a lot of respect for the participants here, many of whom have helped
> me a number of times. I just started your Monday with flattery,
> you're welcome.
>
> We have some dome simulators in which the distortion correction is
> done in the image generator.
>
> There is some interest in having the out-the-window scene (5 channels)
> Tee'd off to a sort of ops center. In that case, "flat" video would be
> great. I know, why not do the distortion correction in the projectors
> instead of the IG? Unfortunately, we may have painted ourselves into a
> corner on that one.
>
> Do you suppose it is possible to get something like an O2 and write a
> program that (1) reads a video signal from a distortion corrected
> image and then (2) loads the frame buffer with the corresponding
> uncorrected image?

Yes! An O2 would be great at this; try running the video distort demo
if you need convincing, but I think you'll have to encode to a
lower-resolution format like PAL. Perhaps you don't need to do this at
all if your image generator is an InfiniteReality.

Here's a suggestion:

When performing distortion correction on an InfiniteReality you render
the primary channel or channels to the back buffer, read the result
into texture memory, draw the textured distortion mesh to the back
buffer, and then swap to display the final image.
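To make the mesh pass concrete, here's a minimal software sketch of
what texturing the distortion mesh actually computes: every output
pixel is a lookup into the undistorted image through a (u,v) map. The
function name and the map are hypothetical; on the IG the map comes
from the dome and projector geometry, and the hardware does this via
the textured mesh rather than per pixel.

```c
/* Hypothetical software equivalent of the textured-mesh warp:
 * each destination pixel samples the source image at the (u,v)
 * named by the map, nearest-neighbour for simplicity. */
typedef struct { float u, v; } TexCoord;

void warp_image(const unsigned char *src, unsigned char *dst,
                int w, int h, const TexCoord *map /* w*h entries */)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            const TexCoord *t = &map[y * w + x];
            int sx = (int)(t->u * (w - 1) + 0.5f);  /* nearest sample */
            int sy = (int)(t->v * (h - 1) + 0.5f);
            if (sx < 0) sx = 0; if (sx > w - 1) sx = w - 1;
            if (sy < 0) sy = 0; if (sy > h - 1) sy = h - 1;
            dst[y * w + x] = src[sy * w + sx];
        }
    }
}
```

An identity map reproduces the source image exactly, which is a handy
sanity check before plugging in a real distortion table.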

If you use different areas of the framebuffer for rendering the
primary channels and the distortion mesh, you can send the distorted
and undistorted images to separate video outputs on the display
generator.
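One way to picture the split: carve the managed framebuffer area in
two, one half for the undistorted primary channel and one for the mesh
result, and point a video output at each. The function and the
side-by-side layout are just an illustration; the real layout depends
on your channel sizes and the vof you build.

```c
/* Hypothetical side-by-side framebuffer layout: undistorted channel
 * on the left, distortion-mesh result on the right, each half feeding
 * its own video output on the display generator. */
typedef struct { int x, y, w, h; } Rect;

void split_framebuffer(int fb_w, int fb_h,
                       Rect *undistorted, Rect *distorted)
{
    undistorted->x = 0;          undistorted->y = 0;
    undistorted->w = fb_w / 2;   undistorted->h = fb_h;
    distorted->x   = fb_w / 2;   distorted->y = 0;
    distorted->w   = fb_w - fb_w / 2;  /* remainder, so halves tile */
    distorted->h   = fb_h;
}
```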

So, just to recap what you might want to do:

You create a vof (video output format) that outputs one video signal
from the part of the framebuffer containing the undistorted channel,
and another video signal from the part containing the distorted image.

The first video gets fed to the instructor's station, the second goes
to the projector in the dome.

You render the undistorted image to the part of the framebuffer the
instructor's video is taken from.
You read this image into texture memory.
You draw the distortion mesh, textured with that image, to the area of
the framebuffer the dome video is taken from.
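The distortion mesh itself is just a grid of vertices at regular
screen positions whose texture coordinates pull from displaced spots
in the undistorted image. As an illustration only, here is a sketch
that builds texcoords for a simple radial (barrel) model; the constant
k and the model are hypothetical stand-ins for whatever mapping your
dome and projector geometry dictate.

```c
#include <math.h>

/* Build texture coordinates for an n x n distortion mesh using a
 * hypothetical radial model: each vertex keeps its regular grid
 * position on screen but samples the undistorted image at a radius
 * scaled by (1 + k * r^2).  k = 0 gives the identity mapping. */
void build_mesh_texcoords(int n, float k, float *u, float *v)
{
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; ++i) {
            float x = 2.0f * i / (n - 1) - 1.0f;   /* -1 .. 1 */
            float y = 2.0f * j / (n - 1) - 1.0f;
            float r2 = x * x + y * y;
            float s = 1.0f + k * r2;               /* radial scale   */
            u[j * n + i] = 0.5f * (x * s + 1.0f);  /* back to 0 .. 1 */
            v[j * n + i] = 0.5f * (y * s + 1.0f);
        }
    }
}
```

Note the centre vertex never moves (r = 0 there), while the corners
displace the most, which matches the intuition that dome distortion
grows toward the edge of the field of view.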

The viability of this depends entirely on the type of distortion you
are performing in the IG.

Cheers,
Angus.
=======================================================================
List Archives, FAQ, FTP: http://www.sgi.com/Technology/Performer/
            Submissions: info-performer++at++sgi.com
        Admin. requests: info-performer-request++at++sgi.com



This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:54:34 PDT

This message has been cleansed for anti-spam protection. Replace '++at++' in any mail addresses with the '@' symbol.