Javier Castellar (javier++at++sixty)
Wed, 3 Apr 1996 11:22:44 -0800
The current API looks like:
pfuDistortion *pfuNewDistortion (pfChannel *, int mode, pfGeoSet *);
void pfuDeleteDistortion (pfuDistortion *);
void pfuCorrectDistortion (pfuDistortion *);
and is going to be available right after the Performer 2.1 release [iR support
for Performer]. You should expect a posting on this mailing list soon.
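To show how it might hang together, here is a minimal usage sketch of the API
above driven from an application main loop. The mode value, the
buildWarpMesh()/simRunning() helpers and the exact call site of
pfuCorrectDistortion() relative to pfFrame() are my assumptions, not part of
the posted interface:

    #include <Performer/pf.h>
    #include <Performer/pfutil.h>

    #define MY_DISTORT_MODE 0                /* placeholder mode value          */

    extern pfGeoSet *buildWarpMesh(void);    /* hypothetical: user warp mesh    */
    extern int       simRunning(void);       /* hypothetical: main-loop test    */

    void
    runCorrectedChannel(pfChannel *chan)
    {
        pfGeoSet      *mesh = buildWarpMesh();
        pfuDistortion *dist = pfuNewDistortion(chan, MY_DISTORT_MODE, mesh);

        while (simRunning())
        {
            pfSync();                        /* wait for the frame boundary     */
            pfFrame();                       /* planar-projection draw          */
            pfuCorrectDistortion(dist);      /* copy + warp pass, same frame    */
        }
        pfuDeleteDistortion(dist);
    }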
The main approach is to first render a planar projection into a pbuffer and then
perform a framebuffer-to-texture copy. After this copy we render again over an
undistorted mesh (with the proper isoFOVs). This mesh has to be provided by the
user, and you can correct the following effects at once (see the sketch after
this list):
a) Perspective distortion (from planar to spherical isoFOVs)
b) Offset distortion
c) Optical distortion.
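To make the copy-and-warp idea concrete, here is a generic OpenGL sketch of the
technique. It is illustrative only and not the actual pfu implementation (which
renders into a pbuffer and manages Performer state); drawPlanarScene() and
drawWarpMesh() are hypothetical user routines:

    #include <GL/gl.h>

    extern void drawPlanarScene(void);  /* hypothetical: pass 1, planar render  */
    extern void drawWarpMesh(void);     /* hypothetical: user mesh, isoFOV UVs  */

    void
    distortionCorrectedFrame(GLuint tex, int width, int height)
    {
        /* Pass 1: render the planar projection. */
        drawPlanarScene();

        /* Pass 2a: copy the framebuffer into a texture. */
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

        /* Pass 2b: re-render the texture over the user-supplied warp mesh. */
        glEnable(GL_TEXTURE_2D);
        drawWarpMesh();
        glDisable(GL_TEXTURE_2D);
    }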
It is important to point out the following restrictions:
1) The resolution limit for the corrected channel will depend on:
1.1.- frame rate
1.2.- scene depth complexity
1.3.- number of RMs
=> you should expect resolutions below 1088x810 at 60 Hz with four RM6s
at a decent depth complexity (single channel per pipe).
2) There is a small trade-off of mesh resolution vs. scene geometry:
2.1.- For a static mesh it is negligible.
2.2.- For a dynamic mesh it depends on whether you are fill or geometry limited.
3) There is a trade-off between corrected resolution and fill rate.
4) There is NO ADDITIONAL LATENCY; everything is done within the same frame time.
5) We make full use of the BEF FIFO in iR, which in other words means that we
save most of the fill on the last pass.
You should spend one pipe per distorted channel if you are running at 60 Hz.
The spirit of the main design of iR is to change the way in which visual
simulation has been solving classic problems on a COT system and to open new
possibilities and markets for our integrators. We believe that although this
distortion correction system is very decent and generic, the main battle is to
use more pixels, which is why it is a pfu and not a pf.
The approach in which you have an Area Of Interest + a background channel is
fine for us (a two-pipe iR can solve the problem with one pipe for the dome and
another for the AOI + instrumentation), but we think that we now have far more
pixels and fill rate available than ever, which lets us go to more channels
without distortion correction versus fewer channels with d.c.
Of course you can find requirements in which the number of channels will be
too high to be implemented in the projection system, and then I agree that you
will need to go to a distortion-corrected channel configuration.
Anyway, the display system integrators are coming up with nice solutions in
multichannel domes (6-channel and 8-channel configurations) that provide us a
nice arena to play in.
Anyway, it is up to you, integrators and developers, to decide wisely whether
to use the AOI+background approach or a multichannel dome, keeping in mind the
trade-offs related to your choice (resolution, integration problems ... etc.).
Hope this helps.
-Javier
--
*************************************************************************
* Javier Castellar Arribas    * Email: javier++at++asd.sgi.com          *
*                             * Vmail: 3-1589                           *
* Member of Technical Staff   * Phone: (415)-933-1589 | 933-2108 (lab)  *
* Applied Engineering         * Fax: (415)-964-8671                     *
* Advanced Systems Division   * MailStop: 8L-800                        *
*************************************************************************
* Silicon Graphics Inc.                                                 *
* 2011 N. Shoreline Boulevard,                                          *
* Mountain View, California 94043-1386, USA                             *
*************************************************************************
"Violence is the last refuge of the incompetent"  Hari Seldon