Re: float precision


Rob Jenkins (robj++at++quid)
Wed, 27 Nov 1996 12:04:16 -0800


On Nov 27, 6:09pm, BOCCARA Michael wrote:
> Subject: float precision
>
> Hi Performers,
>
> I have a fairly general question: what would you propose for getting
> access to double precision where graphics are concerned?
>
> We are making space simulations with Performer 2.0 on an Onyx RE2.
> According to the Performer Programming Guide, the graphics subsystem
> supports only single-precision floating point, not double.
> The problem is that in space simulations we have very large differences
> of scale between objects, for example between a spacecraft and the Earth.
> So if we want to be able to see both a flying spacecraft in space and
> the Earth, we have to set a very high ratio between the near and far
> clipping planes, which causes flickering in single precision.
> Moreover, we have the same precision problem with the 3D positioning of
> objects. Indeed, we need a precision of about 0.1 meter for an object
> positioned at about 10,000 kilometers from the Earth's center. We don't
> have this precision with float variables, and so the result is a kind of
> shivering of objects.
> The solution would be to work in double precision in the graphics
> system, which does not seem to be permitted.
>
> Does someone have a trick for working around these float/double
> precision problems?
>
> Does the Infinite Reality graphics support double precision?
>
> Does the Infinite Reality graphics support double precision ?
>
> Thanks for your replies,
> Kind regards,
>
> Michael Boccara
> Software Engineer
> Aerospatiale, France
>-- End of excerpt from BOCCARA Michael

Michael Jones posted a discussion on the subject of very large coordinate
systems and single precision a while back; I'll paste it at the end.

On the subject of near:far clipping plane ratios: as you obviously realise, the
zbuffer (in a perspective view) has resolution that is inversely proportional
to Z, i.e. the further away from the eye you get, the less resolution in Z you
have. Keeping a low ratio between near and far helps spread this resolution
better; ideally near:far should be kept <= 1:1000. The easiest way to do
that is by pushing near out as far as possible (changes to far don't affect
the ratio as much). It may be worth experimenting with dynamic near and far
values: do some coarse evaluation each frame to work out how far you can push
the near plane out. In some cases use of the Polygon Offset extension may be
appropriate. This problem is reduced on Infinite Reality by a Z buffer whose
resolution is spread more uniformly across its depth. The Reference Plane
extension is supported on Infinite Reality too; see the man page for
glReferencePlaneSGIX(3G).

Cheers
Rob
---- mtj's posting (DP and SP are double and single precision respectively) ----

:When will Performer provide access to double precision in both
:modeling and matrix operations similar to those in OpenGL (glRotated,
:glTranslated, glLoadMatrixd, glMultMatrixd, etc.)?

Careful here. OpenGL lets you pass double-precision values into the
functions, but all implementations that I'm aware of immediately
convert them to single-precision floating point before taking any
further steps, such as translations, matrix stack operations, etc.
To be clear, the results from using SP and DP would be exactly the
same on every machine that IRIS Performer operates on, were we to
support double precision.

Given that, there is one argument for and one against direct Performer
support for double precision. For: an application may use the geoset
data for its own purposes and need the extra precision. This is a
good argument. Against: we'd get half of the effective bus bandwidth
since we'd send 8 bytes rather than 4 for each vertex, and the GFX
subsystem would be busy tossing out the extra bytes as its first
task. This would mean lower performance. Our choice: single-precision,
lower data traffic, and greater performance until such time as the
machine could really use the extra precision.

:We are developing a "real" world simulation system using a geocentric
:coordinate system and, therefore, require the accuracy of double precision.

I'd agree with "desire", but not with "require". The reason is...

:Currently, we are applying an offset to our terrain database in order to
:gain a few decimal places in floating point precision.

...that this approach works well when carefully implemented. What you
need are some non-performer DP values:
   a DP eyepoint location,
   a DP global origin, and,
   a DP local origin for each big chunk of your world.
(The maximum chunksize should be limited to about 1/2 the dynamic range
of SP given your chosen model coordinate system epsilon).

In operation (method A):
   put the eye wherever you want (DP).
   put the global origin at the eye (DP).
   set the performer eye to 0,0,0. (SP is fine for this ;-)
   for each chunk
      compute chunkOffset as chunkOrigin - globalOrigin (DP)
      (this is the happy part ... even though the two numbers
      above are DP, the difference fits into a SP number nicely)
      set performer DCS for chunk to chunkOffset (SP)
   pfFrame();

Alternate method (saves one DCS, under the eye). Make sure you *really*
understand method A before thinking about this.
   put the eye wherever you want (DP).
   figure out which chunk is under the eye -- the "localChunk".
   set the global origin to the localChunk's origin. (DP)
   compute eyeOffset as eyepoint - globalOrigin;
      (though these operands are DP, their difference is
      representable in SP)
   set the performer eye to eyeOffset
   for each chunk
      compute chunkOffset as chunkOrigin - globalOrigin (DP)
      (this is the happy part ... even though the two numbers
      above are DP, the difference fits into a SP number nicely)
      set performer DCS for chunk to chunkOffset (SP)
   pfFrame();

Once it all makes sense to you, implement method B if there is much
complexity in your database. The advantage of B is that there's no
matrix (or just an identity matrix) in the performer scene graph above
it, so that libpr geoset bounding box culling will be performed. This
can be a big win in the local area when many cultural features are
present and geodes contain many geosets.

Someday, perhaps Performer will provide double precision values
for the eyepoint and global offset in the pfChannel, and either a
special pfOrigin node or a new semantic for DCS nodes to make the
implementation of methods A and B automatic. Until then, the code
outlined above is identical to what we'd do--just as efficient and
probably better since it makes the relationships between the chunk
dynamic ranges and SP/DP precisions more exposed and thus likely
more widely understood.

Michael "Think Globally, Offset Locally" Jones

P.S. the case for DP geoset data so that CAD modelling tools, FEA
     programs, and such have access to the high-precision data
     that they need is a good one and not one that's lost on us.

---------------------

-- 
________________________________________________________________
Rob Jenkins mailto:robj++at++csd.sgi.com
Silicon Graphics, Mtn View, California, USA
=======================================================================
List Archives, FAQ, FTP:  http://www.sgi.com/Technology/Performer/
            Submissions:  info-performer++at++sgi.com
        Admin. requests:  info-performer-request++at++sgi.com


This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:54:01 PDT

This message has been cleansed for anti-spam protection. Replace '++at++' in any mail addresses with the '@' symbol.