Re: Need a Math Lesson


Rob Jenkins (robj++at++quid)
Thu, 5 Dec 1996 15:42:44 -0800


On Dec 5, 3:43pm, Jude Anthony wrote:
> Subject: Need a Math Lesson
> Hi again everybody.
>
> I'm still working on the object size problem. As it turns out,
> object sizes are the same in our perfly and Vega applications, and
> they're both off a little bit. They show up around 97% of their
> intended size, with error getting worse the farther out the object
> gets.
>

If one of the values you use relies on information in the Z buffer in some way,
that might explain this behaviour: in a perspective viewing frustum the
resolution of the Z buffer falls off with distance in Z from the eye. The
*ratio* between the near and far clipping planes controls the extent of this
non-linearity, so keeping near:far as small as possible (rule of thumb, <=
1000) helps. Note that a small increase in the near value can reduce that ratio
a lot - if you push the near clip out further, does the error get smaller?
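
To get a feel for the effect, here's a rough back-of-the-envelope sketch
(assuming a standard OpenGL-style perspective projection and a 24-bit depth
buffer - not actual Performer API calls, and the ranges are made-up numbers)
of how much eye-space distance one depth-buffer step covers at a given range,
for two different near plane values:

#include <math.h>
#include <stdio.h>

/* Approximate eye-space distance covered by one depth-buffer step at
 * distance d from the eye, for a standard perspective projection with
 * clip planes [nearp, farp] and a depth buffer of 'bits' bits.        */
static double depth_step(double nearp, double farp, double d, int bits)
{
    double steps = pow(2.0, bits) - 1.0;   /* representable depth values */
    return (farp - nearp) * d * d / (farp * nearp * steps);
}

int main(void)
{
    /* same far plane, two different near planes, object at range 5000 */
    printf("near=1 : one depth step covers %g units at range 5000\n",
           depth_step(1.0, 10000.0, 5000.0, 24));
    printf("near=10: one depth step covers %g units at range 5000\n",
           depth_step(10.0, 10000.0, 5000.0, 24));
    return 0;
}

Pushing the near plane from 1 to 10 with the same far plane makes each depth
step roughly ten times finer at that range, which is the near:far ratio point
above.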

> I've already been burned once by a bad calculation method, which made
> us believe object sizes were as bad as 64% of the intended value. So
> I'm wondering if I'm doing this correctly.
>
> Our old method worked by comparing the angle (twice the inverse
> tangent of the ratio of the object's length to the distance from the
> observer) to the known field of view, 45 degrees. This turned out
> not to generate linear values (twice the distance didn't give half
> the size), so I finally rejected it.
>
> Our new method takes advantage of different ratios. Like this:
> L2
> \===============/ =
> \ / |
> \ / |
> \ / |
> \ L1 / | Range
> \-----/ - |
> \ / | Eye |
> \ / | Range |
> O - =
>
>
> So that L1 is the size of our screen, L2 is the length of the object,
> the EyeRange is the distance from our actual eyepoint to the screen,
> and Range is the distance from our (performer) eyepoint to the
> object. When we try to decide how large an object should be on the
> screen, we calculate:
> L1 / EyeRange == L2 / Range
> and keep L1 as an unknown.
>
> This is the model that gives us the much smaller errors (objects
> around 97% of calculated size). At least it's linear. Is this the
> correct way to find the size of an object on the screen, or am I missing
> something?
>

A perspective frustum with a given FOV will look distorted if the eye isn't a
certain distance from the screen - do you choose your eye distance based on
that, or is it arbitrary? The OpenGL Programming Guide discusses this around
p. 92 (it's online, installed from gl_dev.books, if you don't have a hard
copy), so you may be suffering because of that.
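
For reference, the undistorted eye distance falls straight out of the FOV with
a bit of trig - a minimal sketch, assuming the 45 degree vertical FOV you
mention and a made-up physical screen height:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double fov_deg       = 45.0;   /* vertical field of view from the post   */
    double screen_height = 0.30;   /* physical screen height - made-up value */

    /* eye distance at which the real screen subtends exactly fov_deg */
    double eye_dist = (screen_height / 2.0) / tan(fov_deg * M_PI / 360.0);

    printf("undistorted eye distance: %g (same units as screen_height)\n",
           eye_dist);
    return 0;
}

If your actual viewing distance differs from that, the image is still correct
geometry but it won't subtend the angles you expect from where you sit.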

I guess there's also the issue of screen resolution: ultimately an object can
only be drawn to the nearest pixel. Say a line in the distance is 2 pixels
wide; you move further away and it then gets drawn 1 pixel wide, but you
probably haven't doubled your distance from it. You ought to be able to get a
pretty good calculation of how big an object *should* look if you know its
distance from you accurately, but that value may always differ a bit from how
big it is actually drawn - I'm sure others have tackled this and may post more
specific thoughts.
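
Something like this crude sketch (45 degree vertical FOV, with a made-up
1024-pixel viewport and a 2-unit object) shows both the ideal projected size
and the whole-pixel size it actually gets drawn at:

#include <math.h>
#include <stdio.h>

/* Ideal on-screen height of an object of size 'obj' at distance 'range',
 * for a vertical FOV of 'fov_deg' degrees and a viewport 'vp' pixels tall. */
static double ideal_pixels(double obj, double range, double fov_deg, double vp)
{
    return obj * vp / (2.0 * range * tan(fov_deg * M_PI / 360.0));
}

int main(void)
{
    double fov = 45.0, vp = 1024.0, obj = 2.0, range;

    for (range = 500.0; range <= 2000.0; range *= 2.0) {
        double px = ideal_pixels(obj, range, fov, vp);
        printf("range %6.0f: ideal %.3f px, drawn about %.0f px\n",
               range, px, floor(px + 0.5));
    }
    return 0;
}

The ideal size halves as the range doubles, but the drawn size can only jump
in whole pixels, so small measured errors near the far end are expected.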

Cheers
Rob

-- 
________________________________________________________________
Rob Jenkins mailto:robj++at++csd.sgi.com
Silicon Graphics, Mtn View, California, USA
=======================================================================
List Archives, FAQ, FTP:  http://www.sgi.com/Technology/Performer/
            Submissions:  info-performer++at++sgi.com
        Admin. requests:  info-performer-request++at++sgi.com


