Jude Anthony (jude++at++p3.enzian.com)
Thu, 5 Dec 1996 15:43:06 EST
I'm still working on the object size problem. As it turns out,
object sizes are the same in our perfly and Vega applications, and
they're both off a little bit. They show up at around 97% of their
intended size, with the error getting worse the farther away the
object is.
I've already been burned once by a bad calculation method, which made
us believe object sizes were as bad as 64% of the intended value. So
I'm wondering if I'm doing this correctly.
Our old method worked by comparing the angle (twice the inverse
tangent of the ratio of the object's length to the distance from the
observer) to the known field of view, 45 degrees. This turned out
not to generate linear values (twice the distance didn't give half
the size), so I finally rejected it.
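Roughly, the old estimate amounted to something like this C sketch (the
half-length inside the atan may not match our exact code, and the numbers
are only there to show the behaviour):

#include <math.h>
#include <stdio.h>

#define FOV_DEG 45.0
#define PI      3.14159265358979323846

/* Fraction of the 45-degree field of view subtended by an object of
 * length `len` at distance `range`. */
static double angle_fraction(double len, double range)
{
    double subtended = 2.0 * atan((len * 0.5) / range);  /* radians */
    return subtended / (FOV_DEG * PI / 180.0);
}

int main(void)
{
    /* Doubling the range does not give exactly half the fraction,
     * which is the non-linearity described above. */
    printf("range 20: %f\n", angle_fraction(10.0, 20.0)); /* ~0.624 */
    printf("range 40: %f\n", angle_fraction(10.0, 40.0)); /* ~0.317, not 0.312 */
    return 0;
}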
Our new method relies on the ratios of similar triangles instead. Like this:
         L2
  \===============/           =
   \             /            |
    \           /             |
     \         /              |
      \  L1   /               | Range
       \-----/      -         |
        \   /       | Eye     |
         \ /        | Range   |
          O         -         =
Here L1 is the size of our screen, L2 is the length of the object,
EyeRange is the distance from our actual eyepoint to the screen,
and Range is the distance from our (Performer) eyepoint to the
object. When we try to decide how large an object should be on the
screen, we calculate:
L1 / EyeRange == L2 / Range
and solve for L1 as the unknown.
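In C, that comes out to something like this sketch (the names and numbers
are only illustrative):

#include <stdio.h>

/* L1 / EyeRange == L2 / Range, solved for L1 (the size on screen). */
static double screen_size(double objectLen,  /* L2: object length           */
                          double range,      /* eyepoint-to-object distance */
                          double eyeRange)   /* eyepoint-to-screen distance */
{
    return objectLen * eyeRange / range;
}

int main(void)
{
    double eyeRange  = 0.5;   /* illustrative eyepoint-to-screen distance */
    double objectLen = 10.0;  /* L2 */

    /* Linear as expected: doubling the range halves the on-screen size. */
    printf("range 100: %f\n", screen_size(objectLen, 100.0, eyeRange)); /* 0.050 */
    printf("range 200: %f\n", screen_size(objectLen, 200.0, eyeRange)); /* 0.025 */
    return 0;
}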
This is the model that gives us the much smaller errors (objects
around 97% of calculated size). At least it's linear. Is this the
correct way to find the size of an object on the screen, or am I missing
something?
Thanks in advance (again),
Jude Anthony
jude++at++p3.enzian.com