Robert Stein (rstein++at++ncsa.uiuc.edu)
Tue, 03 Aug 1999 11:49:45 -0500
I have an application that draws adaptive terrain... and I'm trying to
optimize it for performance on our Onyx2 IR2 system... When I display the
pfStats data there appears a line like the following:
msecs: latency=199.6 app=0.5 cull=5.6 draw=57.0 compute=0.0 dbase=0.0
I understand what all the numbers are except for "latency". Where does this
number come from, and how can I reduce it?
Currently the scenegraph is set up such that, for the terrain, each pfGeoSet
has a single tristrip of length 128. Each pfGeoSet also gets put into its own
pfGeode, and they are all assigned the same pfGeoState... These pfGeodes
are put into a pfGroup node and the whole thing is stuck into the scene...
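For reference, the construction looks roughly like this (just a sketch using
the standard Performer C calls; coords, numPatches, sharedGState and the
helper name are placeholders rather than my actual code):

#include <Performer/pf.h>

/* Sketch of the current setup: one 128-vertex tristrip per pfGeoSet,
 * one pfGeoSet per pfGeode, all of them under a single pfGroup.
 * coords, numPatches and sharedGState stand in for my real terrain data. */
static pfGroup *
buildTerrainGroup(pfVec3 **coords, int numPatches, pfGeoState *sharedGState)
{
    void    *arena   = pfGetSharedArena();
    pfGroup *terrain = pfNewGroup();
    int      i;

    for (i = 0; i < numPatches; i++)
    {
        pfGeoSet *gset    = pfNewGSet(arena);
        pfGeode  *geode   = pfNewGeode();
        int      *lengths = (int *)pfMalloc(sizeof(int), arena);

        lengths[0] = 128;                         /* one strip, 128 verts   */
        pfGSetPrimType(gset, PFGS_TRISTRIPS);
        pfGSetNumPrims(gset, 1);                  /* one (1) tristrip       */
        pfGSetPrimLengths(gset, lengths);
        pfGSetAttr(gset, PFGS_COORD3, PFGS_PER_VERTEX, coords[i], NULL);
        pfGSetGState(gset, sharedGState);         /* same pfGeoState on all */

        pfAddGSet(geode, gset);                   /* its own pfGeode        */
        pfAddChild(terrain, geode);
    }
    return terrain;                               /* gets added to the scene */
}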
This is obviously a braindead way of structuring a scenegraph... but other
than increasing the culling cost (which is not high, as seen above) and
intersection testing (my data structure for the terrain handles the
intersection testing quickly), are there other places where this is costing
me? How should such a scene graph be constructed for optimal performance?
Does strip length matter? Num strips per GeoSet? GeoSets per Geode? etc...
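For concreteness, by "num strips per GeoSet" I mean packing several strips
into one pfGeoSet, along these lines (again just a placeholder sketch, not my
real code):

/* Several tristrips of stripLen verts each packed into one pfGeoSet,
 * instead of one pfGeoSet per strip.  Names are placeholders. */
static pfGeoSet *
buildPackedGSet(void *arena, pfVec3 *coords, int numStrips, int stripLen,
                pfGeoState *gstate)
{
    pfGeoSet *gset    = pfNewGSet(arena);
    int      *lengths = (int *)pfMalloc(numStrips * sizeof(int), arena);
    int       i;

    for (i = 0; i < numStrips; i++)
        lengths[i] = stripLen;                  /* e.g. 128 verts per strip */

    pfGSetPrimType(gset, PFGS_TRISTRIPS);
    pfGSetNumPrims(gset, numStrips);            /* many strips, one GeoSet  */
    pfGSetPrimLengths(gset, lengths);
    pfGSetAttr(gset, PFGS_COORD3, PFGS_PER_VERTEX, coords, NULL);
    pfGSetGState(gset, gstate);
    return gset;
}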
Thanks in advance for your help... Also, while I'm at it, let me sound the
collective YIPPEE for SGI's decision to port Performer to the Linux
platform! :)
Sincerely,
Robert Stein
Robert J. Stein
National Center For Supercomputing Applications
405 N. Matthews, Urbana, IL
(217) 244-7584
rstein++at++ncsa.uiuc.edu