From: Praveen Bhaniramka (praveenb++at++sgi.com)
Date: 09/09/2004 06:14:54
Hi Bram, (CC'ing info-volumizer)
Hope you find the following info useful (IOW, hope the following does not
confuse you even more!) -
(Terminology used in subsequent text - multi-pipe system => system with
multiple graphics cards)
> -multipipe
OpenGL Multipipe (http://www.sgi.com/products/software/multipipe/) - OMP is
useful for running "multi-pipe unaware" applications on multi-pipe systems.
This is completely application transparent.
> -multipipe/sdk (which seems to be something different that simply an sdk
> for the aforementioned multipipe???)
OpenGL Multipipe SDK (http://www.sgi.com/products/software/multipipe/sdk) -
MPK is useful for creating "multi-pipe aware" applications so that they can
run on a multi-pipe system. This is NOT application transparent: you need to
modify your application to include MPK calls and link with the MPK run-time
library.
> -monster mode
Monster mode, or database decomposition - you divide your data set across
multiple graphics pipes, generate partial images on each of the pipes, and
then composite these results to generate the final output image.
See figure on http://www.sgi.com/products/software/volumizer/techsum.html
for an example.
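As a rough sketch of the data-division step (hypothetical helper, not the Volumizer or MPK API): the volume is cut into contiguous slabs, one per pipe, and each pipe renders only its slab:

```python
def split_volume(depth, num_pipes):
    """Divide a volume of `depth` slices into contiguous slabs, one per
    graphics pipe (database / monster-mode decomposition).
    Returns a list of (start, end) slice ranges, end exclusive."""
    base, extra = divmod(depth, num_pipes)
    slabs, start = [], 0
    for pipe in range(num_pipes):
        size = base + (1 if pipe < extra else 0)  # spread any remainder
        slabs.append((start, start + size))
        start += size
    return slabs

# A 256-slice volume across 4 pipes: each pipe renders its own slab to a
# partial image, and the partial images are then composited.
print(split_volume(256, 4))  # [(0, 64), (64, 128), (128, 192), (192, 256)]
```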
> -scalable graphics
The concept of using multiple graphics pipes to do parallel rendering so
that your application's performance improves as you add more graphics pipes
to the system. In order to scale, your application would need to "decompose
the rendering process" among the graphics pipes, so that each pipe only does
part of the rendering. The output of these "decomposed" tasks are "partial
images" generated on each of the pipes, which are then "merged/composited"
to generate the final image. Monster mode decomposition mentioned above is
_one possible_ technique to achieve performance scaling. Others are 2D
(screen) decomposition, DPLEX (time decomposition), etc.
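A minimal sketch of the 2D (screen) decomposition case, with hypothetical helper names: the output window is split into tiles, each pipe renders only its tile, and the tiles are pasted back together into the final image:

```python
def assemble_tiles(width, height, tiles):
    """Reassemble a final image from per-pipe screen tiles.
    `tiles` maps an (x0, y0) tile origin to a 2D list of pixel rows
    rendered by one pipe; the result is a row-major 2D pixel list."""
    image = [[None] * width for _ in range(height)]
    for (x0, y0), tile in tiles.items():
        for dy, row in enumerate(tile):
            for dx, pixel in enumerate(row):
                image[y0 + dy][x0 + dx] = pixel
    return image

# Two pipes, each rendering one half of a 4x2 screen.
left  = [[1, 1], [1, 1]]
right = [[2, 2], [2, 2]]
final = assemble_tiles(4, 2, {(0, 0): left, (2, 0): right})
print(final)  # [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Each pipe fills fewer pixels, which is why this mode helps a pixel-fill-limited application.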
> -compositors
A piece of hardware which implements the compositing operation mentioned
above (the process of taking partial images and putting them back together)
by taking the DVI output from the graphics pipes directly. Alternatively,
the application could do a glReadPixels to grab the partial images from the
pipes and then a glDrawPixels to do the final composition. This form of
"software composition" is implemented inside MPK, mentioned above.
> The reason I'm asking:
> We have this onyx4, 8 pipes, 4 compositors, which we want to use
> with vizserver.
>
> I need maximum performance to drive a single-windowed volume vis app.
> (volumizer based).
>
> Do I deploy the compositors for this, and do dynamic screen space
> division? Would I need to cascade 3 compositors as 4->1 + 4->1 ===> 1 ?
> Preferably I would do this without the compositors, as we use compositing
> in stereo-mode for 4 cave screens.
This really depends on what your application is limited by (=> the size
of the data set AND the size of the output image). volview is a sample
application which ships with Volumizer. It implements both 2D decomposition
(dynamic screen tiling) and DB decomposition (aka monster mode) using MPK.
2D decomposition would only help if your application is pixel fill limited
(since it divides the number of pixels generated on each pipe). Typically,
volume rendering applications are pixel-fill limited. DB decomposition
however would help with pixel fill limitation AND help scaling texture
memory. For example, if your data set exceeds 256 MB, it will NOT fit in one
graphics card's local memory, and you will need to do DB decomposition in
order to get maximum performance out of the system. If the data fits in the
card's local memory, you might as well use 2D decomposition only, since
it is much easier to implement. DB decomposition requires a lot more work
since you need to divide the data and then composite and blend the images in
a back-to-front sorted order.
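As a sketch of that ordering step (hypothetical names, single pixels standing in for whole partial images): sort the slabs by distance from the viewer and fold them together back to front with "over" blending:

```python
def composite_back_to_front(slabs, viewer_z):
    """Blend per-slab partial results (premultiplied RGBA, one pixel per
    slab for brevity) in back-to-front order relative to the viewer.
    Each slab is (slab_center_z, rgba)."""
    def over(front, back):
        fr, fg, fb, fa = front
        br, bg, bb, ba = back
        k = 1.0 - fa
        return (fr + k * br, fg + k * bg, fb + k * bb, fa + k * ba)

    # Farthest slab from the viewer comes first (back of the volume).
    ordered = sorted(slabs, key=lambda s: abs(s[0] - viewer_z), reverse=True)
    result = (0.0, 0.0, 0.0, 0.0)
    for _, rgba in ordered:
        result = over(rgba, result)  # each nearer slab goes in front
    return result

# Near red slab at z=0, far blue slab at z=10, viewer at z=-5.
print(composite_back_to_front(
    [(0.0, (0.5, 0.0, 0.0, 0.5)), (10.0, (0.0, 0.0, 0.5, 0.5))], -5.0))
# -> (0.5, 0.0, 0.25, 0.75)
```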
The compositor does NOT support DB decomposition. Also, I am not sure how
you would Vizserve the DVI output from the compositor unless you have
special hardware to grab the DVI output. You can implement both 2D and DB
decompositions using software composition techniques mentioned above. If
your application is written using MPK, the composition step comes
transparently since MPK does all the work there. If not, you will have to
implement the glReadPixels/glDrawPixels part of the compositing pipeline
yourself.
Let me know if you have any further questions regarding this.
- Praveen
This archive was generated by hypermail 2b29 : Thu Sep 09 2004 - 06:14:57 PDT