Yair Kurzion (yair++at++polygon.engr.sgi.com)
Mon, 29 Nov 1999 12:50:15 -0800 (PST)
> I know that when multiprocessing, the culling process has its own copy of
> the tree.
> But which tree???
All processes downstream from APP (one CULL process per pipe and the ISECT
process) have copies of all the pfUpdatables that APP creates. A pfNode is a
pfUpdatable, so each process downstream from APP has a copy of all the nodes in
the system. Performer does not consider scene graph connectivity (which node
lives on which pfScene) when cloning a node from the APP process to a downstream
process.
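For reference, the process split I'm describing is whatever you asked for with
pfMultiprocess() before pfConfig(). A rough sketch (the pipe count and the
PFMP_* tokens here are just examples, assuming the fully forked mode plus a
forked ISECT):

#include <Performer/pf.h>

void
configurePipeline(void)
{
    pfInit();

    /* Three pipes, a forked CULL and DRAW per pipe, and a forked ISECT.
     * Every process forked here that runs downstream from APP (the three
     * CULLs and ISECT) gets its own copy of each pfUpdatable APP creates. */
    pfMultipipe(3);
    pfMultiprocess(PFMP_APP_CULL_DRAW | PFMP_FORK_ISECT);
    pfConfig();
}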
> Let's say that I'm loading 3 databases and have 3 channels - one for each
> graphics pipe. Let's say that each channel points
> in its scene to only one of the databases. Will the CULL on each pipe have
> a copy of the BIG tree (the tree with all 3
> databases) or only the part of the tree that the channel uses?
> From what I've tested, the CULL has a copy of the BIG tree. Am I correct?
Yes.
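To make the setup concrete, here is roughly what I assume you are doing (the
file names and the libpfdu loader call are just placeholders). Even though each
channel's scene references only one database, every CULL still ends up with
clones of all three trees, for the reason above:

#include <Performer/pf.h>
#include <Performer/pfdu.h>

static const char *db[3] = { "a.pfb", "b.pfb", "c.pfb" };  /* made-up names */

void
buildScenes(void)
{
    int i;

    for (i = 0; i < 3; i++)
    {
        pfScene   *scene = pfNewScene();
        pfChannel *chan  = pfNewChan(pfGetPipe(i));

        /* Each scene holds exactly one of the three databases... */
        pfAddChild(scene, pfdLoadFile(db[i]));
        pfChanScene(chan, scene);
        /* ...but each CULL process still gets a copy of all the nodes. */
    }
}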
> When exactly is the tree copied to the cull?
In the first call to pfSync after you create the new nodes: a new node of the
same type is created, and the contents of the APP node are copied to the new
node.
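So in the usual APP frame loop it looks like this (a sketch; 'scene', the
flags, and the pfDCS are just stand-ins for whatever you actually create):

#include <Performer/pf.h>

void
appLoop(pfScene *scene)
{
    int done = 0, needNewModel = 1;

    while (!done)
    {
        pfSync();                    /* wait for the next frame boundary */

        if (needNewModel)
        {
            /* Created in the APP's pfBuffer; the downstream processes get
             * their copies of this node at the next pfSync. */
            pfDCS *dcs = pfNewDCS();
            pfAddChild(scene, dcs);
            needNewModel = 0;
        }

        pfFrame();                   /* kick off CULL/DRAW for this frame */
    }
}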
> If I'm correct - is there a way that I can reduce this big waste of memory?
> (Because if this is correct, and let's say that the
> size of each database is approximately n, I have 12n (3 for the APP and 3 for
> each CULL) instead of 6n (3 for the APP and 1
> for each CULL).)
Performer manages a single node-registry (aka pfBuffer) in the APP process.
It doesn't support a per-pipe pfBuffer. I don't think the API provides a way
around this.
-yair
--
\_________ \_____ \__ \__ \_____ Yair Kurzion
\_________ \_____ \__ \__ \_____ yair++at++sgi.com
\__ \__ \____\__ \__ http://reality.sgi.com/yair
\__ \__ \__ Work: (650) 933-6502
\__ \__ \__ Home: (408) 226-9771
\__ \__ \__