Shared Memory
Jeff Brickley (jbrickley++at++lmwsmr.lesc.lockheed.com)
Wed, 02 Oct 1996 15:46:18 -0700
I have a very large shared-memory structure used to pass a great deal of
data between application and draw processes running on separate
processors. I have just been tasked with adding more data that will need
to be shared between the application and draw processes, so I have begun
to worry about just how much data I'm sharing and what overhead is
associated with it. I just passed 240K of shared data (allocated with
pfMalloc) and am heading quickly toward 300K. What is my transfer
overhead between CPUs with this shared structure?
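For context, the block is allocated out of Performer's shared arena
roughly like this (the SharedData struct and its fields here are a
simplified stand-in, not my real layout):

    #include <Performer/pf.h>

    /* Simplified stand-in for the real ~300K shared block. */
    typedef struct
    {
        pfMatrix placements[256];    /* model placement matrices  */
        float    mathResults[4096];  /* output from the math CPUs */
        /* ... everything else shared by app and draw ...         */
    } SharedData;

    SharedData *Shared;

    int
    main(int argc, char *argv[])
    {
        pfInit();

        /* Allocate from the shared arena after pfInit() (which
         * creates the arena) and before pfConfig() (which forks the
         * app, cull, and draw processes), so every child inherits
         * both the mapping and the pointer value. */
        Shared = (SharedData *)pfMalloc(sizeof(SharedData),
                                        pfGetSharedArena());

        pfConfig();
        /* ... */
        return 0;
    }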
I am using an Infinite Reality with 16 R10000 CPUs and Performer 2.1.
My application uses a minimum of 5 app processes on different CPUs (only
one does model placement and other geometry operations; the other 4+
handle the heavier math for calculating model placement) with dual pipes
(which I gather means one draw process per pipe?). Roughly 60% of the
data moves among the 5 app processes; the draw processes exchange the
remainder with the main app process. Right now this is one allocation
that exposes the bulk of EVERYTHING to ALL processes, which is highly
inefficient, but for lack of time I have never changed it. Should I
worry about moving ~300K of data among 8 to 16 processors?
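For what it's worth, the fix I keep meaning to make is to split that one
allocation into per-consumer pieces, so the draw processes never touch
the app-only data at all. A rough sketch (struct names and sizes are
hypothetical):

    #include <Performer/pf.h>

    /* App-only intermediate math results; draw never reads these. */
    typedef struct { float mathScratch[8192]; } AppOnlyData;

    /* Data the draw processes actually consume. */
    typedef struct { pfMatrix placements[256]; } AppDrawData;

    AppOnlyData *AppShared;
    AppDrawData *DrawShared;

    void
    AllocSharedBlocks(void)  /* call between pfInit() and pfConfig() */
    {
        void *arena = pfGetSharedArena();

        AppShared  = (AppOnlyData *)pfMalloc(sizeof(AppOnlyData), arena);
        DrawShared = (AppDrawData *)pfMalloc(sizeof(AppDrawData), arena);
    }

Since the memory is cache-coherent, only the lines a process actually
reads or writes should move between CPUs anyway, but splitting the block
would at least keep app-only traffic away from the draw CPUs.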
Jeffry J. Brickley
3D Systems Programmer
Lockheed Martin
White Sands Missile Range
=======================================================================
List Archives, FAQ, FTP: http://www.sgi.com/Technology/Performer/
Submissions: info-performer++at++sgi.com
Admin. requests: info-performer-request++at++sgi.com