Re: Memory management question


Sharon Clay (src++at++rose.asd.sgi.com)
Tue, 30 Apr 1996 01:02:26 -0700


> On Apr 19, 6:20pm, Michael T. Jones wrote:
> Subject: Re: Memory management question
->From guest++at++holodeck.csd.sgi.com Fri Apr 19 19:00:34 1996
->From: "Michael T. Jones" <mtj++at++babar>
->Date: Fri, 19 Apr 1996 18:20:59 -0700
->In-Reply-To: Stephen_Gifford++at++MAPS.CS.CMU.EDU
-> "Re: Memory management question" (Apr 19, 4:26pm)
->X-Mailer: Z-Mail (3.2.3 08feb96 MediaMail)
->To: Stephen_Gifford++at++MAPS.CS.CMU.EDU, info-performer++at++sgi.sgi.com
->Subject: Re: Memory management question
->
->On Apr 19, 4:26pm, Stephen_Gifford++at++MAPS.CS.CMU.EDU wrote:
->> Subject: Re: Memory management question
->
->> The pfMultiprocess mode has been PFMP_APPCULLDRAW (0) through
->> all of these attempts. Since we're not actually drawing the geometry,
->> only using pf functions to manipulate it, pfDraw is never being
->> called. The program is simply running out of shared memory after
->> growing larger and larger in a linear fashion.
->
->Have you tried libdmalloc as a diagnostic approach?
->
->> This looks to me like a core leak of some kind. I'm curious
->> if anyone else has had success with similar work: processing 200+MB
->> of data in a piece by piece fashion (iterating over a rectangular grid
->> of load modules read in from disk, for example). In particular,
->> anything where a Performer related core leak would cause a real
->> problem.
->
->We don't have known memory leaks, so there is no quick answer of the
->"just get patch nnn and it will be ok" type. We have done extensive
->testing and feel that this should not be a leakage situation from in
->the Performer libraries themselves.

I will second the comment that we have no known memory leaks. However,
memory growth can occur for several reasons:

In IRIX 5.3 there were a couple of IRIX bugs that could cause IRIX itself
    to grow. If you are having memory management problems, the first
    thing to do is to run bloatview (for IRIX 5.3) or gmemusage (for IRIX 6.2)
    and see who is actually growing - your app or IRIX.
    Then, click on your process to see the full display and find out
    whether it is the swap area (which is where we put the shared memory
    arena) or the sbreak (heap) area of a single process that is growing.

Performer almost always allocates data from the arena. If that seems to
be growing, it may be due to:
1) fragmentation
2) the amount of space needed to hold data as you bring in new files and
        slowly asynchronously delete others may be much more than you
        expect; it can grow in jumps, but may not actually grow without
        bound.

Things to try first:
    o set some options to malloc to improve compaction of memory allocation
        and reduce fragmentation:
        amallopt(M_MXCHK, 1000000, pfGetSharedArena());
        amallopt(M_FREEHD, 1, pfGetSharedArena());
    o track the growth of the arena to isolate when it is growing.
        amallinfo() will give you back size info for the arena.

src.

-- 
-----{-----{---++at++   -----{----{---++at++   -----{----{---++at++   -----{----{---++at++
Sharon Rose Clay (Fischler) - Silicon Graphics, Advanced Systems Dev.
src++at++sgi.com  (415) 933 - 1002  FAX: (415) 965 - 2658  MS 8U-590
-----{-----{---++at++   -----{----{---++at++   -----{----{---++at++   -----{----{---++at++


This archive was generated by hypermail 2.0b2 on Mon Aug 10 1998 - 17:52:49 PDT

This message has been cleansed for anti-spam protection. Replace '++at++' in any mail addresses with the '@' symbol.