From: Ong Tze Lin (tzelin++at++sgi.com)
Date: 04/20/2003 19:31:19
It's most likely a known issue with allocating large shared memory arenas,
caused by the DSO library mapping. Basically, the way some libraries are
mapped into memory leaves no contiguous memory segment large enough for
sizes of ~1 GB. One way to get around that is, indeed, to use 64-bit
perfly, which gives you a >2 GB address space to choose from.
Otherwise, there is a tool that ships with Performer that does the remaps.
I forget the name, but it's something like "remapLibs". Originally
contributed by Ran Yakir, it's a script that remaps each library one by
one and saves the copies to a new subdirectory. You then have to set
LD_LIBRARY_PATH appropriately to pick up the new, remapped libraries.
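For example (the output directory below is hypothetical; check where the script on your install actually writes the remapped copies):

```shell
# Put the directory with the remapped libraries first on the search path.
LD_LIBRARY_PATH="$HOME/remapped_libs:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
# ldd /usr/sbin/perfly   # then confirm the remapped copies are picked up
```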
You should check the actual arena size being created. Set PFNFYLEVEL to
about 5, then check the allocation messages to make sure you're getting
what you asked for: when pf can't allocate an arena of the requested
size, it recursively halves the request until it finds a slot that fits.
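A sketch of the relevant environment setup, using the values from the original mail (sh/ksh syntax; under csh use "setenv VAR value" instead):

```shell
PFNFYLEVEL=5             # verbose notification: shows arena allocation messages
PFSHAREDSIZE=2000000000  # requested shared arena size, in bytes
PFSEMASIZE=1000000       # semaphore arena size, in bytes (default 262144)
export PFNFYLEVEL PFSHAREDSIZE PFSEMASIZE
# perfly model.pfb       # then watch the output for the arena size actually granted
```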
Cheers,
Tze Lin
--
Ong Tze Lin | Principal Consultant | SGI ICON Region
(D) +65 6771 0219 (F) +65 6779 3650 (H) +65 9832 7125
89 Science Park Drive #03-06 Singapore 118 261
> -----Original Message-----
> From: Bram Stolk [mailto:b.stolk++at++chello.nl]
> Sent: Friday, April 18, 2003 4:56 PM
> To: V. Sundararajan
> Cc: info-performer++at++sgi.com
> Subject: Re: [info-performer] Loading of inventor data using perfly
>
>
> Hi Sundar,
>
> Last time I dealt with iv exports from Catia, it turned
> out that the exported file contained 5 levels of detail.
> If you manage to remove the highest LOD, you can probably
> cut the data size down by half. Alternatively, keeping
> only the highest one also tends to halve the data size.
>
> In my case, I opted to keep a medium level of detail, for
> an even bigger data reduction. Having said this, I must
> warn you that the lower LODs from our Catia exports showed
> mangled geometry, due to incorrect mesh simplification.
> We did all this 3 years ago, and I do not know which Catia
> versions we dealt with.
>
> Bram
>
> On Thu, 17 Apr 2003 22:44:05 +0530
> "V. Sundararajan" <sundar++at++sgi.com> wrote:
>
> > Hello,
> >
> > A customer has his aircraft data, created using Catia and converted
> > to Inventor and pfb formats. The data is spread over 1025 files, and
> > the approximate number of triangles is about 500 million.
> >
> > The customer has an Onyx 3200 with IR3 graphics (8 GB memory and
> > 16 GB swap). He tried to load the data with perfly_n32 and could not
> > fit it all into memory. While loading, perfly gave an error asking to
> > increase PFSHAREDSIZE and PFSEMASIZE. The customer set the respective
> > values to 2,000,000,000 bytes and 1,000,000 bytes (the default was
> > 262144). Even then perfly did not load all the data and asked for an
> > increase of PFSHAREDSIZE beyond 976,562K; it did not allocate the
> > full 2,000,000,000 bytes.
> > The customer was able to load and view the data with perfly_n64, but
> > the camera operations were comparatively slow.
> >
> > Could anyone throw light on what the Performer environment variable
> > settings should be? There is also a variable to set the shared
> > memory base.
> >
> > Many Thanks in advance,
> >
> > Regards,
> > Sundar
> >
> >
> > -------------------------------------------------------------------------
> > List Archives, Info, FAQ: http://www.sgi.com/software/performer/
> > Open Development Project: http://oss.sgi.com/projects/performer/
> > Submissions: info-performer++at++sgi.com
> > Admin. requests: info-performer-request++at++sgi.com
> > -------------------------------------------------------------------------
> >
>
>
This archive was generated by hypermail 2b29 : Sun Apr 20 2003 - 19:30:36 PDT