
To: Adrian Head <ahead@xxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
Subject: Re: How do you calculate optimal amount of RAM required for XFS filesystems?
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Wed, 05 Dec 2001 08:47:34 +0100
In-reply-to: <200112050753.fB57rbo14480@xxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
At 16:53 5-12-2001 +1000, Adrian Head wrote:
I have been lurking on this list for quite some time now, but I have never seen a
discussion regarding how to calculate the optimum amount of RAM required for
XFS-based filesystems.

There was a thread a very long time ago (I cannot find it now) where it was
stated that heaps of RAM were required, but it didn't specify the quantity of
heaps ;-)  From memory, it also discussed the caching of filesystem data for
performance.  But I don't think the interaction between the amount of RAM and XFS
was discussed fully.

Some of the older code needed more RAM because it relied on getting memory whenever it asked for it (during recovery, for example).

These days 12 MB is the minimum requirement to boot an XFS box and perform recovery as well.

If you have more than this, you just start caching data and everything feels faster. On a webserver you could use 256 MB or more to keep the speed up by caching a lot of the static data. If your static working set is about 1 GB of data, 512 MB of RAM should be able to cache a lot of the requests. Also take into account that certain programs run a _lot_ faster when they can also fit in memory.
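The sizing logic above can be sketched as a back-of-the-envelope estimate. This is only an illustration with assumed numbers (the 64 MB kernel/application overhead is a hypothetical figure, not anything XFS-specific):

```python
# Rough cache-coverage estimate: what fraction of a static working set
# fits in the page cache? All figures here are illustrative assumptions.

def cache_coverage(working_set_mb, ram_mb, os_overhead_mb=64):
    """Fraction of the working set that fits in RAM left over
    after reserving some memory for the kernel and applications."""
    cache_mb = max(ram_mb - os_overhead_mb, 0)
    return min(cache_mb / working_set_mb, 1.0)

# The 1 GB working set / 512 MB RAM case from above:
# a little under half of the data can stay cached.
print(f"{cache_coverage(1024, 512):.0%}")
```

With 2 GB of RAM the same function returns 1.0, i.e. the whole working set is cached and repeated reads never touch the disks.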

Does anyone have experience on this subject?

Say, how much RAM is required for an 80G or 120G XFS filesystem?

32 MB would work just fine, and more is better.

Does having many different sized XFS partitions change anything?

No. Well, some small things might add up, but I think you won't see that until the TB range.

For example, I have 1x 40G, 1x 80G, and 1x 120G XFS partitions on the same
fileserver with 384 MB RAM.  Is this enough?

If people are reading _and_ writing this data at random over all the disks, then it is more than enough. If more data is read than written, buying more RAM is a sane thing to do. If this is for an internet server behind a 64 Kbit line, that won't help the people leeching from it :-)
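A quick calculation shows why the slow link, not the disk cache, is the bottleneck in that last case (assuming the decimal-kilobit convention for line speed):

```python
# Why extra RAM won't help behind a 64 Kbit/s line:
# the link is orders of magnitude slower than even one disk.

link_kbit_s = 64
link_bytes_s = link_kbit_s * 1000 // 8   # 64 Kbit/s = 8000 bytes/s

# A single disk streams megabytes per second, so the page cache
# is never the limiting factor on a pipe this thin.
gb = 1024 ** 3
seconds_to_serve_1gb = gb / link_bytes_s
print(f"{seconds_to_serve_1gb / 3600:.1f} hours to push 1 GB")
```

Serving one gigabyte over that line takes well over a day regardless of how much of it sits in RAM.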

A politically correct answer would be "it depends".

Cheers

--
Seth
Every program has two purposes: one for which
it was written and another for which it wasn't.
I use the last kind.

