Hello.
On Tue, 13 Dec 2005 12:55:15 +1100
David Chinner <dgc@xxxxxxx> wrote:
> > I think this is not an acceptable reason.
> > If I have a fast CPU, a filesystem that is large relative to the equipped memory, and a slow disk, then the system can easily eat up all memory.
> > This leads to a local DoS.
>
> Well, no. We'd have lots of reports of this problem if that
> was the case.
I see. It makes sense.
> You need a fast disk to enable the page cache to eat itself - a slow
> disk can't bring in enough data to turn the page cache over fast
> enough to cause this situation.
>
> That's the reason we have never seen this before - not very many
> people decide to put 10TB of fast disk behind a machine with very
> little ram....
I think there is a filesystem design issue here.
Assume the following scenario.
I store a large amount (>1TB) of data.
A few clients want to use 1% of the data.
They access independent data and do not reuse it.
Then I would decide to equip only a little RAM,
because I think there is no need for a data cache.
So a large filesystem with a little RAM is a probable case.
> If you read the mkfs.xfs man page, you'll see that it says that the size of
> the log is scaled with fs size and reaches its maximum size at 1TB. So at
I can see that the default log size is derived from the filesystem size,
but I can't find the description saying that the log size keeps growing until the filesystem reaches 1TB.
Could you point out where it is in the man page?
> of 128MB. That is what I meant when I said remake your filesystem with
> a smaller log - I should have pointed out how to do that with the above
> example...
OK, I understand.
But sorry, this is not an acceptable option now,
because I have already filled half of the 10TB with data.
I would be happy if an 'xfs_growfs -l' option were implemented.
Anyway, thank you for your advice.
--
CHIKAMA Masaki @ NICT