
Re: Log file size?

To: Joshua Baker-LePain <jlb17@xxxxxxxx>
Subject: Re: Log file size?
From: Steve Lord <lord@xxxxxxx>
Date: Thu, 31 May 2001 16:12:08 -0500
Cc: "C. J. Keist" <cjay@xxxxxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
Comments: In-reply-to Joshua Baker-LePain <jlb17@xxxxxxxx> message dated "Thu, 31 May 2001 16:52:09 -0400."
References: <Pine.LNX.4.30.0105311647420.22473-100000@xxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
> On Thu, 31 May 2001 at 2:21pm, C. J. Keist wrote
> 
> > Is there a standard formula on how to determine what log file size for
> > xfs on a given file system size?
> > I'll be looking a creating a 500Gb size xfs file system.
> >
> I asked the same question back in March (for a 560GB hardware IDE-SCSI
> RAID), and Steve Lord suggested 16384b or 32768b.  There was mention of
> adding heuristics to mkfs.xfs for this at some point, but I don't recall
> seeing any TAKEs for that...
> 
> Steve also suggested mounting with -o logbufs=8 if you expect heavy
> traffic.
> 
> -- 
> Joshua Baker-LePain
> Department of Biomedical Engineering
> Duke University
> 

We have done some more thinking about this since then, and the heuristics
are in the latest mkfs, but I do not think it will bump the log size for
a filesystem this small ;-) - it probably does not kick in until you are
in the terabyte range.

Anyway, the size of the log (those are 4K blocks, by the way) governs
how much metadata can sit in a modified state before it has to be
flushed to disk. A bigger log means fewer occasions when you end up
in what we call tail pushing, where each new transaction entering the
filesystem has to push some metadata out to disk before it can get log
space. Of course, constant sustained activity can always get you there -
unless your disk runs at memory speeds. So a bigger log makes the
filesystem run faster more of the time, but it also lengthens mount
times, especially if recovery is involved. You pays your money and
takes your choice.

Log size is not a function of filesystem size, but a function of how
much metadata is changing per second.

I would maybe go for 4096b (which is 16 Mbytes) for a fairly active
large filesystem (I am calling yours large), but you might want to
benchmark a bit if you really care; mkfs will not take too long to
run (there are no inodes to create).
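As a sketch of what that looks like on the command line (the device
name below is a placeholder - substitute your own), a 4096-block log
can be requested at mkfs time with the -l size option:

```shell
# Create an XFS filesystem with a 4096-block (16 MB) log.
# /dev/sdb1 is a hypothetical device name - use your real RAID device.
mkfs.xfs -l size=4096b /dev/sdb1
```

Benchmarking a couple of log sizes this way is cheap, since re-running
mkfs on the same device only takes a few seconds.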

The number of iclog buffers is also useful to tune - it determines how
many log writes can be in flight to disk at once. If all your buffers
are in transit, then transactions will get backed up waiting for them.
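Following the suggestion quoted earlier in the thread, the buffer count
is set at mount time with the logbufs option (device and mount point
below are placeholders):

```shell
# Mount with 8 in-core log buffers instead of the default.
# /dev/sdb1 and /mnt/data are hypothetical names.
mount -t xfs -o logbufs=8 /dev/sdb1 /mnt/data

# Or the equivalent /etc/fstab entry:
# /dev/sdb1  /mnt/data  xfs  logbufs=8  0 0
```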

Steve

