
To: <linux-xfs@xxxxxxxxxxx>
Subject: extsize in data?
From: "l.a walsh" <xfs@xxxxxxxxx>
Date: Sat, 15 Mar 2003 13:05:00 -0800
Importance: Normal
Sender: linux-xfs-bounce@xxxxxxxxxxx

I notice you can specify an extent size as a sub-option of the realtime
section, but it doesn't seem to be an option for the normal data
subsection... a bit of a bummer.  Is there a reason why?  With a maximum
block size of 4k (assuming the man page's note about Linux limiting block
size to the page size is still accurate, and assuming Linux uses a 4k
page size) and a maximum extent of 64 blocks, that's 256k.
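
(Making my arithmetic explicit -- a quick back-of-the-envelope check in
Python; the 4k block and 64-block extent figures are my reading of the
man page, not anything I've verified in the code:)

    # Back-of-the-envelope check of the 256k figure above.
    # ASSUMPTIONS (mine, from the man page): 4k blocks, and a
    # maximum extent length of 64 blocks.
    block_size = 4 * 1024                 # 4k block == Linux page size
    max_extent_blocks = 64                # assumed maximum extent length
    max_extent = block_size * max_extent_blocks
    print(max_extent // 1024, "k max extent")          # -> 256 k

    # Any file bigger than that needs multiple extents:
    file_size = 1024 * 1024               # e.g. a 1MB file
    extents_needed = -(-file_size // max_extent)       # ceiling division
    print(extents_needed, "extents minimum")           # -> 4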

Many of my files are >256k; on one of my larger disks, there are more
than 15,000 such files.

Is this a temporary limitation, or is it more of a fixed property of
the fs?

Also, there was a case of someone making IMAX films writing 4x48MB/s, or
close to 200MB/second, to an XFS disk over SMBFS for 12+ hours.  Quick
math: ~2 terabytes/file.  Does that imply the files were composed of
over 8 million extents each?  If you had 4 separate streams writing to
disk at the same time, what would be the likely placement of the extents
for each stream?  Interleaved across the streams, or contiguous per
stream, or what?  Or is it likely undefined/semi-random -- I guess I'd
assume starting with an empty disk for best case.
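
(Spelling that quick math out, again in Python -- the per-stream rate
and duration are just my reading of that report, and the 256k max extent
is the assumption from above:)

    # The quick math behind ~2TB/file and ~8 million extents.
    # ASSUMPTIONS: 48MB/s per stream, a 12-hour run, one file per
    # stream, and the 256k max extent size assumed earlier.
    stream_rate = 48 * 1024**2            # 48 MB/s for one stream
    seconds = 12 * 60 * 60                # 12 hours
    file_size = stream_rate * seconds     # bytes written by one stream
    print(file_size / 1024.0**4, "TB per file")        # -> ~1.98 TB

    max_extent = 256 * 1024               # the 256k limit from above
    print(file_size // max_extent, "extents per file") # -> ~8.3 million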

Background/why I'm asking:

I'm having a discussion on the relative merits of NTFS, which seems to
need constant defragmentation, vs. Unix file systems, which seem to need
little defragmentation (xfs_defrag run about once a week by default) or
no defragmentation process at all (the assumption generally being that
fragmentation stays below 5-10% if disk usage is kept below 90 or 95% --
is that the "common wisdom"?  Is it even valid?).

I somehow got into this by proposing that instead of all these companies
writing custom, run-in-the-background, or centrally controlled
defragmenting utilities, they could just invest in porting some of the
Linux file systems to NT and make money off support, GUI tools, and
network management (XFS was one I suggested, of course).

Then I started getting into a discussion that is at the edge of my
knowledge.  While pointing to the ancient XFS white paper, I noted that
it also says fragmentation is expected to be a long-term issue with
files like Outlook .pst files, since they are written and rewritten in
small chunks over a large file, with the file being extended as
necessary in small bits over time.  (The one corresponding to my main
mail folder is >148M... it might be painful, but I'm tempted to put it
on smbfs so the file would live on xfs for a while, and turn off the
xfs-defragger for a week or so -- I run mine nightly, but I'm a bit more
fanatical about optimizing layout.)
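
(For concreteness, the write pattern I mean looks roughly like this toy
Python sketch -- not how Outlook actually writes, and the file name and
sizes are invented:)

    # Toy sketch of the .pst-style write pattern: small in-place
    # rewrites scattered over a big file, plus small appends that grow
    # it over time.  File name and sizes are made up for illustration.
    import os, random

    path = "mailbox.pst"                  # hypothetical mail file
    if not os.path.exists(path):
        with open(path, "wb") as seed:
            seed.write(os.urandom(1024 * 1024))   # seed a 1MB file

    f = open(path, "r+b")
    size = os.path.getsize(path)
    for _ in range(1000):
        if random.random() < 0.2:         # occasionally extend the file...
            f.seek(0, os.SEEK_END)
            f.write(os.urandom(4096))
            size += 4096
        else:                             # ...but mostly rewrite in place
            f.seek(random.randrange(size - 4096))
            f.write(os.urandom(4096))
    f.close()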

Any benchmark studies on NTFS/XFS relative speed?


Thanks,
Linda


