Steve,
I'll get another kernel built with kdb enabled and boot it with the NMI
settings suggested earlier. In the meantime, I got lucky and had a crash
from the job I left running last night. The output is at:
http://www.ncsa.uiuc.edu/people/arnoldg/xfs/cr1
Right now I'm voting for b) below: xfs driving the vm system up the wall.
-Galen
On Thu, 24 May 2001, Steve Lord wrote:
>
> >
> > Here's the file.
> >
>
> Hmm, I see there were a couple of follow-ups on the xfs hang during run 4.
> I would really like to chase this one down if there is any chance of help
> from your end in following Keith Owens' suggestions. The tricky part here
> is determining whether this is a) xfs itself, or b) xfs driving the linux
> vm system up the wall. xfs itself did not change in the read/write path
> between 2.4.2 and 2.4.4, but the kernel does have relevant changes, and
> there are probably more still to make.
>
> >
> > -Galen
> >
>
> >> Space efficiency comparison
> >>
> >>          Filesystem  1k-blocks      Used  Available Use% Mounted on
> >> ext2     /dev/sdd1   710025700   9446452  665072388   1% /storage
> >> reiserfs /dev/sdc1   710115552     32840  710082712   0% /storage
> >> xfs      /dev/sdd1   710050544   4106940  705943604   1% /storage
>
>
> These are a little bizarre - how much data did you have on the disk at
> this point, and where did reiserfs put it? Did you also benchmark mkfs
> times for the different filesystems? (I see you got impatient with ext2
> inode creation.) I also wonder if there is something we can do with the
> xfs mkfs parameters to improve performance there. The latest mkfs.xfs
> from cvs has a -d agsize=xxx option; you could specify 4 Gbytes here,
> which would allow xfs to allocate larger extents than the default of
> 1 Gbyte - not that we would read or write that much in one go on this
> hardware. It might also be of some benefit to use the stripe alignment
> options of mkfs (see the man page for the swidth, sw, sunit and su
> options).
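>
> For example, something like this (untested, and the stripe numbers are
> purely illustrative; sunit and swidth are counted in 512-byte sectors,
> so check the mkfs.xfs man page for your version before copying it):
>
>     mkfs.xfs -d agsize=4g,sunit=128,swidth=1024 /dev/sdd1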
>
> A couple of comments on the actual results: the XFS read path and the
> ext2 read path are essentially the same. The readahead logic is the same
> code; the only difference is when the filesystem-specific code is called
> to ask where a block lives on disk. I think we can squeeze a bit more
> out of xfs, but it takes mainline linux code changes.
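>
> If you want to poke at that block mapping step from userspace, here is a
> rough sketch using the FIBMAP ioctl (needs root; block numbers come back
> in filesystem-block units, and everything beyond the two ioctl names is
> just illustrative):
>
>     /* fibmap.c - print where the first few blocks of a file live.
>      * Illustrative only; error handling kept minimal. */
>     #include <stdio.h>
>     #include <fcntl.h>
>     #include <unistd.h>
>     #include <sys/ioctl.h>
>     #include <linux/fs.h>   /* FIBMAP, FIGETBSZ */
>
>     int main(int argc, char **argv)
>     {
>         int fd, bsz, i;
>
>         if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
>         fd = open(argv[1], O_RDONLY);
>         if (fd < 0) { perror("open"); return 1; }
>
>         /* ask the filesystem for its block size */
>         if (ioctl(fd, FIGETBSZ, &bsz) < 0) { perror("FIGETBSZ"); return 1; }
>
>         for (i = 0; i < 8; i++) {
>             int blk = i;            /* in: logical block, out: physical */
>             if (ioctl(fd, FIBMAP, &blk) < 0) { perror("FIBMAP"); return 1; }
>             printf("logical %d -> physical %d (%d-byte blocks)\n", i, blk, bsz);
>         }
>         close(fd);
>         return 0;
>     }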
>
> What size I/O does record rewrite do? It looks like we have some work to
> do there.
>
> I also wonder if iozone could be made to do Direct I/O.
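>
> For what it's worth, here is a minimal sketch of the kind of read an
> O_DIRECT-capable iozone would have to do (assuming the kernel and
> filesystem support the flag at all; the buffer, offset and size must be
> suitably aligned, and the 512-byte alignment and transfer size below are
> just illustrative):
>
>     /* dread.c - one aligned O_DIRECT read, bypassing the page cache.
>      * Illustrative only. */
>     #define _GNU_SOURCE             /* for O_DIRECT */
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <fcntl.h>
>     #include <unistd.h>
>
>     #define ALIGN  512
>     #define IOSIZE (64 * 1024)
>
>     int main(int argc, char **argv)
>     {
>         void *buf;
>         ssize_t n;
>         int fd;
>
>         if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
>         fd = open(argv[1], O_RDONLY | O_DIRECT);
>         if (fd < 0) { perror("open O_DIRECT"); return 1; }
>
>         /* O_DIRECT rejects misaligned user buffers */
>         if (posix_memalign(&buf, ALIGN, IOSIZE) != 0) {
>             fprintf(stderr, "posix_memalign failed\n");
>             return 1;
>         }
>
>         n = read(fd, buf, IOSIZE);
>         if (n < 0) perror("read");
>         else printf("read %ld bytes directly\n", (long) n);
>
>         free(buf);
>         close(fd);
>         return 0;
>     }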
>
> Thanks for doing all this benchmarking.
>
> Steve
>
>
--
Galen Arnold, system engineer--systems group arnoldg@xxxxxxxxxxxxx
National Center for Supercomputing Applications (217) 244-3473
152 Computer Applications Bldg., 605 E. Spfld. Ave., Champaign, IL 61820