
Re: nfs/local performance with software raid5 and xfs and SMP

To: "linux-xfs@xxxxxxxxxxx" <linux-xfs@xxxxxxxxxxx>
Subject: Re: nfs/local performance with software raid5 and xfs and SMP
From: Jani Jaakkola <jjaakkol@xxxxxxxxxxxxxx>
Date: Mon, 23 Jul 2001 14:31:49 +0300 (EEST)
In-reply-to: <200107191812.f6JICsI32762@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Thu, 19 Jul 2001, Steve Lord wrote:

> > A little update,
> >
> > At least, some better news:
> > I did *NOT* see the problem (yet!) with the noapic option
> > with 2.4.7pre8-SMP
> 
> I am informed that there is a fix in the raid5 code in the latest
> kernels for some stall issues which ext3 hit, so it may be that the
> answer here is to run the latest kernels. I still do not like the fact
> that we do non-stripe-friendly I/O to the log, and want to look into
> that.
> Jani, this may also be the source of your problems; we did experience
> an almost complete lockup here with the 1.0.1 rpm kernel. If you get a
> chance, can you run the latest CVS kernel?

OK, now I have the newest CVS kernel, and now I am getting the bad
clientid mount problem on my sw RAID5 partition (this could be caused
by the old kernel having saved a garbled log).

Mount says:

mount: wrong fs type, bad option, bad superblock on /dev/md0,
       or too many mounted file systems

And kernel says:

XFS: xlog_recover_process_data: bad clientid
XFS: log mount/recovery failed
XFS: log mount failed

In case anyone is interested, I have saved the output of xfs_logprint;
it is available from http://www.cs.helsinki.fi/u/jjaakkol/xfslog.txt
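
For anyone who wants to reproduce this, it was captured with roughly
the following (the device is /dev/md0, as in the mount error above):

    xfs_logprint /dev/md0 > xfslog.txt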

I am starting xfs_repair on the RAID partition now... It went through
nicely and quickly, and the files are still there. Hmm, the unlink()
performance is definitely better, but still not very good (this is just
how it feels, I have not yet done any real benchmarks).
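
By a real benchmark I mean something like this quick sketch (the file
count and mount point here are made up, not something I have run yet):

    # create 10000 empty files, then time unlinking them all
    mkdir /mnt/xfs/unlinktest && cd /mnt/xfs/unlinktest
    seq 1 10000 | xargs touch
    cd .. && time rm -rf unlinktest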

Hmm, I guess I'll have to get my hands on some benchmark software other
than bonnie and try it with ext2, xfs+internal log and xfs+external log.
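
For the xfs+external log case I would set things up roughly like this
(the log device /dev/sdb1 and the 32m log size are just placeholders):

    # put the log on a separate disk, data on the RAID5 array
    mkfs.xfs -l logdev=/dev/sdb1,size=32m /dev/md0
    mount -t xfs -o logdev=/dev/sdb1 /dev/md0 /mnt/xfs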

- Jani
