
Re: nfs/local performance with software raid5 and xfs and SMP

To: Steve Lord <lord@xxxxxxx>
Subject: Re: nfs/local performance with software raid5 and xfs and SMP
From: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>
Date: Fri, 20 Jul 2001 08:16:25 +0200
Cc: Tru Huynh <tru@xxxxxxxxxx>, jjaakkol@xxxxxxxxxxxxxx, "linux-xfs@xxxxxxxxxxx" <linux-xfs@xxxxxxxxxxx>
Organization: Sauter AG, Basel
References: <200107191812.f6JICsI32762@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Steve Lord wrote:
> 
> > A little update,
> >
> > At least, some better news:
> > I did *NOT* see the problem (yet!) with the noapic option
> > with 2.4.7pre8-SMP
> 
> I am informed that there is a fix in the latest kernels in the raid5 code
> for some stall issues which ext3 hit. So it may be that the answer here
> is to run the latest kernels. I still do not like the fact that we do
> non-stripe-friendly I/O to the log, and want to look into that.
> 
> Jani, this may also be the source of your problems, we did experience
> an almost complete lockup here with the 1.0.1 rpm kernel. If you get a

I'd like to reproduce this 'almost complete lockup' here. How can I do
that? I mean, do I need SMP to reproduce it, and what kind of operation
did you do on the filesystem to force the lockup? Are we safe on UP?

> chance can you run the latest cvs kernel?
> 
> Steve
> 
> >
> > Steve Lord wrote:
> > >
> > <...>
> >
> > > If you choose another partition (non raid5) for the log and make an external
> > > log instead of using an internal log then the issue should go away, we
> > > are going to test this here. I suppose a mirrored log would be the best
> > > solution here as you could still survive the loss of a drive with the
> > > log on it.
> >
> > I wish I could do that :)
> > - hda and hdc on a raid 1 array (system disk) + mirrored xfs log
> > - sda..sdh for a 7 disk + 1 hot spare disk for the raid5 array.
> >
> > The only issue is space: trying to squeeze a second IDE disk
> > onto the second IDE controller will not easily be manageable in
> > the rack :(
> > I do have a spare system disk on the shelf, and quick method to
> > restore the system disk but I would not risk putting the xfs log
> > on this lone disk.
> >
> > What if I cut the 8 disks into 2 parts:
> > 1st partition of sd[a..d] of 2 GB (enough?) for a raid10 array for the log
> > 2nd partition of sd[a..g] for the main raid5 array
> > sdh for the hot spare?
> >
> > Best regards,
> >
> > Tru
> > --
> > Dr Tru Huynh          | Bioinformatique Structurale
> > mailto:tru@xxxxxxxxxx | tel/fax +33 1 45 68 87 37/19
> > Institut Pasteur, 25-28 rue du Docteur Roux, 75724 Paris CEDEX 15 France
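
Regarding the external/mirrored log idea above: if I understand it right,
the setup would look roughly like this (just a sketch with raidtools; the
md device names, partitions and log size are my assumptions, I have not
tried this here):

    # /etc/raidtab entry for a small mirrored device to hold the log
    raiddev /dev/md2
        raid-level              1
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/hda3
        raid-device             0
        device                  /dev/hdc3
        raid-device             1

    # build the mirror, then make the filesystem with an external log
    mkraid /dev/md2
    mkfs.xfs -l logdev=/dev/md2,size=32m /dev/md1

    # the log device also has to be given at mount time
    mount -t xfs -o logdev=/dev/md2 /dev/md1 /data

Is that about what you had in mind, or did I miss something?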

-- 
Simon Matter              Tel:  +41 61 695 57 35
Fr.Sauter AG / CIT        Fax:  +41 61 695 53 30
Im Surinam 55
CH-4016 Basel             [mailto:simon.matter@xxxxxxxxxxxxxxxx]


