
Re: nfs/local performance with software raid5 and xfs and SMP

To: Tru Huynh <tru@xxxxxxxxxx>
Subject: Re: nfs/local performance with software raid5 and xfs and SMP
From: Simon Matter <simon.matter@xxxxxxxxxxxxxxxx>
Date: Thu, 19 Jul 2001 15:52:17 +0200
Cc: "linux-xfs@xxxxxxxxxxx" <linux-xfs@xxxxxxxxxxx>
Organization: Sauter AG, Basel
References: <3B55E49E.5B4B6CEF@pasteur.fr> <4.3.2.7.2.20010719111805.03d75be0@pop.xs4all.nl> <3B56AD91.FA721B93@pasteur.fr> <3B56E33E.B4465042@pasteur.fr>
Sender: owner-linux-xfs@xxxxxxxxxxx
Tru Huynh schrieb:
> 
> Hello,
> 
> It looks like a kernel SMP issue more than a 3ware driver
> problem.
> - I have tried the following SMP kernels:
> 2.4.3-xfs-1.0.1 and 2.4.7pre8 cvs version which include
> the same 3ware 3w-xxxx driver. (I have not yet flashed the firmware.)
> - I can see it on a plain IDE raid5 partition.
> 
> I will now flash the 3ware card firmware, but I don't
> know what that might change...
> 
> Next step is rebooting with noapic.
> 
> /dev/md1 is now a raid5-xfs on the system HD
> #-------------------------------
> # raid5 on hda
> #-------------------------------
> raiddev                 /dev/md1
> raid-level              5
> nr-raid-disks           3
> persistent-superblock   1
> chunk-size              64
> parity-algorithm        left-symmetric
> #
> device                  /dev/hda6
> raid-disk               0
> device                  /dev/hda7
> raid-disk               1
> device                  /dev/hda8
> raid-disk               2
> 
> I can still see the nfs freeze (server nfs.cluster not responding)
> on both raid5 devices under 2.4.3-xfs. On the client side
> syslog reports "nfs_statfs: statfs error = 116".
> 
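Just as an aside: on Linux, statfs error 116 is ESTALE, "Stale file
handle" -- the error the NFS client layer returns once the server has
stopped responding and the mount has gone stale. A quick way to check
(a minimal Python sketch, nothing specific to your setup):

```python
import errno
import os

# On Linux, errno 116 is ESTALE ("Stale file handle"), which matches
# the "statfs error = 116" the client logs while the server is down.
print(errno.ESTALE)               # 116 on Linux
print(os.strerror(errno.ESTALE))
```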
> I can still see it under 2.4.7pre8 on /dev/md0 or /dev/md1
> but only after a little longer time...
> 
> Off topic: One strange thing is the checksumming function:
> /var/log/messages
> kernel: raid5: measuring checksumming speed
> kernel:    8regs     :  1169.200 MB/sec
> kernel:    32regs    :   788.000 MB/sec
> kernel:    pIII_sse  :  1727.600 MB/sec
> kernel:    pII_mmx   :  1924.000 MB/sec
> kernel:    p5_mmx    :  2045.200 MB/sec
> kernel: raid5: using function: pIII_sse (1727.600 MB/sec)

I'm not an expert, but I think it has to do with pIII_sse being
cache friendly (it writes around the cache) while the MMX functions
are not. Maybe I'm wrong, but I remember something like that.
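To illustrate what I mean (a sketch of my reading of the kernel's xor
routine calibration, with my own names, not the literal kernel code;
the speeds are just the numbers from your log): on SSE-capable CPUs
the kernel forces the pIII_sse routine even when an MMX routine
benchmarks faster, because the SSE version's non-temporal stores
don't pollute the cache and so behave better under real load.

```python
# Speeds as reported by "raid5: measuring checksumming speed" above.
speeds = {
    "8regs":    1169.200,
    "32regs":    788.000,
    "pIII_sse": 1727.600,
    "pII_mmx":  1924.000,
    "p5_mmx":   2045.200,
}

def choose_xor_function(speeds, cpu_has_sse=True):
    # Hypothetical selection logic: SSE is forced despite the slower
    # benchmark number, because it writes around the cache.
    if cpu_has_sse and "pIII_sse" in speeds:
        return "pIII_sse"
    # Without SSE, fall back to the fastest measured routine.
    return max(speeds, key=speeds.get)

print(choose_xor_function(speeds))                     # pIII_sse
print(choose_xor_function(speeds, cpu_has_sse=False))  # p5_mmx
```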

> 
> Why not use the faster p5_mmx?
> 
> > > >4) try SMP+raid5 on local IDE HD to rule out 3ware issue
> ruled out. :(
> 
> Tru
> --
> Dr Tru Huynh          | Bioinformatique Structurale
> mailto:tru@xxxxxxxxxx | tel/fax +33 1 45 68 87 37/19
> Institut Pasteur, 25-28 rue du Docteur Roux, 75724 Paris CEDEX 15 France

-- 
Simon Matter              Tel:  +41 61 695 57 35
Fr.Sauter AG / CIT        Fax:  +41 61 695 53 30
Im Surinam 55
CH-4016 Basel             [mailto:simon.matter@xxxxxxxxxxxxxxxx]


