| To: | <linux-xfs@xxxxxxxxxxx> |
|---|---|
| Subject: | Problems using xfs on RAID 5 volumes |
| From: | "Horchler, Joerg" <joerg.horchler@xxxxxxxxxxxxx> |
| Date: | Mon, 9 Jan 2006 11:34:56 +0100 |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
| Thread-index: | AcYVCFWXpPQJrAdCQ9aZciM4ewrS0A== |
| Thread-topic: | Problems using xfs on RAID 5 volumes |
|
Hi,

we have a big problem using XFS on our fileserver. Our configuration is as follows: we use a Dell PowerVault as an external RAID array, configured with two logical volumes. Each logical volume consists of 7 physical disks: six disks form a RAID 5 and the seventh is configured as a hot spare. The server runs SuSE Linux Enterprise Server 9 with kernel 2.6.5-7.151-smp, and xfsprogs version 2.6.25-0.2 is installed. I don't know which version of XFS ships with the running kernel.

Now our problem: every time a physical disk fails (and the RAID goes from state OPTIMAL to DEGRADED), the array rebuilds onto the hot spare. During this rebuild we get a lot of XFS errors in dmesg, for example:

    0x0: 66 4e 1f 21 5d 98 0e d9 23 70 65 00 1f 02 00 7d
    nfsd: non-standard errno: -990

The more curious problem is that during such a rebuild we lose files on the filesystem. In the worst case XFS shut down the filesystem entirely, which produced I/O errors; we then had to remount and repair the filesystem, which cost us several GB of data.

Is XFS (as a caching filesystem) a bad idea on top of a RAID 5 system? Does anyone know about such errors? Can we fix this with a kernel update?

Thanks in advance
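To find out which XFS version the running kernel carries, something like the following should work, assuming XFS is built as a module and the module carries a version tag (untested on this particular SLES9 kernel):

```sh
# Print the module information for xfs, including a version tag if present
modinfo xfs | grep -i version

# The kernel also logs an XFS banner line at mount time
dmesg | grep -i xfs
```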
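For reference, the remount-and-repair procedure mentioned above looks roughly like this; the device name and mount point below are only examples, not the real names on this server:

```sh
# Take the filesystem offline first: xfs_repair must not be run
# on a mounted filesystem
umount /data

# Dry run: -n reports problems without modifying anything
xfs_repair -n /dev/sdb1

# Actual repair, then remount
xfs_repair /dev/sdb1
mount -t xfs /dev/sdb1 /data
```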
|