Melissa,
I too had problems related to this configuration. In my case I was using a
12-channel 3Ware 9500 series controller along with 12 WD 250GB drives. XFS
has been my default FS for some time now. Also, I am running SLES9 SP1
(2.6.5-7.111).
My 3Ware-based RAID began to complain about bad data in the scatter-gather
lists, and before long the system started oopsing after being up for only a
few minutes. Shortly after that, the controller died outright. I tried
hooking the 12 WD drives up to another controller to use with MD RAID, but
none of the 12 is readable by any of the SATA controllers I have; they all
return seek errors when I try to mount them.
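A quick sanity check along these lines will show whether a drive is readable
at the raw level, independent of any partition table or filesystem (the
device name here is only an example; substitute your own):

  # read a little raw data straight off the disk, bypassing partitions
  # and filesystems entirely; this should succeed on any healthy drive
  dd if=/dev/sda of=/dev/null bs=1M count=10
  # then look for ATA/seek errors in the kernel log
  dmesg | tail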
I have another RAID, based on 15 fibre-channel drives, using MD RAID5 with
XFS, and that combination has been great. The partitions are marked as Linux
RAID autodetect, so SLES picks them up at boot and reassembles the array
without a hiccup. I have had this array in operation for more than six
months.
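For anyone setting this up, the autodetect arrangement looks more or less
like the following; the device names and mount point are illustrative, not
taken from my actual config:

  # each component partition carries type fd (Linux raid autodetect),
  # set via fdisk's 't' command; the kernel then assembles the array
  # on its own at boot. Verify the assembled array with:
  cat /proc/mdstat
  # the XFS filesystem then mounts off the md device as usual:
  mount -t xfs /dev/md0 /mnt/array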
In all fairness, this 15-disk array was created under a different
distribution and then migrated over to SLES9. However, I have since picked
up several more Hitachi SATA drives to replace my 3Ware/WD RAID, and it too
has had no problems on reboot with XFS, though it has only been running for
a short time.
-Mike
On Tuesday 09 November 2004 14:36, Melissa Terwilliger wrote:
> I have a dual Opteron SuSE SLES 9 machine running a 2.6.9 kernel. The
> machine has two 3ware 9500-12 PCI-X SATA RAID cards installed, with 12
> 250 GB drives on each card.
>
> I can create the partitions using parted, and when I mount the two
> partitions they show up correctly as 2.6 TB each and seem to be working
> perfectly. When I reboot the system I get the error
> "kernel: XFS: size check 2 failed" in the kernel log, and the mount
> command comes back with "bad superblock on /dev/sdb1". I can no longer
> mount the drives. If I run xfs_repair I can mount the drive again, but
> it only shows up as 513 GB after that, and I have to go into parted
> again to delete the partition and recreate it. It works again until I
> reboot, whether I have it set to mount from fstab or mount it by hand
> later.
>
> When I format the drive as ext3 I don't have this problem and it is 100%
> stable. It only loses the information when I use XFS. I'm sure I'm just
> missing something, but I'm not certain what.
>
> Basically this is what I am doing to create the partitions:
>
> # parted /dev/sdb
> (parted) mkpart primary xfs 0.0 2622488.000
> (parted) print
> Disk geometry for /dev/sdc: 0.000-2622488.000 megabytes
> Disk label type: msdos
> Minor    Start        End          Type      Filesystem  Flags
> 1        0.031        525333.742   primary   xfs         type=83
> (parted) quit
>
> Then, after the partition was created, I did:
>
> # mkfs.xfs -f /dev/sdc1
> meta-data=/dev/sdc1        isize=256    agcount=32, agsize=20979885 blks
>          =                 sectsz=512
> data     =                 bsize=4096   blocks=671356320, imaxpct=25
>          =                 sunit=0      swidth=0 blks, unwritten=1
> naming   =version 2        bsize=4096
> log      =internal log     bsize=4096   blocks=32768, version=1
>          =                 sectsz=512   sunit=0 blks
> realtime =none             extsz=65536  blocks=0, rtextents=0
>
> #mount /dev/sdc1 /scr3
> #df -h
> /dev/sdc1 2.6T 528K 2.6T 1% /scr3
>
> I can mount and unmount this device repeatedly with no problems until
> I reboot.
>
> Thanks,
>
> Melissa Terwilliger
> Space Physics Research Lab
> AOSS Support
> techess@xxxxxxxxx
>
>
> -- Don't anthropomorphize computers. They hate that.