
Growing a Software Raid 5 array with XFS on it.

To: jakob@xxxxxxxxxxxxx, DCox@xxxxxxxxxxxxxx, linux-xfs@xxxxxxxxxxx, linux-raid@xxxxxxxxxxxxxxxx
Subject: Growing a Software Raid 5 array with XFS on it.
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Fri, 31 Jan 2003 23:47:12 +0100
Sender: linux-xfs-bounce@xxxxxxxxxxx
Hi,

This is the log of a semi-successful conversion, with raidreconf, of a software raid 5 array from 3 disks to 4 disks.

raidreconf is a utility that adds disks to an existing raid 0 or raid 5 array. In this case it was a 3-disk raid 5 array with XFS (version 1 log) on it. The operating system was Red Hat Linux 7.3, and raidreconf was version 0.1.2. The partitions were about 40GB each, giving about 77GB of net raid capacity. The filesystem contained about 39GB of data.
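
For reference, you can check the state beforehand with something like this (/data is the mount point in my setup):

# show the running raid arrays and their member disks
cat /proc/mdstat
# show the mounted filesystem and how full it is
df -h /data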

The old raidtab entry looked like this:

# md0 is the /data raid array
raiddev                 /dev/md0
raid-level              5
nr-raid-disks           3
chunk-size              128
parity-algorithm        left-symmetric
# Spare disks for hot reconstruction
nr-spare-disks          0
persistent-superblock   1
# Define 1st RAID disk
device                  /dev/hde1
raid-disk               0
# Define 2nd RAID disk
device                  /dev/hdg1
raid-disk               1
# Define 3rd RAID disk
device                  /dev/hdb1
raid-disk               2

The new one looked like this:

# md0 is the /data raid array
raiddev                 /dev/md0
raid-level              5
nr-raid-disks           4
chunk-size              128
parity-algorithm        left-symmetric
# Spare disks for hot reconstruction
nr-spare-disks          0
persistent-superblock   1
# Define 1st RAID disk
device                  /dev/hde1
raid-disk               0
# Define 2nd RAID disk
device                  /dev/hdg1
raid-disk               1
# Define 3rd RAID disk
device                  /dev/hdb1
raid-disk               2
# Define 4th RAID disk
device                  /dev/hdc9
raid-disk               3

The partition tables looked something like this (fdisk output; Start/End are cylinders, Blocks are 1K blocks):
/dev/hde1             1     77207  38912296+  fd  Linux raid autodetect
/dev/hdg1             1     77207  38912296+  fd  Linux raid autodetect
/dev/hdb1             1      4972  39937558+  fd  Linux raid autodetect
/dev/hdc9         52014    131252  39936424+  fd  Linux raid autodetect

And yes, I am very aware that putting two 7200 RPM IDE disks on one UDMA33 channel is not the fastest thing to do.
A new IDE controller with two more ports is arriving soon.

The procedure involves stopping the /dev/md0 raid array and starting the conversion with:
raidreconf -o /etc/raidtab.old -n /etc/raidtab.new -m /dev/md0
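
For completeness, the steps just before that look roughly like this (raidstop is from the same raidtools package; the mount point is from my setup):

# the array must be unmounted and stopped before raidreconf runs
umount /data
raidstop /dev/md0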

All this went reasonably well, but in the last few percent of the reconstruction raidreconf barfed. I don't have the exact message, but by grepping the source I found the error it hit, in rrc_raid5.c.backup: "raid5_map_global_to_local: disk %d block out of range: %lu (%lu) gblock = %lu". In this case the failing block was 60500 out of 61500-something, and the disk was number 1.

Because this happened with just one hash mark to go, I had some hope that the filesystem would be mostly intact. The process had already run for about 2 hours. I recreated the /dev/md0 array with the new raidtab and it started resyncing.
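
Recreating went roughly like this (mkraid is from raidtools and reads /etc/raidtab, so the new raidtab goes there first; treat this as a sketch):

# re-initialise /dev/md0 from the new 4-disk raidtab; the force
# flag is needed because the disks already carry raid superblocks
mkraid --really-force /dev/md0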

I checked the filesystem with xfs_repair -n and it reported a few bad directory entries and one entry it would move to lost+found. I ran xfs_repair for real and mounted the filesystem, and sure enough just one directory with about 10 files in it was damaged.
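
For anyone following along, that sequence is roughly (the filesystem must be unmounted while xfs_repair runs):

# dry run first: report what would be fixed without writing anything
xfs_repair -n /dev/md0
# then the real repair, and remount
xfs_repair /dev/md0
mount /dev/md0 /data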

The next step was growing the filesystem to its new size. This was easily done, and without error, using xfs_growfs.
The /data raid 5 is now 4 disks of 40GB each, and the net space is 111GB.
The grow was done online, with the raid array still resyncing, and took about 10 seconds.
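
Note that xfs_growfs works on a mounted filesystem and takes the mount point, not the device; with no size argument it grows to fill the underlying device:

# grow the mounted XFS filesystem to fill the now-larger /dev/md0
xfs_growfs /data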

I got really lucky, I guess. At least I don't have to restore a lot of data from backup. AND YES, YOU DO NEED BACKUPS. It's good that the raidreconf manual states this explicitly, as does every webpage that mentions raidreconf.
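
If it helps anyone, a level 0 xfsdump is one way to take that backup beforehand (the destination path here is just an example):

# full (level 0) dump of /data to a file on another disk
xfsdump -l 0 -f /backup/data.dump /data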

It's all working now, but this is one for the archives.

Cheers
--
Seth
It might just be your lucky day, if you only knew.

