To: 舒星 <xshu2006@xxxxxxxxx>
Subject: Re: raid5 grow problem
From: Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>
Date: Fri, 18 Aug 2006 05:55:06 -0400 (EDT)
Cc: linux-raid@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
In-reply-to: <7de068fd0608170730s75afb310g58e6c5fd07202ef@mail.gmail.com>
References: <7de068fd0608161815yc34ec6bl886fa637691ee5f8@mail.gmail.com> <7de068fd0608170223g2009962chc222233c70ee9317@mail.gmail.com> <Pine.LNX.4.64.0608170847390.9352@p34.internal.lan> <7de068fd0608170730s75afb310g58e6c5fd07202ef@mail.gmail.com>
Sender: xfs-bounce@xxxxxxxxxxx
Adding the XFS mailing list to this e-mail to show that the grow worked for XFS.

On Thu, 17 Aug 2006, 舒星 wrote:

> I know this, but how did you grow your RAID5? What's your mdadm version? Is any
> other configuration needed before creating the md device and using mdadm -G to
> grow it?

I've only tried growing a RAID5, which was the only RAID level that I remember
being supported (for growing) in the kernel; I am not sure if it's possible to
grow other types of RAID arrays.
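In short, the procedure is: add the new disk to the array, tell md to reshape onto it, and grow the filesystem once the reshape has finished. A rough sketch from memory follows; the new-disk name (/dev/sdd1) and the 3-to-4 disk counts are placeholders for whatever your setup uses, and it assumes a reshape-capable kernel (2.6.17 or later, if I remember right) plus a reasonably recent mdadm:

mdadm /dev/md3 --add /dev/sdd1          # add the new disk as a spare
mdadm --grow /dev/md3 --raid-disks=4    # reshape the RAID5 from 3 to 4 disks
cat /proc/mdstat                        # watch until the reshape completes
xfs_growfs /raid5                       # then grow XFS into the new space (filesystem mounted)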
Like this:

p34:~# mdadm --create /dev/md3 /dev/hda1 /dev/hde1 /dev/sdc1 --level=5 --raid-disks=3
mdadm: array /dev/md3 started.

p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jul 7 15:44:24 2006
Raid Level : raid5
Array Size : 781417472 (745.22 GiB 800.17 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Fri Jul 7 15:44:24 2006
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 0% complete
UUID : cf7a7488:64c04921:b8dfe47c:6c785fa1
Events : 0.1

Number Major Minor RaidDevice State
0 3 1 0 active sync /dev/hda1
1 33 1 1 active sync /dev/hde1
3 8 33 2 spare rebuilding /dev/sdc1

p34:~# df -h | grep /raid5
/dev/md3 746G 80M 746G 1% /raid5
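One step is missing from the capture above: before that df, an XFS filesystem was created on the new array and mounted on /raid5. I haven't kept the exact command line, so take this as a guess at what it looked like rather than a copy of it:

mkfs.xfs /dev/md3          # create the filesystem on the freshly built array
mount /dev/md3 /raid5      # mount it where the df above sees it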
p34:~# umount /dev/md3
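Also not captured: the fourth disk, /dev/hdc1, was added to the array before growing it (it shows up as a spare in the listing below). That would have been done with something like:

mdadm /dev/md3 --add /dev/hdc1     # add the disk the reshape will expand onto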
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
Creation Time : Fri Jul 7 15:44:24 2006
Raid Level : raid5
Array Size : 781417472 (745.22 GiB 800.17 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Fri Jul 7 18:25:29 2006
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : cf7a7488:64c04921:b8dfe47c:6c785fa1
Events : 0.26

Number Major Minor RaidDevice State
0 3 1 0 active sync /dev/hda1
1 33 1 1 active sync /dev/hde1
2 8 33 2 active sync /dev/sdc1
3 22 1 - spare /dev/hdc1

p34:~# mdadm /dev/md3 --grow --raid-disks=4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
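Those "critical section" messages are expected: at the start of the reshape mdadm saves a copy of the few stripes that would otherwise be lost if the machine died mid-reorganization. If I remember correctly, mdadm can also be told to keep that backup in a file on another filesystem via --backup-file, which is worth knowing if it ever complains it has nowhere to put it; the path here is only an example:

mdadm /dev/md3 --grow --raid-disks=4 --backup-file=/root/md3-grow.backup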
p34:~# cat /proc/mdstat
Personalities : [raid1] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
      136448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      70268224 blocks [2/2] [UU]

md3 : active raid5 hdc1[3] sdc1[2] hde1[1] hda1[0]
      781417472 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.0% (85120/390708736) finish=840.5min speed=7738K/sec

md0 : active raid1 sdb1[1] sda1[0]
      2200768 blocks [2/2] [UU]

unused devices: <none>
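The reshape takes a while (the estimate above works out to roughly 14 hours), so for hands-off monitoring something like the following saves retyping; the 60-second interval is arbitrary:

watch -n 60 cat /proc/mdstat       # redisplay the reshape progress every minute

A second manual check a moment later shows it moving along: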
p34:~# cat /proc/mdstat
Personalities : [raid1] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
      136448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      70268224 blocks [2/2] [UU]

md3 : active raid5 hdc1[3] sdc1[2] hde1[1] hda1[0]
      781417472 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.0% (286284/390708736) finish=779.8min speed=8342K/sec

md0 : active raid1 sdb1[1] sda1[0]
      2200768 blocks [2/2] [UU]

unused devices: <none>

p34:~# mount /raid5

p34:~# xfs_growfs /raid5
meta-data=/dev/md3               isize=256    agcount=32, agsize=6104816 blks
         =                       sectsz=4096  attr=0
data     =                       bsize=4096   blocks=195354112, imaxpct=25
         =                       sunit=16     swidth=48 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks
realtime =none                   extsz=196608 blocks=0, rtextents=0
data blocks changed from 195354112 to 195354368

p34:~# umount /raid5
p34:~# mount /raid5

p34:~# df -h
/dev/md3 746G 80M 746G 1% /raid5
p34:~#
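The df at the end still shows 746G, presumably because the array does not expose the extra ~372 GiB until the reshape has finished; once it has, re-running xfs_growfs should pick up the remaining space. To double-check the size and geometry XFS ends up with, xfs_info on the mounted filesystem prints the same summary as xfs_growfs without changing anything:

xfs_info /raid5                    # report XFS geometry for the mounted filesystem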