able to reproduce growfs bug on LVM(FAQ) at will
I am encountering the FAQ'd bug that prevents xfs_growfs from working on
a resized LVM volume. The workaround (umount and xfs_repair) does work,
but I was wondering if I could be of some assistance in tracking the bug
down. Since it is highly reproducible, I can gather any gatherable
information. FWIW, I'm running on:
Athlon XP CPU
Debian Sid
Kernel 2.6.0-test3 (almost stock)
LVM2 on device-mapper
A (mildly) interesting datapoint is that an incomplete xfs_repair (it
errored out in Phase 6 because of insufficient space) still corrected
the problem, so some write operation that takes place in Phases 1-5
fixes the issue.
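For anyone trying to reproduce this, the full sequence I'm hitting looks
roughly like the following. The volume group, LV, and mount point names are
from my setup, and the extension size (+2G here) is just an example; adjust
to taste:

```shell
# Grow the logical volume first (LVM2 / device-mapper).
lvextend -L +2G /dev/lvm_group_1/shared

# This online grow is the step that fails on the freshly resized volume:
xfs_growfs /data/shared/

# Workaround: unmount, run xfs_repair, remount -- then growfs succeeds.
umount /data/shared/
xfs_repair /dev/lvm_group_1/shared
mount /data/shared/
xfs_growfs /data/shared/
```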
I've included a log of a failed xfs_growfs, a xfs_repair, and a
successful xfs_growfs below.
-Tupshin
bastard:~# xfs_growfs /data/shared/
meta-data=/data/shared           isize=256    agcount=8, agsize=163840 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=1200, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
bastard:~# umount /data/shared/
bastard:~# xfs_repair /dev/lvm_group_1/
apps cpsft debmir diskless docs uml
vm_redhat shared wine
bastard:~# xfs_repair /dev/lvm_group_1/shared
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - clear lost+found (if it exists) ...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - ensuring existence of lost+found directory
fatal error -- ran out of disk space!
bastard:~# mount /data/shared/
bastard:~# xfs_growfs /data/shared/
meta-data=/data/shared           isize=256    agcount=8, agsize=163840 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=1200, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
data blocks changed from 1310720 to 1835008