filesystem shrinks after using xfs_repair
Eli Morris
ermorris at ucsc.edu
Fri Jul 23 20:08:08 CDT 2010
On Jul 23, 2010, at 5:54 PM, Dave Chinner wrote:
> On Fri, Jul 23, 2010 at 01:30:40AM -0700, Eli Morris wrote:
>> On Jul 12, 2010, at 4:47 AM, Emmanuel Florac wrote:
>>
>>> On Sun, 11 Jul 2010 18:10:41 -0700,
>>> Eli Morris <ermorris at ucsc.edu> wrote:
>>>
>>>> Here are some of the log files from my XFS problem. Yes, I think this
>>>> all started with a hardware failure of some sort. My storage is RAID
>>>> 6, an Astra SecureStor ES.
>>>>
>>>
>>> There are I/O errors on sdc, sdd and sdg. Aren't these JBODs connected
>>> through the same cable, for instance? You must correct the hardware
>>> problems before attempting any repair, or it will do more harm than good.
>>>
>>> --
>>> ------------------------------------------------------------------------
>>> Emmanuel Florac | Direction technique
>>> | Intellique
>>> | <eflorac at intellique.com>
>>> | +33 1 78 94 84 02
>>> ------------------------------------------------------------------------
>>
>> Hi Emmanuel,
>>
>> I think the RAID tech support and I found and corrected the
>> hardware problems associated with the RAID. I'm still having the
>> same problem, though. I expanded the filesystem to use the space of
>> the now-corrected RAID, and that seems to work OK. I can write
>> files to the new space without any problem. But then, if I run
>> xfs_repair on the volume, the newly added space disappears, and
>> xfs_repair produces a flood of error messages (listed below).
>
> Can you post the full output of the xfs_repair? The superblock is
> the first thing that is checked and repaired, so if it is being
> "repaired" to reduce the size of the volume then all the other errors
> are just a result of that. e.g. the grow could be leaving stale
> secondary superblocks around and repair is seeing a primary/secondary
> mismatch and restoring the secondary which has the size parameter
> prior to the grow....
>
> Also, the output of 'cat /proc/partitions' would be interesting
> from before the grow, after the grow (when everything is working),
> and again after the xfs_repair when everything goes bad....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com
Hi Dave,
Thanks for replying. Here is the output I think you're looking for....
thanks!
Eli
The problem partition is an LVM2 volume:
/dev/mapper/vg1-vol5
[root at nimbus /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb2              24G  7.6G   15G  34% /
/dev/sda5             1.7T  1.3T  391G  77% /export
/dev/sda2             3.8G  1.5G  2.2G  40% /var
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdb1             995G  946G   19G  99% /storage
tmpfs                 7.7G  7.9M  7.7G   1% /var/lib/ganglia/rrds
/dev/mapper/vg1-vol5   51T   51T   90M 100% /export/vol5
[root at nimbus /]# cat /proc/partitions
major minor #blocks name
8 0 1843200000 sda
8 1 8193118 sda1
8 2 4096575 sda2
8 3 1020127 sda3
8 4 1 sda4
8 5 1829883793 sda5
8 16 1084948480 sdb
8 17 1059342133 sdb1
8 18 25599577 sdb2
8 32 13671872256 sdc
8 33 13671872222 sdc1
8 48 13668734464 sdd
8 49 12695309918 sdd1
8 64 13671872256 sde
8 65 13671872222 sde1
8 80 13671872256 sdf
8 81 13671869225 sdf1
8 96 12695309952 sdg
8 97 12695309918 sdg1
253 0 66406219776 dm-0
[root at nimbus /]# xfs_growfs /dev/mapper/vg1-vol5
meta-data=/dev/vg1/vol5          isize=256    agcount=126, agsize=106811488 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=13427728384, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
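
For reference, my own arithmetic on those numbers (just a sanity check on my part, not output from any tool):

  13427728384 blocks * 4096 bytes/block     = 54999975460864 bytes, roughly 50 TiB -- consistent with the 51T df reported before the grow
  66406219776 KiB (dm-0) * 1024 bytes/KiB   = 67999969050624 bytes, roughly 62 TiB -- what df reports right after the grow
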
[root at nimbus /]# umount /export/vol5
[root at nimbus /]# mount /export/vol5
[root at nimbus /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb2              24G  7.6G   15G  34% /
/dev/sda5             1.7T  1.3T  391G  77% /export
/dev/sda2             3.8G  1.5G  2.2G  40% /var
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdb1             995G  946G   19G  99% /storage
tmpfs                 7.7G  7.9M  7.7G   1% /var/lib/ganglia/rrds
/dev/mapper/vg1-vol5   62T   51T   12T  81% /export/vol5
[root at nimbus /]# cat /proc/partitions
major minor #blocks name
8 0 1843200000 sda
8 1 8193118 sda1
8 2 4096575 sda2
8 3 1020127 sda3
8 4 1 sda4
8 5 1829883793 sda5
8 16 1084948480 sdb
8 17 1059342133 sdb1
8 18 25599577 sdb2
8 32 13671872256 sdc
8 33 13671872222 sdc1
8 48 13668734464 sdd
8 49 12695309918 sdd1
8 64 13671872256 sde
8 65 13671872222 sde1
8 80 13671872256 sdf
8 81 13671869225 sdf1
8 96 12695309952 sdg
8 97 12695309918 sdg1
253 0 66406219776 dm-0
[root at nimbus /]# xfs_repair /dev/mapper/vg1-vol5
Phase 1 - find and verify superblock...
writing modified primary superblock
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 11
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- agno = 33
- agno = 34
- agno = 35
- agno = 36
- agno = 37
- agno = 38
- agno = 39
- agno = 40
- agno = 41
- agno = 42
- agno = 43
- agno = 44
- agno = 45
- agno = 46
- agno = 47
- agno = 48
- agno = 49
- agno = 50
- agno = 51
- agno = 52
- agno = 53
- agno = 54
- agno = 55
- agno = 56
- agno = 57
- agno = 58
- agno = 59
- agno = 60
- agno = 61
- agno = 62
- agno = 63
- agno = 64
- agno = 65
- agno = 66
- agno = 67
- agno = 68
- agno = 69
- agno = 70
- agno = 71
- agno = 72
- agno = 73
- agno = 74
- agno = 75
- agno = 76
- agno = 77
- agno = 78
- agno = 79
- agno = 80
- agno = 81
- agno = 82
- agno = 83
- agno = 84
- agno = 85
- agno = 86
- agno = 87
- agno = 88
- agno = 89
- agno = 90
- agno = 91
- agno = 92
- agno = 93
- agno = 94
- agno = 95
- agno = 96
- agno = 97
- agno = 98
- agno = 99
- agno = 100
- agno = 101
- agno = 102
- agno = 103
- agno = 104
- agno = 105
- agno = 106
- agno = 107
- agno = 108
- agno = 109
- agno = 110
- agno = 111
- agno = 112
- agno = 113
- agno = 114
- agno = 115
- agno = 116
- agno = 117
- agno = 118
- agno = 119
- agno = 120
- agno = 121
- agno = 122
- agno = 123
- agno = 124
- agno = 125
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- agno = 7
- agno = 8
- agno = 9
- agno = 10
- agno = 12
- agno = 13
- agno = 14
- agno = 15
- agno = 16
- agno = 17
- agno = 18
- agno = 19
- agno = 20
- agno = 21
- agno = 22
- agno = 23
- agno = 24
- agno = 25
- agno = 26
- agno = 27
- agno = 28
- agno = 29
- agno = 30
- agno = 31
- agno = 32
- agno = 33
- agno = 34
- agno = 35
- agno = 36
- agno = 38
- agno = 40
- agno = 37
- agno = 42
- agno = 43
- agno = 44
- agno = 45
- agno = 46
- agno = 47
- agno = 11
- agno = 49
- agno = 51
- agno = 52
- agno = 53
- agno = 54
- agno = 55
- agno = 56
- agno = 58
- agno = 59
- agno = 48
- agno = 60
- agno = 61
- agno = 41
- agno = 64
- agno = 65
- agno = 66
- agno = 67
- agno = 68
- agno = 69
- agno = 70
- agno = 71
- agno = 72
- agno = 73
- agno = 74
- agno = 75
- agno = 76
- agno = 77
- agno = 78
- agno = 79
- agno = 80
- agno = 81
- agno = 82
- agno = 83
- agno = 84
- agno = 85
- agno = 86
- agno = 87
- agno = 88
- agno = 89
- agno = 90
- agno = 91
- agno = 92
- agno = 93
- agno = 94
- agno = 95
- agno = 96
- agno = 97
- agno = 98
- agno = 99
- agno = 100
- agno = 101
- agno = 102
- agno = 103
- agno = 104
- agno = 105
- agno = 106
- agno = 107
- agno = 108
- agno = 109
- agno = 110
- agno = 111
- agno = 112
- agno = 113
- agno = 114
- agno = 115
- agno = 116
- agno = 117
- agno = 118
- agno = 119
- agno = 120
- agno = 121
- agno = 122
- agno = 123
- agno = 124
- agno = 125
- agno = 62
- agno = 63
- agno = 39
- agno = 57
- agno = 50
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
[root at nimbus /]# mount /export/vol5
[root at nimbus /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb2              24G  7.6G   15G  34% /
/dev/sda5             1.7T  1.3T  391G  77% /export
/dev/sda2             3.8G  1.5G  2.2G  40% /var
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sdb1             995G  946G   19G  99% /storage
tmpfs                 7.7G  7.9M  7.7G   1% /var/lib/ganglia/rrds
/dev/mapper/vg1-vol5   51T   51T   90M 100% /export/vol5
[root at nimbus /]# cat /proc/partitions
major minor #blocks name
8 0 1843200000 sda
8 1 8193118 sda1
8 2 4096575 sda2
8 3 1020127 sda3
8 4 1 sda4
8 5 1829883793 sda5
8 16 1084948480 sdb
8 17 1059342133 sdb1
8 18 25599577 sdb2
8 32 13671872256 sdc
8 33 13671872222 sdc1
8 48 13668734464 sdd
8 49 12695309918 sdd1
8 64 13671872256 sde
8 65 13671872222 sde1
8 80 13671872256 sdf
8 81 13671869225 sdf1
8 96 12695309952 sdg
8 97 12695309918 sdg1
253 0 66406219776 dm-0
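
In case it helps with the stale secondary superblock theory: the dm-0 line in /proc/partitions is identical before the grow, after the grow, and after the repair, so the device itself never changes size -- only what the filesystem reports does. The next thing I could try is comparing the size recorded in the primary superblock against one of the secondaries. A sketch of what I have in mind (not yet run on this volume, and I'm assuming I have the xfs_db syntax right):

xfs_db -r -c "sb 0" -c "p dblocks" -c "p agcount" /dev/mapper/vg1-vol5   # primary superblock
xfs_db -r -c "sb 1" -c "p dblocks" -c "p agcount" /dev/mapper/vg1-vol5   # secondary superblock in AG 1

If a secondary still carries the pre-grow dblocks value, that would match Dave's explanation of repair restoring the old size.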