
Re: xfs after a week of use

To: Thomas Graichen <graichen@xxxxxxxxxxxxx>, thomas.graichen@xxxxxxxxxxxxx
Subject: Re: xfs after a week of use
From: Steve Lord <lord@xxxxxxx>
Date: Mon, 07 Aug 2000 12:36:24 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: Message from Thomas Graichen <news-innominate.list.sgi.xfs@xxxxxxxxxxxxx> of "06 Aug 2000 17:32:22 GMT." <news2mail-8mk7f6$634$2@xxxxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs-announce@xxxxxxxxxxx
> ok - after a week of use on a squid running machine (for about 50
> users) i rebooted this otherwise full xfs machine and checked the
> xfs filesystems on it - here are the results so far:
> 
> / filesystem
> 

Just thinking about this: if you ran xfs_repair on a live filesystem,
then you really cannot rely on its output being correct. Repair goes
through the block device interface, which does not share the cache XFS
uses for its metadata. In general, repair should not be run on a live
filesystem (xfs_repair -n is safe, since it does not modify anything).
Also, if a filesystem was not cleanly unmounted, it should be mounted
and unmounted again before running repair, as the log may contain the
missing pieces of the picture.
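
For the squid spool filesystem the safe sequence would look roughly
like this (device and mount point taken from your transcript below,
adjust as needed):

    mount /dev/hda6 /var/spool/squid    # mounting replays the log
    umount /var/spool/squid             # now it is cleanly unmounted
    xfs_repair -n /dev/hda6             # check only, modifies nothing
    xfs_repair /dev/hda6                # only if -n reported problems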

Steve


> [root@january /root]# /usr/src/xfs/cmd/xfs/repair/xfs_repair /dev/hda5
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - zero log...
>         - scan filesystem freespace and inode maps...
>         - found root inode chunk
> Phase 3 - for each AG...
>         - scan and clear agi unlinked lists...
> error following ag 7 unlinked list
>         - process known inodes and perform inode discovery...
>         - agno = 0
>         - agno = 1
>         - agno = 2
> data fork in ino 2097285 claims free block 131122
> data fork in ino 2097288 claims free block 131088
> imap claims in-use inode 2097288 is free, correcting imap
> data fork in ino 2097289 claims free block 131087
> imap claims in-use inode 2097289 is free, correcting imap
> data fork in ino 2097290 claims free block 131089
> imap claims in-use inode 2097290 is free, correcting imap
> data fork in ino 2097291 claims free block 131106
> data fork in ino 2097291 claims free block 131107
> data fork in ino 2097291 claims free block 131108
> data fork in ino 2097291 claims free block 131109
> data fork in ino 2097291 claims free block 131110
> data fork in ino 2097291 claims free block 131111
> data fork in ino 2097291 claims free block 131112
> data fork in ino 2097291 claims free block 131113
> data fork in ino 2097291 claims free block 131114
> data fork in ino 2097291 claims free block 131115
> data fork in ino 2097291 claims free block 131116
> data fork in ino 2097291 claims free block 131117
> data fork in ino 2097291 claims free block 131118
> data fork in ino 2097291 claims free block 131119
> data fork in ino 2097291 claims free block 131120
> data fork in ino 2097291 claims free block 131121
> imap claims in-use inode 2097291 is free, correcting imap
> data fork in ino 2097292 claims free block 131090
> imap claims in-use inode 2097292 is free, correcting imap
> data fork in ino 2097294 claims free block 131124
> imap claims in-use inode 2097294 is free, correcting imap
> data fork in ino 2097295 claims free block 131141
> imap claims in-use inode 2097295 is free, correcting imap
>         - agno = 3
>         - agno = 4
>         - agno = 5
> imap claims in-use inode 5243053 is free, correcting imap
> imap claims in-use inode 5243056 is free, correcting imap
> imap claims in-use inode 5243057 is free, correcting imap
> imap claims in-use inode 5243058 is free, correcting imap
> imap claims in-use inode 5243059 is free, correcting imap
> imap claims in-use inode 5243060 is free, correcting imap
> imap claims in-use inode 5243061 is free, correcting imap
> imap claims in-use inode 5243062 is free, correcting imap
> imap claims in-use inode 5243064 is free, correcting imap
> imap claims in-use inode 5243067 is free, correcting imap
>         - agno = 6
>         - agno = 7
>         - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
>         - setting up duplicate extent list...
>         - clear lost+found (if it exists) ...
>         - clearing existing "lost+found" inode
>         - marking entry "lost+found" to be deleted
>         - check for inodes claiming duplicate blocks...
>         - agno = 0
>         - agno = 1
>         - agno = 2
> entry "squid.pid" at block 0 offset 400 in directory inode 2097285 references
 fr
> ee inode 2180103
>       clearing inode number in entry at offset 400...
>         - agno = 3
>         - agno = 4
>         - agno = 5
>         - agno = 6
>         - agno = 7
> Phase 5 - rebuild AG headers and trees...
>         - reset superblock...
> Phase 6 - check inode connectivity...
>         - resetting contents of realtime bitmap and summary inodes
>         - ensuring existence of lost+found directory
>         - traversing filesystem starting at / ... 
> rebuilding directory inode 128
> rebuilding directory inode 2097285
>         - traversal finished ... 
>         - traversing all unattached subtrees ... 
>         - traversals finished ... 
>         - moving disconnected inodes to lost+found ... 
> disconnected inode 7340166, moving to lost+found
> disconnected inode 7340168, moving to lost+found
> Phase 7 - verify and correct link counts...
> done
> [root@january /root]#
> 
> /var/spool/squid filesystem
> 
> [root@january /root]# /usr/src/xfs/cmd/xfs/repair/xfs_repair /dev/hda6
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
>         - zero log...
>         - scan filesystem freespace and inode maps...
>         - found root inode chunk
> Phase 3 - for each AG...
>         - scan and clear agi unlinked lists...
>         - process known inodes and perform inode discovery...
>         - agno = 0
>         - agno = 1
>         - agno = 2
>         - agno = 3
>         - agno = 4
>         - agno = 5
>         - agno = 6
>         - agno = 7
>         - process newly discovered inodes...
> Phase 4 - check for duplicate blocks...
>         - setting up duplicate extent list...
>         - clear lost+found (if it exists) ...
>         - check for inodes claiming duplicate blocks...
>         - agno = 0
>         - agno = 1
>         - agno = 2
>         - agno = 3
>         - agno = 4
>         - agno = 5
>         - agno = 6
>         - agno = 7
> Phase 5 - rebuild AG headers and trees...
>         - reset superblock...
> Phase 6 - check inode connectivity...
>         - resetting contents of realtime bitmap and summary inodes
>         - ensuring existence of lost+found directory
>         - traversing filesystem starting at / ... 
>         - traversal finished ... 
>         - traversing all unattached subtrees ... 
>         - traversals finished ... 
>         - moving disconnected inodes to lost+found ... 
> Phase 7 - verify and correct link counts...
> done
> [root@january /root]#
> 
> looks better - so it seems xfs may have some problems when running
> as the / fs? - ok, i will keep posting my observations over
> time ...
> 
> btw. the kernel was from about july 31st ... august 1st and the
> machine was always shut down cleanly (maybe i have to change
> something in the halt scripts for xfs unmounting - it's the default
> redhat 6.2 setup for now? - for booting i use the lilo read-write
> option)


In theory XFS should unmount correctly, although I think there are some
issues with making a remount read-only look like a clean unmount - this
may be what is happening on the root disk.
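
For reference, the halt scripts never truly unmount /; they remount it
read-only before halting, roughly like this (a sketch from memory of the
Red Hat 6.2 shutdown sequence, not the exact script):

    umount -a                      # unmount everything except /
    mount -n -o remount,ro /       # root can only be remounted read-only
    halt -f

So XFS has to treat that read-only remount as equivalent to a clean
unmount for repair to come up empty on the root filesystem afterwards.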

Steve

p.s. Lilo does work from XFS!


