
To: linux-xfs@xxxxxxxxxxx
Subject: xfs after a week of use
From: Thomas Graichen <news-innominate.list.sgi.xfs@xxxxxxxxxxxxx>
Date: 6 Aug 2000 17:32:22 GMT
Distribution: local
Organization: innominate AG, Berlin, Germany
Reply-to: Thomas Graichen <graichen@xxxxxxxxxxxxx>
Reply-to: thomas.graichen@xxxxxxxxxxxxx
Sender: owner-linux-xfs-announce@xxxxxxxxxxx
User-agent: tin/1.4.2-20000205 ("Possession") (UNIX) (Linux/2.2.16-local (i586))

ok - after a week of use on a squid-running machine (serving about 50
users) i rebooted this otherwise all-xfs machine and checked the
xfs filesystems on it - here are the results so far:

/ filesystem

[root@january /root]# /usr/src/xfs/cmd/xfs/repair/xfs_repair /dev/hda5
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
error following ag 7 unlinked list
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
data fork in ino 2097285 claims free block 131122
data fork in ino 2097288 claims free block 131088
imap claims in-use inode 2097288 is free, correcting imap
data fork in ino 2097289 claims free block 131087
imap claims in-use inode 2097289 is free, correcting imap
data fork in ino 2097290 claims free block 131089
imap claims in-use inode 2097290 is free, correcting imap
data fork in ino 2097291 claims free block 131106
data fork in ino 2097291 claims free block 131107
data fork in ino 2097291 claims free block 131108
data fork in ino 2097291 claims free block 131109
data fork in ino 2097291 claims free block 131110
data fork in ino 2097291 claims free block 131111
data fork in ino 2097291 claims free block 131112
data fork in ino 2097291 claims free block 131113
data fork in ino 2097291 claims free block 131114
data fork in ino 2097291 claims free block 131115
data fork in ino 2097291 claims free block 131116
data fork in ino 2097291 claims free block 131117
data fork in ino 2097291 claims free block 131118
data fork in ino 2097291 claims free block 131119
data fork in ino 2097291 claims free block 131120
data fork in ino 2097291 claims free block 131121
imap claims in-use inode 2097291 is free, correcting imap
data fork in ino 2097292 claims free block 131090
imap claims in-use inode 2097292 is free, correcting imap
data fork in ino 2097294 claims free block 131124
imap claims in-use inode 2097294 is free, correcting imap
data fork in ino 2097295 claims free block 131141
imap claims in-use inode 2097295 is free, correcting imap
        - agno = 3
        - agno = 4
        - agno = 5
imap claims in-use inode 5243053 is free, correcting imap
imap claims in-use inode 5243056 is free, correcting imap
imap claims in-use inode 5243057 is free, correcting imap
imap claims in-use inode 5243058 is free, correcting imap
imap claims in-use inode 5243059 is free, correcting imap
imap claims in-use inode 5243060 is free, correcting imap
imap claims in-use inode 5243061 is free, correcting imap
imap claims in-use inode 5243062 is free, correcting imap
imap claims in-use inode 5243064 is free, correcting imap
imap claims in-use inode 5243067 is free, correcting imap
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - clear lost+found (if it exists) ...
        - clearing existing "lost+found" inode
        - marking entry "lost+found" to be deleted
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
entry "squid.pid" at block 0 offset 400 in directory inode 2097285 references fr
ee inode 2180103
        clearing inode number in entry at offset 400...
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - ensuring existence of lost+found directory
        - traversing filesystem starting at / ... 
rebuilding directory inode 128
rebuilding directory inode 2097285
        - traversal finished ... 
        - traversing all unattached subtrees ... 
        - traversals finished ... 
        - moving disconnected inodes to lost+found ... 
disconnected inode 7340166, moving to lost+found
disconnected inode 7340168, moving to lost+found
Phase 7 - verify and correct link counts...
done
[root@january /root]#

/var/spool/squid filesystem

[root@january /root]# /usr/src/xfs/cmd/xfs/repair/xfs_repair /dev/hda6
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - clear lost+found (if it exists) ...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - ensuring existence of lost+found directory
        - traversing filesystem starting at / ... 
        - traversal finished ... 
        - traversing all unattached subtrees ... 
        - traversals finished ... 
        - moving disconnected inodes to lost+found ... 
Phase 7 - verify and correct link counts...
done
[root@january /root]#

looks better - so it seems xfs may have some problems when running
as the / filesystem? ok, i will keep posting my observations over
time ...

btw. the kernel was from about july 31st ... august 1st and the
machine was always shut down cleanly (maybe i need to change something
in the halt scripts for xfs unmounting - it's a default redhat 6.2
setup for now - for booting i use the lilo read-write option)
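
for reference, a rough sketch of the two things i mean - the kernel
image name below is just a placeholder, not necessarily what is on
the box; only /dev/hda5 matches the / filesystem from the repair run
above:

the lilo.conf entry with the read-write option (so the kernel mounts
/ read-write right away instead of read-only followed by a remount):

    # example entry - image/label are placeholders
    image=/boot/vmlinuz-2.2.16-xfs
        label=xfs
        root=/dev/hda5
        read-write

and the kind of change i have in mind for the redhat 6.2 halt script
(/etc/rc.d/init.d/halt) - remounting / read-only as one of the last
steps so nothing is written to the xfs root anymore before power off:

    # after all other filesystems are unmounted, remount the root
    # filesystem read-only (-n: do not update /etc/mtab) so no
    # further writes hit it before the machine goes down
    mount -n -o remount,ro /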

t

-- 
thomas.graichen@xxxxxxxxxxxxx
Technical Director                                       innominate AG
Clustering & Security                                networking people
tel: +49.30.308806-13  fax: -77                   http://innominate.de
