
[PATCH] New xfs_repair handling for inode nlink counts

To: <xfs@xxxxxxxxxxx>, <xfs-dev@xxxxxxx>
Subject: [PATCH] New xfs_repair handling for inode nlink counts
From: "Barry Naujok" <bnaujok@xxxxxxxxxxxxxxxxx>
Date: Fri, 9 Mar 2007 17:20:28 +1100
Sender: xfs-bounce@xxxxxxxxxxx
Thread-index: AcdiEwgGddBZtAjVToWvsumIR0RD5Q==
The attached patch has four parts to it:
 - optimised phase 7 (inode nlink count) speed
 - improved memory usage for inode nlink counts
 - memory usage tracking
 - other speed improvements

Overall, phase 7 is almost instant, and phases 6/7 use less 
memory than current versions of xfs_repair.


The optimised phase 7 involved patches to:

  dino_chunks.c
      This stores the on-disk nlink count for inodes into the 
      inode tree that is created in phase 3.

  phase7.c
      This compares the on-disk nlink counts read in phase 3
      against the actual counts generated in phase 6. If they
      differ, it creates a transaction and updates the inode
      on disk. No other disk I/O is generated.

  incore.h
      Added disk_nlinks to ino_tree_node_t structure and renamed
      nlinks to counted_nlinks in the backptrs_t structure.
      Also created set/get_inode_disk_nlinks inline functions.


Due to the massive increase in memory required to store these
counts for each inode in the filesystem, I have implemented a
memory optimisation using dynamically sized elements for
each inode cluster. Initially, they start at 8 bits each and
double in width as required by inodes with large nlink counts.
This implementation uses a table of "nlinkops" function pointers
to keep CPU usage to a minimum. It is entirely implemented
in incore.h and incore_ino.c.



To measure the memory used by various parts of xfs_repair, I
implemented memory tracking in global.h and global.c. By default
this is not compiled in, but it can be enabled by defining
TRACK_MEMORY when compiling these two files.



Finally, a small enhancement was made in xfs_repair.c. For
filesystems that fit within the libxfs block cache, phase 6
is now significantly faster: dirty blocks are flushed to
disk rather than purged from memory and re-read during
phase 6. The flush is required because the libxfs block
and inode caches are not unified.

Attachment: improved_repair_nlink_handling.patch
Description: Binary data
