
RE: xfs_repair speedup changes

To: "'Michael Nishimoto'" <miken@xxxxxxxxx>, "'XFS Mailing List'" <xfs@xxxxxxxxxxx>
Subject: RE: xfs_repair speedup changes
From: "Barry Naujok" <bnaujok@xxxxxxxxxxxxxxxxx>
Date: Tue, 23 Jan 2007 10:59:57 +1100
Cc: "'Chandan Talukdar'" <chandan@xxxxxxxxx>
In-reply-to: <45B53E11.8080406@agami.com>
Sender: xfs-bounce@xxxxxxxxxxx
Thread-index: Acc+eYXLzq7sybG0RKO5VqEkqXlKmgABmBRw
Hi Michael,

It's going to take me a little time to digest this patch and see how it
compares with the work we have been doing.

On the surface it looks quite interesting. I'll benchmark and analyse
the two approaches and integrate whichever solution works best for the
majority of cases.

I'm not sure why the kernel should make any difference to running
xfs_repair. You should be able to get the 2.8.18 xfsprogs tarball from
the FTP site, compile it, and test it.

I'm currently working on converting the phase 2-5 block map to an
extent-based format, which will reduce memory consumption in addition to
speeding it up in most cases.
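As a rough illustration of the extent-based idea (hypothetical names and
structure, not the actual xfsprogs code): runs of adjacent blocks sharing
the same state are stored as (start, length, state) records, so memory
scales with the number of runs rather than the number of blocks, and a
binary search answers per-block state queries.

```c
#include <stdlib.h>

/* Sketch of an extent-based block state map. Instead of one entry per
 * block, each record covers a run of blocks with the same state.
 * All names here are illustrative, not taken from xfsprogs. */

typedef struct {
    unsigned long long start;   /* first block of the run */
    unsigned long long len;     /* number of blocks in the run */
    int state;                  /* e.g. free, in-use, metadata */
} extent_rec;

typedef struct {
    extent_rec *recs;
    size_t nrecs;
    size_t cap;
} extent_map;

void extmap_init(extent_map *m)
{
    m->recs = NULL;
    m->nrecs = 0;
    m->cap = 0;
}

/* Append a run, merging with the previous record when it is adjacent
 * and has the same state (assumes runs are added in ascending order). */
void extmap_add(extent_map *m, unsigned long long start,
                unsigned long long len, int state)
{
    if (m->nrecs > 0) {
        extent_rec *last = &m->recs[m->nrecs - 1];
        if (last->state == state && last->start + last->len == start) {
            last->len += len;
            return;
        }
    }
    if (m->nrecs == m->cap) {
        m->cap = m->cap ? m->cap * 2 : 16;
        m->recs = realloc(m->recs, m->cap * sizeof(extent_rec));
        if (!m->recs)
            abort();
    }
    m->recs[m->nrecs++] = (extent_rec){ start, len, state };
}

/* Binary search for the state of a single block; -1 if unmapped. */
int extmap_lookup(const extent_map *m, unsigned long long block)
{
    size_t lo = 0, hi = m->nrecs;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        const extent_rec *r = &m->recs[mid];
        if (block < r->start)
            hi = mid;
        else if (block >= r->start + r->len)
            lo = mid + 1;
        else
            return r->state;
    }
    return -1;
}
```

On a mostly-contiguous filesystem the map collapses to a handful of
records, which is where the memory win over a flat per-block array
comes from.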

The only other foreseeable change is trying to merge the directory
checking in phase 3 and phase 6, but I'm not sure how practical or
feasible this is, or whether the amount of work would provide a
significant performance improvement.

> -----Original Message-----
> From: xfs-bounce@xxxxxxxxxxx [mailto:xfs-bounce@xxxxxxxxxxx] 
> On Behalf Of Michael Nishimoto
> Sent: Tuesday, 23 January 2007 9:43 AM
> To: XFS Mailing List
> Cc: Chandan Talukdar
> Subject: xfs_repair speedup changes
> Hi everyone,
> agami Systems started on a project to speed up xfs_repair before
> we knew that SGI was working on the same task.  Similar to SGI's
> solution, our approach uses readahead to shorten the runtime.  Agami
> also wanted to change the existing code as little as possible.
> By releasing this patch, we hope to start a discussion which will
> lead to continued improvements in xfs_repair runtimes.  Our patch
> has a couple of ideas which should benefit SGI's code.  Using our
> NAS platform which has 4 CPUs and runs XFS over software RAID5,
> we have seen 5 to 8 times speedup, depending on resources allocated
> to a run.  The test filesystem had 1.4TB of data with 24M files.
> Unfortunately, I have not been able to run the latest CVS code
> against our system due to kernel differences.
> SGI's advantages
> ----------------
> 1. User space cache with maximum number of entries
>     a. means that xfs_repair will cause less interference
>        with other mounted filesystems.
>     b. allows tracking of cache behavior.
> 2. Rewrite phase7 to eliminate unnecessary transaction overhead.
> agami's advantages
> ------------------
> 1. Doesn't depend on AIO & generic DIO working correctly.  Will
>     work with older linux kernels.
> 2.  Parallelism model provides additional benefits
>      a. In phases 3 and 4, many threads can be used to prefetch
>         inode blocks regardless of AG count.
>      b. By processing one AG at a time, drives spend less time seeking
>         when multiple AGs are placed on a single drive due to the
>         volume geometry.
>      c. By placing each prefetch in its own thread, more parallelism
>         is achieved especially when retrieving directory blocks.
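The prefetch model described above could be sketched roughly like this
(a minimal, hypothetical illustration using POSIX threads with simulated
reads, not agami's actual patch): worker threads claim queued block
numbers for the current AG and issue the reads, while the caller
processes one AG at a time.

```c
#include <pthread.h>

#define NWORKERS 4

typedef struct {
    unsigned long long *blocks; /* block numbers queued for readahead */
    int nblocks;
    int next;                   /* index of next block to prefetch */
    int nread;                  /* count of completed "reads" */
    pthread_mutex_t lock;
} prefetch_queue;

/* Worker: repeatedly claim the next queued block and issue the read.
 * The read is simulated here; real code would pread() the block into
 * a user-space buffer cache. */
static void *prefetch_worker(void *arg)
{
    prefetch_queue *pq = arg;
    for (;;) {
        pthread_mutex_lock(&pq->lock);
        if (pq->next >= pq->nblocks) {
            pthread_mutex_unlock(&pq->lock);
            return NULL;
        }
        unsigned long long blk = pq->blocks[pq->next++];
        pthread_mutex_unlock(&pq->lock);

        (void)blk;              /* simulate I/O for block blk */

        pthread_mutex_lock(&pq->lock);
        pq->nread++;
        pthread_mutex_unlock(&pq->lock);
    }
}

/* Prefetch one AG's blocks with NWORKERS threads; the caller then
 * scans the now-cached blocks serially before moving to the next AG.
 * Returns the number of blocks read. */
int prefetch_ag(unsigned long long *blocks, int nblocks)
{
    prefetch_queue pq = { blocks, nblocks, 0, 0,
                          PTHREAD_MUTEX_INITIALIZER };
    pthread_t tid[NWORKERS];

    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, prefetch_worker, &pq);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return pq.nread;
}
```

Because the threads only issue reads and do no repair work, this scheme
needs neither AIO nor direct I/O support from the kernel, which matches
the portability point above.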
> Chandan Talukdar performed all the xfs_repair work over the last summer.
> Because the work was done on an old base, I have ported it forward to
> a CVS date of May 17, 2006.  I chose this date because it allows a
> cleaner patch to be delivered.
> I would like to hear suggestions for how to proceed.
>      Michael Nishimoto
