
Re: xfs_fsr, sunit, and swidth

To: Dave Hall <kdhall@xxxxxxxxxxxxxx>
Subject: Re: xfs_fsr, sunit, and swidth
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Sat, 13 Apr 2013 10:45:12 +1000
Cc: stan@xxxxxxxxxxxxxxxxx, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <51684382.50008@xxxxxxxxxxxxxx>
References: <5142AE40.6040408@xxxxxxxxxxxxxxxxx> <20130315114538.GF6369@dastard> <5143F94C.1020708@xxxxxxxxxxxxxxxxx> <20130316072126.GG6369@dastard> <515082C3.2000006@xxxxxxxxxxxxxx> <515361B5.8050603@xxxxxxxxxxxxxxxxx> <5155F2B2.1010308@xxxxxxxxxxxxxx> <20130331012231.GJ6369@dastard> <515C3BF3.60601@xxxxxxxxxxxxxx> <51684382.50008@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Fri, Apr 12, 2013 at 01:25:22PM -0400, Dave Hall wrote:
> Stan,
> 
> Did this post get lost in the shuffle?  Looking at it, I think it
> could have been a bit unclear.  What I need to do anyway is have a
> second, off-site copy of my backup data.  So I'm going to be
> building a second array.  In copying, in order to preserve the hard
> link structure of the source array I'd have to run a sequence of cp
> -al / rsync calls that would mimic what rsnapshot did to get me to
> where I am right now.  (Note that I could also potentially use rsync
> --link-dest.)
> So the question is how would the target xfs file system fare as far
> as my inode fragmentation situation is concerned?  I'm hoping that
> since the target would be a fresh file system, and since during the
> 'copy' phase I'd only be adding inodes, that the inode allocation
> would be more compact and orderly than what I have on the source
> array.  What do you think?

Sure, it would be to start with, but you'll eventually end up in the
same place. Removing links from the forest is what leads to the
sparse free inode space, so even starting with a dense inode
allocation pattern, it'll turn sparse the moment you remove backups
from the forest....
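The rsnapshot-style rotation described above (hard-link the previous
snapshot, then rsync over it) can be sketched roughly as follows. This is
a minimal demonstration using temporary paths, not the poster's actual
array layout; `daily.0`/`daily.1` are assumed rsnapshot-style names. It
shows that `cp -al` creates a second directory tree whose files share
inodes with the source, which is what preserves the hard-link structure
during the copy:

```shell
set -e
tmp=$(mktemp -d)

# A stand-in for the newest snapshot on the source array.
mkdir "$tmp/daily.0"
echo "payload" > "$tmp/daily.0/file"

# Rotate rsnapshot-style: hard-link the whole snapshot tree.  A later
# "rsync -aH --delete src/ $tmp/daily.0/" would then break links only
# for files that actually changed.
cp -al "$tmp/daily.0" "$tmp/daily.1"

# Both names now refer to the same inode, so the link count is 2.
stat -c '%h' "$tmp/daily.0/file"    # prints 2

rm -rf "$tmp"
```

The alternative mentioned in the post, `rsync -aH --link-dest=DIR`,
achieves the same sharing in one step by linking unchanged files against
the previously transferred snapshot instead of copying them.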

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
