xfs


To: Roger Willcocks <roger@xxxxxxxxxxxxxxxx>
Subject: Re: How to reserve disk space in XFS to make the blocks over many files continuous?
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Sat, 10 Nov 2012 10:15:58 +1100
Cc: huubby zhou <huubby1@xxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <1352473401.3179.48.camel@xxxxxxxxxxxxxxxxxxxxxxxx>
References: <CANS6a=D4SMMqhGJVMLbr-BWqLb-Z4L4LnofzfhqChBvE9dEtPQ@xxxxxxxxxxxxxx> <20121107031952.GA6434@dastard> <1352473401.3179.48.camel@xxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Fri, Nov 09, 2012 at 03:03:21PM +0000, Roger Willcocks wrote:
> 
> > > My question is, how can I guarantee the file system blocks over files
> > > continuous? Thanks for your time and appreciate your answer.
> > 
> > You can't, directly. 
> 
> We needed to do this so I added code to swapext to support transferring
> leading blocks from one (preallocated single extent) file to another
> empty file. It's pretty straightforward but perhaps too special case for
> general consumption.

Please post the patch - swapping arbitrary ranges between two inodes
is something that is definitely useful. It is needed, for example,
to do directory defragmentation, and I know that non-linear video
editing apps would love ioctls to do similar things within the same
file (e.g. punching ads out of a video stream without having to copy
data around at all).
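[Editor's note: as background for readers outside the thread, the usual first step in a swapext-style defragmentation is to preallocate a donor file so the filesystem can lay its blocks out as a single contiguous extent; an XFS-specific ioctl (XFS_IOC_SWAPEXT, as used by xfs_fsr) then swaps the extent maps between the donor and the fragmented file. The sketch below shows only the portable preallocation step; the function name is illustrative, and it assumes a Linux system where os.posix_fallocate is available. The swap ioctl itself is not shown.]

```python
import os
import tempfile

def make_preallocated_donor(size_bytes, dir="."):
    """Create a file whose blocks are allocated up front, so the
    filesystem has the best chance of placing them in one extent.
    (Illustrative helper, not code from this thread.)"""
    fd, path = tempfile.mkstemp(dir=dir)
    # Allocate the full range without writing data; on XFS this
    # typically results in a single unwritten extent.
    os.posix_fallocate(fd, 0, size_bytes)
    return fd, path

fd, path = make_preallocated_donor(1 << 20)   # 1 MiB donor file
print(os.fstat(fd).st_size)
os.close(fd)
os.unlink(path)
```

After this, a swapext-style tool would open the fragmented target file and hand both descriptors to the filesystem-specific swap ioctl.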

That's a note for anyone that has implemented stuff like this -
regardless of whether you think it is useful or not, having the
patches out in the open (no matter what the state of the code)
makes it 100x more valuable than keeping it to yourself, and it
allows people to build functionality off them rather than having to
re-invent the wheel.

And, of course, there is the possibility we add the functionality to
the main tree, and you no longer have to maintain and test it
yourself.... :)

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
