
To: Peter Grandi <pg_xf2@xxxxxxxxxxxxxxxxxx>
Subject: Re: xfs_fsr question for improvement
From: Linda Walsh <xfs@xxxxxxxxx>
Date: Sun, 25 Apr 2010 17:02:36 -0700
Cc: Linux XFS <xfs@xxxxxxxxxxx>
In-reply-to: <19412.9412.177637.116303@xxxxxxxxxxxxxxxxxx>
References: <201004161043.11243@xxxxxx> <20100417012415.GE2493@dastard> <20100417091357.4e7ad1e0@xxxxxxxxxxxxxx> <19412.9412.177637.116303@xxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.8.1.24) Gecko/20100228 Lightning/0.9 Thunderbird/2.0.0.24 Mnenhy/0.7.6.666
Peter Grandi wrote:
>>> XFS resists fragmentation better than most other filesystems,
>>> so defragmentation, while possible, is generally not needed.
>
> That's a common myth. For most file systems and filesystems.
> Also most applications write files really badly.
>   http://www.sabi.co.uk/blog/anno06-3rd.html#060914b
----
        Do you have any evidence to back this up?  The article you quote says
nothing about XFS allocation -- it's talking about Windows systems using FAT
or NTFS -- unless you've ported XFS to Windows?  If not, I don't see how your
comments are relevant.

> Fortunately 'xfs_fsr' is mostly reliable, but in-place
> defragmentation is a risky propostion for several reasons even
> if 'xfs_fsr' is fairly reliable.
---
        How is it risky?  Do you have any evidence to back up this claim?
It copies from where it is, to a pre-reserved, vacant space (which it finds
just before it does the copy).  When the copy is done successfully, it points
the inode at the defragmented data, and frees the old copy -- or at least
that's my 'not having looked at the code' understanding of it.
        This is basically a file copy.  So you are saying that file
defragmentation in place using a file copy is risky?  Doesn't this imply you
are saying that copying files is risky?  How is that meaningful?
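For what it's worth, here is a toy sketch of the "copy, then switch over only
on success" pattern described above.  This is a simplified analogy, not what
xfs_fsr actually does: my understanding is that xfs_fsr copies into a
temporary file and then swaps the extents into the original inode via an XFS
ioctl, whereas this sketch approximates the switch-over with a
same-directory rename.  The function name and chunk size are my own
invention.

```python
import os
import tempfile

def defrag_by_copy(path):
    """Toy sketch (NOT xfs_fsr itself): copy a file's data into freshly
    allocated space, then atomically switch the name to the new copy.
    A failure at any point before the rename leaves the original intact."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with open(path, "rb") as src, os.fdopen(fd, "wb") as dst:
            # Copy into the newly allocated (ideally contiguous) space.
            while chunk := src.read(1 << 20):
                dst.write(chunk)
            dst.flush()
            os.fsync(dst.fileno())
        # Only after a fully successful copy does the name point at the
        # new data; the old blocks are then freed by the filesystem.
        os.rename(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise
```

The point of the sketch is the safety property being argued above: the old
copy is not released until the new one is complete, so the risk is the same
as any ordinary file copy.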

> Note also that 'xfs_fsr' uses a terrible "defragmentation"
> strategy (from 'man xfs_fsr'):
>   "The reorganization algorithm operates on one file at a time,"
----
        xfs_fsr does a superb job of file defragmenting.  It doesn't
do disk defragmenting.  But it does defragment single files well, which was
all it was designed to do.  We can lament that it hasn't been improved on,
but no one with money, or with 'free time' (ha) and the knowledge, has seen
it as a problem, so it hasn't been fixed.


> That also should not be the case unless your applications write
> strategy is wrong and you get extremely interleaved streams, in
> which case you get what you paid for the application programmer.
---
        That's a rather naive view.  It may not be one application but
several writing to the disk at once.  Or it could be one application
recording multiple streams to disk at the same time -- of course it would
have to write them to disk as they come in, as memory is limited -- how else
would you prevent interleaving in such a case?  There are too many
situations where fragmenting can occur to brush them all aside and say they
are the result of not paying an application programmer to do it 'correctly'.
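The interleaving effect is easy to see with a toy model.  The allocator below
is purely hypothetical (a first-fit append-only sketch, nothing like XFS's
real allocator), but it shows why two streams flushed in small alternating
chunks fragment each other, while reserving space up front -- which is
roughly what fallocate() or XFS's speculative preallocation buys you --
keeps each file in a single extent:

```python
class ToyAllocator:
    """Hypothetical first-fit, append-only block allocator, for
    illustration only."""
    def __init__(self):
        self.next_free = 0
        self.extents = {}  # file name -> list of (start, length) extents

    def append(self, name, nblocks):
        ext = self.extents.setdefault(name, [])
        start = self.next_free
        self.next_free += nblocks
        # Merge with the previous extent only if physically adjacent.
        if ext and ext[-1][0] + ext[-1][1] == start:
            ext[-1] = (ext[-1][0], ext[-1][1] + nblocks)
        else:
            ext.append((start, nblocks))

# Two streams written in small alternating chunks fragment each other:
interleaved = ToyAllocator()
for _ in range(4):
    interleaved.append("stream1", 1)
    interleaved.append("stream2", 1)

# The same data with space reserved up front stays contiguous:
preallocated = ToyAllocator()
preallocated.append("stream1", 4)
preallocated.append("stream2", 4)
```

In the interleaved case each stream ends up with four one-block extents;
with up-front reservation each stream is a single extent -- and note the
application did nothing "wrong" in the first case, it simply had to write
the data as it arrived.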

I don't see why you posted -- it wasn't to help anyone nor to offer
constructive criticism.  It was a bit harsh on the criticism side, as though
something about it was 'personal' for you....  I also sensed a tinge of
bitterness in that last bit of criticism about the video stream
fragmentation.  I'm sorry for your loss, but please try to understand that
this is a forum/list for developers/users to help with xfs problems.

Please rethink your approach here, and I apologize in advance if I'm out of
line, but something seemed off-key in this post.  Then again, maybe I'm
misreading things completely... it's happened before. :-)

Linda Walsh
