
Re: xfs_fsr question for improvement

To: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Subject: Re: xfs_fsr question for improvement
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Sun, 25 Apr 2010 16:04:34 -0500
Cc: Peter Grandi <pg_xf2@xxxxxxxxxxxxxxxxxx>, Linux XFS <xfs@xxxxxxxxxxx>
In-reply-to: <20100425150209.5167fe96@xxxxxxxxxxxxxx>
References: <201004161043.11243@xxxxxx> <20100417012415.GE2493@dastard> <20100417091357.4e7ad1e0@xxxxxxxxxxxxxx> <19412.9412.177637.116303@xxxxxxxxxxxxxxxxxx> <20100425150209.5167fe96@xxxxxxxxxxxxxx>
User-agent: Thunderbird 2.0.0.24 (Macintosh/20100228)
Emmanuel Florac wrote:
> On Sun, 25 Apr 2010 12:17:24 +0100, you wrote:
> 
>>> my test VMware server (performance dropped down to abysmal
>>> level until I set up a daily xfs_fsr cron job),  
>> That should not be the case unless you are using very sparse
>> VM image files, in which case you get what you pay for.
>>
> 
> This is a development and test VMware server, so it hosts lots (100
> or so) of test VMs with sparse image files (when you start a VM to
> host a quick test, you don't want to spend 15 minutes initializing
> the drives).

If you have the -space- then you can use space preallocation to
do this very quickly, FWIW.  xfs_io's resvsp command will do it, as
will fallocate in recent util-linux-ng, if VMware doesn't do it on
its own already.
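
For instance, preallocating a 20G image file up front could look
something like this (path made up for illustration; size-suffix
handling may vary between tool versions):

  # xfs_io -f -c "resvsp 0 20g" /vms/test/disk.img

or, with the util-linux-ng tool:

  # fallocate -l 20GiB /vms/test/disk.img

Both reserve the blocks as unwritten extents without writing any
zeroes, so they return almost immediately.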

You pay some penalty for unwritten extent conversion but it'd be
better than massive fragmentation of the images.
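
(To see how bad a given image already is, something like

  # xfs_bmap -v /vms/test/disk.img

prints its extent map, and the number of extents is a direct measure
of how fragmented it is.)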

>>> and a write-intensive video server.  
>> That also should not be the case unless your application's write
>> strategy is wrong and you get extremely interleaved streams, in
>> which case you get what you paid for from the application programmer.
> 
> The application write strategy is as simple as possible: several
> different machines, each unaware of the others, write huge media
> files to a Samba share. I don't see how the interleaving could be
> worse, but there's hardly any way to improve it.

If it's all large writes, you could mount -o allocsize=512m or so:

  allocsize=size
        Sets the buffered I/O end-of-file preallocation size when
        doing delayed allocation writeout (default size is 64KiB).
        Valid values for this option are page size (typically 4KiB)
        through to 1GiB, inclusive, in power-of-2 increments.

and that might help.
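
For example (device and mount point made up for illustration):

  # mount -o allocsize=512m /dev/sdb1 /srv/media

or put allocsize=512m in the options column of the relevant
/etc/fstab entry so it persists across remounts.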

-Eric
