Request for information on bloated writes using Swift

Dilip Simha nmdilipsimha at gmail.com
Tue Feb 2 21:42:06 CST 2016


Apologies:
Small correction:

The stat below was taken on t1.txt, but I mistakenly printed the file name as t4.txt.

On Tue, Feb 2, 2016 at 7:40 PM, Dilip Simha <nmdilipsimha at gmail.com> wrote:

> Hi Eric,
>
> Thank you for your quick reply.
>
> Using xfs_io as per your suggestion, I am able to reproduce the issue.
> However, I need to fallocate 256K and then write 257K to see this issue.
>
> # xfs_io -f -c "falloc 0 256k" -c "pwrite 0 257k" /srv/node/r1/t1.txt
> # stat /srv/node/r1/t4.txt | grep Blocks
>   Size: 263168     Blocks: 1536       IO Block: 4096   regular file
>
> # xfs_io -f -c "pwrite 0 257k" /srv/node/r1/t2.txt
> # stat  /srv/node/r1/t2.txt | grep Blocks
>   Size: 263168     Blocks: 520        IO Block: 4096   regular file
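>
> (To pin down where the extra blocks of t1.txt sit, one quick check is to
> compare the extent maps of the two files; xfs_bmap -v should show any
> unwritten/preallocated extents, and any mapping beyond the 257 KiB of
> written data is space XFS is still holding past EOF:)
>
> # xfs_bmap -v /srv/node/r1/t1.txt
> # xfs_bmap -v /srv/node/r1/t2.txt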
>
> # xfs_info /srv/node/r1
> meta-data=/dev/mapper/35000cca05831283c-part2 isize=256    agcount=4, agsize=183141504 blks
>          =                       sectsz=512   attr=2, projid32bit=1
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=732566016, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=357698, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
>
> # cat /proc/mounts | grep r1
> /dev/mapper/35000cca05831283c-part2 /srv/node/r1 xfs rw,nosuid,nodev,noexec,noatime,nodiratime,attr2,inode64,logbufs=8,noquota 0 0
>
> I waited for around 15 minutes before collecting the stat output, to give
> the background reclamation logic a fair chance to do its job. I also tried
> changing speculative_prealloc_lifetime from 300 down to 10, but it made no
> difference.
>
> cat /proc/sys/fs/xfs/speculative_prealloc_lifetime
> 10
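>
> (For reference, that knob is in seconds and controls how often the
> background scanner trims unused speculative preallocation from clean
> inodes; it was changed along these lines. If the filesystem can be taken
> offline briefly, a cycle mount, re-adding the original mount options, is
> another quick check: post-EOF preallocation is normally dropped when the
> inode is evicted, so the extra blocks should not survive it:)
>
> # echo 10 > /proc/sys/fs/xfs/speculative_prealloc_lifetime
> # umount /srv/node/r1
> # mount /dev/mapper/35000cca05831283c-part2 /srv/node/r1
> # stat /srv/node/r1/t1.txt | grep Blocks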
>
> Regards,
> Dilip
>
> On Tue, Feb 2, 2016 at 6:47 PM, Eric Sandeen <sandeen at sandeen.net> wrote:
>
>>
>>
>> On 2/2/16 4:32 PM, Dilip Simha wrote:
>> > Hi,
>> >
>> > I have a question regarding speculative preallocation in XFS, on
>> > kernel version 3.16.0-46-generic. I am using Swift version 1.0 and
>> > mkfs.xfs version 3.2.1.
>> >
>> > When I write a 256 KiB file to Swift, I see that the underlying XFS
>> > uses 3x that amount of space/blocks to store the data. From more
>> > detailed experiments, I see that when Swift uses fallocate (its
>> > default approach), XFS doesn't reclaim the speculatively preallocated
>> > blocks. Swift's fallocate doesn't exceed the body size (256 KiB).
>> >
>> > Interestingly, when either allocsize=4k is set or Swift doesn't use
>> > fallocate, XFS doesn't consume the additional space.
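>> >
>> > (For context, allocsize fixes the size of XFS's end-of-file
>> > preallocation, overriding the dynamic heuristic, and is a mount-time
>> > option; assuming the filesystem can be unmounted briefly, setting it
>> > would look roughly like this:)
>> >
>> > umount /srv/node/r1
>> > mount -o allocsize=4k /dev/mapper/35000cca05831283c-part2 /srv/node/r1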
>> >
>> > Can you please let me know if this is a known bug and if it's fixed in
>> > later versions?
>>
>> Can you clarify the exact sequence of events?
>>
>> i.e. -
>>
>> xfs_io -f -c "falloc 0 256k" -c "pwrite 0 256k" somefile
>>
>> leads to unreclaimed preallocation, while
>>
>> xfs_io -f -c "pwrite 0 256k" somefile
>>
>> does not?  Or is it some other sequence?  I don't have a
>> 3.16 handy to test, but if you can describe it in more detail
>> that'd help.  Some of this is influenced by fs geometry, too,
>> so xfs_info output would be good, along with any mount options
>> you might be using.
>>
>> Are you preallocating with or without KEEP_SIZE?
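>>
>> (For reference, with xfs_io the distinction looks roughly like this:
>> plain falloc extends the file size, while falloc -k passes
>> FALLOC_FL_KEEP_SIZE so the size stays at zero and the allocation sits
>> entirely beyond EOF. The file names below are placeholders.)
>>
>> xfs_io -f -c "falloc 0 256k" withsize      # size becomes 256k
>> xfs_io -f -c "falloc -k 0 256k" keepsize   # size stays 0, blocks still allocated
>> stat -c "%n size=%s blocks=%b" withsize keepsize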
>>
>> -Eric
>>
>> _______________________________________________
>> xfs mailing list
>> xfs at oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
>>
>
>