
Re: High Fragmentation with XFS and NFS Sync

To: "Darrick J. Wong" <darrick.wong@xxxxxxxxxx>
Subject: Re: High Fragmentation with XFS and NFS Sync
From: Nick Fisk <friskyfisk10@xxxxxxxxxxxxxx>
Date: Sat, 2 Jul 2016 22:00:33 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20160702201249.GH4917@xxxxxxxxxxxxxxxx>
References: <CAC5UwBi8Skjx90_XC5Z5B8P+CadawBZ3iUabKtm-2ZvrkgocZQ@xxxxxxxxxxxxxx> <20160702201249.GH4917@xxxxxxxxxxxxxxxx>
On 2 July 2016 at 21:12, Darrick J. Wong <darrick.wong@xxxxxxxxxx> wrote:
> On Sat, Jul 02, 2016 at 09:52:40AM +0100, Nick Fisk wrote:
>> Hi, hope someone can help me here.
>>
>> I'm exporting some XFS filesystems to ESX via NFS with the sync option
>> enabled. I'm seeing really heavy fragmentation when multiple VMs are
>> copied onto the share at the same time. I'm also seeing kmem_alloc
>> failures, which are probably the biggest problem, as they effectively
>> take everything down.
>
> (Probably a result of loading the millions of bmbt extents into memory?)

Yes, I thought that was the case.
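
For anyone wanting to reproduce the numbers quoted below, this is
roughly how I've been checking them; the device and file paths are
placeholders for my setup:

    # Per-file extent map; each line after the two header lines is one
    # extent (or hole)
    xfs_bmap -v /srv/nfs/vm1-flat.vmdk | tail -n +3 | wc -l

    # Filesystem-wide actual vs. ideal extent counts (read-only)
    xfs_db -r -c frag /dev/rbd0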

>
>> The underlying storage is a Ceph RBD, and the server the FS is running
>> on has kernel 4.5.7. Mount options are currently the defaults. Running
>> xfs_db, I'm seeing millions of extents where the ideal is listed as a
>> couple of thousand, yet there are only a couple of hundred files on the
>> FS. The extent sizes roughly match the IO size the VMs were written to
>> XFS with, so it looks like each parallel IO thread is being allocated
>> next to the others rather than in spaced-out regions of the disk.
>>
>> From what I understand, this is because each NFS write opens and closes
>> the file, which defeats any chance of XFS using its allocation features
>> to stop parallel write streams from interleaving with each other.
>>
>> Is there anything I can tune to give each write to each file a little
>> bit of space, so that readahead at least has a chance of hitting a few
>> MB of sequential data when reading?
>
> /me wonders if setting an extent size hint on the rootdir before copying
> the files over would help here...

I've set a 16M hint and will copy a new VM over; I'm interested to see
what happens. Thanks for the suggestion.
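
In case it's useful to anyone else, this is roughly what I ran; the
directory below is a placeholder for the actual export root:

    # Set a 16 MiB extent size hint on the export directory; newly
    # created files underneath inherit it
    xfs_io -c 'extsize 16m' /srv/nfs

    # Read the hint back to confirm
    xfs_io -c 'extsize' /srv/nfs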

>
> --D
>
>>
>> I have read that inode32 allocates more randomly than inode64, so I'm
>> not sure whether it's worth trying, as there will likely be fewer than
>> 1,000 files per FS.
>>
>> Or am I best just to run fsr after everything has been copied on?
>>
>> Thanks for any advice
>> Nick
>
>> _______________________________________________
>> xfs mailing list
>> xfs@xxxxxxxxxxx
>> http://oss.sgi.com/mailman/listinfo/xfs
>
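
On the fsr question from my original mail: if the hint doesn't help
enough, my understanding is that defragmenting once the copies finish
should also bring the extent counts back down, along these lines (the
mount point is a placeholder):

    # Reorganise files on the given XFS mount point, verbosely
    xfs_fsr -v /srv/nfs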
