Re: mkfs.xfs -n size=65536

To: "Al Lau (alau2)" <alau2@xxxxxxxxx>
Subject: Re: mkfs.xfs -n size=65536
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 13 Oct 2015 14:33:04 +1100
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <8a1b26a3b869448e805485c529c447a4@xxxxxxxxxxxxxxxxxxxxx>
References: <0F279340237AA148AD7E3C6A70561A5E01266BE7@xxxxxxxxxxxxxxxxxxxxx> <20151013002308.GI27164@dastard> <8a1b26a3b869448e805485c529c447a4@xxxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Oct 13, 2015 at 01:39:13AM +0000, Al Lau (alau2) wrote:
> Have a 3 TB file.  Logically divide into 1024 sections.  Each
> section has a process doing dd to a randomly selected 4K block in
> a loop.  Will this test case eventually cause the extent
> fragmentation that leads to the kmem_alloc message?
> 
> dd if=/var/kmem_alloc/junk of=/var/kmem_alloc/fragmented obs=4096 bs=4096 
> count=1 seek=604885543 conv=fsync,notrunc oflag=direct

If you were looking for a recipe to massively fragment a file, then
you found it. And, yes, once a file accumulates the millions of
extents that a workload like this will create, you'll start having
memory allocation problems.
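
If you want to watch that happen as the test runs, the extent count
is easy to check from userspace with something like the commands
below (path taken from your dd command above; xfs_bmap ships with
xfsprogs, filefrag with e2fsprogs):

  # roughly one output line per extent, plus a couple of header lines
  xfs_bmap -v /var/kmem_alloc/fragmented | wc -l

  # or simply:
  filefrag /var/kmem_alloc/fragmented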

But I don't think that sets the GFP_ZERO flag anywhere, so that's
not necessarily where the memory shortage is coming from. I just
committed some changes to the dev tree that allow more detailed
information to be obtained from this allocation error point -
perhaps it would be worthwhile trying a kernel build from the
current for-next tree and turning the error level up to 11?
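
That is, via the fs.xfs.error_level sysctl, along the lines of:

  # crank XFS error reporting up to its most verbose level
  sysctl fs.xfs.error_level=11

  # or equivalently:
  echo 11 > /proc/sys/fs/xfs/error_level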

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
