mkfs.xfs -n size=65536
Al Lau (alau2)
alau2 at cisco.com
Mon Oct 12 22:42:01 CDT 2015
Hi Dave,
I can try the dev kernel with your change. How do I go about getting the new bits?
# uname -a
Linux abc.company.com 3.10.0-229.1.2.el7.x86_64 #1 SMP Fri Mar 6 17:12:08 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
Thanks,
-Al
-----Original Message-----
From: Dave Chinner [mailto:david at fromorbit.com]
Sent: Monday, October 12, 2015 8:33 PM
To: Al Lau (alau2)
Cc: xfs at oss.sgi.com
Subject: Re: mkfs.xfs -n size=65536
On Tue, Oct 13, 2015 at 01:39:13AM +0000, Al Lau (alau2) wrote:
> Have a 3 TB file. Logically divide into 1024 sections. Each section
> has a process doing dd to a randomly selected 4K block in a loop.
> Will this test case eventually cause the extent fragmentation that
> lead to the kmem_alloc message?
>
> dd if=/var/kmem_alloc/junk of=/var/kmem_alloc/fragmented obs=4096
> bs=4096 count=1 seek=604885543 conv=fsync,notrunc oflag=direct
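[A rough reconstruction of that workload, scaled out as described (1024 sections, one writer per section), might look like the script below. The paths come from the quoted dd line; the per-section iteration count and the RANDOM-based offset picker are assumptions for illustration. Scattered 4 KiB direct-I/O writes like this leave the file with a very large number of small extents.]

    #!/bin/bash
    # Sketch of the described test: one writer per logical section overwrites
    # randomly chosen 4 KiB blocks of a 3 TB file with direct I/O.
    # Assumes FILE already exists (e.g. a sparse 3 TB file).

    FILE=/var/kmem_alloc/fragmented   # target file, path from the dd line above
    SRC=/var/kmem_alloc/junk          # source of the 4 KiB writes, also from the dd line
    SECTIONS=1024                     # logical sections, per the description
    ITERS=100000                      # writes per section -- assumed, not from the report
    TOTAL_BLOCKS=$((3 * 1024 * 1024 * 1024 * 1024 / 4096))  # 4 KiB blocks in 3 TB
    PER_SECTION=$((TOTAL_BLOCKS / SECTIONS))

    for ((s = 0; s < SECTIONS; s++)); do
        (
            base=$((s * PER_SECTION))
            for ((i = 0; i < ITERS; i++)); do
                # crude pseudo-random block offset within this section
                rnd=$(( (RANDOM * 32768 + RANDOM) % PER_SECTION ))
                dd if="$SRC" of="$FILE" bs=4096 count=1 seek=$((base + rnd)) \
                   conv=fsync,notrunc oflag=direct 2>/dev/null
            done
        ) &
    done
    wait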
If you were looking for a recipe to massively fragment a file, then you found it. And, yes, when you start to get millions of extents in a file, as this workload will cause, you'll start having memory allocation problems.
But I don't think that sets the GFP_ZERO flag anywhere, so that's not necessarily where the memory shortage is coming from. I just committed some changes to the dev tree that allow more detailed information to be obtained from this allocation error point - perhaps it would be worthwhile trying a kernel built from the current for-next tree and turning the error level up to 11?
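["Turning the error level up to 11" presumably refers to the XFS error_level sysctl, which controls how verbose XFS is when reporting errors (minimum 0, default 3, maximum 11). A minimal sketch, once the for-next kernel is booted:]

    # raise XFS error reporting to its most verbose setting
    sysctl fs.xfs.error_level=11
    # or, equivalently:
    echo 11 > /proc/sys/fs/xfs/error_level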
Cheers,
Dave.
--
Dave Chinner
david at fromorbit.com