On Tue, Oct 13, 2015 at 01:39:13AM +0000, Al Lau (alau2) wrote:
> Have a 3 TB file. Logically divide into 1024 sections. Each
> section has a process doing dd to a randomly selected 4K block in
> a loop. Will this test case eventually cause the extent
> fragmentation that leads to the kmem_alloc message?
>
> dd if=/var/kmem_alloc/junk of=/var/kmem_alloc/fragmented obs=4096 bs=4096
> count=1 seek=604885543 conv=fsync,notrunc oflag=direct
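For reference, the full test presumably wraps that dd in 1024 per-section
loops along these lines - the section arithmetic, paths and variable names
below are assumptions, not the actual script:

#!/bin/bash
# One of the 1024 section writers: hit random 4K blocks within this
# process's slice of the 3TB file with direct, fsync'd writes.
FILE=/var/kmem_alloc/fragmented
SECTION=$1                                    # 0..1023, one per process
BLOCKS_PER_SECTION=$(( (3 * 1024 * 1024 * 1024 * 1024) / 4096 / 1024 ))

while :; do
        # pick a random 4K block inside this process's section
        blk=$(( SECTION * BLOCKS_PER_SECTION + (RANDOM * 32768 + RANDOM) % BLOCKS_PER_SECTION ))
        dd if=/var/kmem_alloc/junk of=$FILE bs=4096 count=1 \
                seek=$blk conv=fsync,notrunc oflag=direct 2>/dev/null
done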
If you were looking for a recipe to massively fragment a file, then
you found it. And, yes, when you start to get the millions of extents
in a file that a workload like this will cause, you'll start having
memory allocation problems.
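A quick way to watch the extent count climb while the test runs - assuming
you have xfsprogs installed - is something like:

# rough extent count: every line after the header is an extent (or hole)
xfs_bmap /var/kmem_alloc/fragmented | tail -n +2 | wc -l

# or ask for the inode's extent count directly
xfs_io -r -c "stat" /var/kmem_alloc/fragmented | grep nextents

Once that number is in the millions, you have the sort of file that
triggers the allocation problems above.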
But I don't think that sets the GFP_ZERO flag anywhere, so that's
not necessarily where the memory shortage is coming from. I just
committed some changes to the dev tree that allow more detailed
information to be obtained from this allocation error point -
perhaps it would be worthwhile trying a kernel build from the
current for-next tree and turning the error level up to 11?
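For reference, the error level is just a sysctl, so assuming the standard
knob, something like this turns it all the way up:

# crank XFS error reporting verbosity to its maximum (default is 3)
sysctl -w fs.xfs.error_level=11

# or, equivalently
echo 11 > /proc/sys/fs/xfs/error_level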
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx