I did a run this morning following Andi's suggestion, reverting all the
files in the "TAKE: revamped pagebuf locking" posting. The first SPEC test
ran faster, but kswapd still went mad during the second SPEC test init. I
didn't get an Alt-SysRq trace of it, but I will get one later this afternoon. Are
there any other XFS changes between Feb 7th and Mar 29th that might affect
VM behaviour?
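
For reference, the test I commented out (mentioned in the quoted mail below)
sits at the top of the shrink_cache scan loop. The shape of it is roughly
this (a simplified sketch from memory, not the literal 2.4.18 vmscan.c
source):

    while (--max_scan >= 0 && !list_empty(&inactive_list)) {
            if (current->need_resched) {
                    /* somebody else wants the CPU: drop the LRU lock,
                       yield, then come back and keep scanning */
                    spin_unlock(&pagemap_lru_lock);
                    __set_current_state(TASK_RUNNING);
                    schedule();
                    spin_lock(&pagemap_lru_lock);
                    continue;
            }
            /* ... otherwise try to reclaim the page at the tail of
               the inactive list ... */
    }

With the whole box short on memory, every process that ends up in
shrink_cache keeps hitting that check, which would explain the time we see
spent waiting for schedule() to come back.
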
Erik
> -----Original Message-----
> From: Stephen Lord [mailto:lord@xxxxxxx]
> Sent: Thursday, April 04, 2002 10:06 AM
> To: HABBINGA,ERIK (HP-Loveland,ex1)
> Cc: Andi Kleen; 'linux-xfs@xxxxxxxxxxx'
> Subject: Re: heavy VM load due to revamped pagebuf locking?
>
>
> Andi Kleen wrote:
>
> >On Thu, Apr 04, 2002 at 11:13:39AM -0500, HABBINGA,ERIK (HP-Loveland,ex1)
> >wrote:
> >
> >>I've updated to 2.4.18 w/ an XFS CVS download from 03/29/2002. During SPEC
> >>testing, the VM takes over all CPU load as pagebuf_iostart starts waiting
> >>for memory, and then kmalloc starts waiting for memory. All of this time
> >>spent in shrink_cache causes the SPEC test to time out. Once the test
> >>stops, the box settles down and VM CPU load goes away. All of the
> >>shrink_cache functions are waiting for schedule() to come back, because of
> >>the test for current->need_resched at the top of the shrink_cache loop. For
> >>grins, I commented out that test, and now many nfsd processes are sitting
> >>in _pagebuf_find_lockable_buffer->pagebuf_iostart's call to pagebuf_iowait.
> >>Could the revamped pagebuf locking cause this behaviour?
> >>
> >
> >You could test it. Just revert that TAKE and test again.
> >
> >-Andi
> >
> Andi's suggestion is a good one; we have not seen this here, and your
> configuration is clearly larger than anything we have locally.
>
> You might also try just reverting the changes in page_buf_io.c (you will
> need the header file change). The locking changes should not affect memory
> consumption that much, but changes went into this file which put buffer
> heads on pages in more circumstances than we used to.
>
> Steve
>
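
Regarding Steve's point about buffer heads going onto pages in more
circumstances than before: the little program below is the back-of-the-envelope
arithmetic for why that could matter here. All of the sizes in it are
assumptions for illustration, not values measured on this machine.

    #include <stdio.h>

    /* Rough estimate of the extra kernel memory used if every cached page
     * carries buffer heads. The numbers are assumed, not measured. */
    int main(void)
    {
            long long cached_mb  = 2048; /* assumed page cache footprint during the run */
            long long page_size  = 4096;
            long long block_size = 4096; /* assumed fs block size; one buffer_head per block */
            long long bh_size    = 96;   /* assumed rough sizeof(struct buffer_head) on 2.4 */

            long long pages    = cached_mb * 1024 * 1024 / page_size;
            long long bh_bytes = pages * (page_size / block_size) * bh_size;

            printf("~%lld MB of buffer_heads for %lld MB of cached data\n",
                   bh_bytes / (1024 * 1024), cached_mb);
            return 0;
    }

A few tens of megabytes of buffer heads is not fatal by itself, but on a box
that is already struggling to satisfy pagebuf_iostart and kmalloc it would add
to the pressure, so reverting just page_buf_io.c sounds worth trying.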