On Sun, Oct 14, 2007 at 04:12:20PM -0700, Jeremy Fitzhardinge wrote:
> David Chinner wrote:
> > You mean xfs_buf.c.
> Yes, sorry.
> > And yes, we delay unmapping pages until we have a batch of them
> > to unmap. vmap and vunmap do not scale, so this batching helps
> > alleviate some of the worst of the problems.
> How much performance does it cost?
Every vunmap() call causes a global TLB sync, and the region lists
are global with a spin lock protecting them. I think Nick has shown
a 64p Altix with ~60 cpus spinning on the vmap locks.
> What kind of workloads would it show
> up under?
A directory traversal when using large directory block sizes
with large directories....
> > Realistically, if this delayed release of vmaps is a problem for
> > Xen, then I think that some generic VM solution is needed to this
> > problem as vmap() is likely to become more common in future (think
> > large blocks in filesystems). Nick - any comments?
> Well, the only real problem is that the pages are returned to the free
> pool and reallocated while still being part of a mapping. If the pages
> are still owned by the filesystem/pagecache, then there's no problem.
The pages are still attached to the blockdev address space mapping,
but there's nothing stopping them from being reclaimed before they are
unmapped.
> What's the lifetime of things being vmapped/unmapped in xfs? Are they
> necessarily being freed when they're unmapped, or could unmapping of
> freed memory be more immediate than other memory?
It's all "freed memory". At the time we pull the buffer down, there are
no further references to the buffer. The pages are released and the mapping
is never used again until it is torn down. It is torn down either on the
next xfsbufd run (either memory pressure or every 15s) or on every 64th
new vmap() call to map new buffers.
> Maybe it just needs a notifier chain or something.
We've already got a memory shrinker hook that triggers this reclaim.
SGI Australian Software Group