
Re: xfs_buf and buffercache/pagecache connection

To: Andi Kleen <andi@xxxxxxxxxxxxxx>
Subject: Re: xfs_buf and buffercache/pagecache connection
From: Yannis Klonatos <klonatos@xxxxxxxxxxxx>
Date: Mon, 31 May 2010 21:25:29 +0300
Cc: xfs@xxxxxxxxxxx
In-reply-to: <87aargt0ap.fsf@xxxxxxxxxxxxxxxxx>
References: <4C03E46B.9040407@xxxxxxxxxxxx> <87aargt0ap.fsf@xxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; el; rv: Gecko/20100317 Thunderbird/3.0.4
On 5/31/2010 9:04 PM, Andi Kleen wrote:
> Yannis Klonatos <klonatos@xxxxxxxxxxxx> writes:
>
>>         I was looking to add a kernel hook to my system in order to
>> monitor buffer-cache hits and misses. Initially I was planning to add
>> my modifications to __getblk(). However, I noticed that XFS does not
>> directly use the buffer cache for its pages but seems to implement its
>> own buffering layer.
>>         What I am now looking for is 1) the place where XFS checks
>> whether a page exists in its buffer or not, and 2) the possible
>> interactions between xfs_buf and the Linux kernel buffer cache.
>>         I would appreciate any information regarding the above issues.
>
> The kernel does not track all accesses, e.g. through mmap.
> So you can only get misses (which is essentially the IO rate and is
> already accounted for), but not hits.


First of all thanks for your quick reply.

So, if I understand correctly, you are saying that it is basically impossible to modify the XFS code
to get that specific information? This sounds a bit strange, since if XFS did use the
buffer cache as ext3 and other filesystems do, the following modification (in fs/buffer.c) would suffice:

struct buffer_head *
find_get_block(struct block_device *bdev, sector_t block, int size)
{
	/* Fast path: per-CPU LRU of recently used buffer heads. */
	struct buffer_head *bh = lookup_bh_lru(bdev, block, size);

	/* Count hits/misses at the LRU lookup. */
	if (bh)
		buffercache_hits++;
	else
		buffercache_misses++;

	if (bh == NULL) {
		/* Slow path: look the buffer up in the page cache. */
		bh = __find_get_block_slow(bdev, block);
		if (bh)
			bh_lru_install(bh);
	}
	if (bh)
		touch_buffer(bh);
	return bh;
}

