
To: xfs@xxxxxxxxxxx
Subject: [RFD 09/17] xfs: optimise inode chunk freeing
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Mon, 12 Aug 2013 23:19:59 +1000
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <1376313607-28133-1-git-send-email-david@xxxxxxxxxxxxx>
References: <1376313607-28133-1-git-send-email-david@xxxxxxxxxxxxx>
From: Dave Chinner <dchinner@xxxxxxxxxx>

Now that inode chunk freeing is done asynchronously, we can make
more intelligent decisions about freeing inode chunks. As we have an
inode btree that tracks free inodes, we can quickly find out whether
the adjacent inode chunks are free. We can then match inode chunk
freeing patterns to the allocation patterns that are in use.
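
To illustrate the sort of check involved (a sketch only, not part of
this patch - the helper name is made up and it assumes inode chunk
records are aligned to XFS_INODES_PER_CHUNK), an adjacency test
against the existing inode btree might look like:

/*
 * Sketch only, not part of this patch: test whether the inode chunks
 * either side of the chunk starting at @agino are recorded as fully
 * free in the inode btree.
 */
static bool
xfs_inobt_neighbours_free(
	struct xfs_btree_cur		*cur,
	xfs_agino_t			agino)
{
	struct xfs_inobt_rec_incore	rec;
	xfs_agino_t			next = agino + XFS_INODES_PER_CHUNK;
	xfs_agino_t			prev;
	int				i, stat, error;

	if (agino < XFS_INODES_PER_CHUNK)
		return false;		/* no chunk to the left */
	prev = agino - XFS_INODES_PER_CHUNK;

	for (i = 0; i < 2; i++) {
		error = xfs_inobt_lookup(cur, i ? next : prev,
					 XFS_LOOKUP_EQ, &stat);
		if (error || !stat)
			return false;	/* no record for that chunk */
		error = xfs_inobt_get_rec(cur, &rec, &stat);
		if (error || !stat)
			return false;
		/* a chunk is only a candidate once every inode in it is free */
		if (rec.ir_freecount != XFS_INODES_PER_CHUNK)
			return false;
	}
	return true;
}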

We can also track the rate at which we are freeing inode chunks and
compare that to the rate at which we are allocating inode chunks. If
we are both allocating and freeing inode chunks, then we should slow
down the rate at which we free inode chunks so that allocations can
come directly from the empty inode chunks rather than forcing them
to be reallocated shortly after they have been freed.
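
Something along these lines could gate the decision to hold on to an
empty chunk rather than freeing it immediately, with the new
pagi_free_chunk field presumably remembering which chunk is being
held. This is a sketch only, not part of this patch; only the pagi_*
fields below come from the series, the helper name is illustrative:

/*
 * Sketch only, not part of this patch: if inode chunks are being
 * allocated at least as fast as they are being freed, hold on to an
 * empty chunk so new allocations can reuse it instead of the chunk
 * being freed and then immediately reallocated.
 */
static bool
xfs_inode_chunk_keep_empty(
	struct xfs_perag	*pag)
{
	return pag->pagi_chunk_alloc_rate >= pag->pagi_chunk_free_rate;
}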

Further, for sequential chunks we should be able to implement bulk
removal of the records from the inode btrees as long as we can
guarantee that it only results in a single merge operation. The
constraints and processes would be similar to the bulk insert
operation proposed for inode allocation.
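
The single-merge constraint could be checked conservatively along
these lines (again an illustrative sketch, not part of this patch;
the helper name is made up):

/*
 * Sketch only, not part of this patch: conservatively decide whether
 * removing @nrecs contiguous records from the leaf block under @cur
 * can result in at most a single merge (or rebalance) that cannot
 * cascade up the tree.
 */
static bool
xfs_inobt_bulk_remove_ok(
	struct xfs_btree_cur	*cur,
	int			nrecs)
{
	struct xfs_btree_block	*block;
	struct xfs_buf		*bp;
	int			remaining;

	block = xfs_btree_get_block(cur, 0, &bp);
	remaining = xfs_btree_get_numrecs(block) - nrecs;

	/* leaf stays at or above minrecs: no merge needed at all */
	if (remaining >= cur->bc_ops->get_minrecs(cur, 0))
		return true;

	/* a single-block root has nothing to merge with */
	if (cur->bc_nlevels < 2)
		return true;

	/*
	 * One leaf merge removes a record from the parent; it can only
	 * cascade if that drops the parent below its minimum.
	 */
	block = xfs_btree_get_block(cur, 1, &bp);
	return xfs_btree_get_numrecs(block) > cur->bc_ops->get_minrecs(cur, 1);
}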

Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 fs/xfs/xfs_ag.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/xfs/xfs_ag.h b/fs/xfs/xfs_ag.h
index b34f641..c423191 100644
--- a/fs/xfs/xfs_ag.h
+++ b/fs/xfs/xfs_ag.h
@@ -254,6 +254,7 @@ typedef struct xfs_perag {
 
        int             pagi_chunk_alloc_rate;
        int             pagi_chunk_free_rate;
+       xfs_agino_t     pagi_free_chunk;
 
        /*
         * Inode allocation search lookup optimisation.
-- 
1.8.3.2
