
Re: Performance problems with millions of inodes

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Performance problems with millions of inodes
From: Christoph Litauer <litauer@xxxxxxxxxxxxxx>
Date: Thu, 26 Jun 2008 09:29:08 +0200
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20080625231210.GF11558@disturbed>
References: <4862598B.80905@xxxxxxxxxxxxxx> <20080625231210.GF11558@disturbed>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (Macintosh/20080421)
Dave Chinner wrote:
> On Wed, Jun 25, 2008 at 04:43:23PM +0200, Christoph Litauer wrote:
>> Hi,
>>
>> sorry if this has been asked before, I am new to this mailing list. I
>> didn't find any hints in the FAQ or by googling ...
>>
>> I have a backup server driving two kinds of backup software: bacula and
>> backuppc. bacula saves its backups on raid1, backuppc on raid2
>> (different hardware, but both fast hardware raids).
>> I have massive performance problems with backuppc which I tracked down
>> to performance problems of the filesystem on raid2 (I think so). The
>> main difference between the two backup systems is that backuppc uses
>> millions of inodes for its backup (in fact it duplicates the directory
>> structure of the backup client).
>>
>> raid1 currently holds 91675 inodes, raid2 143646439. The filesystems
>> were created with default mkfs options. raid1 is about 7 TB, raid2 about
>> 10 TB. Both filesystems are mounted with options
>> '(rw,noatime,nodiratime,ihashsize=65536)'.
>>
>> I used bonnie++ to benchmark both filesystems. Here are the results of
>> 'bonnie++ -u root -f -n 10:0:0:1000':
>>
>> raid1:
>> -------------------
>> Sequential Output: 82505 K/sec
>> Sequential Input : 102192 K/sec
>> Sequential file creation: 7184/sec
>> Random file creation    : 17277/sec
>>
>> raid2:
>> -------------------
>> Sequential Output: 124802 K/sec
>> Sequential Input : 109158 K/sec
>> Sequential file creation: 123/sec
>> Random file creation    : 138/sec
>>
>> As you can see, raid2's throughput is higher than raid1's. But the file
>> creation rates are rather low ...
>>
>> Maybe the 143 million inodes cause this effect?

> Certainly will be. You've got about 3 AGs that are holding inodes, so
> that's probably 35M+ inodes per AG. With the way allocation works,
> it's probably doing a dual-traversal of the AGI btree to find a free
> inode "near" to the parent and that is consuming lots and lots of
> CPU time.
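>
> You can check how many allocation groups the filesystem actually has
> by running xfs_info against the mount point and looking at the
> agcount value in the meta-data line, something like:
>
>   xfs_info /raid2     # mount point here is just an example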

So, would more AGs improve performance? As backuppc is still in a testing phase (for me), it would be no problem to create a new xfs filesystem with a "better" configuration. I am afraid the number of inodes will grow considerably once I back up more clients and filesystems. So, what configuration would you recommend?
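For example, would recreating raid2 with a higher AG count along these lines be a sensible starting point (agcount value and device name purely illustrative)?

  mkfs.xfs -d agcount=64 /dev/sdX    # more, smaller AGs to spread the inode btrees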


>> Any idea how to avoid it?

> I had a prototype patch back when I was at SGI that stopped this
> search when it reached a radius that was no longer "near".
> This greatly reduced CPU time for allocation on AGs with large inode
> counts and hence create rates increased significantly.
>
> [Mark - IIRC that patch was in the miscellaneous patch tarball I
> left behind...]
>
> The only other way of dealing with this is to use inode64 so that
> inodes get spread across the entire filesystem instead of just a
> few AGs at the start of the filesystem. It's too late to change the
> existing inodes, but new inodes would get spread around....
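>
> inode64 is just a mount option, so switching over would be something
> like this (device and mount point are only examples):
>
>   mount -o inode64,noatime,nodiratime /dev/sdX /raid2   # new inodes spread over all AGs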

Unfortunately my backup server is a 32-bit system ...

--
Regards
Christoph
________________________________________________________________________
Christoph Litauer                  litauer@xxxxxxxxxxxxxx
Uni Koblenz, Computing Center,     http://www.uni-koblenz.de/~litauer
Postfach 201602, 56016 Koblenz     Fon: +49 261 287-1311, Fax: -100 1311
PGP-Fingerprint: F39C E314 2650 650D 8092 9514 3A56 FBD8 79E3 27B2

