
Re: howto keep xfs directory searches fast for a long time

To: xfs@xxxxxxxxxxx, stan@xxxxxxxxxxxxxxxxx, Peter Grandi <pg_xf2@xxxxxxxxxxxxxxxxxx>
Subject: Re: howto keep xfs directory searches fast for a long time
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 13 Aug 2012 18:44:34 +0200
In-reply-to: <5028057F.3090007@xxxxxxxxxxxxxxxxx>
Organization: it-management http://it-management.at
References: <6344220.LKveJofnHA@saturn> <5028057F.3090007@xxxxxxxxxxxxxxxxx>
User-agent: KMail/4.7.2 (Linux/3.5.1-zmi; KDE/4.7.2; x86_64; ; )
First, thanks to both of you.

On Sunday, 12 August 2012, 14:35:27, Stan Hoeppner wrote:
> So the problem here is max vmdk size?  Just use an RDM.

That would have been an option before someone created the VMDK space
over the full RAID ;-)

> Peter Grandi:
> Ah the usual goal of a single large storage pool for cheap.

I don't need O_PONIES or 5,000 IOPS. I've just been trying to figure out
whether there's anything I can do to "optimize" a given VM and storage
space via xfs formatting. I guess this is what 95% of admins worldwide
have to do these days: generic, virtualized environments with given
storage, and the customer wants X, where X is sometimes a DB, sometimes
a file store, sometimes an archive store. And the customer expects
endless IOPS, sub-zero delay, and endless disk space. I tend to destroy
their ponies quickly, but that doesn't mean you can't try to keep
systems quick.

That particular VM is not important, but I want to keep user
satisfaction at a decent level. About 10 times a week someone connects
to that machine, searches for a file, and downloads it over the
Internet. So raw download or read speed is of little value, but
access/find times matter.

I guess the best I can do is run du/find every morning to pre-fill the
dentry/inode caches on that VM, so that when someone connects, the
search runs fast.
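A minimal sketch of such a warm-up job (the script layout and the
default path are my assumptions, not anything prescribed on this list);
dropped into /etc/cron.daily it would run once every morning:

```shell
#!/bin/sh
# Hypothetical cache-warming job: walk the whole tree once so that
# directory entries and inodes are resident before the first user
# search of the day. du stat()s every file and directory under TARGET,
# which pulls the metadata into the VFS dentry/inode caches; the byte
# count itself is thrown away. -x keeps the walk on one filesystem.
TARGET="${1:-/disks/big1}"

[ -d "$TARGET" ] && du -sx "$TARGET" > /dev/null
```

With ~1.3 million inodes this is a pure metadata walk and should finish
well within a nightly cron window, provided the VM has enough RAM to
hold the caches until users arrive.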

The current VM shows this:

# df -i /disks/big1/
Filesystem                    Inodes   IUsed      IFree IUse% Mounted on
/dev/mapper/sp1--sha 1717934464 1255882 1716678582    1% /disks/big1
# df /disks/big1/
Filesystem               1K-blocks       Used  Available Use% Mounted on
/dev/mapper/sp1--sha 8587585536 6004421384 2583164152  70% /disks/big1

So 6 TB of data in 1.3 million inodes. The VM caches that easily; it
seems that's the only real thing to optimize for.


CFQ seems bad, but there's no documented way out of that. I've edited
that and added a short vm.vfs_cache_pressure description. Could
someone please recheck it?
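For reference, the two knobs in question can be set at runtime like
this (a sketch, assuming the virtual disk shows up as sdb and a kernel
with the legacy block layer; the device name and the value 50 are my
assumptions, not recommendations from this thread):

```shell
# Switch the virtual disk away from CFQ to the deadline elevator;
# reading the file back shows the active scheduler in brackets.
echo deadline > /sys/block/sdb/queue/scheduler

# Values below the default of 100 make the kernel reclaim dentry and
# inode caches less aggressively relative to the page cache, which
# favors metadata-heavy workloads like directory searches.
sysctl vm.vfs_cache_pressure=50
```

Both settings are lost on reboot, so they would need to go into an
init script or sysctl.conf to persist.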

Kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531

