
Re: XFS and DPX files

To: xfs@xxxxxxxxxxx
Subject: Re: XFS and DPX files
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 2 Nov 2009 12:05:27 +0100
In-reply-to: <20091031174836.3fc9505b@xxxxxxxxxxxxxx>
Organization: it-management http://it-management.at
References: <4AEC2CF4.8040703@xxxxxxx> <4AEC4BAA.20606@xxxxxxx> <20091031174836.3fc9505b@xxxxxxxxxxxxxx>
User-agent: KMail/1.10.3 (Linux/2.6.31.4-ZMI; KDE/4.1.3; x86_64; ; )
On Saturday, 31 October 2009, Emmanuel Florac wrote:
> Another trick is to mkfs the drive with su and sw matching the
> underlying RAID, for instance for a 15 drives RAID6 with 64K stripe
> use something like (beware, unverified syntax from memory):
>
> mkfs -t xfs -d su=65536,sw=15 /dev/sdXX

I believe for a 15-drive RAID-6, where 2 disks are used for redundancy, 
the correct mkfs would be:
mkfs -t xfs -d su=65536,sw=13 /dev/sdXX

That is, you tell XFS how many *data disks* there are, not how many 
disks the RAID uses, because the important thing is that XFS 
distributes its metadata over different disks.
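
To double-check the result (again untested, the mount point is just a 
placeholder): mkfs.xfs prints the geometry when it creates the 
filesystem, and xfs_info on the mounted filesystem shows it again. 
sunit/swidth are reported in filesystem blocks (4k by default), so with 
a 64k stripe unit over 13 data disks you should see sunit=16 and 
swidth=208 (= 16*13):

  xfs_info /mount/point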

One thing you could try: every 2 minutes, create a new dir and store new 
files there. It could well be that XFS becomes slower once a dir holds a 
certain number of files. If you switch dirs and everything then writes 
without drops, that was the problem.
If you can't change the dir in your application, start a small batch 
job that moves the files to another dir, or removes them, as sketched 
below.
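
A minimal sketch of such a batch job (the paths and the 2-minute 
interval are placeholders, adapt them to your setup):

  #!/bin/sh
  # Move files older than 2 minutes out of the hot directory so the
  # directory the application writes into never grows too large.
  SRC=/data/incoming                      # placeholder: application's write dir
  while true; do
          DEST=/data/archive/$(date +%Y%m%d-%H%M)
          mkdir -p "$DEST"
          # move everything not touched for more than 2 minutes
          find "$SRC" -maxdepth 1 -type f -mmin +2 -exec mv {} "$DEST" \;
          sleep 120
  done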

Another thing to try is whether it helps to turn the disks' write cache 
*on*, despite all the warnings in the FAQ. That could also give an idea 
of where to look next.
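
If you want to try that, something along these lines shows and toggles 
the write cache on plain SATA drives (for SAS/SCSI disks behind a 
controller you would need sdparm or the controller's tools instead; 
/dev/sdX is a placeholder):

  hdparm -W  /dev/sdX     # show the current write-cache setting
  hdparm -W1 /dev/sdX     # turn the write cache on
  hdparm -W0 /dev/sdX     # turn it off again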

mfg zmi
-- 
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4
