
To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
From: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Mon, 9 Apr 2012 14:45:58 +0200
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4F827341.2000607@xxxxxxxxxxxxxxxxx>
Organization: Intellique
References: <CAAxjCEwBMbd0x7WQmFELM8JyFu6Kv_b+KDe3XFqJE6shfSAfyQ@xxxxxxxxxxxxxx> <20350.9643.379841.771496@xxxxxxxxxxxxxxxxxx> <20350.13616.901974.523140@xxxxxxxxxxxxxxxxxx> <CAAxjCEzkemiYin4KYZX62Ei6QLUFbgZESdwS8krBy0dSqOn6aA@xxxxxxxxxxxxxx> <4F7F7C25.8040605@xxxxxxxxxxxxxxxxx> <20120407104912.44881be3@xxxxxxxxxxxxxx> <4F81F5FD.1090809@xxxxxxxxxxxxxxxxx> <20120408234555.695e291f@xxxxxxxxxxxxxx> <4F827341.2000607@xxxxxxxxxxxxxxxxx>
On Mon, 09 Apr 2012 00:27:29 -0500, you wrote:

> In your RAID10 random write testing, was this with a filesystem or
> doing direct block IO? 

Doing random IO in a file lying on an XFS filesystem.

> If the latter, I wonder if its write pattern
> is anything like the access pattern we'd see hitting dozens of AGs
> while creating 10s of thousands of files.

I suppose the file-creation workload hits a few well-defined hot
spots rather than being purely random access.

For testing I only have a machine with 15 4TB drives in RAID-6,
not exactly an IOPS demon :)

So I built a tar file to make the test somewhat similar to the
OP's problem:

root@3[raid]# ls -lh test.tar 
-rw-r--r-- 1 root root 2,6G  9 avril 13:52 test.tar
root@3[raid]# tar tf test.tar | wc -l
234318

# echo 3 > /proc/sys/vm/drop_caches
# time tar xf test.tar

real    1m2.584s
user    0m1.376s
sys     0m13.643s

Let's rerun it with the files cached (the machine has 16 GB of RAM,
so every single file must still be in the page cache):

# time tar xf test.tar

real    0m50.842s
user    0m0.809s
sys     0m13.767s
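That works out to roughly the following extraction rates, a quick
back-of-the-envelope from the 234318 files and the two timings above
(nothing measured separately, just the division):

```shell
# Files extracted per second, cold vs. warm cache,
# from the two "time tar xf" runs above (234318 files).
files=234318
awk -v f="$files" 'BEGIN {
    printf "cold cache: %.0f files/s\n", f / 62.584
    printf "warm cache: %.0f files/s\n", f / 50.842
}'
# -> cold cache: 3744 files/s
# -> warm cache: 4609 files/s
```

So the warm cache only buys about 20%, consistent with the run being
write-bound rather than read-bound.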

Typical IOs during unarchiving: no read, write IO bound.

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0,00  1573,50    0,00  480,50     0,00    36,96   157,52    60,65  124,45   2,08 100,10
dm-0              0,00     0,00    0,00 2067,00     0,00    39,56    39,20   322,55  151,62   0,48 100,10

The OP's setup, with 6 15k drives, should provide roughly the same
number of true IOPS (~1200) as my slow-as-hell bunch of 7200 RPM
4TB drives (~1500). I suppose the write cache makes up most of the
difference; either that, or 15k drives are overrated :)
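For the record, the arithmetic behind that comparison; the per-drive
figures are the usual rules of thumb (~200 random IOPS for a 15k drive,
~100 for a 7200 RPM drive), not measurements:

```shell
# Rule-of-thumb random IOPS: ~200 per 15k drive, ~100 per 7200 RPM drive.
op_iops=$(( 6 * 200 ))     # OP: 6 x 15k drives
my_iops=$(( 15 * 100 ))    # here: 15 x 7200 RPM 4TB drives
echo "OP: ${op_iops} IOPS, test box: ${my_iops} IOPS"
# -> OP: 1200 IOPS, test box: 1500 IOPS
```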

Alas, I can't run the test on this machine with ext4: I can't 
get mkfs.ext4 to swallow my big device. 

mkfs -t ext4 -v -b 4096 -n /dev/dm-0 2147483647

should work (the -n dry run succeeds, though the block count
drastically limits the filesystem size), but it dies miserably as
soon as I remove the -n flag. Mmmph, I suppose it's production
ready if you don't have much data to store.
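For context, that 2147483647 in the command is 2^31 - 1 blocks; with
the 4 KiB block size given by -b 4096 it caps the filesystem just
under 8 TiB, which is what I meant by drastically limiting the size:

```shell
# 2147483647 blocks = 2^31 - 1; at 4 KiB per block
# that's 8796093018112 bytes, just under 8 TiB.
blocks=2147483647
block_size=4096
bytes=$(( blocks * block_size ))
echo "${bytes} bytes"
# -> 8796093018112 bytes
```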

JFS doesn't work either. And I was wondering why I'm using XFS?  :)

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------
