To: xfs@xxxxxxxxxxx
Subject: xfs write performance issue
From: Hans-Peter Jansen <hpj@xxxxxxxxx>
Date: Thu, 19 Mar 2015 18:01:50 +0100
Delivered-to: xfs@xxxxxxxxxxx
User-agent: KMail/4.14.5 (Linux/3.19.1-2.gc0946e9-desktop; KDE/4.14.5; x86_64; ; )
Hi,

I'm struggling with a severe write performance problem on a 12 TB XFS
filesystem. The system sports an ancient userspace (openSUSE 11.1), but major
parts are current, e.g. kernel 3.19.1.

Unfortunately, for historical reasons, it's also 32-bit (PAE), and I cannot
get rid of that quickly.

The partition was migrated several times (to higher-capacity disks), and the
filesystem is somewhat aged, too:

~# LANG=C xfs_info /dev/sdc1
meta-data=/dev/sdc1              isize=256    agcount=17, agsize=183105406 blks
         =                       sectsz=512   attr=2, projid32bit=0
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2929687287, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
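
Since the filesystem is aged, fragmentation might play a role; if I read the
xfs_db(8) man page right, it can report that read-only (a diagnostic sketch,
output omitted):

~# LANG=C xfs_db -r -c frag /dev/sdc1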

~# LANG=C parted /dev/sdc
GNU Parted 1.8.8
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s                                                           
(parted) print                                                            
Model: Areca ARC-1680-VOL#001 (scsi)
Disk /dev/sdc: 23437498368s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End           Size          File system  Name     Flags
 1      34s    23437498334s  23437498301s  xfs          primary

(parted) q                                                                
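
Incidentally, the partition starts at sector 34, the old GPT default, which
is 17 KiB into the device. Assuming a 128 KiB per-disk chunk (see the
controller settings below), i.e. 256 sectors, that start is unaligned, so
chunk-sized writes straddle chunk boundaries:

~# echo $(( 34 * 512 )) $(( 34 % 256 ))
17408 34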

This is on an Areca ARC-1680 RAID 5 set, consisting of:

SLOT 05(0:7)    Raid Set # 001  4000.8GB        Hitachi HUS724040ALE640
SLOT 06(0:6)    Raid Set # 001  4000.8GB        Hitachi HUS724040ALE640
SLOT 07(0:5)    Raid Set # 001  4000.8GB        Hitachi HUS724040ALE640
SLOT 08(0:4)    Raid Set # 001  4000.8GB        HGST HUS724040ALA640 

Volume Set Name         ARC-1680-VOL#001
Raid Set Name   Raid Set # 001
Volume Capacity         12000.0GB
SCSI Ch/Id/Lun  0/0/3
Raid Level      Raid 5
Stripe Size     128KBytes
Block Size      512Bytes
Member Disks    4
Cache Mode      Write Back
Write Protection        Disabled
Tagged Queuing  Enabled
Volume State    Normal
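
If the 128 KB "Stripe Size" above is the per-member chunk (an assumption on
my part), then with 4 disks in RAID 5, 3 carry data and a full stripe is
3 * 128 KiB = 384 KiB; anything smaller or misaligned forces a read-modify-
write of parity. Since sunit/swidth are zero in the xfs_info output, I wonder
whether passing the geometry at mount time would already help. A sketch, with
sunit/swidth in 512-byte sectors (128 KiB = 256, 3 * 256 = 768; the /mnt
mount point is just an example):

~# umount /mnt
~# mount -o sunit=256,swidth=768 /dev/sdc1 /mnt
~# LANG=C xfs_info /mnt | grep -E 'sunit|swidth'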

Read performance for a bigger file averages about 400 MB/s (with flushed
caches, of course):

~# LANG=C dd if=Django_Unchained.mp4 of=/dev/null bs=1M 
1305+1 records in
1305+1 records out
1369162196 bytes (1.4 GB) copied, 3.32714 s, 412 MB/s

Write performance is disastrous, at about 1.4 MB/s:

~# LANG=C dd if=Django_Unchained.mp4 of=xxx bs=1M 
482+0 records in
482+0 records out
505413632 bytes (505 MB) copied, 368.816 s, 1.4 MB/s
1083+0 records in
1083+0 records out
1135607808 bytes (1.1 GB) copied, 840.072 s, 1.4 MB/s
1305+1 records in
1305+1 records out
1369162196 bytes (1.4 GB) copied, 1014.87 s, 1.4 MB/s
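
To rule out page cache / writeback effects (this box is 32-bit PAE, after
all), a direct I/O comparison might be telling; a diagnostic sketch, with
xxx.direct as a made-up scratch file name:

~# LANG=C dd if=/dev/zero of=xxx.direct bs=1M count=512 oflag=direct conv=fsync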

The question is: what could explain these numbers? Bad alignment? Bad stripe
size? And what can I do to resolve this without losing all my data?
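
If it helps, I can capture per-device statistics (request sizes, utilization
on sdc) while a slow write runs, e.g.:

~# iostat -x 1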

Cheers,
Pete
