
Re: xfs write performance issue

To: Hans-Peter Jansen <hpj@xxxxxxxxx>
Subject: Re: xfs write performance issue
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 20 Mar 2015 10:18:54 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <8976870.8vOdNBKrI1@xrated>
References: <8976870.8vOdNBKrI1@xrated>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Mar 19, 2015 at 06:01:50PM +0100, Hans-Peter Jansen wrote:
> ~# LANG=C xfs_info /dev/sdc1
> meta-data=/dev/sdc1              isize=256    agcount=17, agsize=183105406 blks
>          =                       sectsz=512   attr=2, projid32bit=0
>          =                       crc=0        finobt=0
> data     =                       bsize=4096   blocks=2929687287, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
> log      =internal               bsize=4096   blocks=32768, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
....
> ~# LANG=C dd if=Django_Unchained.mp4 of=/dev/null bs=1M 
> 1305+1 records in
> 1305+1 records out
> 1369162196 bytes (1.4 GB) copied, 3.32714 s, 412 MB/s
> 
> Write performance is disastrous: it's about 1.5 MB/s.
> 
> ~# LANG=C dd if=Django_Unchained.mp4 of=xxx bs=1M 
> 482+0 records in
> 482+0 records out
> 505413632 bytes (505 MB) copied, 368.816 s, 1.4 MB/s

Why did it stop halfway through? ENOSPC?

> 1083+0 records in
> 1083+0 records out
> 1135607808 bytes (1.1 GB) copied, 840.072 s, 1.4 MB/s

That's incomplete, too.

> The question is, what could explain these numbers. Bad alignment? Bad stripe size?
> And what can I do to resolve this - without losing all my data..

More than likely you've fragmented free space, and so writes
are small random write IO. Output of 'df -h' and:

$ xfs_db -r -c "freesp -s" <dev>

would be instructive.
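
To illustrate what to look for in that output: 'freesp -s' prints a histogram of free extents by size, and if most free blocks sit in tiny extents, writes degrade to small random IO. The following sketch parses a *hypothetical* histogram (the sample text and the helper name are made up for illustration, not real output from this system) and reports how much free space is locked up in small extents:

```python
# Hypothetical helper to gauge free-space fragmentation from
# 'xfs_db -r -c "freesp -s" <dev>' output. The sample histogram below
# is invented for illustration; real output will differ.
sample = """\
   from      to extents  blocks    pct
      1       1  120000  120000  40.00
      2       3   60000  150000  50.00
      4       7    3000   15000   5.00
      8      15    1000   15000   5.00
total free extents 184000
total free blocks 300000
"""

def small_extent_pct(freesp_text, max_small_blocks=7):
    """Percentage of free blocks held in extents no larger than
    max_small_blocks blocks (small free extents force small writes)."""
    small = total = 0
    for line in freesp_text.splitlines():
        fields = line.split()
        # Histogram rows have 5 fields and start with the extent-size range.
        if len(fields) == 5 and fields[0].isdigit():
            lo, hi, _extents, blocks, _pct = fields
            total += int(blocks)
            if int(hi) <= max_small_blocks:
                small += int(blocks)
    return 100.0 * small / total if total else 0.0

print(f"{small_extent_pct(sample):.1f}% of free space is in extents <= 7 blocks")
# For the sample above that is 95%, i.e. badly fragmented free space.
```

A healthy filesystem would show most free blocks in the large buckets at the bottom of the histogram instead.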

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
