
To: xfs@xxxxxxxxxxx
Subject: Performance decrease over time
From: Markus Trippelsdorf <markus@xxxxxxxxxxxxxxx>
Date: Thu, 1 Aug 2013 22:21:08 +0200
Yesterday I noticed that the nightly rsync run that backs up my root
fs took over 8 minutes to complete. Half a year ago, when the backup disk
was freshly formatted, it took only 2 minutes. (The size of my root fs has
stayed constant during this time.)

So I decided to reformat the drive, but first I took some measurements.
The drive in question also contains my film and music collection and
several git trees, and is quite often used to compile projects.

Model Family:     Seagate Barracuda Green (AF)
Device Model:     ST1500DL003-9VT16L

/dev/sdb on /var type xfs (rw,relatime,attr2,inode64,logbsize=256k,noquota)
/dev/sdb       xfs       1.4T  702G  695G  51% /var

# xfs_db -c frag -r /dev/sdb
actual 1540833, ideal 1529956, fragmentation factor 0.71%
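
Since file fragmentation is clearly negligible, free space fragmentation
seems the more likely suspect. A summary could be obtained with xfs_db's
freesp command (not captured here before the reformat):

# xfs_db -r -c "freesp -s" /dev/sdb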

# iozone -I -a -s 100M -r 4k -r 64k -r 512k -i 0 -i 1 -i 2
        Iozone: Performance Test of File I/O
                Version $Revision: 3.408 $
                Compiled for 64 bit mode.
                Build: linux-AMD64 
...
        Run began: Thu Aug  1 12:55:09 2013

        O_DIRECT feature enabled
        Auto Mode
        File size set to 102400 KB
        Record Size 4 KB
        Record Size 64 KB
        Record Size 512 KB
        Command line used: iozone -I -a -s 100M -r 4k -r 64k -r 512k -i 0 -i 1 -i 2
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random
          KB  reclen     write   rewrite      read    reread      read     write
      102400       4      8083      9218      3817      3786       515       789
      102400      64     56905     48177     17239     26347      7381     15643
      102400     512    113689     86344     84583     83192     37136     63275

After a fresh format and restore from another backup, performance is much
better again:

# iozone -I -a -s 100M -r 4k -r 64k -r 512k -i 0 -i 1 -i 2
                                                              random    random
          KB  reclen     write   rewrite      read    reread      read     write
      102400       4     13923     18760     19461     27305       761       652
      102400      64     95822     95724     82331     90763     10455     11944
      102400     512     93343     95386     94504     95073     43282     69179

A couple of questions: Is it normal for throughput to drop this much over
half a year on a heavily used disk that is only half full? And what can be
done, as a user, to mitigate this effect?
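
The only user-level mitigation I am aware of is the online defragmenter,
along the lines of:

# xfs_fsr -v /var

but with a fragmentation factor of 0.71% I would not expect it to help
much here.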

-- 
Markus
