Question regarding performance on big files.

To: xfs@xxxxxxxxxxx
Subject: Question regarding performance on big files.
From: Mathieu AVILA <mathieu.avila@xxxxxxxxxxxxxxxx>
Date: Mon, 20 Sep 2010 19:04:57 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv: Gecko/20100915 Lightning/1.0b2 Thunderbird/3.1.4
Hello XFS team,

I have run into trouble with XFS, but excuse me if this question has been asked a dozen times already.

I am filling a very big file on an XFS filesystem on Linux, on top of a software RAID 0. Performance is very good until I hit 2 "holes" during which my write stalls for a few seconds.
Mkfs parameters:
mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
The RAID0 is built on 2 SATA disks of 500 GB each.

My test is just running "dd" with 8M blocks:
dd if=/dev/zero of=/DATA/big bs=8M
(/DATA is the XFS file system)
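To make the stalls easier to pinpoint than with one long dd run, a loop like the following writes fixed-size chunks to a growing file and times each chunk, so a slow chunk shows up immediately. This is only a sketch (scaled down, writing to /tmp); on the real test you would use bs=8M, a much larger count, and point OUT at the XFS mount (e.g. /DATA/big):

```shell
#!/bin/sh
# Write COUNT fixed-size chunks to one growing file, timing each chunk.
# Scaled down for illustration: 8 chunks of 1 MB each into /tmp.
OUT=/tmp/bigfile_test
CHUNK_MB=1
COUNT=8
rm -f "$OUT"
i=0
while [ "$i" -lt "$COUNT" ]; do
    start=$(date +%s)
    # seek past what was already written so the file keeps growing;
    # conv=notrunc keeps dd from truncating the file on each pass
    dd if=/dev/zero of="$OUT" bs=1M count="$CHUNK_MB" \
       seek=$((i * CHUNK_MB)) conv=notrunc 2>/dev/null
    end=$(date +%s)
    echo "chunk $i: $((end - start))s"
    i=$((i + 1))
done
ls -l "$OUT"
```

A chunk whose elapsed time jumps (here printed in whole seconds) marks where the write stalled; with 8M chunks the offset of that chunk gives the file position of the "hole".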

The system is basically a RHEL5 with a 2.6.18 kernel and XFS packages from CentOS.

The problem happens twice: once around 210 GB and again around 688 GB (the performance hole is bigger and the response time longer the second time -- around 20 seconds).

Do you have any clue? Do my mkfs parameters make sense? The goal here is really to have something that can store big files at a constant throughput -- that is exactly what this test is meant to check.

Mathieu Avila
IT & Integration Engineer

OpenCube Technologies http://www.opencubetech.com
Parc Technologique du Canal, 9 avenue de l'Europe
31520 Ramonville St Agne - FRANCE
Tel. : +33 (0) 561 285 606 - Fax : +33 (0) 561 285 635