Question regarding performance on big files.
Mathieu AVILA
mathieu.avila at opencubetech.com
Mon Sep 20 12:04:57 CDT 2010
Hello XFS team,
I have run into trouble with XFS; excuse me if this question has
been asked a dozen times.
I am filling a very big file on an XFS filesystem on Linux that sits
on a software RAID 0. Performance is very good until I hit 2 "holes"
during which my write stalls for a few seconds.
Mkfs parameters:
mkfs.xfs -b size=4096 -s size=4096 -d agcount=2 -i size=2048
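For reference, the allocation group geometry this produces can be
double-checked after mounting with xfs_info (a quick sketch; /DATA as
the mount point is taken from the test below):

# print agcount/agsize as the filesystem actually ended up
xfs_info /DATA

With agcount=2 on a ~1 TB array, each AG spans roughly 500 GB.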
The RAID 0 is made of 2 SATA disks of 500 GB each.
My test is just running "dd" with 8M blocks:
dd if=/dev/zero of=/DATA/big bs=8M
(/DATA is the XFS file system)
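One way to pin down exactly when the stalls happen is to watch the
block devices while dd runs (a sketch; the md0 name used in the
comment is an assumption for this RAID 0 setup, and iostat is in the
sysstat package on RHEL 5):

dd if=/dev/zero of=/DATA/big bs=8M &
# extended device stats every 2 seconds; the stalls should show up as
# intervals where write throughput on md0 drops to near zero
iostat -x 2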
The system is basically RHEL 5 with a 2.6.18 kernel and XFS packages
from CentOS.
The problem happens twice: once around 210 GB and again around
688 GB (the performance hole and the response time are bigger the
second time, around 20 seconds).
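To check whether those offsets correspond to the file switching
allocation groups or starting new extents, the file's extent map can
be dumped with xfs_bmap from xfsprogs:

# -v prints one line per extent, including which AG it lives in
xfs_bmap -v /DATA/big

If the extents change AG near the 210 GB and 688 GB marks, that would
point at the allocator rather than the RAID.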
Do you have any clue? Do my mkfs parameters make sense? The goal here
is really to have something that can store big files at a constant
throughput; the test is designed with that in mind.
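Related question: would preallocating the file keep the throughput
flat? A sketch of what that could look like with xfs_io (the 500g
size is only an example):

# -f creates the file if needed; resvsp reserves 500 GB of unwritten
# extents starting at offset 0, so allocation does not happen mid-write
xfs_io -f -c 'resvsp 0 500g' /DATA/big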
--
*Mathieu Avila*
IT & Integration Engineer
mathieu.avila at opencubetech.com
OpenCube Technologies http://www.opencubetech.com
Parc Technologique du Canal, 9 avenue de l'Europe
31520 Ramonville St Agne - FRANCE
Tel. : +33 (0) 561 285 606 - Fax : +33 (0) 561 285 635