
XFS peculiar behavior

To: xfs@xxxxxxxxxxx
Subject: XFS peculiar behavior
From: Yannis Klonatos <klonatos@xxxxxxxxxxxx>
Date: Wed, 23 Jun 2010 10:37:19 +0300
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.10) Gecko/20100512 Thunderbird/3.0.5
Hi all!

I have come across the following peculiar behavior in XFS, and I would appreciate any information anyone could provide.

In our lab we have a system with twelve 500 GByte hard disks (6 TByte total capacity) connected to an Areca (ARC-1680D-IX-12) SAS storage controller. The disks are configured as a RAID-0 device, and I create a clean XFS filesystem on top of the RAID volume, using the whole capacity. We use this test setup to measure performance improvements for a TPC-H experiment. We copy the database onto the clean XFS filesystem using the cp utility; the database used in our experiments is 56 GBytes in size (data + indices).

The problem is that XFS may - not every time - split a table over a large on-disk distance. For example, in one run I noticed that a 13 GByte file was spread over a 4.7 TByte distance. (I calculate this distance by subtracting the first disk block used for the file from the last one; the two block numbers are obtained with the FIBMAP ioctl.)
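For reference, the distance calculation I describe can be sketched roughly like this (a minimal sketch, assuming the Linux FIBMAP/FIGETBSZ ioctl numbers from <linux/fs.h>; FIBMAP requires root, and only the first and last blocks of the file are sampled, as in my calculation):

```python
import fcntl
import os
import struct
import sys

# ioctl numbers from <linux/fs.h> on Linux
FIBMAP = 1      # _IO(0x00, 1): map a logical block to a physical block
FIGETBSZ = 2    # _IO(0x00, 2): get the filesystem block size

def physical_block(fd, logical_block):
    """Return the physical (on-device) block backing a logical block
    of the file.  FIBMAP needs CAP_SYS_RAWIO, i.e. run as root."""
    buf = struct.pack('i', logical_block)
    return struct.unpack('i', fcntl.ioctl(fd, FIBMAP, buf))[0]

def span_bytes(first_blk, last_blk, blk_size):
    """Distance between two physical block numbers, in bytes."""
    return (last_blk - first_blk) * blk_size

def file_span(path):
    """Distance covered between the first and last physical blocks of a file."""
    fd = os.open(path, os.O_RDONLY)
    try:
        blk_size = struct.unpack(
            'i', fcntl.ioctl(fd, FIGETBSZ, struct.pack('i', 0)))[0]
        nblocks = (os.fstat(fd).st_size + blk_size - 1) // blk_size
        first = physical_block(fd, 0)
        last = physical_block(fd, nblocks - 1)
        return span_bytes(first, last, blk_size)
    finally:
        os.close(fd)

if __name__ == '__main__':
    print("%d bytes" % file_span(sys.argv[1]))
```

(The xfs_bmap utility from xfsprogs, e.g. `xfs_bmap -v <file>`, shows the same extent information without needing a custom program.)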
Is there some reasoning behind this (peculiar) behavior? I would expect that, since the underlying storage is so large and the dataset so small, XFS would try to minimize disk seeks and place the file sequentially on disk. I understand that XFS may leave some blocks unused between the blocks of a file in order to absorb any write appends that come afterward, but I would not expect a single file to be split this far apart.
Any help would be appreciated.

Thanks in advance,
Yannis Klonatos
