
Re: XFS and DPX files

To: "AndrewL733@xxxxxxx" <AndrewL733@xxxxxxx>
Subject: Re: XFS and DPX files
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Mon, 02 Nov 2009 21:09:33 -0600
Cc: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>, Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <4AEF5438.5050801@xxxxxxx>
References: <4AEC2CF4.8040703@xxxxxxx> <4AEC4BAA.20606@xxxxxxx> <20091031174836.3fc9505b@xxxxxxxxxxxxxx> <200911021205.28006@xxxxxx> <20091102185249.0da8e388@xxxxxxxxxxxxxxxxxxxx> <4AEF5438.5050801@xxxxxxx>
User-agent: Thunderbird (Macintosh/20090812)
AndrewL733@xxxxxxx wrote:

I believe for a 15-drive RAID-6, where 2 disks are used for redundancy, the correct mkfs would be:
mkfs -t xfs -d su=65536,sw=13 /dev/sdXX
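For what it's worth, the sw value just counts the data-bearing disks; a quick sketch of the arithmetic (drive counts taken from the setup above, device name a placeholder):

```shell
# Stripe geometry for the array described above: 15 drives in RAID-6,
# so two drives' worth of capacity goes to parity.
NDRIVES=15
NPARITY=2
SW=$((NDRIVES - NPARITY))   # sw = number of data disks = 13
SU=65536                    # per-disk chunk (stripe unit): 64 KiB
echo "mkfs -t xfs -d su=${SU},sw=${SW} /dev/sdXX"
```

The full stripe width is then su * sw = 64 KiB * 13 = 832 KiB of data per stripe.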

Yes you're right, I replied a bit too quickly :)

Another thing to try is whether it helps to turn the disk write cache *on*, despite all the warnings in the FAQ.

Thank you for your suggestions. Yes I have write caching enabled. And I have StorSave set to "Performance". And I have a UPS on the system at all times!

The information about barriers was useful. In years past I was running much older firmware on the 3ware 9650 cards, which did not support barriers, but the current firmware does. I also believe the 3ware StorSave "Performance" setting disables barriers -- at least it makes the card ignore FUA commands.

Anyway, I have mounted the XFS filesystem with the "nobarrier" flag and I'm still seeing the same behavior. If you want to take a closer look at what I mean, please go to this link:


At this point, I have tried the following -- and none of these approaches seems to fix the problem:

   -- preallocation of DPX files
   -- reservation of DPX files (making 10,000 zero-byte files named
   0000001.dpx through 0010000.dpx)
   -- creating xfs filesystem with external log device (also a 16-drive
   RAID array, because that's what I have available)
   -- mounting with large logbsize
   -- mounting with more logbufs
   -- mounting with larger allocsize
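To be concrete, a mount line combining those options might look like the following -- the sizes here are illustrative, not the exact values I tested, and the device and mount point are placeholders:

```shell
# Illustrative only: logbufs=8 and logbsize=256k are the maximums,
# allocsize=1g forces large speculative preallocation.
mount -o logbufs=8,logbsize=256k,allocsize=1g,nobarrier /dev/sdXX /data
```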

Have you said how large the filesystem is? If it's > 1T or 2T and you're on a 64-bit system, have you tried the inode64 mount option to get nicer inode vs. data allocation behavior?

Other suggestions might be to try blktrace/seekwatcher to see where your IO is going, or maybe even oprofile to see if xfs is burning cpu searching for allocations, or somesuch ...
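A rough sketch of what I mean (untested here; the device name and trace prefix are placeholders, and blktrace needs root):

```shell
# Capture block-layer IO while the DPX workload runs, then graph it.
blktrace -d /dev/sdXX -o dpxtrace &
# ... reproduce the stutter, then stop the trace ...
kill %1
# seekwatcher reads the blktrace output prefix and plots seeks/throughput
seekwatcher -t dpxtrace -o dpxtrace.png
```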


Again, I want to point out that I don't have any problem with the underlying RAID device. On Linux itself, I get Bonnie++ scores of around 740 MB/sec reading and 650 MB/sec writing, minimum. Over 10 Gigabit Ethernet, I can write uncompressed HD streams (160 MB/sec) and I can read 2K DPX files (300+ MB/sec). dd shows similar results.

My gut feeling is that XFS is falling over after creating a certain number of new files. Because the DPX format creates one file for every frame (30 files/sec), it's not really a video stream -- it's more like making 30 Photoshop files per second. It seems as if some resource XFS needs is used up after a certain number of files are created, and that it is very disruptive and costly to get more of that resource. Why ext3 and ext4 can keep going past 60,000 files while xfs falls over after 4,000 or 5,000, I do not understand.
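If anyone wants to poke at the pattern without a DPX recorder, a crude stand-in is just creating sequentially numbered files in a loop. Tiny files and a temp directory here, purely for illustration -- real 2K DPX frames are roughly 12 MB each:

```shell
# Crude stand-in for the DPX workload: one new file per "frame",
# sequentially numbered like 0000001.dpx. File size and directory
# are placeholders, not the real capture setup.
TESTDIR=$(mktemp -d)
for i in $(seq 1 100); do
    fname=$(printf '%07d.dpx' "$i")
    head -c 4096 /dev/zero > "$TESTDIR/$fname"   # tiny stand-in frame
done
COUNT=$(ls "$TESTDIR" | wc -l)
echo "created $COUNT files in $TESTDIR"
rm -rf "$TESTDIR"
```

Scaled up to tens of thousands of realistically sized files on the actual array, this should show whether create latency degrades at some file-count threshold.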
