| To: | markgw@xxxxxxx |
|---|---|
| Subject: | Re: is xfs good if I have millions of files and thousands of hardlinks? |
| From: | Tomasz Chmielewski <mangoo@xxxxxxxx> |
| Date: | Wed, 20 Feb 2008 10:56:07 +0100 |
| Cc: | xfs@xxxxxxxxxxx |
| In-reply-to: | <47BB5873.6040703@sgi.com> |
| References: | <47BADF75.2070004@wpkg.org> <47BB5873.6040703@sgi.com> |
| Sender: | xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061110 Mandriva/1.5.0.8-1mdv2007.1 (2007.1) Thunderbird/1.5.0.8 Mnenhy/0.7.4.666 |
Peter Grandi wrote:

mangoo> In general, because new files and hardlinks are being
mangoo> added all the time and the old ones are being removed,
mangoo> this leads to a very, very poor performance.

Well, adding new files and hardlinks all the time means that the inodes end up scattered all over the disk.

mangoo> When I want to remove a lot of directories/files (which
mangoo> will be hardlinks, mostly), I see disk write speed is
mangoo> down to 50 kB/s - 200 kB/s (fifty - two hundred
mangoo> kilobytes/s) - this is the "bandwidth" used during the
mangoo> deletion.

The filesystem is available via iSCSI, so it's easy to measure the current performance. But iSCSI is not a problem here - performance is very good on an empty filesystem on that very same iSCSI/SAN device.

What I mean is that when I remove a large number of files, the bandwidth used for writing to the disk drops to only 50-200 kB/s. Down from what, one might ask? Let me paste yet another quotation from the linux-fsdevel list; it may shed some more light:

> Recently I began removing some of the unneeded files (or hardlinks), and to my surprise, it takes longer than I initially expected. After the cache is emptied (echo 3 > /proc/sys/vm/drop_caches) I can usually remove about 50000-200000 files with moderate performance. I see up to 5000 kB/s read/write from/to the disk; wa reported by top is usually 20-70%. After that, waiting for IO grows to 99%, and disk write speed drops to 50 kB/s - 200 kB/s (fifty - two hundred kilobytes/s).
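For the record, the measurement described above can be reproduced roughly like this (a minimal sketch; the file count and temporary directory are placeholders, not the original setup, and the drop_caches step needs root, so it is skipped otherwise):

```shell
#!/bin/sh
# Create a throwaway tree of small files to delete.
dir=$(mktemp -d)
for i in $(seq 1 1000); do echo x > "$dir/f$i"; done

# Start the test cold, as in the quoted message (root only).
if [ "$(id -u)" = 0 ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi

# Time the deletion; watch write bandwidth and %wa in
# iostat/top from another terminal while this runs.
time rm -rf "$dir"
```

On a fragmented filesystem the interesting number is not the elapsed time of a small run like this, but the sustained write bandwidth once the cache stops absorbing the metadata updates.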
mangoo> Will xfs handle a large number of files, including lots
mangoo> of hardlinks, any better than ext3?

Oh, I did consult the archive. There are not many posts about hardlinks here on this xfs list (or at least I didn't find many).

There was even a similar subject last year: someone had a 17 TB array used for backup, which was getting full, and asked if xfs is or will be capable of transparent compression. As xfs will not have transparent compression in the foreseeable future, it was suggested that he use hardlinks instead - that alone could save him lots of space.

I wonder if the guy uses hardlinks now, and if so, how they behave on this 17 TB array (my filesystem is just 1.2 TB, but soon I'm about to create a bigger one on another device - hence my questions).

--
Tomasz Chmielewski
http://wpkg.org
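The space-saving scheme suggested in that thread works because a hardlink is just a second directory entry for the same inode; removing one snapshot only drops a link count. A small demonstration (the snapshot names and file counts here are made up for illustration; real backup tools such as rsync with --link-dest build trees like this at scale):

```shell
#!/bin/sh
# Two backup "snapshots" sharing file data via hardlinks.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/snap1" "$tmp/snap2"
for i in $(seq 1 100); do
    echo "data $i" > "$tmp/snap1/f$i"
    ln "$tmp/snap1/f$i" "$tmp/snap2/f$i"  # second name, same inode
done

links_before=$(stat -c %h "$tmp/snap1/f1")  # 2: both snapshots
rm -rf "$tmp/snap2"                         # expire one snapshot
links_after=$(stat -c %h "$tmp/snap1/f1")   # 1: data still intact
echo "$links_before $links_after"
rm -rf "$tmp"
```

Deleting such a snapshot is pure metadata work - unlink after unlink, touching inodes scattered across the disk - which is exactly the seek-bound workload described earlier in this thread.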