
Re: is xfs good if I have millions of files and thousands of hardlinks?

To: markgw@xxxxxxx
Subject: Re: is xfs good if I have millions of files and thousands of hardlinks?
From: Tomasz Chmielewski <mangoo@xxxxxxxx>
Date: Wed, 20 Feb 2008 10:56:07 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <47BB5873.6040703@sgi.com>
References: <47BADF75.2070004@wpkg.org> <47BB5873.6040703@sgi.com>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061110 Mandriva/1.5.0.8-1mdv2007.1 (2007.1) Thunderbird/1.5.0.8 Mnenhy/0.7.4.666
Peter Grandi wrote:

mangoo> In general, because new files and hardlinks are being
mangoo> added all the time and the old ones are being removed,
mangoo> this leads to very, very poor performance.

That is not the cause of the poor performance. The ultimate
cause is rather different.

Well, adding new files and hardlinks all the time, while removing old ones, means that the inodes end up scattered all over the disk.
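
If it helps to illustrate, here is a rough sketch of how one can see that scattering: print the inode numbers under one backup directory in the order a recursive delete would visit them (the path is only a placeholder; this uses GNU find):

  # Print "<inode> <path>" for every entry under one snapshot, in the
  # order a recursive delete would visit them; if neighbouring entries
  # have wildly different inode numbers, every unlink means a seek.
  find /backup/2008-02-19 -printf '%i %p\n' | head -50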



mangoo> When I want to remove a lot of directories/files (which
mangoo> will be hardlinks, mostly), I see disk write speed is
mangoo> down to 50 kB/s - 200 kB/s (fifty - two hundred
mangoo> kilobytes/s) - this is the "bandwidth" used during the
mangoo> deletion.

How is bandwidth relevant for that? OK, there are quotes, but it
seems very, very strange regardless.

The filesystem is available via iSCSI, so it's easy to measure the current performance. But iSCSI is not a problem here - performance is very good on an empty filesystem on that very same iSCSI/SAN device.


What I mean is that when I remove a large number of files, the bandwidth used for writing to the disk drops to only 50-200 kB/s. Down from what, one might ask? Let me paste yet another quotation from the linux-fsdevel list here; it may shed some more light (a sketch of how I measure this follows the quote):

  Recently I began removing some of the unneeded files (or hardlinks),
  and to my surprise it takes longer than I initially expected.

  After the cache is emptied (echo 3 > /proc/sys/vm/drop_caches) I can
  usually remove about 50000-200000 files with moderate performance.
  I see up to 5000 kB/s read/write from/to the disk; the I/O wait ("wa")
  reported by top is usually 20-70%.

  After that, the I/O wait grows to 99%, and disk write speed drops
  to 50 kB/s - 200 kB/s (fifty to two hundred kilobytes/s).
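
To make those numbers reproducible, this is roughly how I watch the write bandwidth while a bulk delete runs (the directory is just a placeholder; iostat comes from the sysstat package):

  # Drop the page, dentry and inode caches so the run starts cold.
  echo 3 > /proc/sys/vm/drop_caches

  # In one terminal: disk throughput in kB/s, one sample per second.
  iostat -k 1

  # In another terminal: time the bulk removal itself.
  time rm -rf /backup/2008-01-*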


mangoo> Also, the filesystem is very fragmented ("dd
mangoo> if=/dev/zero of=some_file bs=64k" writes only about 1
mangoo> MB/s).

Then the more the merrier.

Umm, no. Usually, one is merrier when these numbers are high, not low ;)
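
For reference, a quick way to check that the slow dd result really is fragmentation (rather than the iSCSI link) is to count how many extents the freshly written test file ends up with; filefrag is part of e2fsprogs, and the file name is just the one from the test above:

  # Write ~100 MB, then show its extent map; a file this size split into
  # hundreds of small extents points at heavy free-space fragmentation.
  dd if=/dev/zero of=some_file bs=64k count=1600
  filefrag -v some_file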


mangoo> Will xfs handle a large number of files, including lots
mangoo> of hardlinks, any better than ext3?

It shows consideration to consult the archives of a mailing list
before asking a question. It may be a good idea to do it even
after posting a question :-).

Oh, I did consult the archive. There are not many posts about hardlinks on this xfs list (or at least I didn't find many).


There was even a similar subject here last year: someone had a 17 TB array used for backups which was getting full, and asked whether xfs is, or will be, capable of transparent compression.
As xfs will not have transparent compression in the foreseeable future, it was suggested that he use hardlinks instead - that alone could save him a lot of space.
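
In case it is useful as context, this is roughly how such hardlink-based backups are usually set up (the directory names are placeholders; --link-dest is a standard rsync option): unchanged files become hardlinks into the previous snapshot, so only changed files take new space.

  # Yesterday's snapshot is the reference; files that did not change are
  # hardlinked into it instead of being copied again.
  rsync -a --link-dest=/backup/2008-02-19 \
      /data/ /backup/2008-02-20/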


I wonder if the guy uses hardlinks now, and if so, how it behaves on that 17 TB array (my filesystem is just 1.2 TB, but soon I'm about to create a bigger one on another device - hence my questions).



--
Tomasz Chmielewski
http://wpkg.org

