
Re: 20 million 64k files - SLOW !!!

To: Jan De Landtsheer <jan@xxxxxxxxxxxx>
Subject: Re: 20 million 64k files - SLOW !!!
From: Steve Lord <lord@xxxxxxx>
Date: 01 Oct 2003 11:48:12 -0500
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <1065001066.29584.84.camel@xxxxxxxxxxxxxx>
Organization:
References: <1065001066.29584.84.camel@xxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Wed, 2003-10-01 at 04:37, Jan De Landtsheer wrote:
> Fast in the beginning, but gradually slowing down until it is not usable
> any more.
> 
> I'll try to explain...
> We have an app that receives 64k chunks of measurement data from a bunch
> of probes.
> This data is sent over the Gigabit network to a central TB server
> on which I want to use XFS for its stability.
> 
> The problem is that once more than 1,000,000 files are written, things
> gradually slow down, until the way we want to store and retrieve files
> no longer works as a solution.
> 
> I've written a little script (in Python) to test this file creation
> process as it would run on a live system, and the more files there are,
> the slower things get, until I can only manage about 10 files of
> 64K/sec, which is way too slow to consider it a solution.
> 
> The question is whether a filesystem can be used as an alternative to a
> database, but still... I just want to store files, so I don't think a DB
> would be the solution, as I also need to retrieve the data just as quickly.
> 
> If someone wants the script to test and confirm my findings, and perhaps
> help me find a solution, I would be most grateful.

Simple question: one directory, or many?

Any hashing scheme where you put 1 million files in one directory is going
to slow down as you extend it. Spread the files over many directories
instead; a sketch of that kind of layout follows below.
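For illustration only, here is a minimal sketch (not from the original thread,
and in modern Python rather than whatever Jan's script used) of a layout that
avoids one huge directory: hash each filename into a two-level subdirectory
tree such as ab/cd/<name>, so no single directory ever holds more than a few
thousand entries. The base path, filename pattern, and 64K payload are
assumptions based on Jan's description.

import hashlib
import os

def hashed_path(base, name):
    # Derive a two-level subdirectory from a hash of the name,
    # e.g. <base>/3f/a2/<name>, so entries are spread over 256 x 256
    # leaf directories instead of piling up in one place.
    h = hashlib.md5(name.encode()).hexdigest()
    return os.path.join(base, h[:2], h[2:4], name)

def store(base, name, data):
    path = hashed_path(base, name)
    d = os.path.dirname(path)
    if not os.path.isdir(d):
        os.makedirs(d)
    f = open(path, "wb")
    f.write(data)
    f.close()

def retrieve(base, name):
    # Retrieval recomputes the same hash, so no separate index is needed.
    f = open(hashed_path(base, name), "rb")
    data = f.read()
    f.close()
    return data

if __name__ == "__main__":
    payload = b"\0" * 65536          # dummy 64K measurement record
    for i in range(100000):
        store("/data/probes", "probe-%08d.dat" % i, payload)

With two hex characters per level you get 65,536 leaf directories, so 20
million files average out to roughly 300 entries per directory, which keeps
directory operations fast.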

Steve


-- 

Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx

