
Re: [reiserfs-list] Re: benchmarks

To: Nikita Danilov <NikitaDanilov@xxxxxxxxx>
Subject: Re: [reiserfs-list] Re: benchmarks
From: Hans Reiser <reiser@xxxxxxxxxxx>
Date: Tue, 17 Jul 2001 12:21:09 +0400
Cc: Xuan Baldauf <xuan--reiserfs@xxxxxxxxxxx>, Russell Coker <russell@xxxxxxxxxxxx>, Chris Wedgwood <cw@xxxxxxxx>, rsharpe@xxxxxxxxxx, Seth Mos <knuffie@xxxxxxxxx>, Federico Sevilla III <jijo@xxxxxxxxxxxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx, reiserfs-list@xxxxxxxxxxx
Organization: Namesys
References: <Pine.BSI.4.10.10107141752080.18419-100000@xs3.xs4all.nl> <3B5169E5.827BFED@namesys.com> <20010716210029.I11938@weta.f00f.org> <20010716101313.2DC3E965@lyta.coker.com.au> <3B52C49F.9FE1F503@namesys.com> <15186.51514.66966.458597@beta.namesys.com> <3B5341BA.1F68F755@baldauf.org> <15187.18225.196286.123754@beta.namesys.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
Nikita Danilov wrote:
 
> For each open file you have:
> 
>  struct file (96b)
>  struct inode (460b)
>  struct dentry (112b)
> 
> at least. For 1e6 open files that totals 668MB of kernel memory, all of it
> unpageable. Open files are also kept in several hash tables, and hash tables
> are known to degrade at that scale. Well, actually, I am afraid current
> Linux kernels cannot open 1e6 files at all.
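
To make the arithmetic concrete, here is a minimal back-of-the-envelope
check of those figures; the structure sizes are the 2.4-era values quoted
above and will differ on other kernels:

    /* Reproduces the 668MB total from the quote.  The structure sizes
     * are the 2.4-era values given above, not anything current. */
    #include <stdio.h>

    int main(void)
    {
            const long nfiles    = 1000000; /* 1e6 open files        */
            const long file_sz   = 96;      /* sizeof(struct file)   */
            const long inode_sz  = 460;     /* sizeof(struct inode)  */
            const long dentry_sz = 112;     /* sizeof(struct dentry) */
            const long per_file  = file_sz + inode_sz + dentry_sz;

            printf("%ld bytes per open file, %.0f MB in total\n",
                   per_file, per_file * nfiles / 1e6);
            return 0;
    }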

You don't have to do things this stupidly. But even if you do, you have only
shown that a server which cannot handle the I/O load from a million files
would also be burdened by the overhead of keeping them open. I'm not sure
what your point is.

Hans
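
Hans does not spell out the alternative, but one common way to avoid holding
1e6 descriptors at once (an illustrative assumption here, not something from
this thread) is to keep only a small LRU cache of open files and reopen on
demand:

    /* Hypothetical sketch, not from this thread: cap the number of open
     * file descriptors with a tiny LRU cache, reopening on demand. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define CACHE_SLOTS 4       /* tiny for illustration; real servers use more */

    struct slot {
            char path[256];
            int  fd;            /* 0 = empty slot (assumes stdin stays open) */
            long stamp;         /* last-use tick for LRU eviction */
    };

    static struct slot cache[CACHE_SLOTS];
    static long tick;

    /* Return a read-only fd for path, evicting the least recently
     * used entry when the cache is full.  Returns -1 on open failure. */
    int cached_open(const char *path)
    {
            int i, victim = 0;

            for (i = 0; i < CACHE_SLOTS; i++) {
                    if (cache[i].fd > 0 && strcmp(cache[i].path, path) == 0) {
                            cache[i].stamp = ++tick;    /* cache hit */
                            return cache[i].fd;
                    }
                    if (cache[i].stamp < cache[victim].stamp)
                            victim = i;                 /* oldest so far */
            }
            if (cache[victim].fd > 0)
                    close(cache[victim].fd);
            snprintf(cache[victim].path, sizeof cache[victim].path, "%s", path);
            cache[victim].stamp = ++tick;
            cache[victim].fd = open(path, O_RDONLY);
            return cache[victim].fd;
    }

The memory cost then scales with the cache size rather than with the number
of files being served, at the price of an extra open() on a cache miss.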

