On Wed, 2006-10-04 at 10:56 -0600, Andreas Dilger wrote:
> For ext4 we are exploring the possibility of directories being larger
> than 2GB in size. For ext3/ext4 the 2GB limit is about 50M files, and
> the 2-level htree limit is about 25M files (this is a kernel code limit,
> not a disk format limit).
> Amusingly (or not) some users of very large filesystems hit this limit
> with their HPC batch jobs because they have 10,000 or 128,000 processes
> creating files in a directory on an hourly basis (job restart files,
> data dumps for visualization, etc) and it is not always easy to change
> the apps.
> My question (esp. for XFS folks) is whether anyone has looked at this problem
> before, and what kind of problems they might have hit in userspace and in
> the kernel due to "large" directory sizes (i.e. > 2GB). It appears at
> first glance that 64-bit systems will do OK because off_t is a long
> (for telldir output), but that 32-bit systems would need to use O_LARGEFILE
> when opening the file in order to be able to read the full directory
> contents. It might also be possible to return -EFBIG only in the case
> that telldir is used beyond 2GB (the LFS spec doesn't really talk about
> large directories at all).
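To make the quoted 32-bit concern concrete, here is a minimal userspace
sketch (a hypothetical test program, not from any tree). Built with
-D_FILE_OFFSET_BITS=64 (glibc's LFS mode, the moral equivalent of
O_LARGEFILE), readdir() maps to readdir64() and can walk past the 2GB
mark, but telldir() still returns a long, which is exactly the piece
that overflows on a 32-bit system:

#define _FILE_OFFSET_BITS 64    /* LFS: widen off_t/d_off to 64 bits */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        DIR *dir = opendir(argc > 1 ? argv[1] : ".");
        struct dirent *de;

        if (!dir) {
                perror("opendir");
                return 1;
        }
        while ((de = readdir(dir)) != NULL) {
                /* telldir() returns long: 32 bits on a 32-bit system
                 * regardless of LFS, so a cookie past 2^31 is lost */
                long pos = telldir(dir);
                printf("%ld\t%s\n", pos, de->d_name);
        }
        closedir(dir);
        return 0;
}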
ext3 directory entries are always a multiple of 4 bytes in length, so the
lowest 2 bits of any entry's byte offset are always zero. Right? Why not
shift the returned offset and f_pos right by 2 bits?
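Something like this, as a sketch (the helper names are made up). Since
rec_len stays a multiple of 4, the low 2 bits of an entry's byte offset
carry no information, and dropping them lets a 31-bit off_t address an
8GB directory:

#include <stdint.h>

/* kernel -> userspace: drop the always-zero low 2 bits */
static inline int64_t dir_off_encode(int64_t byte_off)
{
        return byte_off >> 2;
}

/* userspace -> kernel: restore the byte offset on seekdir/llseek */
static inline int64_t dir_off_decode(int64_t f_pos)
{
        return f_pos << 2;
}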
JFS uses an index into an array as the directory position (which isn't even
in directory traversal order), so it can handle about 2G files in a
directory (although index slots for deleted entries aren't reused).
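Roughly this shape, as a sketch of the cookie-as-array-index idea (all
names hypothetical, not JFS's actual on-disk layout):

#include <stdint.h>
#include <stdlib.h>

struct dir_slot {
        uint64_t blkno;         /* block holding the entry, 0 = deleted */
        uint16_t offset;        /* entry offset within that block */
};

struct dir_index {
        struct dir_slot *slots;
        uint32_t next;          /* next cookie; never reused */
        uint32_t capacity;
};

/* Assign a cookie at create time; the cookie itself is the f_pos.
 * A 32-bit index allows ~2^31 creations, matching the "about 2G
 * files" figure above. */
static int32_t dir_index_add(struct dir_index *idx,
                             uint64_t blkno, uint16_t offset)
{
        if (idx->next >= idx->capacity) {
                uint32_t cap = idx->capacity ? 2 * idx->capacity : 64;
                struct dir_slot *s = realloc(idx->slots, cap * sizeof(*s));

                if (!s)
                        return -1;
                idx->slots = s;
                idx->capacity = cap;
        }
        idx->slots[idx->next].blkno = blkno;
        idx->slots[idx->next].offset = offset;
        return (int32_t)idx->next++;
}

/* On unlink the slot is tombstoned, never recycled, which is why a
 * long-lived directory slowly uses up its cookie space. */
static void dir_index_del(struct dir_index *idx, int32_t cookie)
{
        idx->slots[cookie].blkno = 0;
}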
IBM Linux Technology Center