On Thu, 11 Sep 2003, Kevin P. Fleming wrote:
> I've been using XFS for a while now on Linux, but never on a
> filesystem larger than about 60GB or so. Tomorrow I need to configure
> a disk array for a client with a single XFS filesystem. The total size
> of the filesystem will be approximately 300GB, and it needs to be able
> to hold 1.5-2.0 million files at any one time. There are no database
> workloads or transaction processing workloads, it's just a big fat file
> server for their network. They store a wide range of file sizes, but
> at least 50% of the files will be less than 32KB in size. The
If the files are very small, I suppose you might consider a smaller-than-default
block size, just so you don't waste space. With 4K blocks, 2 million small files
would waste roughly half a block each on average, about 4G of space, or a little
over 1% of capacity, so I guess it's not a big deal. The default page-sized
block size is probably better tested, anyway.
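If you do decide to go smaller, it's just a mkfs option; a minimal sketch,
assuming the array appears as /dev/sdb1 (substitute your real device):

  # make a 1K-block XFS filesystem instead of the 4K (page-size) default
  mkfs.xfs -b size=1024 /dev/sdb1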
> filesystem will be shared out using Samba 3.0, and there will be
> limited usage of extended attributes and ACLs through Samba (but
> probably no more than a couple thousand files, unless Samba decides to
> put extended attributes on things I'm not aware of yet).
>
> The server is running kernel 2.6.0-test5. Anyone have any suggestions
Brave man. :)
> on configuring the filesystem? I hesitate to just use mkfs.xfs
> defaults for something this large, and I certainly don't want them to
> run out of space for new files/directories when the system is not full.
You won't run out of space; xfs dynamically allocates inodes up to a
set maximum percentage of the filesystem, 25% by default, and you
can always use xfs_growfs to raise that limit later if necessary.
25% of 300G at 256 bytes per inode still leaves you with plenty of room
for -lots- of files & dirs (314,572,800 if I calculated right).
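For example (the mount point and device names here are just placeholders):

  # raise the max percentage of space usable by inodes to 50% on a mounted fs
  xfs_growfs -m 50 /mnt/export

  # or set it up front at mkfs time
  mkfs.xfs -i maxpct=50 /dev/sdb1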
How are you assembling the 300G? Stripe values etc. might be worth
examining.
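If it's software or hardware RAID, you can tell mkfs.xfs about the stripe
geometry; a sketch assuming a hypothetical 4-data-disk array with a 64k
chunk size (sunit/swidth are given in 512-byte sectors, device name is a
placeholder):

  # 64k chunk = 128 sectors; width = 4 data disks * 128 = 512
  mkfs.xfs -d sunit=128,swidth=512 /dev/md0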
-Eric