I've been using XFS for a while now on Linux, but never on a
filesystem larger than about 60GB or so. Tomorrow I need to configure
a disk array for a client with a single XFS filesystem. The total size
of the filesystem will be approximately 300GB, and it needs to be able
to hold 1.5-2.0 million files at any one time. There are no database or
transaction-processing workloads; it's just a big fat file
server for their network. They store a wide range of file sizes, but
at least 50% of the files will be less than 32KB in size. The
filesystem will be shared out using Samba 3.0, and there will be
limited usage of extended attributes and ACLs through Samba (but
probably no more than a couple thousand files, unless Samba decides to
put extended attributes on things I'm not aware of yet).
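Based on the above, here's what I've tentatively sketched out so far
(/dev/sdX and /export are just placeholders for the array device and
mount point, and I'm not at all confident these values are sensible):

    # bigger inodes so Samba's EAs/ACLs have a chance of staying inline,
    # larger directory blocks for big directories, and a larger log
    mkfs.xfs -i size=512 -n size=8192 -l size=64m /dev/sdX

    # mount with more in-core log buffers
    mount -t xfs -o logbufs=8 /dev/sdX /export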
The server is running kernel 2.6.0-test5. Anyone have any suggestions
on configuring the filesystem? I hesitate to just use mkfs.xfs
defaults for something this large, and I certainly don't want them to
run out of room for new files/directories while the disk still has
plenty of free space.
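If I understand XFS's dynamic inode allocation correctly (256-byte
inodes by default, allocated on demand up to the imaxpct limit, which
defaults to 25%), raw inode space shouldn't be the problem:

    2,000,000 files x 256 bytes/inode  ~= 512 MB of inodes
    25% of 300 GB imaxpct ceiling      ~= 75 GB

but I'd appreciate a sanity check on that reasoning.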