On Wed, Jan 02, 2008 at 02:16:27PM +0100, Carsten Aulbert wrote:
> A file server with a 10 TB xfs file system running on a RAID6
> SATA array; the server has 16 GB of memory. I want to test how long
> it would take to run xfs_repair on it and whether that amount of
> memory is enough.
It depends on how fast the IO is and on how many files there are.
If you have a small number of really large files it's fairly fast; if
you have a large number of really small files (e.g. email maildirs)
it tends to be much slower.
> (2) Damage the file sytem
> (3) Run xfs_repair
xfs_repair will run without having to damage the filesystem (though
if/when damaged it will probably be a little slower).
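If you just want to time a run before damaging anything, something
like the following should work -- the device path and mount point are
placeholders for your setup, and the -n (no-modify) and -m (memory
cap, in megabytes) flags assume a reasonably recent xfsprogs:

```sh
# Unmount first -- xfs_repair refuses to run on a mounted filesystem.
umount /data                      # placeholder mount point

# Dry run: -n checks only and modifies nothing; time(1) reports how
# long the pass took.
time xfs_repair -n /dev/sdb1      # placeholder device

# Optionally cap memory to see whether 16 GB would be enough
# (-m takes an approximate maximum in megabytes).
time xfs_repair -n -m 16384 /dev/sdb1
```

A -n run walks the same metadata as a real repair, so its duration and
memory footprint are a reasonable estimate for the undamaged case.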
> Otherwise: how can I damage an xfs file system to make the job
> harder for xfs_repair?
Google for fsfuzzer.
> I guess a simple dd if=/dev/random of=/dev/sdb1 with some offsets
> will not be very effective, right?
If it misses the metadata, xfs_repair won't even notice. If you whack
large chunks of metadata you might see considerable data loss.
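To make that concrete: dd with seek= overwrites an arbitrary region
while leaving the rest intact, which is why a random offset usually
lands in data rather than metadata. A sketch against a scratch image
file standing in for the partition (the file name is made up for the
demo; on a real test you'd point of= at the device):

```shell
# Create a 1 MiB scratch "disk image" of zeroes to stand in for the
# partition.
dd if=/dev/zero of=scratch.img bs=1024 count=1024 2>/dev/null

# Overwrite 4 KiB with pseudo-random data at a 512 KiB offset
# (seek=128 blocks of bs=4096).  conv=notrunc keeps the rest of the
# image intact -- without it, dd truncates the file after the write.
dd if=/dev/urandom of=scratch.img bs=4096 seek=128 count=1 conv=notrunc 2>/dev/null

# The image is still 1 MiB; only the 4 KiB at offset 512 KiB changed.
stat -c %s scratch.img            # prints 1048576
```

To hit metadata deliberately, xfs_db can report where structures such
as superblocks and AG headers live, so you can aim the seek= offset at
them instead of hoping.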