
Insane file system overhead on large volume

To: xfs@xxxxxxxxxxx
Subject: Insane file system overhead on large volume
From: Manny <dermaniac@xxxxxxxxx>
Date: Fri, 27 Jan 2012 08:50:38 +0100
Hi there,

I'm not sure if this is intended behavior, but I was a bit stumped
when I formatted a 30 TB volume (12x3 TB minus 2x3 TB for parity in
RAID 6) with XFS and noticed that only 22 TB were left. I just called
mkfs.xfs with default parameters - except for swidth and sunit, which
match the RAID setup.
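
In case the exact call matters, it was along these lines (the device
name and the 64k chunk size below are placeholders rather than my
exact values; sw=10 matches the ten data disks in the 12-disk RAID 6):

    mkfs.xfs -d su=64k,sw=10 /dev/sdX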

Is it normal that I lost 8 TB just to the file system? That's almost
30% of the volume. Should I set the block size higher? Or should I
increase the number of allocation groups? Would that make a
difference? What's the preferred method for handling such large
volumes?
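
For what it's worth, I don't think this is just decimal TB vs. binary
TiB in the reporting; assuming 3 TB means 3x10^12 bytes per disk, the
ten data disks should still come out around 27 TiB, not 22:

    echo "scale=2; 10 * 3 * 10^12 / 2^40" | bc    # ~27.28 TiB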

Thanks a lot,
Manny
