
Re: Insane file system overhead on large volume

To: xfs@xxxxxxxxxxx
Subject: Re: Insane file system overhead on large volume
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Fri, 27 Jan 2012 13:08:27 -0600
In-reply-to: <CAEBWcAT2zfDskgDjFr0KcnfsT2A65r04AM1cv2-TfnNJTB1__Q@xxxxxxxxxxxxxx>
References: <CAEBWcAT2zfDskgDjFr0KcnfsT2A65r04AM1cv2-TfnNJTB1__Q@xxxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
On 1/27/2012 1:50 AM, Manny wrote:
> Hi there,
> 
> I'm not sure if this is intended behavior, but I was a bit stumped
> when I formatted a 30TB volume (12x3TB minus 2x3TB for parity in RAID
> 6) with XFS and noticed that there were only 22 TB left. I just called
> mkfs.xfs with default parameters - except for swidth and sunit, which
> match the RAID setup.
> 
> Is it normal that I lost 8TB just for the file system? That's almost
> 30% of the volume. Should I set the block size higher? Or should I
> increase the number of allocation groups? Would that make a
> difference? What's the preferred method for handling such large
> volumes?

Maybe you simply assigned 2 spares and forgot, so you actually have a
10 disk RAID6 with only 8 disks' worth of data stripe, equaling 24 TB,
or 21.8 TiB.  21.8 TiB matches up pretty closely with your 22 TB, so
this scenario seems quite plausible, dare I say likely.
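
For reference, the arithmetic behind that 21.8 TiB figure, assuming
each "3 TB" drive is 3 * 10^12 bytes (a sketch, not tied to your exact
drives):

  $ echo "scale=1; 8 * 3 * 10^12 / 2^40" | bc
  21.8

You can confirm whether two of the disks really ended up as spares with
mdadm --detail /dev/mdX or a look at /proc/mdstat, assuming md software
RAID; a hardware controller's management tool will show the same
information.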

If this is the case, you'll want to reformat the 10 disk RAID6 with the
proper sunit/swidth values.
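
As a rough sketch of what that might look like (the device name and the
64 KiB chunk size below are just assumptions; substitute whatever your
array actually uses):

  # 10-disk RAID6 -> 8 data disks; chunk (stripe unit) assumed 64 KiB
  mkfs.xfs -d su=64k,sw=8 /dev/mdX

  # equivalent form in 512-byte sectors: 64 KiB = 128 sectors
  mkfs.xfs -d sunit=128,swidth=1024 /dev/mdX

Either spelling is accepted by mkfs.xfs; sw/swidth should cover the
data disks only, not the two parity disks.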

-- 
Stan
