
Re: Insane file system overhead on large volume

To: Manny <dermaniac@xxxxxxxxx>
Subject: Re: Insane file system overhead on large volume
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Fri, 27 Jan 2012 12:21:48 -0600
Cc: xfs@xxxxxxxxxxx
In-reply-to: <CAEBWcAT2zfDskgDjFr0KcnfsT2A65r04AM1cv2-TfnNJTB1__Q@xxxxxxxxxxxxxx>
References: <CAEBWcAT2zfDskgDjFr0KcnfsT2A65r04AM1cv2-TfnNJTB1__Q@xxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
On 1/27/12 1:50 AM, Manny wrote:
> Hi there,
> 
> I'm not sure if this is intended behavior, but I was a bit stumped
> when I formatted a 30TB volume (12x3TB minus 2x3TB for parity in RAID
> 6) with XFS and noticed that there were only 22 TB left. I just called
> mkfs.xfs with default parameters - except for swidth and sunit, which
> match the RAID setup.
> 
> Is it normal that I lost 8TB just for the file system? That's almost
> 30% of the volume. Should I set the block size higher? Or should I
> increase the number of allocation groups? Would that make a
> difference? What's the preferred method for handling such large
> volumes?
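
A minimal sketch of what such an mkfs.xfs invocation could look like, assuming
a hypothetical 64KiB RAID chunk size, 10 data disks and a /dev/md0 device (all
three values are illustrative, not taken from the setup described above):

# mkfs.xfs -d su=64k,sw=10 /dev/md0

Here su is the per-disk stripe unit and sw the number of data disks, so
su*sw makes up one full stripe; the same geometry can also be given in
512-byte sectors as sunit=128,swidth=1280.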

If it was 12x3TB, I imagine you're confusing TB with TiB, so
perhaps your 30T is really only about 27TiB to start with.
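
The arithmetic, assuming 10 data disks of 3TB each (bc is only used here to
spell out the TB-to-TiB conversion):

# echo "scale=1; 10 * 3 * 10^12 / 2^40" | bc
27.2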

Anyway, fs metadata should not eat much space:

# mkfs.xfs -d file,name=fsfile,size=30t
# ls -lh fsfile
-rw-r--r-- 1 root root 30T Jan 27 12:18 fsfile
# mount -o loop fsfile mnt/
# df -h mnt
Filesystem            Size  Used Avail Use% Mounted on
/tmp/fsfile            30T  5.0M   30T   1% /tmp/mnt

So Christoph's question was a good one: where are you getting
your sizes?
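
If it helps, one quick way to compare the raw device size with what the
filesystem reports (the device name and mount point below are placeholders):

# blockdev --getsize64 /dev/md0
# df -h /mnt/bigvol
# xfs_info /mnt/bigvol

blockdev prints the device size in bytes, df -h reports in power-of-two
units, and xfs_info shows the block size, agcount and the sunit/swidth the
filesystem was actually made with.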

-Eric

> Thanks a lot,
> Manny
> 
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
> 
