No subject
Tue Jan 31 03:57:03 CST 2012
actually HAS a file system of X size to make a determination such as
that. I have run into limits that 'should not have been there' at 1TB,
2TB, 8TB, 16TB, and 32TB as I crossed each one (different file systems,
but all of them 'supposedly' capable of handling it at the time of
crossing, yet they didn't). The most recent is the 32TiB limit in JFS;
granted, it looks to be in all the jfs tools, but that doesn't matter
when you still lose all your data. ;)

I know that XFS can handle >64TiB, as I have that running (though I
made sure I had backups before I expanded to that size). I have not
seen a 128TiB deployment to see whether that works; I'm not saying it
can't or won't, just that I haven't seen it.
However, from the thread here it appears that just shy of 128TiB works,
and what the OP seems to be running into is a units discrepancy: the
drives are sold in base-10 units while the system displays sizes in
base-2 units. The discrepancy gets more dramatic the larger the
drive/array, and the lack of education (and of updates to tools to
properly label the units: ?iB for base 2, e.g. TiB, and ?B for base 10,
e.g. TB) makes the two easy to confuse.
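To put rough numbers on that (just a back-of-the-envelope sketch with
made-up example sizes, not the OP's actual figures), a quick Python
one-off shows how much a base-10 capacity shrinks once it is reported
in base-2 units:

# Convert a vendor-advertised capacity in TB (10**12 bytes) to the
# TiB (2**40 bytes) figure that tools such as df -h typically report.
def tb_to_tib(tb):
    return tb * 10**12 / 2**40

for tb in (32, 64, 128):
    print(f"{tb} TB = {tb_to_tib(tb):.2f} TiB")

# Prints:
# 32 TB = 29.10 TiB
# 64 TB = 58.21 TiB
# 128 TB = 116.42 TiB

So an array sold as 128 TB showing up as roughly 116 TiB is exactly the
sort of 'missing' ten-plus terabytes that looks like a filesystem limit
but isn't.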
On 03/27/2010 04:06, Emmanuel Florac wrote:
> On Thu, 25 Mar 2010 16:15:42 -0700 (PDT), you wrote:
>
>> is this just rounding error combined with the 1000=1k vs 1024=1k
>> marketing stuff, or is there some limit I am bumping into here?
>
> This isn't an xfs limit; I've set up several hundred big xfs FS over
> more than 5 years (13 to 76 TB) and never saw that. It must be a bug
> in df or elsewhere. What distribution is this, and what architecture?