
Re: Maximum file system size of XFS?

To: xfs@xxxxxxxxxxx, stan@xxxxxxxxxxxxxxxxx
Subject: Re: Maximum file system size of XFS?
From: Hans-Peter Jansen <hpj@xxxxxxxxx>
Date: Mon, 11 Mar 2013 17:15:08 +0100
Cc: Pascal <pa5ca1@xxxxxx>, Eric Sandeen <sandeen@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <513DB9C2.3050408@xxxxxxxxxxxxxxxxx>
References: <20130309215121.0e614ef8@thinky> <513C3C43.7080104@xxxxxxxxxxxxxxxxx> <513DB9C2.3050408@xxxxxxxxxxxxxxxxx>
User-agent: KMail/4.9.5 (Linux/3.4.28-2.20-desktop; KDE/4.9.5; x86_64; ; )
On Monday, 11 March 2013, 06:02:26, Stan Hoeppner wrote:
> On 3/10/2013 1:54 AM, Stan Hoeppner wrote:
> > So in summary, an Exabyte scale XFS is simply not practical today, and
> > won't be for at least another couple of decades, or more, if ever.  The
> > same holds true for some of the other filesystems you're going to be
> > writing about.  Some of the cluster and/or distributed filesystems
> > you're looking at could probably scale to Exabytes today.  That is, if
> > someone had the budget for half a million hard drives, host systems,
> > switches, etc, the facilities to house it all, and the budget for power
> > and cooling.  That's 834 racks for drives alone, just under 1/3rd of a
> > mile long if installed in a single row.
> Jet lag due to time travel caused a math error above.  With today's 4TB
> drives it would require 2.25 million units for a raw 9EB capacity.
> That's 3,750 racks of 600 drives each.  These would stretch 1.42 miles
> (7,500 ft).
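
The corrected figures above can be reproduced with a quick back-of-envelope script. The capacity, drive size, and drives-per-rack numbers come from the thread; the 2 ft rack footprint is my assumption, chosen because it reproduces the quoted 7,500 ft row length:

```python
# Sanity-check of the quoted 9EB capacity math.
EB = 10**18
TB = 10**12

raw_capacity = 9 * EB        # 9 EB raw, per the thread
drive_size = 4 * TB          # today's 4TB drives
drives_per_rack = 600        # per the thread
rack_width_ft = 2            # assumption: 2 ft footprint per rack

drives = raw_capacity // drive_size       # 2,250,000 drives
racks = drives // drives_per_rack         # 3,750 racks
row_ft = racks * rack_width_ft            # 7,500 ft in a single row
row_miles = row_ft / 5280                 # ~1.42 miles

print(drives, racks, row_ft, round(row_miles, 2))
```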

And I just approved the building plans for our new datacenter, based on 
your earlier calculations. The question now is: who carries the costs of 
the four additional floors that building needs?

Are you well-insured, Stan?

