
Re: Maximum file system size of XFS?

To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: Maximum file system size of XFS?
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Mon, 11 Mar 2013 06:02:26 -0500
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx, Pascal <pa5ca1@xxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <513C3C43.7080104@xxxxxxxxxxxxxxxxx>
References: <20130309215121.0e614ef8@thinky> <513BB7C3.4050009@xxxxxxxxxx> <20130309233940.3b7c0910@thinky> <513BDD6E.7010507@xxxxxxxxxxx> <513C3C43.7080104@xxxxxxxxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130215 Thunderbird/17.0.3
On 3/10/2013 1:54 AM, Stan Hoeppner wrote:

> So in summary, an Exabyte scale XFS is simply not practical today, and
> won't be for at least another couple of decades, or more, if ever.  The
> same holds true for some of the other filesystems you're going to be
> writing about.  Some of the cluster and/or distributed filesystems
> you're looking at could probably scale to Exabytes today.  That is, if
> someone had the budget for half a million hard drives, host systems,
> switches, etc, the facilities to house it all, and the budget for power
> and cooling.  That's 834 racks for drives alone, just under 1/3rd of a
> mile long if installed in a single row.

Jet lag due to time travel caused a math error above.  With today's 4TB
drives it would require 2.25 million units for a raw 9EB capacity.
That's 3,750 racks of 600 drives each.  Installed in a single row, they
would stretch 1.42 miles (7,500 ft).
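The corrected figures can be reproduced with a quick back-of-the-envelope
calculation (assuming, as above, 4 TB drives, 600 drives per rack, and a
nominal 24-inch rack width):

```python
# Capacity math sketch: how many 4 TB drives for 9 EB raw,
# and how far the racks would stretch in a single row.
TB = 10**12
EB = 10**18

drive_size = 4 * TB
target = 9 * EB

drives = target // drive_size      # drives needed for 9 EB raw
racks = drives // 600              # assumed 600 drives per rack
length_ft = racks * 2              # assumed 24-inch (2 ft) rack width
length_miles = length_ft / 5280

print(drives, racks, length_ft, round(length_miles, 2))
# 2250000 3750 7500 1.42
```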

