
Re: XFS on Fedora i686, armv7hl

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: XFS on Fedora i686, armv7hl
From: Chris Murphy <lists@xxxxxxxxxxxxxxxxx>
Date: Thu, 27 Feb 2014 01:12:28 -0700
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20140227072122.GL29907@dastard>
References: <8A28EF88-012E-4036-BDB6-E76B1CC569A7@xxxxxxxxxxxxxxxxx> <20140227072122.GL29907@dastard>
On Feb 27, 2014, at 12:21 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:

> On Wed, Feb 26, 2014 at 10:37:59PM -0700, Chris Murphy wrote:
>> Hi,
>> 
>> Fedora is considering XFS as their default file system. They
>> support three primary architectures: x86_64, i686, and armv7hl.
>> Do XFS devs have any reservations about XFS as a default file
>> system on either i686, or arm?
> 
> i686 is regularly tested on upstream dev kernels. ARM is less tested
> as it's not the primary development platform for anyone - we tend to
> rely on community feedback for arm because the hardware is so wide
> and varied and there are some crackpot CPU cache architectures out
> there in ARM land that we simply can't test against….

OK good, I'll post the URL for your response to the relevant Fedora lists.

> 
>> So far the only thing I've run into with kernel
>> 3.13.4-200.fc20.i686+PAE will not mount an XFS volume larger than
>> 16TB.
> 
> That's not an XFS limit - that's a limit of the block device caused
> by the page cache address space being limited to 16TB. Technically
> the XFS kernel doesn't have such a limit because it doesn't use the
> block device address space to index or cache metadata, but that
> doesn't help anyone if the userspace tools don't work on anything
> larger than a 16TB block device.
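
For what it's worth, the 16TB figure falls straight out of the page cache arithmetic: a 32-bit page index multiplied by the page size. A back-of-the-envelope sketch (this assumes the usual 4 KiB page size and a 32-bit page index on i686, which is my reading of the limit, not something stated above):

```python
# Back-of-the-envelope check of the 32-bit block device limit.
# Assumption: 4 KiB pages and a 32-bit page cache index
# (the page index is an unsigned long, which is 32 bits on i686).
PAGE_SIZE = 4096             # bytes per page (4 KiB)
MAX_PAGES = 2 ** 32          # pages addressable with a 32-bit index

limit_bytes = PAGE_SIZE * MAX_PAGES
print(limit_bytes // 2 ** 40, "TiB")   # -> 16 TiB
```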

Are the kernel messages regarding corruption slightly misleading? i.e. the file 
system on-disk isn't corrupt, but the kernel's view of it is distorted because 
of the page cache limit? Say someone, on a weird drunken bet (I can't think of 
a better reason), mounts a valued 16+TB XFS volume from a 64-bit system on a 
32-bit system. They don't actually have to run xfs_repair once it's back on 
the 64-bit system, do they?

> 
> As it is, you're crazy if you put more than a couple of TB of
> storage on a 32 bit system. The machines simply don't have the
> process address space to repair a filesystem larger than a few
> terabytes (i.e. 2GB RAM limit). That holds true for any filesystem -
> ext3 and ext4 also have the same problems when running e2fsck…

Right, no kidding.

> 
>> But I haven't tried filling a < 16TB volume with a
>> significant amount of data while running 32bit, and anyway it's
>> just easier to ask if there are other gotchas, or reservations
>> about this combination.
> 
> It'll work just as well as ext3 and ext4 in such situations. That
> doesn't mean we recommend that you do it ;)

Sure. It's good to have that feedback.

> 
> I bet that's because nobody has filled a btrfs filesystem past the
> point where it tries to access beyond 16TB on a 32 bit system and so
> it's never been reported as a bug… :/

That makes sense.

Chris Murphy
