On Mon, Mar 16, 2015 at 08:12:16PM -0500, Alireza Haghdoost wrote:
> On Mon, Mar 16, 2015 at 3:32 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Mon, Mar 16, 2015 at 11:28:53AM -0400, James Bottomley wrote:
> >> Probably need to cc dm-devel here. However, I think we're all agreed
> >> this is RAID across multiple devices, rather than within a single
> >> device? In which case we just need a way of ensuring identical zoning
> >> on the RAIDed devices and what you get is either a standard zone (for
> >> mirror) or a larger zone (for Hamming, etc.).
> >
> > Any sort of RAID is a bloody hard problem, hence the fact that I'm
> > designing a solution for a filesystem on top of an entire bare
> > drive. I'm not trying to solve every use case in the world, just the
> > one where the drive manufacturers think SMR will be mostly used: the
> > back end of "never delete" distributed storage environments....
> >
> > We can't wait for years for infrastructure layers to catch up in the
> > brave new world of shipping SMR drives. We may not like them, but we
> > have to make stuff work. I'm not trying to solve every problem - I'm
> > just trying to address the biggest use case I see for SMR devices
> > and it just so happens that XFS is already used pervasively in that
> > same use case, mostly within the same "no raid, fs per entire
> > device" constraints as I've documented for this proposal...
> I am confused what kind of application you are referring to for this
> "back end, no raid, fs per entire device". Are you gonna rely on the
> application to do replication for disk failure protection ?
Exactly. Think distributed storage such as Ceph and gluster where
the data redundancy and failure recovery algorithms are in layers
*above* the local filesystem, not in the storage below the fs. The
"no raid, fs per device" model is already a very common back end
storage configuration for such deployments.
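
To make the layering concrete, here's a rough sketch of what "redundancy
above the local filesystem" means in practice. This is purely
illustrative Python, not Ceph or gluster code - the mount points, the
replica count and the placement function are all made up - but it shows
the shape of the model: each mount point is a separate filesystem
occupying one whole drive, and the application writes every object to
several of them.

import os

# One filesystem per bare device, e.g. /dev/sdb mounted at /srv/disk0.
# (Hypothetical paths - real deployments name these however they like.)
MOUNT_POINTS = ["/srv/disk0", "/srv/disk1", "/srv/disk2", "/srv/disk3"]
REPLICAS = 3  # redundancy lives here, not in RAID under the filesystem


def place_replicas(object_id):
    """Pick REPLICAS distinct whole-device filesystems for an object."""
    start = hash(object_id) % len(MOUNT_POINTS)
    return [MOUNT_POINTS[(start + i) % len(MOUNT_POINTS)]
            for i in range(REPLICAS)]


def store_object(object_id, data):
    """Write the object to every chosen backend filesystem."""
    for mnt in place_replicas(object_id):
        path = os.path.join(mnt, object_id)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())


if __name__ == "__main__":
    store_object("obj-0001", b"example payload")

When a drive dies in that model, nothing below the application has to
rebuild anything: the local filesystem on that drive is simply gone, and
the layer that did the placement re-replicates the affected objects onto
another whole-device filesystem.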