
Re: Question on migrating data between PVs in xfs

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Question on migrating data between PVs in xfs
From: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Date: Thu, 11 Aug 2016 12:44:59 +0200
Cc: Wei Lin <lin.wei15@xxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20160810215149.GM19025@dastard>
Organization: Intellique
References: <20160809145046.GB5583@ic> <20160809223503.GJ19025@dastard> <20160810092313.GA16193@ic> <20160810105639.GR16044@dastard> <20160810183132.0b9ae8e4@xxxxxxxxxxxxxxxxxxxx> <20160810215149.GM19025@dastard>
On Thu, 11 Aug 2016 07:51:49 +1000,
Dave Chinner <david@xxxxxxxxxxxxx> wrote:

> On Wed, Aug 10, 2016 at 06:31:32PM +0200, Emmanuel Florac wrote:
> > On Wed, 10 Aug 2016 20:56:39 +1000,
> > Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> >   
> > > Have you looked at using dm-cache instead of modifying the
> > > filesystem?
> > >   
> > 
> > Or bcache, fcache, or EnhanceIO. In my own testing so far, bcache
> > is significantly faster and dm-cache by far the slowest of the
> > bunch, but bcache needs some more love (its main developer is
> > busy writing a new tiered, caching filesystem instead).  
> 
> Yeah, the problem with bcache is that it is effectively an orphaned
> driver. If there are obvious and reproducible performance
> differentials between bcache and dm-cache, you should bring them to
> the attention of the dm developers to see if they can fix them...

Good idea. bcache may be orphaned by its main developer, but others
still submit quite a lot of stability patches (among them Christoph
Hellwig, who is also active here, IIRC). For anyone who wants to run
the same comparison, a rough sketch of both setups is below.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------
