
Re: LVM + XFS + external log + snapshots

To: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Subject: Re: LVM + XFS + external log + snapshots
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Sat, 22 Jun 2013 02:58:16 -0500
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <022467CC-8EB2-41E9-8AF6-46F781882F6B@xxxxxxxxx>
References: <022467CC-8EB2-41E9-8AF6-46F781882F6B@xxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130509 Thunderbird/17.0.6

Sorry aurfalien, my first reply didn't make the list.

On 6/22/2013 12:30 AM, aurfalien wrote:
> Hi all,
> So I have an XFS file system within LVM  which has an external log.

Migrate to an internal log; snapshots are easy then.  Since delaylog
(kernel 2.6.39+) external logs are generally no longer needed and
provide little to no advantage, as log bandwidth drops dramatically.
On RHEL or CentOS that means 6.2 or later.

> lvcreate -L250G -s -n datasnapshot /dev/vg_spock_data/lv_data
> But when i try to mount the file system;

For now, to get past this problem, you must snapshot both the
filesystem and the log during the same freeze.  You're currently
cloning the Lone Ranger without cloning Tonto.  Ain't gonna work.  And
it looks like the log device doesn't reside on LVM, so you're outta
luck, you can't snap it.
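
If the log did live on an LV, the consistent-pair snapshot would look
roughly like this.  This is a sketch only; the mount point, log LV
name, and snapshot sizes are all assumptions, not from your setup:

```shell
# Assumes lv_data AND a hypothetical lv_log both live in vg_spock_data,
# and the filesystem is mounted at /mnt/data.
xfs_freeze -f /mnt/data                                        # quiesce XFS
lvcreate -L250G -s -n datasnapshot /dev/vg_spock_data/lv_data  # snap the fs
lvcreate -L2G   -s -n logsnapshot  /dev/vg_spock_data/lv_log   # snap the log
xfs_freeze -u /mnt/data                                        # resume I/O

# Mount the snapshot pair together -- data snap with its matching log snap.
mount -o logdev=/dev/vg_spock_data/logsnapshot \
    /dev/vg_spock_data/datasnapshot /mnt/snap
```

The point is that both snapshots are taken inside one freeze window, so
the log snapshot matches the filesystem snapshot exactly.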

> logdev=/dev/sdc1

> Am I doing something wrong or is this not possible?

Your first error was putting the log on raw storage instead of an LV.
If your kernel is new enough to support delaylog, then your second
error was using an external log at all.  External logs were once useful
for high metadata workloads, but since delaylog they are largely no
longer needed.

xfsdump the filesystem, mkfs.xfs with an internal log, then xfsrestore.
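
Something along these lines; the backup path and mount point are made-up
examples, and note the mkfs.xfs step destroys the existing filesystem,
so verify the dump first:

```shell
# Level-0 dump of the live filesystem to a file on separate storage.
xfsdump -l 0 -f /backup/data.xfsdump /mnt/data

umount /mnt/data
mkfs.xfs -f /dev/vg_spock_data/lv_data   # internal log is the default
mount /dev/vg_spock_data/lv_data /mnt/data

# Restore the dump into the freshly made filesystem.
xfsrestore -f /backup/data.xfsdump /mnt/data
```

After this, a single `lvcreate -s` of lv_data gives you a mountable,
self-contained snapshot with no external log to worry about.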

Alternatively you could keep the external log:  Remount the live XFS
using an alternate location for the log.  Put an LV on sdc1.  Then
remount again using this new LV as the log location.  Then you can snap
both XFS and the log, and mount the snap with the matching log.
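
As a rough sketch of that path, done offline rather than with a live
remount.  Device names, VG/LV names, sizes, and the staging path are
all assumptions, and it assumes the old log partition is small enough
to stage in /tmp:

```shell
umount /mnt/data

# Preserve the existing log contents before pvcreate clobbers sdc1.
dd if=/dev/sdc1 of=/tmp/xfslog.img bs=1M

# Turn sdc1 into LVM and carve out a log LV (names/size hypothetical).
pvcreate /dev/sdc1
vgcreate vg_log /dev/sdc1
lvcreate -L2G -n lv_log vg_log

# Lay the saved log back down onto the LV, then mount against it.
dd if=/tmp/xfslog.img of=/dev/vg_log/lv_log bs=1M
mount -o logdev=/dev/vg_log/lv_log /dev/vg_spock_data/lv_data /mnt/data
```

With the log on an LV, you can then freeze once and snapshot lv_data
and lv_log together, and mount the data snap with its matching log snap.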

