
Re: xfs_repair problem.

To: "Eric Sandeen" <sandeen@xxxxxxxxxxx>
Subject: Re: xfs_repair problem.
From: "David Bernick" <dbernick@xxxxxxxxx>
Date: Sun, 21 Dec 2008 17:08:58 -0500
Cc: xfs@xxxxxxxxxxx
In-reply-to: <494E766B.5080102@xxxxxxxxxxx>
References: <7bcfcfff0812210703r4bd889cave8e2d60c56587e3e@xxxxxxxxxxxxxx> <494E66D9.5030704@xxxxxxxxxxx> <7bcfcfff0812210852v6c1cd522i334de914e1e9a112@xxxxxxxxxxxxxx> <494E766B.5080102@xxxxxxxxxxx>
So I ran xfs_repair -v on my filesystem. The FS was originally 12T and 95%
full, and xfs_repair threw many "out of space" errors while it was running.
That makes some sense.

I expanded it onto a new device with xfs_growfs. That seems to have worked,
because the disk is now bigger when I mount it.

When I ran xfs_repair on that (latest version), it reverted back to the
original size. Is there anything I need to do to make the xfs_growfs
permanent?
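
For reference, the grow itself was the usual LVM sequence, roughly like this
(a sketch rather than my exact command history; the VG/LV names are from the
lvdisplay below, while the new-device path and mount point are stand-ins):

   pvcreate /dev/sdc2                    # prepare the new device for LVM (stand-in name)
   vgextend docs /dev/sdc2               # add it to the "docs" volume group
   lvextend -L +<size> /dev/docs/v5Docs  # grow the logical volume
   mount /dev/docs/v5Docs /mnt/docs      # xfs_growfs operates on a mounted fs
   xfs_growfs /mnt/docs                  # grow the fs to fill the LV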

If I can make the XFS partition bigger, I can likely make repair work,
because then it won't run out of space! But the partition, despite reporting
that it's "bigger" here, doesn't seem to "take" after the xfs_repair. Any ideas?

Below are the xfs_db output and some other useful details, starting with the
relevant /proc/partitions entries and the lvdisplay for the volume.
   8    17 12695312483 sdb1
   8    33 8803844062 sdc1
   8    34 4280452000 sdc2

 --- Logical volume ---
  LV Name                /dev/docs/v5Docs
  VG Name                docs
  LV UUID                G85Zi9-s63C-yWrU-yyf0-STP6-YOhJ-6Ne3pS
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                24.01 TB
  Current LE             6293848
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
xfs_db> sb 0
xfs_db> print
magicnum = 0x58465342
blocksize = 4096
dblocks = 6444900352
rblocks = 0
rextents = 0
uuid = f086bb71-d67b-4cc1-b622-1f10349e6a49
logstart = 1073741828
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 1
agblocks = 67108864
agcount = 97
rbmblocks = 0
logblocks = 32768
versionnum = 0x3084
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 26
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 149545792
ifree = 274
fdblocks = 4275133930
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0
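
(For anyone comparing: those fields can be re-checked read-only, without
mounting, along these lines:

   xfs_db -r -c "sb 0" -c "print dblocks" -c "print agcount" /dev/docs/v5Docs

If a grow stuck, dblocks times blocksize should match the LV size; here
6444900352 * 4096 bytes is about 24 TiB (24.01), which matches what lvdisplay
reports above.)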


On Sun, Dec 21, 2008 at 12:01 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:

> David Bernick wrote:
> > Thanks for the help so far:
> >
> > My output was from "sb 0". Thanks for reminding me to be explicit.
> >
> > The system is a 64-bit system with 32 GB of RAM. It's going through the
> > FS right now with xfs_repair.
> > Output of xfs_repair says "agno = 3", and about 81.6% of RAM is used by
> > the process. Think 32 GB will be enough to handle this task?
> > I actually don't KNOW the original error from the grow, unfortunately; I
> > came into this late.
> >
> > We're using repair 2.9.4. Worth getting a more recent version?
>
> 2.9.8 had some memory usage improvements (reductions) for repair IIRC
>
> > Kernel is - 2.6.18-92.1.1.el5
>
> heh; RHEL5 does not support xfs ;)
>
> You probably hit:
>
> TAKE 959978 - growing an XFS filesystem by more than 2TB is broken
> http://oss.sgi.com/archives/xfs/2007-01/msg00053.html
>
> I'd see if you can get CentOS to backport that fix (I assume you're
> using CentOS or at least their kernel module; if not, you can backport it
> yourself...)
>
> > I "backed off" by vgsplit-ing the new physical device from the original
> > vgroup, so I was left with my original partition. I am hoping to mount
> > the original device since the "expanded" fs didn't work. I am hoping
> > xfs_repair helps that.
>
> well, you don't want to take out part of the device if the fs thinks it
> owns it now, but from the db output I think you still have the smaller
> size.
>
> I'd read through:
>
> http://oss.sgi.com/archives/xfs/2008-01/msg00085.html
>
> and see if it helps you recover.
>
> -Eric
>


