xfs_repair segfault
Viet Nguyen
vietnguyen at gmail.com
Tue Oct 1 16:12:16 CDT 2013
Hi again,
Here's the stack trace:
#0  __xfs_dir3_data_check (dp=<value optimized out>, bp=<value optimized out>) at xfs_dir2_data.c:149
#1  0x0000000000451d32 in xfs_dir3_block_verify (bp=0x94369210) at xfs_dir2_block.c:62
#2  0x0000000000451ed1 in xfs_dir3_block_read_verify (bp=0x94369210) at xfs_dir2_block.c:73
#3  0x0000000000431e2a in libxfs_readbuf (btp=0x6aaca0, blkno=5292504, len=8, flags=0, ops=0x478c60) at rdwr.c:718
#4  0x0000000000412295 in da_read_buf (mp=0x7fffffffe090, nex=1, bmp=<value optimized out>, ops=<value optimized out>) at dir2.c:129
#5  0x0000000000415c26 in process_block_dir2 (mp=0x7fffffffe090, ino=8639864, dip=0x95030000, ino_discovery=1, dino_dirty=<value optimized out>, dirname=0x472201 "", parent=0x7fffffffdf28, blkmap=0x7ffff0342010) at dir2.c:1594
#6  process_dir2 (mp=0x7fffffffe090, ino=8639864, dip=0x95030000, ino_discovery=1, dino_dirty=<value optimized out>, dirname=0x472201 "", parent=0x7fffffffdf28, blkmap=0x7ffff0342010) at dir2.c:1993
#7  0x0000000000411e6c in process_dinode_int (mp=0x7fffffffe090, dino=0x95030000, agno=1, ino=0, was_free=0, dirty=0x7fffffffdf38, used=0x7fffffffdf3c, verify_mode=0, uncertain=0, ino_discovery=1, check_dups=0, extra_attr_check=1, isa_dir=0x7fffffffdf34, parent=0x7fffffffdf28) at dinode.c:2859
#8  0x000000000041213e in process_dinode (mp=<value optimized out>, dino=<value optimized out>, agno=<value optimized out>, ino=<value optimized out>, was_free=<value optimized out>, dirty=<value optimized out>, used=0x7fffffffdf3c, ino_discovery=1, check_dups=0, extra_attr_check=1, isa_dir=0x7fffffffdf34, parent=0x7fffffffdf28) at dinode.c:2967
#9  0x000000000040a870 in process_inode_chunk (mp=0x7fffffffe090, agno=0, num_inos=<value optimized out>, first_irec=0x7fff5d63f320, ino_discovery=1, check_dups=0, extra_attr_check=1, bogus=0x7fffffffdfcc) at dino_chunks.c:772
#10 0x000000000040ae97 in process_aginodes (mp=0x7fffffffe090, pf_args=0x0, agno=0, ino_discovery=1, check_dups=0, extra_attr_check=1) at dino_chunks.c:1014
#11 0x000000000041978d in process_ag_func (wq=0x695f40, agno=0, arg=0x0) at phase3.c:77
#12 0x0000000000419bac in process_ags (mp=0x7fffffffe090) at phase3.c:116
#13 phase3 (mp=0x7fffffffe090) at phase3.c:155
#14 0x000000000042d200 in main (argc=<value optimized out>, argv=<value optimized out>) at xfs_repair.c:749
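
(In case it's useful to anyone hitting the same thing: a trace like this can be grabbed either by running xfs_repair under gdb directly, or by loading the core it dumps -- a rough sketch, assuming the binary was built with debug symbols; the core file name may differ on your system:)

    # run under gdb until it crashes, then print the backtrace
    $ gdb --args xfs_repair -P /dev/sda1
    (gdb) run
    (gdb) bt

    # or load the core dump after the fact
    $ gdb xfs_repair core
    (gdb) bt
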
On Tue, Oct 1, 2013 at 1:19 PM, Dave Chinner <david at fromorbit.com> wrote:
> On Tue, Oct 01, 2013 at 12:57:42PM -0700, Viet Nguyen wrote:
> > Hi,
> >
> > I have a corrupted XFS partition, and xfs_repair segfaults at the
> > same place every time I run it.
> >
> > I'm using the latest version of xfs_repair that I'm aware of:
> > version 3.2.0-alpha1
> >
> > I simply run it like so: xfs_repair -P /dev/sda1
> >
> > Here's a sample of the last few lines that are spit out:
> > correcting nextents for inode 8637985
> > correcting nblocks for inode 8637985, was 198 - counted 0
> > correcting nextents for inode 8637985, was 1 - counted 0
> > data fork in regular inode 8637987 claims used block 7847452695
> > correcting nextents for inode 8637987
> > correcting nblocks for inode 8637987, was 198 - counted 0
> > correcting nextents for inode 8637987, was 1 - counted 0
> > data fork in regular inode 8637999 claims used block 11068974204
> > correcting nextents for inode 8637999
> > correcting nblocks for inode 8637999, was 200 - counted 0
> > correcting nextents for inode 8637999, was 1 - counted 0
> > data fork in regular inode 8638002 claims used block 11873152787
> > correcting nextents for inode 8638002
> > correcting nblocks for inode 8638002, was 201 - counted 0
> > correcting nextents for inode 8638002, was 1 - counted 0
> > imap claims a free inode 8638005 is in use, correcting imap and clearing inode
> > cleared inode 8638005
> > imap claims a free inode 8638011 is in use, correcting imap and clearing inode
> > cleared inode 8638011
> > Segmentation fault (core dumped)
> >
> > It crashes after attempting to clear that same inode every time.
> >
> > Any advice you can give me on this?
>
> Can you run it under gdb and send the stack trace that tells us
> where it crashed?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david at fromorbit.com
>