I have been having an unfortunate day and night.
dmesg gives me:
XFS mounting filesystem sda6
Ending clean XFS mount for filesystem: sda6
0x0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Filesystem "sda6": XFS internal error xfs_da_do_buf(2) at line 2273
of file fs/xfs/xfs_da_btree.c. Caller 0xf9bcf522
Call Trace:
[<f9bcf16e>] xfs_da_do_buf+0x5ee/0x900 [xfs]
[<f9bcf522>] xfs_da_read_buf+0x42/0x50 [xfs]
[<f9bcf522>] xfs_da_read_buf+0x42/0x50 [xfs]
[<f98d1a96>] journal_mark_dirty+0x116/0x260 [reiserfs]
[<f9bcf522>] xfs_da_read_buf+0x42/0x50 [xfs]
[<f9bd509f>] xfs_dir2_block_getdents+0x9f/0x2e0 [xfs]
[<f9bd509f>] xfs_dir2_block_getdents+0x9f/0x2e0 [xfs]
[<c0153b7d>] __alloc_pages+0xad/0x310
[<f9bbbc92>] xfs_bmap_last_offset+0x122/0x140 [xfs]
[<f9bd39ca>] xfs_dir2_isblock+0x1a/0x70 [xfs]
[<f9bd3d09>] xfs_dir2_getdents+0xc9/0x150 [xfs]
[<f9bd36a0>] xfs_dir2_put_dirent64_direct+0x0/0xb0 [xfs]
[<f9bd36a0>] xfs_dir2_put_dirent64_direct+0x0/0xb0 [xfs]
[<f9c06958>] xfs_readdir+0x58/0xb0 [xfs]
[<f9c12440>] linvfs_getattr+0x0/0x40 [xfs]
[<f9c0f720>] linvfs_readdir+0x100/0x206 [xfs]
[<c0186380>] filldir64+0x0/0x150
[<c0186755>] vfs_readdir+0x95/0xc0
[<c0186380>] filldir64+0x0/0x150
[<c0186823>] sys_getdents64+0xa3/0x150
[<c01091d9>] sysenter_past_esp+0x52/0x79
xfs_repair cannot find a correct primary or secondary superblock.
If I print the raw superblock I get the following:
manga:~ # xfs_db /dev/sda6
xfs_db> sb 0
xfs_db> p
magicnum = 0x58465342
blocksize = 4096
dblocks = 74537575
rblocks = 0
rextents = 0
uuid = 15b4a8d0-9161-4b3c-9419-5151cacd715c
logstart = 37748740
rootino = 128
rbmino = 129
rsumino = 130
rextsize = 16
agblocks = 1048576
agcount = 72
rbmblocks = 0
logblocks = 8192
versionnum = 0x2094
sectsize = 512
inodesize = 256
inopblock = 16
fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
blocklog = 12
sectlog = 9
inodelog = 8
inopblog = 4
agblklog = 20
rextslog = 0
inprogress = 0
imax_pct = 25
icount = 466816
ifree = 49645
fdblocks = 5154603
frextents = 0
uquotino = 0
gquotino = 0
qflags = 0
flags = 0
shared_vn = 0
inoalignmt = 2
unit = 0
width = 0
dirblklog = 0
logsectlog = 0
logsectsize = 0
logsunit = 0
features2 = 0
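For what it's worth, the dblocks and blocksize above do still seem to match the size of the old sda6 partition. A quick sanity-check sketch (all numbers copied from the xfs_db output above and the fdisk listing further down; the "+" in fdisk's block count means an extra half block):

```python
# Sketch: does the superblock geometry still fit the partition?
# Values are copied from the xfs_db and fdisk output in this post.
dblocks = 74537575            # sb dblocks, in filesystem blocks
blocksize = 4096              # sb blocksize, in bytes
part_1k_blocks = 298150303    # /dev/sda6 size from fdisk (1 KiB units)

fs_bytes = dblocks * blocksize
part_bytes = part_1k_blocks * 1024

slack = part_bytes - fs_bytes
print(f"filesystem {fs_bytes} bytes, partition {part_bytes} bytes, slack {slack}")

# The filesystem should fit in the partition, with less than one
# filesystem block to spare.
assert 0 <= slack < blocksize
```

So the superblock I can read is at least self-consistent with a partition of the original size.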
The particulars:
SUSE Linux Enterprise Server 9
Dell PowerEdge 2850
I have a 360 GB RAID 5 array broken into several partitions:
/dev/sda1 * 1 5 40131 de Dell Utility
/dev/sda2 6 129 996030 82 Linux swap
/dev/sda3 130 4992 39062047+ 83 Linux
/dev/sda4 4993 44542 317685375 5 Extended
/dev/sda5 4993 7424 19535008+ 83 Linux
/dev/sda6 7425 44542 298150303+ 83 Linux
sda5 and sda6 are both XFS filesystems.
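When re-entering partitions by hand, the 1 KiB block counts fdisk prints can be recomputed from the cylinder bounds above. A sketch, assuming the usual 255-head x 63-sector translation (16065 sectors per cylinder, 512-byte sectors), with logical partitions losing 63 sectors to the extended-partition link sector:

```python
# Sketch: recompute fdisk's 1 KiB block counts from cylinder bounds,
# assuming 255 heads x 63 sectors/track (16065 sectors per cylinder,
# 512-byte sectors). Logical partitions (sda5, sda6) lose 63 sectors
# to the EBR link sector at their start.
SECTORS_PER_CYL = 255 * 63

def size_1k_blocks(start_cyl, end_cyl, logical=False):
    sectors = (end_cyl - start_cyl + 1) * SECTORS_PER_CYL
    if logical:
        sectors -= 63
    return sectors / 2   # 512-byte sectors -> 1 KiB blocks

print(size_1k_blocks(4993, 7424, logical=True))   # sda5 -> 19535008.5
print(size_1k_blocks(7425, 44542, logical=True))  # sda6 -> 298150303.5
```

The trailing half block is what fdisk shows as the "+" suffix, so both computed sizes match the listing above.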
My sda5 filesystem somehow went corrupt. I backed up all the data
and, using the YaST partition manager, deleted the partition.
Now what used to be sda6 looked like sda5 to the partition manager,
something like the following:
/dev/sda5 7425 44542 298150303+ 83 Linux
So I added a new partition to replace the one I deleted:
/dev/sda6 4993 7424 19535008+ 83 Linux
set its type to ReiserFS, and quit the YaST partition manager.
I was given errors about being unable to modify partition sda6, etc.
I may not have unmounted sda6 correctly before modifying the
partition table. Bad me.
So I reopened the YaST partition manager, and now there was only a
/dev/sda5, which mysteriously started at cylinder 4993 and ended at 44542.
OK, so that is a bit weird and disconcerting.
But when I look at the partition table with fdisk, my old partitions are still there.
I fear that YaST may have written some part of a ReiserFS onto sda6,
which it thought was sda5 at the time.
Argh.
I have fixed the order of the partitions, and I feel like I have them
back to normal (I deleted them all and re-entered them in fdisk).
I have tried gpart; it wrote a very wrong partition table.
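One thing I can still check is whether the XFS magic ("XFSB", hex 58 46 53 42, as in the magicnum above) survives at the start of the partition. A sketch; the real check would open /dev/sda6, but a scratch file stands in here so this is safe to run:

```python
# Sketch: check whether the XFS superblock magic ("XFSB") is still at
# the start of a partition. A scratch file stands in for the device
# so this is safe to run anywhere.
import os, tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"XFSB" + b"\x00" * 508)   # fake 512-byte sector 0
os.close(fd)

# Against the real disk this would be open("/dev/sda6", "rb").
with open(path, "rb") as f:
    magic = f.read(4)
os.remove(path)

print(magic)
assert magic == b"XFSB"   # superblock magic still in place
```

If the first four bytes of sda6 are no longer "XFSB", that would confirm something (ReiserFS formatting, or the shifted partition) clobbered sector 0.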
Anyone have any advice?
Thank you.
Asa
Assemble p.510.524.8255 f.510.295.2710 742 Gilman St,
Berkeley CA 94710-1327