On Thu, 22 Jun 2006 12:56:40 +1000, Nathan wrote:
> On Tue, Jun 20, 2006 at 01:20:39AM -0700, Avuton Olrich wrote:
>> fatal error -- can't read block 16777216 for directory inode
>> 1507133580
I got the same error on my desktop (Mac Mini PPC, 73 GB XFS filesystem,
Debian unstable) when running xfs_repair after having had problems booting
(a filesystem shutdown leaving init unable to start gettys and so on, I
guess):
fatal error -- can't read block 1677216 for directory inode 77860837
So I tried this advice (unfortunately I do not have space to take a
copy of the filesystem for analysis):
> Once you save a copy of it for further analysis of xfs_repair,
> if you can, you can clear out this problem by directly poking at
> the device using xfs_db in expert mode. "xfs_db -x /dev/xxx";
> then "inode 1507133580"; then "write core.mode 0"; and then try
> another xfs_repair run.
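In other words, roughly this, with /dev/xxx standing in for the actual
device and 77860837 being the directory inode from my error above:

  # xfs_db -x /dev/xxx
  xfs_db> inode 77860837
  xfs_db> write core.mode 0
  xfs_db> quit
  # xfs_repair /dev/xxx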
The subsequent xfs_repair run gave different output, as expected I
guess (it noticed the modified directory inode, among other things),
but at the end of Phase 7 it said:
cache_purge: share on cache 0x100930b0 left 1 nodes!?
cache_purge: share on cache 0x100930b0 left 1 nodes!?
done
So, I ran xfs_check on it. The output then was:
bad directory data magic # 0x30 for dir ino 180 block 8388608
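(For the record, the invocation was simply along the lines of
"xfs_check /dev/xxx", with /dev/xxx again standing in for the actual
device.)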
I proceeded to run xfs_repair. In Phase 3 under '- agno 0' it said:
bad dir magic number 0x30 in inode 180 bno = 8388608
It continued, and in Phase 6, after saying '- traversing filesystem
starting at / ...', it said:
rebuilding directory inode 128
A flurry of "disconnected inode [number], moving to lost+found"
messages followed.
At the end of Phase 7, I got the two lines:
cache_purge: share on cache 0x100930b0 left 1 nodes!?
cache_purge: share on cache 0x100930b0 left 1 nodes!?
again.
Running xfs_repair another time gives the same output/result.
(I did this by booting from a Debian unstable net-install CD and using
xfs_repair, xfs_check and xfs_db from xfsprogs_2.8.4-1_powerpc.deb.)
Any hints on how to fully repair the filesystem, in place?
Best regards,
Adam
--
"Jag tar en mandarin." Adam Sjøgren
asjo@xxxxxxxxxxxx