<br><br><div class="gmail_quote">On Fri, Sep 21, 2012 at 10:07 AM, Eric Sandeen <span dir="ltr"><<a href="mailto:sandeen@sandeen.net" target="_blank">sandeen@sandeen.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="im">On 9/21/12 10:51 AM, Anand Tiwari wrote:<br>
><br>
><br>
> On Thu, Sep 20, 2012 at 11:00 PM, Eric Sandeen <<a href="mailto:sandeen@sandeen.net">sandeen@sandeen.net</a><br>
</div><div><div class="h5">> <mailto:<a href="mailto:sandeen@sandeen.net">sandeen@sandeen.net</a>>> wrote:<br>
><br>
> On 9/20/12 7:40 PM, Anand Tiwari wrote:<br>
>> Hi All,<br>
>><br>
>> I have been looking into an issue with xfs_repair with realtime sub<br>
>> volume. some times while running xfs_repair I see following errors<br>
>><br>
>> ----------------------------<br>
>> data fork in rt inode 134 claims used rt block 19607<br>
>> bad data fork in inode 134<br>
>> would have cleared inode 134<br>
>> data fork in rt inode 135 claims used rt block 29607<br>
>> bad data fork in inode 135<br>
>> would have cleared inode 135<br>
>> - agno = 1<br>
>> - agno = 2<br>
>> - agno = 3<br>
>> - process newly discovered inodes...<br>
>> Phase 4 - check for duplicate blocks...<br>
>> - setting up duplicate extent list...<br>
>> - check for inodes claiming duplicate blocks...<br>
>> - agno = 0<br>
>> - agno = 1<br>
>> - agno = 2<br>
>> - agno = 3<br>
>> entry "test-011" in shortform directory 128 references free inode 134<br>
>> would have junked entry "test-011" in directory inode 128<br>
>> entry "test-0" in shortform directory 128 references free inode 135<br>
>> would have junked entry "test-0" in directory inode 128<br>
>> data fork in rt ino 134 claims dup rt extent,off - 0, start - 7942144, count 2097000<br>
>> bad data fork in inode 134<br>
>> would have cleared inode 134<br>
>> data fork in rt ino 135 claims dup rt extent,off - 0, start - 13062144, count 2097000<br>
>> bad data fork in inode 135<br>
>> would have cleared inode 135<br>
>> No modify flag set, skipping phase 5<br>
>> ------------------------<br>
>><br>
>> Here is the bmap for both inodes.<br>
>><br>
>> xfs_db> inode 135<br>
>> xfs_db> bmap<br>
>> data offset 0 startblock 13062144 (12/479232) count 2097000 flag 0<br>
>> data offset 2097000 startblock 15159144 (14/479080) count 2097000 flag 0<br>
>> data offset 4194000 startblock 17256144 (16/478928) count 2097000 flag 0<br>
>> data offset 6291000 startblock 19353144 (18/478776) count 2097000 flag 0<br>
>> data offset 8388000 startblock 21450144 (20/478624) count 2097000 flag 0<br>
>> data offset 10485000 startblock 23547144 (22/478472) count 2097000 flag 0<br>
>> data offset 12582000 startblock 25644144 (24/478320) count 2097000 flag 0<br>
>> data offset 14679000 startblock 27741144 (26/478168) count 2097000 flag 0<br>
>> data offset 16776000 startblock 29838144 (28/478016) count 2097000 flag 0<br>
>> data offset 18873000 startblock 31935144 (30/477864) count 1607000 flag 0<br>
>> xfs_db> inode 134<br>
>> xfs_db> bmap<br>
>> data offset 0 startblock 7942144 (7/602112) count 2097000 flag 0<br>
>> data offset 2097000 startblock 10039144 (9/601960) count 2097000 flag 0<br>
>> data offset 4194000 startblock 12136144 (11/601808) count 926000 flag 0<br>
><br>
> It's been a while since I thought about realtime, but -<br>
><br>
> That all seems fine; I don't see anything overlapping there. They<br>
> are all perfectly adjacent, though of interesting size.<br>
><br>
>><br>
>> By looking into the xfs_repair code, it looks like repair does not<br>
>> handle a case where we have more than one extent in a real-time<br>
>> extent. The following is the code from repair/dinode.c: process_rt_rec<br>
><br>
> "more than one extent in a real-time extent?" I'm not sure what that<br>
> means.<br>
><br>
> Every extent above is length 2097000 blocks, and they are adjacent.<br>
> But you say your realtime extent size is 512 blocks ... which doesn't<br>
> go into 2097000 evenly. So that's odd, at least.<br>
><br>
><br>
> Well, let's look at the first two extents:<br>
>> data offset 0 startblock 13062144 (12/479232) count 2097000 flag 0<br>
>> data offset 2097000 startblock 15159144 (14/479080) count 2097000<br>
>> flag 0<br>
> The startblock is rt-extent aligned (13062144 / 512 = rt extent 25512),<br>
> but since the block count is not a multiple of 512, the last realtime<br>
> extent it touches (25512 + 4095 = 29607) is only partially used (360<br>
> blocks). The second extent then starts partway into realtime extent<br>
> 29607. So yes, the extents do not overlap at the block level, but<br>
> realtime extent 29607 is shared by the two of them. Once xfs_repair<br>
> detects this case in phase 2, it bails out and clears that inode. I<br>
> think the search for duplicate extents is done in phase 4, but by then<br>
> the inode is already marked.<br>
<br>
</div></div>... ok I realize I was misunderstanding some things about the realtime<br>
volume. (It's been a very long time since I thought about it). Still,<br>
I'd like to look at the metadump image if possible.<br>
<br>
Thanks,<br>
-Eric<br>
</blockquote></div><br><br>metadump attached<br>
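<br><br>For anyone following along, the shared-rt-extent arithmetic Anand describes can be sketched as follows. This is a minimal illustration assuming the 512-block rt extent size given in the thread; the helper function is hypothetical, not xfs_repair's actual code.<br><br>

```python
# Sketch (not xfs_repair code): which realtime extents a bmap extent
# touches, assuming the 512-block rt extent size from the thread.

RTEXTSIZE = 512  # fs blocks per realtime extent (from the thread)

def rt_extents(startblock, count):
    """Return (first, last) realtime extents touched by a bmap extent."""
    first = startblock // RTEXTSIZE
    last = (startblock + count - 1) // RTEXTSIZE
    return first, last

# First two extents of inode 135 from the xfs_db bmap output:
e1 = rt_extents(13062144, 2097000)  # (25512, 29607)
e2 = rt_extents(15159144, 2097000)  # (29607, 33703)

# The block ranges do not overlap, but both extents touch rt extent
# 29607: extent 1 uses only 2097000 - 4095*512 = 360 blocks of it, and
# extent 2 begins at the 361st block of that same rt extent.
print(e1, e2)
```

<br>Because the count (2097000) is not a multiple of 512, every extent in the bmap ends partway into an rt extent that its neighbor then begins in, which matches the "claims used rt block 29607" complaint from phase 2 above.<br>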