
Re: stress test on ppc

To: Thomas Graichen <graichen@xxxxxxxxxxxxx>
Subject: Re: stress test on ppc
From: Russell Cattelan <cattelan@xxxxxxxxxxx>
Date: Thu, 30 Nov 2000 15:13:43 -0600
Cc: linux-xfs@xxxxxxxxxxx
References: <news2mail-8uecj0$i5e$1@xxxxxxxxxxxxxxxxxxxxxx> <10011261336.ZM166460@xxxxxxxxxxxxxxxxxxxxxxxx> <news2mail-8vt5ub$dhv$3@xxxxxxxxxxxxxxxxxxxxxx> <10011281048.ZM165042@xxxxxxxxxxxxxxxxxxxxxxxx> <news2mail-9001u3$n0b$2@xxxxxxxxxxxxxxxxxxxxxx> <10011290940.ZM169800@xxxxxxxxxxxxxxxxxxxxxxxx> <news2mail-902hr6$307$1@xxxxxxxxxxxxxxxxxxxxxx> <10011301040.ZM164128@xxxxxxxxxxxxxxxxxxxxxxxx> <news2mail-90550o$3u0$3@xxxxxxxxxxxxxxxxxxxxxx> <3A267EBE.52E6CC95@xxxxxxxxxxx> <news2mail-9061lo$dc3$3@xxxxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Thomas Graichen wrote:

Ok good...
So everything looks good right after mkfs and the mount.

The problem appears to be with the first write to the superblock,
as shown by
od -c -N 8 -j 512 /dev/sdb1
no longer reporting XAGF.

My best guess at this point as to what is happening:
we have added valid bits to the page structure, with each
bit representing one 512-byte block.
It appears that someplace in the pagebuf math the basic unit
is now 1024 rather than 512.
Since the page size has doubled it does make sense that the BB size has
doubled.
So now the trick is to find the problem.

Let's verify the problem first.
Just before the generic_make_request call in page_buf.c:_pagebuf_page_io:

    printk("_pagebuf page io calling gmk itr %d bh 0x%p block %d size %ld\t",
           itr,
           psync->bh[itr],
           psync->bh[itr]->b_blocknr,
           psync->bh[itr]->b_size);
    {
        int i;
        for (i = 0; i < 4; i++) {
            printk("%c", psync->bh[itr]->b_data[i]);
        }
        printk("\n");
    }

The numbers we will be interested in are b_blocknr and b_size.


> Russell Cattelan <cattelan@xxxxxxxxxxx> wrote:
>
> [ok and now the alpha]
>
> > Finally where in XFS are we trashing AG blocks.
>
> > Ok lets do things in this order
> > mkfs a file system
> > od -c -N 8 -j 512 /dev/sdb1
> > should result in
> > 0001000   X   A   G   F  \0  \0  \0 001
> > od -c -N 8 -j 1024 /dev/sdb1
> > and
> >  od -c -N 8 -j 1024 /dev/sdb1
> > 0002000   X   A   G   I  \0  \0  \0 001
>
> looks like this last one is wrong? - it's the same as the second
>
> > Those are the magic #s
>
> root@cyan:~# od -c -N 8 -j 512 /dev/sdb1
> 0001000   X   A   G   F  \0  \0  \0 001
> 0001010
> root@cyan:~# od -c -N 8 -j 1024 /dev/sdb1
> 0002000   X   A   G   I  \0  \0  \0 001
> 0002010
> root@cyan:~#
>
> > now try
> > xfs_db
> > sb 0
> > print
> > agf 0
> > print
> > agi 0
> > print
>
> > save output
>
> root@cyan:~# xfs_db /dev/sdb1
> xfs_db: sb 0
> xfs_db: print
> magicnum = 0x58465342
> blocksize = 8192
> dblocks = 33130
> rblocks = 0
> rextents = 0
> uuid = eac0a48b-4dff-41c3-b2cc-e110bd5c3ff8
> logstart = 32772
> rootino = 256
> rbmino = 257
> rsumino = 258
> rextsize = 8
> agblocks = 4142
> agcount = 8
> rbmblocks = 0
> logblocks = 1000
> versionnum = 0x2084
> sectsize = 512
> inodesize = 256
> inopblock = 32
> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
> blocklog = 13
> sectlog = 9
> inodelog = 8
> inopblog = 5
> agblklog = 13
> rextslog = 0
> inprogress = 0
> imax_pct = 25
> icount = 64
> ifree = 61
> fdblocks = 32096
> frextents = 0
> uquotino = 0
> pquotino = 0
> qflags = 0
> flags = 0
> shared_vn = 0
> inoalignmt = 1
> unit = 0
> width = 0
> dirblklog = 0
> xfs_db: agf 0
> xfs_db: print
> magicnum = 0x58414746
> versionnum = 1
> seqno = 0
> length = 4142
> bnoroot = 1
> cntroot = 2
> bnolevel = 1
> cntlevel = 1
> flfirst = 0
> fllast = 3
> flcount = 4
> freeblks = 4132
> longest = 4132
> xfs_db: agi 0
> xfs_db: print
> magicnum = 0x58414749
> versionnum = 1
> seqno = 0
> length = 4142
> count = 64
> root = 3
> level = 1
> freecount = 61
> newino = 256
> dirino = null
> unlinked[0-63] =
> xfs_db: quit
> root@cyan:~#
>
> > Mount the file system
> > run od commands and xfs_db command again
> > save output
>
> root@cyan:~# od -c -N 8 -j 512 /dev/sdb1
> 0001000   X   A   G   F  \0  \0  \0 001
> 0001010
> root@cyan:~# od -c -N 8 -j 1024 /dev/sdb1
> 0002000   X   A   G   I  \0  \0  \0 001
> 0002010
> root@cyan:~# xfs_db -r /dev/sdb1
> xfs_db: sb 0
> xfs_db: print
> magicnum = 0x58465342
> blocksize = 8192
> dblocks = 33130
> rblocks = 0
> rextents = 0
> uuid = eac0a48b-4dff-41c3-b2cc-e110bd5c3ff8
> logstart = 32772
> rootino = 256
> rbmino = 257
> rsumino = 258
> rextsize = 8
> agblocks = 4142
> agcount = 8
> rbmblocks = 0
> logblocks = 1000
> versionnum = 0x2084
> sectsize = 512
> inodesize = 256
> inopblock = 32
> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
> blocklog = 13
> sectlog = 9
> inodelog = 8
> inopblog = 5
> agblklog = 13
> rextslog = 0
> inprogress = 0
> imax_pct = 25
> icount = 64
> ifree = 61
> fdblocks = 32096
> frextents = 0
> uquotino = 0
> pquotino = 0
> qflags = 0
> flags = 0
> shared_vn = 0
> inoalignmt = 1
> unit = 0
> width = 0
> dirblklog = 0
> xfs_db: agf 0
> xfs_db: print
> magicnum = 0x58414746
> versionnum = 1
> seqno = 0
> length = 4142
> bnoroot = 1
> cntroot = 2
> bnolevel = 1
> cntlevel = 1
> flfirst = 0
> fllast = 3
> flcount = 4
> freeblks = 4132
> longest = 4132
> xfs_db: agi 0
> xfs_db: print
> magicnum = 0x58414749
> versionnum = 1
> seqno = 0
> length = 4142
> count = 64
> root = 3
> level = 1
> freecount = 61
> newino = 256
> dirino = null
> unlinked[0-63] =
> xfs_db:
>
> > Now do some FS activity
> > run the od commands and xfs_db commands again
> > save output.
>
> root@cyan:~# od -c -N 8 -j 512 /dev/sdb1
> 0001000  \0  \0  \0  \0 377 377 377 377
> 0001010
> root@cyan:~# od -c -N 8 -j 1024 /dev/sdb1
> 0002000   X   A   G   I  \0  \0  \0 001
> 0002010
> root@cyan:~# xfs_db -r /dev/sdb1
> xfs_db: sb 0
> xfs_db: print
> magicnum = 0x58465342
> blocksize = 8192
> dblocks = 33130
> rblocks = 0
> rextents = 0
> uuid = 89cb7e3c-c0cc-4a42-8013-aef1f692b80b
> logstart = 32772
> rootino = 256
> rbmino = 257
> rsumino = 258
> rextsize = 8
> agblocks = 4142
> agcount = 8
> rbmblocks = 0
> logblocks = 1000
> versionnum = 0x2084
> sectsize = 512
> inodesize = 256
> inopblock = 32
> fname = "\000\000\000\000\000\000\000\000\000\000\000\000"
> blocklog = 13
> sectlog = 9
> inodelog = 8
> inopblog = 5
> agblklog = 13
> rextslog = 0
> inprogress = 0
> imax_pct = 25
> icount = 2240
> ifree = 202
> fdblocks = 8423
> frextents = 0
> uquotino = 0
> pquotino = 0
> qflags = 0
> flags = 0
> shared_vn = 0
> inoalignmt = 1
> unit = 0
> width = 0
> dirblklog = 0
> xfs_db: agf 0
> xfs_db: print
> magicnum = 0
> versionnum = 4294967295
> seqno = 0
> length = 0
> bnoroot = 2966461184
> cntroot = 16580607
> bnolevel = 16580607
> cntlevel = 0
> flfirst = 2147483648
> fllast = 16777216
> flcount = 3223092738
> freeblks = 16580607
> longest = 32258
> xfs_db: agi 0
> xfs_db: print
> magicnum = 0x58414749
> versionnum = 1
> seqno = 0
> length = 4142
> count = 448
> root = 3
> level = 1
> freecount = 0
> newino = 122016
> dirino = null
> unlinked[0-63] =
> xfs_db:
>
> > Send all the output to us.
>
> i hope it is ok to post it here - i think it's not too much - but if
> anyone does not like those big debugging mails i can also upload
> them to an ftp-accessible place in the future
>
> > This should tell us where we need to start fixing things first.
>
> > I suspect we will need to find an alpha around someplace to actually
> > get some of this stuff debugged.
>
> i'll update the alpha to the latest kernel now and recheck that the
> problem is still there (you never know :-) ... also keep in mind:
> compiler bugs are possible too ... it's gcc 2.95.2, which has problems
> on intel too but seems to work fine on the ppc - but on the other hand
> this kind of problem does not really look like a compiler bug to me
>
> good luck
>
> t
>
> --
> thomas.graichen@xxxxxxxxxxxxxx
> technical director                                       innominate AG
> clustering & security                             the linux architects
> tel: +49-30-308806-13   fax: -77            http://www.innominate.com

