
To: Steve Lord <lord@xxxxxxx>
Subject: Re: xfs_repair dumps core on damaged filesystem (was: Re: XFS assertion failed: vp->v_bh.bh_first != NULL)
From: Peter.Kelemen@xxxxxxx
Date: Thu, 7 Sep 2000 11:58:53 +0200
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <200009062229.RAA25306@jen.americas.sgi.com>; from lord@sgi.com on Wed, Sep 06, 2000 at 05:29:09PM -0500
Organization: CERN European Laboratory for Particle Physics, Switzerland
References: <200009062229.RAA25306@jen.americas.sgi.com>
Reply-to: Peter.Kelemen@xxxxxxx
Sender: owner-linux-xfs@xxxxxxxxxxx
User-agent: Mutt/1.2.5i
On Wed, 2000-09-06 17:29:09 -0500, Steve Lord wrote:

Steve,

> What sort of load were you putting on the system, also what is
> your configuration:

This is the same machine on which I experienced the vnode crash.
I was running 8 (eight) parallel bonnie++ processes with 3GB data
files.
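
Roughly, the load was along these lines (the per-process directory
names and the exact bonnie++ flags below are illustrative, not copied
verbatim from my scripts):

    # run eight bonnie++ instances in parallel, each on a 3GB data set
    for i in 1 2 3 4 5 6 7 8; do
        mkdir -p /shift/pcrd18/data02/bonnie.$i
        bonnie++ -d /shift/pcrd18/data02/bonnie.$i -s 3072 -u root \
            > /tmp/bonnie.$i.log 2>&1 &
    done
    wait    # let all eight runs finish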

Dual Pentium III 500MHz, 256MB RAM, 4x37.5GB IBM DeskStar disks.
One of the disks contains one big 36GB XFS filesystem, created
with the default values of mkfs.xfs (a sketch follows the xfs_info
output below).

[root@pcrd18 /root]# xfs_info /shift/pcrd18/data02
meta-data=/shift/pcrd18/data02   isize=256    agcount=35, agsize=261630 blks
data     =                       bsize=4096   blocks=9157042, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=1200
realtime =none                   extsz=65536  blocks=0, rtextents=0
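
For completeness, the filesystem was created and mounted with nothing
more than the defaults, along the lines of the sketch below (the
device name is a placeholder, not the actual partition on pcrd18):

    # plain mkfs.xfs run, no options, i.e. all defaults
    mkfs.xfs /dev/hdc1
    mount -t xfs /dev/hdc1 /shift/pcrd18/data02
    xfs_info /shift/pcrd18/data02    # prints the output quoted above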

> The vnode crash is being worked on, all I have managed to do so
> far is make it harder to hit. It is basically top of the list at
> the moment.

I'll update my CVS checkout soon to pick up the changes (I'm
currently on the cvs20000829 snapshot).
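
(For the record, the update is just a plain cvs run inside the
existing working copy; the path below is a placeholder for wherever
the snapshot is checked out:)

    cd /usr/src/xfs-cvs            # placeholder for my checkout
    cvs -q update -d -P            # pull in new dirs, prune empty ones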

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@xxxxxxx
.+'         `+...+'         `+...+'         `+...+'         `+...+'
