
ADD 802017 - ASSERT fail in xlog_get_bp on small mem machine

To: dxm@xxxxxxxxxxxx
Subject: ADD 802017 - ASSERT fail in xlog_get_bp on small mem machine
From: pv@xxxxxxxxxxxxx (lord@xxxxxxx)
Date: Mon, 18 Sep 2000 07:42:08 -0700 (PDT)
Cc: linux-xfs@xxxxxxxxxxx
Reply-to: sgi.bugs.xfs@xxxxxxxxxxxxxxxxx
Sender: owner-linux-xfs@xxxxxxxxxxx
Webexec: webpvupdate,pvincident
Webpv: jen.americas.sgi.com
View Incident: 
http://co-op.engr.sgi.com/BugWorks/code/bwxquery.cgi?search=Search&wlong=1&view_type=Bug&wi=802017

 Status : open                         Priority : 3                         
 Assigned Engineer : dxm               Submitter : dxm                      
*Modified User : lord                 *Modified User Domain : sgi.com       
*Description :
I haven't seen this problem for ages on my 64Mb crash box,
but the problem is still there.

I installed XFS on my home machine last night and was very
happy with its performance (P100, 32Mb RAM, 32Gb disk) until 
I tried to cleanly remount my XFS partition and tripped an 
ASSERT in xlog_get_bp.

My home machine is very tight on memory, but I don't think
it's an unreasonable machine to try to run XFS on. Unfortunately,

.....


==========================
ADDITIONAL INFORMATION (ADD)
From: lord@xxxxxxx (BugWorks)
Date: Sep 18 2000 07:42:08AM
==========================

Ideally we need a better way for recovery to run without
chewing up large chunks of memory. Without looking at the
code, I bet this is the case where we ask for a 128K
buffer to read log space into. That is actually half the amount
Irix would use at this point. A single buffer is requested
that covers the maximum number of iclogs times the maximum
size of an iclog. On Irix we can have eight 32K iclogs; on
Linux this was reduced to four.
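
Roughly, the sizing works out like this. The constant and
helper names below are illustrative, from memory, not quoted
from the tree; the real allocation happens in xlog_get_bp():

    /*
     * Sketch of the recovery buffer sizing as I remember it.
     */
    #define ICLOG_SIZE_MAX      (32 * 1024)     /* 32K per iclog      */
    #define ICLOG_COUNT_IRIX    8               /* iclogs on Irix     */
    #define ICLOG_COUNT_LINUX   4               /* reduced on Linux   */

    static int
    xlog_recover_bufsize(int max_iclogs)
    {
            /*
             * One contiguous buffer covering the largest possible
             * run of in-core log writes:
             *   Irix:  8 * 32K = 256K
             *   Linux: 4 * 32K = 128K
             * On a 32Mb box a 128K contiguous allocation can easily
             * fail, which is what trips the ASSERT in xlog_get_bp().
             */
            return max_iclogs * ICLOG_SIZE_MAX;
    }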

So what we really need here is a change in the algorithm used
in xlog_find_zeroed() so that disk I/O can be done in smaller
chunks, rather than reading one large chunk and processing it
all at once.
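
Something along these lines, perhaps. This is only a sketch:
the types and names are approximations, and xlog_bread_chunk()
is a hypothetical helper, not an existing xlog interface:

    /*
     * Rough shape of a chunked recovery scan.
     */
    #define XLOG_RECOVER_CHUNK_BB   32      /* 16K in 512-byte blocks */

    static int
    xlog_scan_in_chunks(xlog_t *log, xfs_daddr_t start, int nbblks)
    {
            char    *buf;
            int     done, len, error = 0;

            buf = kmalloc(XLOG_RECOVER_CHUNK_BB << BBSHIFT, GFP_KERNEL);
            if (!buf)
                    return ENOMEM;  /* small allocation, but still check */

            for (done = 0; done < nbblks; done += len) {
                    len = nbblks - done;
                    if (len > XLOG_RECOVER_CHUNK_BB)
                            len = XLOG_RECOVER_CHUNK_BB;
                    error = xlog_bread_chunk(log, start + done, len, buf);
                    if (error)
                            break;
                    /* ... examine this chunk for the zeroed region ... */
            }
            kfree(buf);
            return error;
    }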

An interim fix would be to make the mount fail cleanly.
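
Not having checked what the callers do, something like this
would do it: let xlog_get_bp() return NULL on allocation
failure instead of ASSERTing, and have recovery turn that into
an errno. The call site and signature here are approximate:

    bp = xlog_get_bp(num_bblks, log->l_mp);
    if (bp == NULL) {
            cmn_err(CE_WARN,
                    "XFS: no memory for log recovery buffer");
            return ENOMEM;  /* mount fails cleanly, no ASSERT */
    }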
