
TAKE 981498 - Use KM_NOFS for debug trace buffers

To: sgi.bugs.xfs@xxxxxxxxxxxx, xfs@xxxxxxxxxxx
Subject: TAKE 981498 - Use KM_NOFS for debug trace buffers
From: lachlan@xxxxxxx (Lachlan McIlroy)
Date: Wed, 6 Aug 2008 16:15:53 +1000 (EST)
Sender: xfs-bounce@xxxxxxxxxxx
Use KM_NOFS for debug trace buffers

Use KM_NOFS to prevent recursion back into the filesystem, which can
cause deadlocks.

In the case of xfs_iread() we hold the lock on the inode cluster buffer
while allocating memory for the trace buffers.  If that allocation
recurses back into XFS to flush data, it may need a transaction to
allocate extents, which in turn needs log space.  This can deadlock
against the xfsaild thread, which cannot push the tail of the log
because it is trying to acquire the same inode cluster buffer lock.
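
For illustration only, a minimal sketch of the pattern in xfs_iread()
(identifiers such as ktrace_alloc() and INODE_TRACE_SIZE follow the
debug-trace code of this era and should be treated as assumptions;
surrounding context is elided):

    /* Before: a blocking allocation made while the inode cluster
     * buffer is locked; memory reclaim may re-enter XFS and deadlock. */
    ip->i_trace = ktrace_alloc(INODE_TRACE_SIZE, KM_SLEEP);

    /* After: KM_NOFS clears __GFP_FS, so the allocator will not write
     * back dirty filesystem data to satisfy the request and cannot
     * recurse back into XFS while we hold the buffer lock. */
    ip->i_trace = ktrace_alloc(INODE_TRACE_SIZE, KM_NOFS);

The same substitution applies to the other debug trace buffer
allocations touched by this change (log, buf_item, dquot, buf and
filestream trace buffers).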

Date:  Wed Aug  6 16:15:14 AEST 2008
Workarea:  redback.melbourne.sgi.com:/home/lachlan/isms/2.6.x-mm
Inspected by:  david@xxxxxxxxxxxxx
Author:  lachlan

The following file(s) were checked into:
  longdrop.melbourne.sgi.com:/isms/linux/2.6.x-xfs-melb


Modid:  xfs-linux-melb:xfs-kern:31838a
fs/xfs/xfs_log.c - 1.362 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_log.c.diff?r1=text&tr1=1.362&r2=text&tr2=1.361&f=h
fs/xfs/xfs_buf_item.c - 1.168 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_buf_item.c.diff?r1=text&tr1=1.168&r2=text&tr2=1.167&f=h
fs/xfs/xfs_inode.c - 1.518 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_inode.c.diff?r1=text&tr1=1.518&r2=text&tr2=1.517&f=h
fs/xfs/quota/xfs_dquot.c - 1.38 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/quota/xfs_dquot.c.diff?r1=text&tr1=1.38&r2=text&tr2=1.37&f=h
fs/xfs/linux-2.6/xfs_buf.c - 1.262 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/linux-2.6/xfs_buf.c.diff?r1=text&tr1=1.262&r2=text&tr2=1.261&f=h
fs/xfs/xfs_filestream.c - 1.9 - changed
http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-linux/xfs_filestream.c.diff?r1=text&tr1=1.9&r2=text&tr2=1.8&f=h
        - Use KM_NOFS for debug trace buffers



