
[xfs-masters] [Bug 8664] New: Circular lock

To: xfs-masters@xxxxxxxxxxx
Subject: [xfs-masters] [Bug 8664] New: Circular lock
From: bugme-daemon@xxxxxxxxxxxxxxxxxxx
Date: Sat, 23 Jun 2007 04:48:47 -0700 (PDT)
Reply-to: xfs-masters@xxxxxxxxxxx
Sender: xfs-masters-bounce@xxxxxxxxxxx
http://bugzilla.kernel.org/show_bug.cgi?id=8664

           Summary: Circular lock
           Product: File System
           Version: 2.5
     KernelVersion: 2.6.22-rc5
          Platform: All
        OS/Version: Linux
              Tree: Mainline
            Status: NEW
          Severity: normal
          Priority: P1
         Component: XFS
        AssignedTo: xfs-masters@xxxxxxxxxxx
        ReportedBy: mrb74@xxxxxx


Distribution:
Debian Sid

Problem Description:
When booting the machine with the new kernel (2.6.22-rc5), I get the following output in the log:

Jun 23 13:17:33 localhost kernel: [   30.380001] ohci1394: fw-host0: SelfID received outside of bus reset sequence
Jun 23 13:17:33 localhost kernel: [   30.421509] Adding 803208k swap on /dev/hda5.  Priority:-1 extents:1 across:803208k
Jun 23 13:17:33 localhost kernel: [   30.600782] 
Jun 23 13:17:33 localhost kernel: [   30.600785] =======================================================
Jun 23 13:17:33 localhost kernel: [   30.600902] [ INFO: possible circular locking dependency detected ]
Jun 23 13:17:33 localhost kernel: [   30.600962] 2.6.22-rc5 #6
Jun 23 13:17:33 localhost kernel: [   30.601018] -------------------------------------------------------
Jun 23 13:17:33 localhost kernel: [   30.601077] mount/2454 is trying to acquire lock:
Jun 23 13:17:33 localhost kernel: [   30.601135]  (&(&ip->i_lock)->mr_lock/1){--..}, at: [<c022ee42>] xfs_ilock+0x82/0xc0
Jun 23 13:17:33 localhost kernel: [   30.601429] 
Jun 23 13:17:33 localhost kernel: [   30.601430] but task is already holding lock:
Jun 23 13:17:33 localhost kernel: [   30.601539]  (&(&ip->i_lock)->mr_lock){----}, at: [<c022ee42>] xfs_ilock+0x82/0xc0
Jun 23 13:17:33 localhost kernel: [   30.601783] 
Jun 23 13:17:33 localhost kernel: [   30.601784] which lock already depends on the new lock.
Jun 23 13:17:33 localhost kernel: [   30.601785] 
Jun 23 13:17:33 localhost kernel: [   30.601947] 
Jun 23 13:17:33 localhost kernel: [   30.601948] the existing dependency chain (in reverse order) is:
Jun 23 13:17:33 localhost kernel: [   30.602059] 
Jun 23 13:17:33 localhost kernel: [   30.602060] -> #1 (&(&ip->i_lock)->mr_lock){----}:
Jun 23 13:17:33 localhost kernel: [   30.602299]        [<c013ada1>] __lock_acquire+0xdb1/0xf80
Jun 23 13:17:33 localhost kernel: [   30.602707]        [<c013afc5>] lock_acquire+0x55/0x70
Jun 23 13:17:33 localhost kernel: [   30.603112]        [<c0131f89>] down_write_nested+0x29/0x50
Jun 23 13:17:33 localhost kernel: [   30.603519]        [<c022ee42>] xfs_ilock+0x82/0xc0
Jun 23 13:17:33 localhost kernel: [   30.603924]        [<c022f937>] xfs_iget_core+0x377/0x5a0
Jun 23 13:17:33 localhost kernel: [   30.604329]        [<c022fc1e>] xfs_iget+0xbe/0x120
Jun 23 13:17:33 localhost kernel: [   30.604733]        [<c02492d3>] xfs_trans_iget+0xf3/0x160
Jun 23 13:17:33 localhost kernel: [   30.605137]        [<c023314e>] xfs_ialloc+0xae/0x500
Jun 23 13:17:33 localhost kernel: [   30.605542]        [<c0249e8c>] xfs_dir_ialloc+0x6c/0x2a0
Jun 23 13:17:33 localhost kernel: [   30.605946]        [<c02504b5>] xfs_create+0x335/0x630
Jun 23 13:17:33 localhost kernel: [   30.606350]        [<c025bb1e>] xfs_vn_mknod+0x20e/0x320
Jun 23 13:17:33 localhost kernel: [   30.606755]        [<c025bc62>] xfs_vn_create+0x12/0x20
Jun 23 13:17:33 localhost kernel: [   30.607160]        [<c0171daa>] vfs_create+0xaa/0xf0
Jun 23 13:17:33 localhost kernel: [   30.607567]        [<c0174fcf>] open_namei+0x5cf/0x630
Jun 23 13:17:33 localhost kernel: [   30.607971]        [<c016883c>] do_filp_open+0x2c/0x50
Jun 23 13:17:33 localhost kernel: [   30.608376]        [<c01688a7>] do_sys_open+0x47/0xe0
Jun 23 13:17:33 localhost kernel: [   30.608780]        [<c016897c>] sys_open+0x1c/0x20
Jun 23 13:17:33 localhost kernel: [   30.609183]        [<c0104128>] syscall_call+0x7/0xb
Jun 23 13:17:33 localhost kernel: [   30.609588]        [<ffffffff>] 0xffffffff
Jun 23 13:17:33 localhost kernel: [   30.609993] 
Jun 23 13:17:33 localhost kernel: [   30.609994] -> #0 (&(&ip->i_lock)->mr_lock/1){--..}:
Jun 23 13:17:33 localhost kernel: [   30.610277]        [<c013ac1d>] __lock_acquire+0xc2d/0xf80
Jun 23 13:17:33 localhost kernel: [   30.610682]        [<c013afc5>] lock_acquire+0x55/0x70
Jun 23 13:17:33 localhost kernel: [   30.611086]        [<c0131f89>] down_write_nested+0x29/0x50
Jun 23 13:17:33 localhost kernel: [   30.611492]        [<c022ee42>] xfs_ilock+0x82/0xc0
Jun 23 13:17:33 localhost kernel: [   30.611896]        [<c024d87f>] xfs_lock_inodes+0x14f/0x170
Jun 23 13:17:33 localhost kernel: [   30.612300]        [<c02509c9>] xfs_link+0x219/0x450
Jun 23 13:17:33 localhost kernel: [   30.612704]        [<c025b711>] xfs_vn_link+0x41/0x90
Jun 23 13:17:33 localhost kernel: [   30.613108]        [<c01719b3>] vfs_link+0xf3/0x150
Jun 23 13:17:33 localhost kernel: [   30.613512]        [<c01747bc>] sys_linkat+0xdc/0x100
Jun 23 13:17:33 localhost kernel: [   30.613916]        [<c0174810>] sys_link+0x30/0x40
Jun 23 13:17:33 localhost kernel: [   30.619081]        [<c0104128>] syscall_call+0x7/0xb
Jun 23 13:17:33 localhost kernel: [   30.619484]        [<ffffffff>] 0xffffffff
Jun 23 13:17:33 localhost kernel: [   30.619887] 
Jun 23 13:17:33 localhost kernel: [   30.619888] other info that might help us debug this:
Jun 23 13:17:33 localhost kernel: [   30.619889] 
Jun 23 13:17:33 localhost kernel: [   30.620052] 3 locks held by mount/2454:
Jun 23 13:17:33 localhost kernel: [   30.620108]  #0:  (&inode->i_mutex/1){--..}, at: [<c01725a5>] lookup_create+0x25/0x90
Jun 23 13:17:33 localhost kernel: [   30.620438]  #1:  (&inode->i_mutex){--..}, at: [<c04a3978>] mutex_lock+0x8/0x10
Jun 23 13:17:33 localhost kernel: [   30.620727]  #2:  (&(&ip->i_lock)->mr_lock){----}, at: [<c022ee42>] xfs_ilock+0x82/0xc0
Jun 23 13:17:33 localhost kernel: [   30.621014] 
Jun 23 13:17:33 localhost kernel: [   30.621015] stack backtrace:
Jun 23 13:17:33 localhost kernel: [   30.621124]  [<c010506a>] show_trace_log_lvl+0x1a/0x30
Jun 23 13:17:33 localhost kernel: [   30.621227]  [<c0105db2>] show_trace+0x12/0x20
Jun 23 13:17:33 localhost kernel: [   30.621329]  [<c0105e65>] dump_stack+0x15/0x20
Jun 23 13:17:33 localhost kernel: [   30.621430]  [<c0138d9e>] print_circular_bug_tail+0x6e/0x80
Jun 23 13:17:33 localhost kernel: [   30.621533]  [<c013ac1d>] __lock_acquire+0xc2d/0xf80
Jun 23 13:17:33 localhost kernel: [   30.621636]  [<c013afc5>] lock_acquire+0x55/0x70
Jun 23 13:17:33 localhost kernel: [   30.621738]  [<c0131f89>] down_write_nested+0x29/0x50
Jun 23 13:17:33 localhost kernel: [   30.621840]  [<c022ee42>] xfs_ilock+0x82/0xc0
Jun 23 13:17:33 localhost kernel: [   30.621942]  [<c024d87f>] xfs_lock_inodes+0x14f/0x170
Jun 23 13:17:33 localhost kernel: [   30.622044]  [<c02509c9>] xfs_link+0x219/0x450
Jun 23 13:17:33 localhost kernel: [   30.622146]  [<c025b711>] xfs_vn_link+0x41/0x90
Jun 23 13:17:33 localhost kernel: [   30.622247]  [<c01719b3>] vfs_link+0xf3/0x150
Jun 23 13:17:33 localhost kernel: [   30.622349]  [<c01747bc>] sys_linkat+0xdc/0x100
Jun 23 13:17:33 localhost kernel: [   30.622450]  [<c0174810>] sys_link+0x30/0x40
Jun 23 13:17:33 localhost kernel: [   30.622551]  [<c0104128>] syscall_call+0x7/0xb
Jun 23 13:17:33 localhost kernel: [   30.622652]  =======================
Jun 23 13:17:33 localhost kernel: [   30.653705] ieee1394: Host added: ID:BUS[0-00:1023]  GUID[000000508de69322]
Jun 23 13:17:33 localhost kernel: [   31.791302] XFS mounting filesystem hda6
Jun 23 13:17:33 localhost kernel: [   31.856657] Starting XFS recovery on filesystem: hda6 (logdev: internal)
Jun 23 13:17:33 localhost kernel: [   32.205165] Ending XFS recovery on filesystem: hda6 (logdev: internal)
Jun 23 13:17:33 localhost kernel: [   32.382929] XFS mounting filesystem hda7
Jun 23 13:17:33 localhost kernel: [   32.842131] Ending clean XFS mount for filesystem: hda7
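
Reading the report: the two dependency stacks show the same pair of XFS inode-lock annotation classes (mr_lock and the nested mr_lock/1, i.e. xfs_ilock() calling down_write_nested() with and without a subclass) being acquired in opposite orders, once on the file-creation path (xfs_create -> xfs_ialloc -> xfs_trans_iget -> xfs_iget_core -> xfs_ilock) and once on the hard-link path (xfs_link -> xfs_lock_inodes -> xfs_ilock). Taking the same two lock classes in opposite orders on two code paths is exactly what lockdep reports as a possible circular (ABBA) dependency.

Below is a minimal, self-contained user-space sketch of that generic ABBA pattern, for illustration only; it is not the XFS code from the trace, the lock names parent_ilock and child_ilock are hypothetical, and the comments only map each path loosely onto the call chains above.

/*
 * Illustrative sketch of the AB-BA lock ordering that lockdep flags above.
 * Hypothetical lock names; not XFS code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t parent_ilock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t child_ilock  = PTHREAD_RWLOCK_INITIALIZER;

/* Path A (compare xfs_create -> xfs_ialloc -> xfs_iget -> xfs_ilock):
 * the directory's lock is taken first, then the new inode's lock. */
static void create_path(void)
{
        pthread_rwlock_wrlock(&parent_ilock);
        pthread_rwlock_wrlock(&child_ilock);
        printf("create path: parent -> child\n");
        pthread_rwlock_unlock(&child_ilock);
        pthread_rwlock_unlock(&parent_ilock);
}

/* Path B (compare xfs_link -> xfs_lock_inodes -> xfs_ilock): the two
 * inodes are locked in a different order than on path A. */
static void link_path(void)
{
        pthread_rwlock_wrlock(&child_ilock);
        pthread_rwlock_wrlock(&parent_ilock);
        printf("link path:   child -> parent\n");
        pthread_rwlock_unlock(&parent_ilock);
        pthread_rwlock_unlock(&child_ilock);
}

int main(void)
{
        /* The two orders run back to back in one thread, so this sketch
         * cannot deadlock by itself; the point of the lockdep report is
         * that two tasks running these paths concurrently could each end
         * up waiting for the lock the other already holds. */
        create_path();
        link_path();
        return 0;
}

Built with e.g. "gcc -pthread abba.c" (hypothetical file name), the sketch just prints the two acquisition orders; it only demonstrates the ordering pattern, not an actual hang.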


-- 
Configure bugmail: http://bugzilla.kernel.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

