
To: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
Subject: Re: [PATCH] xfs: clarify lock ordering comment
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 9 Oct 2015 09:24:50 +1100
Cc: linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <1444341481-14139-1-git-send-email-ross.zwisler@xxxxxxxxxxxxxxx>
References: <1444341481-14139-1-git-send-email-ross.zwisler@xxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Oct 08, 2015 at 03:58:01PM -0600, Ross Zwisler wrote:
> Replace "i_mmap_lock" with "mmap_lock" in the lock ordering comment above
> xfs_filemap_page_mkwrite().  The lock in question is actually the
> XFS_MMAPLOCK_SHARED rw_semaphore (no leading "i"), and this comment is

struct xfs_inode {
....
                mrlock_t                i_mmaplock;     /* inode mmap IO lock */
....

> easily confused with the "i_mmap_lock_[read|write]" functions that operate
> on struct address_space->i_mmap_rwsem.  This clarification is especially
> important because address_space->i_mmap_rwsem is taken down in the DAX
> code as part of this fault path.
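
For comparison, the i_mmap_lock_[read|write] helpers named above are
just thin wrappers around the address_space rwsem - a paraphrased
sketch of include/linux/fs.h, not a verbatim quote:

	static inline void i_mmap_lock_write(struct address_space *mapping)
	{
		down_write(&mapping->i_mmap_rwsem);
	}

	static inline void i_mmap_lock_read(struct address_space *mapping)
	{
		down_read(&mapping->i_mmap_rwsem);
	}

Completely different lock to the XFS i_mmaplock quoted above.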
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
> ---
>  fs/xfs/xfs_file.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index f429662..b190033 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1477,7 +1477,7 @@ xfs_file_llseek(
>   *
>   * mmap_sem (MM)
>   *   sb_start_pagefault(vfs, freeze)
> - *     i_mmap_lock (XFS - truncate serialisation)
> + *     mmap_lock (XFS - truncate serialisation)

As per above, the XFS lock is "i_mmaplock"...

The lock names are annotated with the subsystem each lock belongs to
precisely to avoid this confusion. Along with the lock ordering
(inside sb_start_pagefault), this should indicate that it's not the
"i_mmap_lock (MM - vma serialisation)" lock... ;)

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
