On Tue, Mar 24, 2015 at 09:50:59PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
>
> Lock ordering for the new mmap lock needs to be:
>
> mmap_sem
> sb_start_pagefault
> i_mmap_lock
> page lock
> <fault processing>
>
> Right now xfs_vm_page_mkwrite gets this the wrong way around.
> While technically it cannot deadlock due to the current freeze
> ordering, it's still a landmine that might explode if we change
> anything in future. Hence we need to nest the locks correctly.
>
> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>
> fs/xfs/xfs_file.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index dc5f609..a4c882e 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -1449,15 +1449,20 @@ xfs_filemap_page_mkwrite(
> struct vm_fault *vmf)
> {
> struct xfs_inode *ip = XFS_I(vma->vm_file->f_mapping->host);
> - int error;
> + int ret;
>
> trace_xfs_filemap_page_mkwrite(ip);
>
> + sb_start_pagefault(VFS_I(ip)->i_sb);
> + file_update_time(vma->vm_file);
> xfs_ilock(ip, XFS_MMAPLOCK_SHARED);
> - error = block_page_mkwrite(vma, vmf, xfs_get_blocks);
> +
> + ret = __block_page_mkwrite(vma, vmf, xfs_get_blocks);
> +
> xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
> + sb_end_pagefault(VFS_I(ip)->i_sb);
>
> - return error;
> + return block_page_mkwrite_return(ret);
> }
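
For anyone else following along: the inversion in the old code comes from
the generic block_page_mkwrite() helper taking the freeze protection
itself, so calling it with XFS_MMAPLOCK_SHARED already held nested
sb_start_pagefault() inside the mmap lock. Roughly (paraphrasing
fs/buffer.c from memory, not a verbatim copy):

	int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
			       get_block_t get_block)
	{
		struct super_block *sb = file_inode(vma->vm_file)->i_sb;
		int ret;

		sb_start_pagefault(sb);		/* freeze protection taken here */
		file_update_time(vma->vm_file);
		ret = __block_page_mkwrite(vma, vmf, get_block);
		sb_end_pagefault(sb);
		return block_page_mkwrite_return(ret);
	}

Open-coding those pieces, as the hunk above does, lets XFS hoist
sb_start_pagefault() and file_update_time() above xfs_ilock(ip,
XFS_MMAPLOCK_SHARED), which gives the ordering stated in the commit
message.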
>
> const struct file_operations xfs_file_operations = {
> --
> 2.0.0
>