To: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Subject: Re: [PATCH v6 5/8] fs: xfs: replace BIO_MAX_SECTORS with BIO_MAX_PAGES
From: Ming Lei <ming.lei@xxxxxxxxxxxxx>
Date: Thu, 2 Jun 2016 11:32:51 +0800
Cc: Jens Axboe <axboe@xxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, Dave Chinner <david@xxxxxxxxxxxxx>, "supporter:XFS FILESYSTEM" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20160601134816.GA20963@xxxxxxxxxxxxx>
References: <1464615294-9946-1-git-send-email-ming.lei@xxxxxxxxxxxxx> <1464615294-9946-6-git-send-email-ming.lei@xxxxxxxxxxxxx> <20160601134816.GA20963@xxxxxxxxxxxxx>
On Wed, Jun 1, 2016 at 9:48 PM, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> On Mon, May 30, 2016 at 09:34:33PM +0800, Ming Lei wrote:
>> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
>> index e71cfbd..e5d713b 100644
>> --- a/fs/xfs/xfs_buf.c
>> +++ b/fs/xfs/xfs_buf.c
>> @@ -1157,9 +1157,7 @@ xfs_buf_ioapply_map(
>>
>>  next_chunk:
>>       atomic_inc(&bp->b_io_remaining);
>> -     nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
>> -     if (nr_pages > total_nr_pages)
>> -             nr_pages = total_nr_pages;
>> +     nr_pages = min(total_nr_pages, BIO_MAX_PAGES);
>>
>>       bio = bio_alloc(GFP_NOIO, nr_pages);
>
> While I think this is a useful cleanup on its own, I think
> you'd make everyone's life easier if bio_alloc() simply clamped
> the passed nr_pages value down to the maximum allowed.
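
Right, and for reference, the expression being replaced reduces to
BIO_MAX_PAGES exactly, assuming the 4.x-era block layer definitions
(BIO_MAX_SIZE = BIO_MAX_PAGES << PAGE_SHIFT, BIO_MAX_SECTORS =
BIO_MAX_SIZE >> 9) and XFS's BBSHIFT of 9 (512-byte basic blocks):

	BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT)
		= (BIO_MAX_PAGES << (PAGE_SHIFT - 9)) >> (PAGE_SHIFT - 9)
		= BIO_MAX_PAGES

so min(total_nr_pages, BIO_MAX_PAGES) is behavior-preserving.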

As for clamping inside bio_alloc(): yes, that looks like a good
cleanup, but it needs care, because the passed 'nr_pages' can still
be used after bio_alloc() returns in the calling function, and that
usage is easy to find.
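
For example, the loop right after the allocation keeps using
nr_pages as its bound (condensed from xfs_buf_ioapply_map(); the
exact tree may differ slightly):

	bio = bio_alloc(GFP_NOIO, nr_pages);
	bio->bi_iter.bi_sector = sector;
	bio->bi_end_io = xfs_buf_bio_end_io;
	bio->bi_private = bp;

	/* 'nr_pages' bounds the loop after bio_alloc() has returned */
	for (; size && nr_pages; nr_pages--, page_index++) {
		int nbytes = min_t(int, PAGE_SIZE - offset, size);

		if (bio_add_page(bio, bp->b_pages[page_index], nbytes,
				 offset) < nbytes)
			break;

		offset = 0;
		sector += BTOBB(nbytes);
		size -= nbytes;
	}

If bio_alloc() clamped internally, the caller's local nr_pages would
no longer match bio->bi_max_vecs, so every such caller would need
auditing (or to re-read the limit from the returned bio).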

So we can do that in a separate patchset rather than in this one.
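
If someone picks it up, a minimal sketch of the bio_alloc() side
could look like this (names as in the 4.x block layer; hypothetical
and untested):

	static inline struct bio *bio_alloc(gfp_t gfp_mask,
					    unsigned int nr_iovecs)
	{
		/* hypothetical: silently cap at the per-bio limit */
		if (nr_iovecs > BIO_MAX_PAGES)
			nr_iovecs = BIO_MAX_PAGES;

		return bio_alloc_bioset(gfp_mask, nr_iovecs, fs_bio_set);
	}

but the caller audit above is the real work, which is why it belongs
in its own series.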

Thanks,
