[PATCH v2] xfs: fix possible overflow in xfs_ioc_trim()
Lukas Czerner
lczerner at redhat.com
Wed Sep 7 05:05:14 CDT 2011
On Tue, 6 Sep 2011, Christoph Hellwig wrote:
> On Tue, Sep 06, 2011 at 05:29:37PM +0200, Lukas Czerner wrote:
> > In xfs_ioc_trim() it is possible that start + len might overflow. Fix it
> > by decreasing len so that start + len equals the file system size in the
> > worst case.
> >
> > Signed-off-by: Lukas Czerner <lczerner at redhat.com>
>
> > @@ -146,6 +146,7 @@ xfs_ioc_trim(
> >  	unsigned int		granularity = q->limits.discard_granularity;
> >  	struct fstrim_range	range;
> >  	xfs_fsblock_t		start, len, minlen;
> > +	xfs_fsblock_t		max_blks = mp->m_sb.sb_dblocks;
> >  	xfs_agnumber_t		start_agno, end_agno, agno;
> >  	__uint64_t		blocks_trimmed = 0;
> >  	int			error, last_error = 0;
> > @@ -171,7 +172,8 @@ xfs_ioc_trim(
> >  	start_agno = XFS_FSB_TO_AGNO(mp, start);
> >  	if (start_agno >= mp->m_sb.sb_agcount)
> >  		return -XFS_ERROR(EINVAL);
> > -
> > +	if (len > max_blks)
> > +		len = max_blks - start;
>
> Is this really the correct check?
>
> Shouldn't it be
>
> 	if (start + len > max_blks)
> 		len = max_blks - start;
>
> I'd also just use the mp->m_sb.sb_dblocks value directly instead
> of assigning it to a local variable.
>
Agh, you're right. I was a bit too hasty, I guess. I thought that

	if (start_agno >= mp->m_sb.sb_agcount)
		return -XFS_ERROR(EINVAL);

would cover us against an unreasonably big start, but if the file
system has a really huge number of AGs then it will fail to prevent the
overflow. I am not sure whether that can actually happen, but what you
proposed is definitely better.
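
For completeness, here is a minimal user-space sketch of the clamp you
suggest (the clamp_trim_len() helper and the sample numbers below are
made up for illustration only, they are not part of the patch):

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Clamp a trim request [start, start + len) to the number of
	 * blocks in the file system.  Assumes the caller has already
	 * verified that start lies inside the file system, so the
	 * subtraction cannot underflow.  start and len are block counts
	 * derived from byte offsets, so the addition itself cannot wrap
	 * a 64-bit value.
	 */
	static uint64_t
	clamp_trim_len(uint64_t start, uint64_t len, uint64_t max_blks)
	{
		if (start + len > max_blks)
			len = max_blks - start;
		return len;
	}

	int main(void)
	{
		/* trim request running past the end of a 1000000-block fs */
		printf("%llu\n", (unsigned long long)
		       clamp_trim_len(900000, 200000, 1000000));
		return 0;
	}

With start validated against the file system size first, the clamped
len can never push start + len past sb_dblocks.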
Thanks!
-Lukas