
Re: XFS direct IO problem

To: YeYin <eyniy@xxxxxx>
Subject: Re: XFS direct IO problem
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 8 Apr 2015 14:49:55 +1000
Cc: xfs <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <tencent_316A3DE769544D99766FE3F1@xxxxxx>
References: <tencent_316A3DE769544D99766FE3F1@xxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Apr 08, 2015 at 12:21:45PM +0800, YeYin wrote:
> Hi, About 2 months ago, I asked about a problem with XFS, see
> here(http://oss.sgi.com/archives/xfs/2015-02/msg00197.html).
> After that, I switched MySQL to direct IO, see
> here(https://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_flush_method).
> However, I found that MySQL performance is still poor sometimes. I
> used some tools(https://github.com/brendangregg/perf-tools) to
> trace the kernel, and found some problems:


> This will cause bad performance, even with direct IO. I still don't
> understand why truncate_inode_page is not called?

Because the cached page must be outside the range of the direct IO
that is in progress - direct IO only tries to flush pages over the
range it is doing the IO over.

> Every time, after I run this: echo 1 > /proc/sys/vm/drop_caches
> performance immediately improves.

Because that flushes whatever page is in the cache. Can you identify
what offset that cached page is at? Tracing the xfs events will tell
you what pages that operation invalidates on each inode, and knowing
the offset may tell us why that page is not getting flushed.

Alternatively, write a simple C program that demonstrates the same
problem so we can reproduce it easily, fix the problem, and turn it
into a regression test....


Dave Chinner
