sleeps and waits during io_submit

Avi Kivity avi at scylladb.com
Mon Nov 30 08:29:13 CST 2015



On 11/30/2015 04:10 PM, Brian Foster wrote:
>> 2) xfs_buf_lock -> down
>> This is one I truly don't understand. What can be causing contention
>> in this lock? We never have two different cores writing to the same
>> buffer, nor should we have the same core doing so.
>>
> This is not one single lock. An XFS buffer is the data structure used to
> modify/log/read-write metadata on-disk and each buffer has its own lock
> to prevent corruption. Buffer lock contention is possible because the
> filesystem has bits of "global" metadata that has to be updated via
> buffers.
>
> For example, usually one has multiple allocation groups to maximize
> parallelism, but we still have per-ag metadata that has to be tracked
> globally with respect to each AG (e.g., free space trees, inode
> allocation trees, etc.). Any operation that affects this metadata (e.g.,
> block/inode allocation) has to lock the agi/agf buffers along with any
> buffers associated with the modified btree leaf/node blocks, etc.
>
> One example in your attached perf traces has several threads looking to
> acquire the AGF, which is a per-AG data structure for tracking free
> space in the AG. One thread looks like the inode eviction case noted
> above (freeing blocks), another looks like a file truncate (also freeing
> blocks), and yet another is a block allocation due to a direct I/O
> write. Were any of these operations directed to an inode in a separate
> AG, they would be able to proceed in parallel (but I believe they would
> still hit the same codepaths as far as perf can tell).
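For context, the direct I/O write in that trace is just the ordinary libaio
submission path.  A minimal sketch of such a submission follows; the file
name, the 4k size and offset 0 are made up for illustration.  With direct
I/O on XFS, if the pwrite covers a range with no blocks allocated yet, the
allocation (and the AGF/AGI buffer locking that goes with it) happens during
submission, which is why io_submit() itself is where the waiting shows up.

/* Minimal libaio direct write; build with -laio. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    void *buf;
    int fd;

    if (io_setup(64, &ctx) < 0)
        return 1;

    fd = open("/data/testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0)
        return 1;

    /* O_DIRECT requires an aligned buffer. */
    if (posix_memalign(&buf, 4096, 4096))
        return 1;
    memset(buf, 0, 4096);

    /* If this range is unallocated, the block allocation -- and the
     * AG buffer locks it takes -- happens here, inside io_submit(),
     * not in some background worker. */
    io_prep_pwrite(&cb, fd, buf, 4096, 0);
    if (io_submit(ctx, 1, cbs) != 1)
        return 1;

    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)
        return 1;

    printf("res=%ld\n", (long)ev.res);
    close(fd);
    io_destroy(ctx);
    return 0;
}
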

I guess we can mitigate (but not eliminate) this by creating more 
allocation groups.  What is the default value for agsize?  Are there any 
downsides to decreasing it, besides consuming more memory?
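To make the parallelism point concrete, here is a toy user-space model of
the contention, not the kernel code: struct ag, agf_lock and alloc_blocks
are invented names, with one mutex per AG standing in for the AGF buffer
lock that every block alloc/free in that AG must take.  Point both workers
at the same AG and they serialize on one lock; point them at different AGs
and they run fully in parallel, which is what a larger agcount buys.

#include <pthread.h>
#include <stdio.h>

#define NR_AGS 4

struct ag {
    pthread_mutex_t agf_lock;   /* stands in for the AGF buffer lock */
    long free_blocks;
};

static struct ag ags[NR_AGS];

static void alloc_blocks(int agno, long n)
{
    pthread_mutex_lock(&ags[agno].agf_lock);    /* serializes per AG */
    ags[agno].free_blocks -= n;
    pthread_mutex_unlock(&ags[agno].agf_lock);
}

static void *worker(void *arg)
{
    int agno = *(int *)arg;

    for (int i = 0; i < 1000000; i++)
        alloc_blocks(agno, 1);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    /* { 0, 0 }: both workers hammer one AG and fight over its lock.
     * { 0, 1 }: different AGs, no contention at all.                */
    int agnos[2] = { 0, 0 };

    for (int i = 0; i < NR_AGS; i++) {
        pthread_mutex_init(&ags[i].agf_lock, NULL);
        ags[i].free_blocks = 1L << 20;
    }
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &agnos[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    printf("ag0 free: %ld\n", ags[0].free_blocks);
    return 0;
}
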

Are those locks held around I/O, or just CPU operations, or a mix?


