
To: Shrinand Javadekar <shrinand@xxxxxxxxxxxxxx>
Subject: Re: Inode and dentry cache behavior
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 24 Apr 2015 16:15:54 +1000
Cc: xfs@xxxxxxxxxxx
In-reply-to: <CABppvi7+Mu78FAM75YvJvekX2CHtKk4yeMrU7j35fvvWRb923Q@xxxxxxxxxxxxxx>
References: <CABppvi55C+vE7Ei8u=_ntC_heDQb4HwUcKom-_9hGkunk84Sfw@xxxxxxxxxxxxxx> <20150423224324.GM15810@dastard> <CABppvi7+Mu78FAM75YvJvekX2CHtKk4yeMrU7j35fvvWRb923Q@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Apr 23, 2015 at 04:48:51PM -0700, Shrinand Javadekar wrote:
> > from the iostat log:
> >
> > Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> > .....
> > dm-6              0.00     0.00    0.20   22.40     0.00     0.09     8.00    22.28  839.01 1224.00  835.57  44.25 100.00
> > dm-7              0.00     0.00    0.00    1.20     0.00     0.00     8.00     2.82 1517.33    0.00 1517.33 833.33 100.00
> > dm-8              0.00     0.00    0.00  195.20     0.00     0.76     8.00  1727.51 4178.89    0.00 4178.89   5.12 100.00
> > ...
> > dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     1.00    0.00    0.00    0.00   0.00 100.00
> > dm-8              0.00     0.00    0.00    0.00     0.00     0.00     0.00  1178.85    0.00    0.00    0.00   0.00 100.00
> >
> > dm-7 is showing almost a second for single IO wait times, when it is
> > actually completing IO. dm-8 has a massive queue depth - I can only
> > assume you've tuned /sys/block/*/queue/nr_requests to something
> > really large? But like dm-7, it's showing very long IO times, and
> > that's likely the source of your latency problems.
> 
> I see that /sys/block/*/queue/nr_requests is set to 128 which is way
> less than the queue depth shown in the iostat numbers. What gives?

No idea, but it's indicative of a problem below XFS. Work out what
is happening with your storage hardware first, then work your way up
the stack...
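
For example, something like this walks a dm device down to the
physical disks and shows the queue limits at each layer (dm-7 and
sdX below are placeholders - substitute whatever lsblk reports for
your setup):

    lsblk -s /dev/dm-7                     # stack below dm-7, down to the disks
    cat /sys/block/dm-7/queue/nr_requests  # request limit on the dm queue
    cat /sys/block/sdX/queue/nr_requests   # request limit on the backing disk
    cat /sys/block/sdX/queue/scheduler     # elevator in use on the disk

If the limits look sane all the way down, that tells you the
latency is coming from the devices themselves, not the queues.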

> One other observation we had was that xfs shows a large amount of
> directory fragmentation. Directory fragmentation was shown at ~40%
> whereas file fragmentation was very low at 0.1%.

Pretty common. Directories are only accessed a single block at a
time, and sequential offset reads are pretty rare, so fragmentation
makes little difference to performance. You're seeing almost zero
read IO load, so the directory layout is not a concern for this
workload.
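
Those numbers presumably came from xfs_db's frag command; if you
want to check them yourself, run it read-only against the block
device (/dev/dm-7 is a placeholder here):

    xfs_db -r -c "frag -d" /dev/dm-7   # directory fragmentation factor
    xfs_db -r -c "frag -f" /dev/dm-7   # regular file fragmentation factor

Either way, it only matters for workloads doing large sequential
directory reads, and yours isn't one of them.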

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
