
Re: Fatigue for XFS

To: Andrey Korolyov <andrey@xxxxxxx>
Subject: Re: Fatigue for XFS
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 6 May 2014 06:36:33 +1000
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <CABYiri9j=2ipPhvRtn1g=omEDpTz5VD8O0fPWrBGCm=UsgNavw@xxxxxxxxxxxxxx>
References: <CABYiri9j=2ipPhvRtn1g=omEDpTz5VD8O0fPWrBGCm=UsgNavw@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, May 05, 2014 at 11:49:05PM +0400, Andrey Korolyov wrote:
> Hello,
> 
> We are currently exploring an issue which may be related to Ceph itself
> or to XFS - any help is very much appreciated.
> 
> First, the picture: a relatively old cluster with two years of uptime and
> ten months since fs recreation on every OSD. One of the daemons started
> to flap approximately once per day for a couple of weeks, with no
> external cause (bandwidth/IOPS/host issues). It looks almost the same
> every time - the OSD suddenly stops serving requests for a short period,
> gets kicked out on peer reports, then returns within a couple of
> seconds. Of course, a small but sensitive number of requests is delayed
> by 15-30 seconds twice, which is bad for us. The only thing which
> correlates with this kick is a peak of I/O - not too large, not even
> saturating the underlying disk, but alone in the cluster and clearly
> visible. There have also been at least two occurrences *without* a
> correlated iowait peak.

So, actual numbers and traces are the only things that will tell us what
is happening during these events. See here:

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

If it happens at almost the same time every day, then I'd be looking
at the crontabs to find what starts up around that time. The output of
top will probably tell you what process is running, too. iotop
might be instructive, and blktrace almost certainly will be....
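A minimal sketch of that hunt, assuming the stall happens around 03:00 (the hour is an example value to substitute); it scans the common crontab locations for jobs scheduled in that window, and notes the follow-up tracing commands as comments since iotop and blktrace need root and a live disk:

```shell
# HOUR is an assumed example; set it to the hour the OSD flaps.
HOUR=3
# Field 2 of a crontab line is the hour; skip comment lines.
{ crontab -l 2>/dev/null; cat /etc/crontab /etc/cron.d/* 2>/dev/null; } \
    | awk -v h="$HOUR" '$1 !~ /^#/ && ($2 == h || $2 == "*")'
# Once a suspect window is known, watch per-process IO live:
#   iotop -obt -d 5
# ...and capture a block-level trace of the OSD's disk (device name
# /dev/sdX is a placeholder) for offline analysis with blkparse:
#   blktrace -d /dev/sdX -o osd-trace
#   blkparse -i osd-trace | less
```

The awk filter is deliberately loose (`*` in the hour field also matches) so that every-minute or every-hour jobs are not silently excluded.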

> I have two versions - either we're touching some sector on the disk
> which is about to be marked dead but is not shown in SMART statistics, or (I

Doubt it - SMART doesn't cause OS-visible IO dispatch spikes.

> believe so) it is some kind of XFS fatigue, which is more likely in this
> case, since a near-bad sector would be touched more frequently and the
> related impact would have left traces in dmesg/SMART, in my experience. I

I doubt that, too, because XFS doesn't have anything inside it that is
triggered on a daily basis. Maybe you've got xfs_fsr set up on a cron
job, though...
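A quick way to check that hypothesis (a sketch; /etc/cron* covers the usual Debian/RHEL locations, but paths vary by distro):

```shell
# Look for any scheduled invocation of xfs_fsr, the XFS defragmenter.
grep -rn xfs_fsr /etc/crontab /etc/cron* 2>/dev/null \
    || echo "no xfs_fsr cron entry found"
```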

> would like to ask whether anyone has had a similar experience before, or
> can suggest a way to poke the existing file system. If no suggestions
> appear, I'll probably reformat the disk and, if the problem remains
> after the refill, replace it - but I think less destructive actions can
> be taken first.

Yeah, monitoring and determining the process that is issuing the IO
is what you need to find first.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
