
Fatigue for XFS

To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Fatigue for XFS
From: Andrey Korolyov <andrey@xxxxxxx>
Date: Mon, 5 May 2014 23:49:05 +0400
Cc: "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Hello,

We are currently investigating an issue that may be related to Ceph itself
or to XFS; any help is much appreciated.

First, the picture: a relatively old cluster with two years of uptime and
ten months since the filesystem was recreated on every OSD. For the last
couple of weeks, one of the daemons has been flapping approximately once
per day, with no external reason (bandwidth/IOPS/host issues). It looks
almost the same every time: the OSD suddenly stops serving requests for a
short period, gets kicked out after peer reports, then returns within a
couple of seconds. Of course, a small but sensitive number of requests is
delayed by 15-30 seconds twice, which is bad for us. The only thing that
correlates with this kick is a peak of I/O; it is not large and does not
even saturate the underlying disk, but it is the only one in the cluster
and clearly visible. There have also been at least two occurrences
*without* a correlated iowait peak.

I have two hypotheses: either we are touching a sector on the disk which
is about to be marked dead but does not yet show up in the SMART
statistics, or (as I believe) this is some kind of XFS fatigue. The latter
seems more likely, since in my experience a near-bad sector would be
touched more and more frequently and the impact would leave traces in
dmesg/SMART. I would like to ask whether anyone has had a similar
experience, or can suggest a way to poke the existing filesystem. If no
suggestions appear, I'll probably reformat the disk and, if the problem
remains after a refill, replace it, but I think less destructive actions
can be tried first.
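For reference, the kind of non-destructive checks I have in mind before reformatting (a rough sketch; /dev/sdX and /var/lib/ceph/osd/ceph-N are placeholders for the actual OSD device and mount point):

```shell
# Look for latent media errors that have not yet surfaced in the SMART
# attribute table, and kick off an offline surface scan.
smartctl -a /dev/sdX
smartctl -t long /dev/sdX        # check results later with: smartctl -l selftest /dev/sdX

# Check file and free-space fragmentation on the XFS volume (read-only).
xfs_db -r -c frag -c freesp /dev/sdX

# If fragmentation turns out to be high, an online defrag pass is still
# less destructive than a mkfs + refill.
xfs_fsr -v /var/lib/ceph/osd/ceph-N

# Any I/O retries or XFS warnings that were missed?
dmesg | grep -i -E 'xfs|ata|sd'
```

If the fragmentation factor and SMART both look clean after this, that would at least narrow the problem down to the Ceph side.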

XFS is running on kernel 3.10 with almost default mkfs and mount options;
the Ceph version is the latest Cuttlefish (this rack should be upgraded, I
know).
