| To: | Dave Chinner <david@xxxxxxxxxxxxx> |
|---|---|
| Subject: | Re: definitions for /proc/fs/xfs/stat |
| From: | Mark Seger <mjseger@xxxxxxxxx> |
| Date: | Mon, 17 Jun 2013 10:57:58 -0400 |
| Cc: | Nathan Scott <nathans@xxxxxxxxxx>, xfs@xxxxxxxxxxx |
| In-reply-to: | <20130617111347.GL29338@dastard> |
> How big is the write cache in your RAID array? If the log is the

I asked around, and people believe the cache is on the order of a few GB. The test I ran was intentionally large enough to overshadow any cache effects: it ran for about a minute and did 100K 1KB file creates. The disk write rate was close to a sustained 475MB/sec, which would have filled the cache within the first handful of seconds and produced enough backpressure to slow the write rate down. That never happened.
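The back-of-envelope arithmetic behind that claim can be sketched as follows. The cache size is an assumption (the "few GB" estimate above); the write rate and duration are the observed figures:

```python
# Rough check: how quickly would a sustained 475 MB/s write stream
# fill a few-GB controller write cache?
CACHE_GB = 3        # assumed "few GB" of RAID write cache
WRITE_MB_S = 475    # observed sustained disk write rate
RUN_SECONDS = 60    # approximate test duration

fill_seconds = CACHE_GB * 1024 / WRITE_MB_S
total_written_gb = WRITE_MB_S * RUN_SECONDS / 1024

print(f"cache full after ~{fill_seconds:.1f}s")      # ~6.5s
print(f"total written ~{total_written_gb:.1f} GB")   # ~27.8 GB

# The cache fills within the first handful of seconds, and the test
# writes roughly ten times more data than the cache can hold, so any
# cache effect should have shown up as backpressure early in the run.
```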
On a totally different topic, and if you like we can start a different thread on it, I'd be interested in adding some XFS monitoring stats to collectl and could use some suggestions on which are the most important.
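For a collectl-style collector, the raw material would be /proc/fs/xfs/stat, which emits one line per counter group ("label" followed by space-separated counters). A minimal parsing sketch follows; the group labels in the sample (extent_alloc, log, xstrat, rw) are real XFS stat groups, but the counter values shown are made up for illustration:

```python
# Minimal sketch of parsing /proc/fs/xfs/stat-style output for a
# monitoring collector. SAMPLE uses real stat-group labels but
# invented counter values.
SAMPLE = """\
extent_alloc 4260849 125170297 4618726 131131897
log 2166945 347592975 1767 994215 2168892
xstrat 319905 3715
rw 230458 8283234
"""

def parse_xfs_stats(text):
    """Return {group_label: [counter, ...]} from xfs stat text."""
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            stats[fields[0]] = [int(v) for v in fields[1:]]
    return stats

stats = parse_xfs_stats(SAMPLE)
print(stats["rw"])  # read/write syscall counters for this sample
```

In a real collector one would read the file twice and report deltas per interval, since the kernel exposes these as monotonically increasing counters since mount.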
-mark