
Re: definitions for /proc/fs/xfs/stat

To: Nathan Scott <nathans@xxxxxxxxxx>
Subject: Re: definitions for /proc/fs/xfs/stat
From: Mark Seger <mjseger@xxxxxxxxx>
Date: Mon, 17 Jun 2013 06:57:14 -0400
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <1597962722.1767244.1371447710942.JavaMail.root@xxxxxxxxxx>
References: <CAC2B=ZFP_Fg34aFpk857stgB7MGcrYs9tybRS-ttw1CXNeU41Q@xxxxxxxxxxxxxx> <20130615020414.GB29338@dastard> <CAC2B=ZEUkd+ADnQLUKj9S-3rdo2=93WbW0tbLbwwHUvkh6v7Rw@xxxxxxxxxxxxxx> <CAC2B=ZGgr5WPWOEehHDHKekM8yHgQ3QS4HMzM8+j217AfEoPyQ@xxxxxxxxxxxxxx> <20130616001130.GE29338@dastard> <CAC2B=ZFZskLnp5baVJK+R1xrpOfTkr1QXpA9jyHvxfk5Wd4yDg@xxxxxxxxxxxxxx> <419435719.1662203.1371431489790.JavaMail.root@xxxxxxxxxx> <20130617024603.GJ29338@dastard> <1597962722.1767244.1371447710942.JavaMail.root@xxxxxxxxxx>
all - good conversation and again, thanks for digging into this.  The comment about my running on an older kernel seems to have identified the problem: after rerunning my test on precise/3.5.0-23-generic everything operates correctly, so I guess that was it.

However, the one thing that does jump out of this is that /proc/fs/xfs/stat and pcp were both showing many hundreds of MB/sec during tests that only ran for a few seconds, which is impossible, so it still feels like some sort of accounting bug to me.  On the other hand, given that this was an older kernel and newer kernels are fine, perhaps it's something just to note and not worry about.
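For reference, the rates in question come from byte counters in /proc/fs/xfs/stat; to my understanding the "xpc" line carries xs_xstrat_bytes, xs_write_bytes and xs_read_bytes. A minimal sketch of deriving an MB/sec write rate from two samples (the helper names are mine, not from pcp or any tool mentioned here):

```python
# Illustrative sketch only: parse the "xpc" line of /proc/fs/xfs/stat
# and turn two samples into a write rate in MB/sec, the kind of figure
# the monitoring tools in this thread report.
def parse_xpc(stat_text):
    """Return (xstrat_bytes, write_bytes, read_bytes) from xfs stat text."""
    for line in stat_text.splitlines():
        fields = line.split()
        if fields and fields[0] == "xpc":
            return tuple(int(v) for v in fields[1:4])
    raise ValueError("no xpc line found")

def write_rate_mb_s(sample1, sample2, interval_secs):
    """MB/sec written between two parse_xpc() samples taken interval apart."""
    return (sample2[1] - sample1[1]) / interval_secs / (1024 * 1024)
```

A sustained counter delta that implies hundreds of MB/sec over a run lasting only a few seconds is exactly the kind of impossible figure described above.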

thanks again...

-mark


On Mon, Jun 17, 2013 at 1:41 AM, Nathan Scott <nathans@xxxxxxxxxx> wrote:
Hey Dave,

----- Original Message -----
> ...
> Must be an old version of RHEL6, because 6.4 doesn't do any IO at
> all, same as upstream. This test workload is purely a metadata only
> workload (no data is written) and so it all gets gathered up by
> delayed logging.

*nod* - RHEL6.3.

> > I think it is still possible, FWIW.  One could use python ctypes (as in
> > Marks test program) and achieve a page-aligned POSIX memalign,
>
> I wasn't aware you could get memalign() through python at all. I
> went looking for this exact solution a couple of month ago when
> these problems started to be reported and couldn't find anything
> ...

Yes, on reflection it doesn't jibe too well with the way python wants
to do reads, in particular - os.read takes a file descriptor and a size;
there's no buffer exposed at the API level (for input).

It would need to be a python module separate from the core set, I guess
(with a C component), and a slightly different API - or at least some
additional APIs which can take an aligned buffer, rather than just
allocating one each time - but I believe it's still feasible.
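The ctypes approach mentioned above can be sketched roughly as follows; everything here (function names, the 4096-byte alignment, the error handling) is an illustrative assumption, not code from Mark's actual test program:

```python
# Hypothetical sketch: a page-aligned buffer via posix_memalign through
# ctypes, of the sort needed for O_DIRECT reads, since os.read exposes
# no caller-supplied buffer.
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.posix_memalign.argtypes = [ctypes.POINTER(ctypes.c_void_p),
                                ctypes.c_size_t, ctypes.c_size_t]
libc.read.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t]
libc.read.restype = ctypes.c_ssize_t
libc.free.argtypes = [ctypes.c_void_p]

ALIGN = 4096  # assumed page size / O_DIRECT alignment requirement

def aligned_buffer(size):
    """Allocate a page-aligned buffer with posix_memalign; caller frees."""
    buf = ctypes.c_void_p()
    rc = libc.posix_memalign(ctypes.byref(buf), ALIGN, size)
    if rc != 0:
        raise OSError(rc, os.strerror(rc))
    return buf

def direct_read(path, size):
    """Read up to `size` bytes with O_DIRECT into an aligned buffer."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = aligned_buffer(size)
        try:
            n = libc.read(fd, buf, size)
            if n < 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))
            return ctypes.string_at(buf, n)
        finally:
            libc.free(buf)
    finally:
        os.close(fd)
```

As noted, a cleaner long-term shape would be a small C-backed module exposing read-into-aligned-buffer APIs rather than allocating on every call.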

cheers.

--
Nathan
