
Re: page fault scalability (ext3, ext4, xfs)

To: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Subject: Re: page fault scalability (ext3, ext4, xfs)
From: Theodore Ts'o <tytso@xxxxxxx>
Date: Thu, 15 Aug 2013 11:05:06 -0400
Cc: linux-fsdevel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx, linux-ext4@xxxxxxxxxxxxxxx, Jan Kara <jack@xxxxxxx>, LKML <linux-kernel@xxxxxxxxxxxxxxx>, david@xxxxxxxxxxxxx, Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>, Andi Kleen <ak@xxxxxxxxxxxxxxx>, Andy Lutomirski <luto@xxxxxxxxxxxxxx>
In-reply-to: <520BB9EF.5020308@xxxxxxxxxxxxxxx>
References: <520BB9EF.5020308@xxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Aug 14, 2013 at 10:10:07AM -0700, Dave Hansen wrote:
> We talked a little about this issue in this thread:
> 
>       http://marc.info/?l=linux-mm&m=137573185419275&w=2
> 
> but I figured I'd follow up with a full comparison.  ext4 is about 20%
> slower in handling write page faults than ext3.

Let's take a step back from the details of whether the benchmark is
measuring what it claims to be measuring, and address this a different
way --- what's the workload which might be run on an 8-socket, 80-core
system, which is heavily modifying mmap'ed pages in such a way that
all or most of the memory writes are to clean pages that require write
page fault handling?

We can talk about isolating the test so that we remove block
allocation, timestamp modifications, etc., but then are we still
measuring whatever motivated Dave's work in the first place?

IOW, if it really is about write page fault handling, the simplest
test to do is to mmap /dev/zero and then start dirtying pages.  At
that point we will be measuring the VM level write page fault code.

If we start trying to add in file system specific behavior, then we
get into questions about block allocation vs. inode updates
vs. writeback code paths, depending on what we are trying to measure,
which then leads to the next logical question --- why are we trying to
measure this?

Is there a specific scalability problem that shows up in some real
world use case?  Or is this a theoretical exercise?  It's OK if it's
just theoretical, since then we can try to figure out some kind of
useful scalability limitation which is of practical importance.  But
if there was some original workload which was motivating this
exercise, it would be good if we kept this in mind....

Cheers,

                                         - Ted
