xfs
[Top] [All Lists]

Re: The XFS real-time subvolume in Linux

To: Jens Axboe <axboe@xxxxxxx>
Subject: Re: The XFS real-time subvolume in Linux
From: Steve Lord <lord@xxxxxxx>
Date: Wed, 05 Oct 2005 10:30:30 -0500
Cc: Andi Kleen <ak@xxxxxxx>, Eric Sandeen <sandeen@xxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <20051005152051.GF3511@xxxxxxx>
References: <BAY110-F272BEC2E5C429160FB4068B4830@xxxxxxx> <200510051624.16213.ak@xxxxxxx> <20051005145808.GC3511@xxxxxxx> <200510051710.36778.ak@xxxxxxx> <20051005152051.GF3511@xxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.7-1.1.fc4 (X11/20050929)
Jens Axboe wrote:
On Wed, Oct 05 2005, Andi Kleen wrote:

On Wednesday 05 October 2005 16:58, Jens Axboe wrote:


There are still unknowns, the HD still being the biggest one of course.
The problem is that you don't know the worst case HD performance, it
might be doing all sorts of rewriting, calibration, error correct etc
that can still screw you. So I think without definitely knowledge of
what the HD will do in case of errors (or a way to control that which
you definitely can on some drives), it's still pretty hazy. It gets
better, but if you are looking for complete guarantees I don't think
it's good enough.

Yes, but GRIO has exactly the same problem. I assume they need custom
calibration for each IO subsystem.


Indeed it does, and yes if they really want to provide the type of
guarantees that Steve listed, then that needs a custom box with either
custom or known disk firmware options. If not you cannot give absolute
guarantees and expect to always honor them. That's in addition to
anything you may need to change in software, if using Linux you would
need to audit/fix lots of things in the io path.


Definitely, from my memory, grio had a tool for measuring a disk subsystem.
I think reality is closer to overspecing the hardware than to providing an
absolute guarantee of bandwidth. Being able to prioritize individual I/O calls
is just part of the picture.

Steve


<Prev in Thread] Current Thread [Next in Thread>