XFS use within multi-threaded apps
Michael Monnerie
michael.monnerie at is.it-management.at
Sun Oct 24 13:22:46 CDT 2010
On Saturday, 23 October 2010 Angelo McComis wrote:
> They quoted having 10+TB databases running OLTP on EXT3 with
> 4-5GB/sec sustained throughput (not XFS).
Which servers and storage are these? That is not something you can do
with "normal" storage. An 8Gb/s Fibre Channel link gives at most 1GB/s,
and only if you can drive it at full speed. So you would need at least
5 parallel Fibre Channel links running without any overhead. A single
server cannot sustain such high rates either, so there must be several
front-end servers, which in turn means their database must be organised
specifically for that kind of load (shared-nothing or similar). A quick
back-of-envelope check of that link arithmetic is sketched below.
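A sketch of the link arithmetic in Python (the 1GB/s per link is the
optimistic full-line-rate figure from above; real usable bandwidth is
lower because of encoding and protocol overhead, so the real link count
only grows):

  target_gb_per_s = 5.0        # claimed sustained throughput
  link_gb_per_s = 8.0 / 8      # 8Gb/s FC at full line rate, no overhead
  links_needed = target_gb_per_s / link_gb_per_s
  print("FC links needed:", links_needed)   # -> 5.0, more in practice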
On the other hand, if they get these performance numbers across 100
shared servers, each server only needs about 51MB/s of I/O to reach
5GB/s total throughput. So the number means very little as long as you
don't know which hardware is being used.
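The per-server arithmetic, for comparison (same claimed total, simply
divided across 100 machines):

  total_mb_per_s = 5 * 1024    # 5GB/s expressed in MB/s
  servers = 100
  print("per server:", total_mb_per_s / servers, "MB/s")   # -> 51.2 MB/s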
And: how high would their throughput be when using XFS instead of EXT3? ;-)
One question comes to my mind: if they do direct I/O, would there still
be much of a difference between XFS and EXT3, performance-wise?
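For reference, "direct I/O" here means opening the file with O_DIRECT
so reads and writes bypass the kernel page cache and go straight to the
device, which takes much of the filesystem's caching behaviour out of
the picture. A minimal Linux sketch in Python (the path is a
placeholder; O_DIRECT needs block-aligned buffers, which an anonymous
mmap provides):

  import mmap, os

  fd = os.open("/path/to/datafile", os.O_RDONLY | os.O_DIRECT)
  buf = mmap.mmap(-1, 4096)      # page-aligned buffer, one block
  nread = os.readv(fd, [buf])    # read bypasses the page cache
  os.close(fd)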
And how many companies go around announcing which filesystem they use
for their performance-critical business application? Normally they only
do this for marketing, so that they get paid or receive special prices
for saying "we are sooo happy with this product".
--
with kind regards,
Michael Monnerie, Ing. BSc
it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31
****** Radio interview on the topic of spam ******
http://www.it-podcast.at/archiv.html#podcast-100716
// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/