Andrew Klaassen writes:
>On Tue, Jun 19, 2001 at 02:55:26AM +1000,
>Robin Humble wrote:
>> Contrary to popular belief, using both master and slave only
>> gets you a ~5% performance hit.
>(?)!
yeah - it surprised me too...
>That would be very, very good news. Has anybody else tried
>this?
dunno - I'd like to hear about it if they have...
> Have you load tested the machines and seen smooth
>performance degradation, no nasty bottlenecks (or whatever it is
>that is supposed to happen when master and slave are both used
>in an array)?
All bonnie++ numbers for RAID0 over 4 disks were pretty much the same
whether all 4 disks sat as master+slave pairs on one (2-port) card or
each disk had a controller entirely to itself.
It's easy to try it out for yourself.
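A rough sketch of the setup with the old raidtools, if you want to
reproduce it - the device names, partition numbers and chunk size here
are illustrative assumptions, not our exact config. An /etc/raidtab
along these lines:

    # 4-disk RAID0: hda/hdb are a master+slave pair on the first
    # channel of a 2-port card, hdc/hdd on the second channel; for
    # the one-disk-per-controller case use hda/hdc/hde/hdg instead.
    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           4
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdb1
        raid-disk               1
        device                  /dev/hdc1
        raid-disk               2
        device                  /dev/hdd1
        raid-disk               3

and then:

    mkraid /dev/md0                          # assemble the array
    mke2fs /dev/md0                          # put a filesystem on it
    mount /dev/md0 /mnt/md0
    bonnie++ -d /mnt/md0 -s 1024 -u nobody   # 1GB run as user nobody

Make the -s size at least a couple of times bigger than your RAM so
you're measuring the disks rather than the page cache, and run it once
with each cabling arrangement.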
The one thing that may be better with each drive on its own controller
is failure behaviour - with master+slave, a dying disk can confuse the
controller enough that it loses track of the 2nd disk on the same
channel, and in a RAID5 array losing two disks at once is fatal... So
there's maybe something to be said for separate controllers there.
However, IDE and Linux deal pretty inelegantly with any sort of IDE
disk failure anyway, so perhaps it's not much worse than normal...
We did the pull-the-power-out-of-the-drive test and also had some real
disk failures, and in both cases RAID5 recovery seemed to work as
expected.
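The failure handling itself is just the standard raidtools stuff - a
sketch, assuming /dev/hdc1 is the dead disk and /dev/hde1 is a fresh
replacement (both names made up):

    cat /proc/mdstat                   # failed disk shows up marked (F)
    raidhotremove /dev/md0 /dev/hdc1   # drop the dead disk from md0
    raidhotadd /dev/md0 /dev/hde1      # add the new one; RAID5 then
                                       # reconstructs onto it

and you can watch the reconstruction progress in /proc/mdstat.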
A separate consideration is that NFS over a 100Mbit link limits you to
~10MB/s anyway, so filesystem performance shouldn't really be an issue;
you may as well save yourself PCI slots and IRQs by using master+slave.
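(For the back-of-the-envelope version: 100Mbit/s divided by 8
bits-per-byte is 12.5MB/s of raw wire bandwidth, and Ethernet framing
plus IP and NFS protocol overheads knock that down to the ~10MB/s you
actually see.)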
Gigabit ethernet will probably push hard enough to make filesystem
performance the bottleneck, and then any (even small) loss from
master+slave may be unacceptable.
>(Am I asking these questions on the wrong forum? Apologies...)
well, maybe :-)
lots of people seem interested though - I guess 'cos these are the sort
of machine configurations that journalling filesystems are required for.
cheers,
robin