
Re: sunit/swidth for HP P4500 Lefthand Networks storage arrays

To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: sunit/swidth for HP P4500 Lefthand Networks storage arrays
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Thu, 12 Jan 2012 08:30:56 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4F0E5E73.9070308@xxxxxxxxxxxxxxxxx>
Organization: it-management http://it-management.at
References: <4F0E5E73.9070308@xxxxxxxxxxxxxxxxx>
User-agent: KMail/1.13.6 (Linux/3.1.5-zmi; KDE/4.6.0; x86_64; ; )
On Thursday, 12 January 2012, Stan Hoeppner wrote:
> What is the best mkfs.xfs configuration for this scenario?
> I'm guessing it would be best to simply use mostly, if not
> completely, the defaults, due to the way iSCSI packets are
> redirected on the fly to any storage node depending on load, by the
> Lefthand special sauce.

I'd use the defaults. We recently switched to a NetApp storage array, and 
despite all its special features we also use the defaults there.
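If someone did want to set alignment by hand instead of taking the 
defaults, the arithmetic is simple. A minimal sketch, assuming a 
hypothetical array with a 64 KiB per-disk stripe unit across 10 data 
disks (the real P4500 geometry would have to be confirmed with HP; the 
device name is a placeholder):

```shell
#!/bin/sh
# Assumed geometry -- not the actual P4500 values:
STRIPE_KB=64      # stripe unit per disk, in KiB
DATA_DISKS=10     # data-bearing disks in the stripe

# xfs_info reports sunit/swidth in 512-byte sectors:
SUNIT=$(( STRIPE_KB * 1024 / 512 ))
SWIDTH=$(( SUNIT * DATA_DISKS ))
echo "sunit=$SUNIT swidth=$SWIDTH (512-byte sectors)"

# Equivalent mkfs invocation (device is a placeholder):
# mkfs.xfs -d su=${STRIPE_KB}k,sw=${DATA_DISKS} /dev/sdX
```

For 64 KiB x 10 disks this prints sunit=128 swidth=1280. With iSCSI 
redirection spreading I/O across nodes, though, there's no single 
geometry to align to, which is another argument for the defaults.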

> What about mount options?
> Should I use barriers with the P4500s or disable them?
> TTBOMK the internal PCIe RAID controllers have BBWC, but the ~6GB of
> RAM on the P4500 mobos isn't battery backed, but for the typical
> external UPS.  In this setup, from a physical hardware standpoint,
> iSCSI packets will be making at least 2 ethernet switch hops between
> the ESX nodes and the P4500s, with redundant links between
> everything, if that's a factor at all.

Turn off barriers, I'd say. We use the NetApp over NFS (for VMware 
stores) and turned them off there. I believe that's the correct thing 
to do here as well.
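For reference, a sketch of what that looks like as a mount entry 
(device and mountpoint are placeholders; `nobarrier` is only safe when 
the write cache is battery-backed, as with the BBWC on the P4500 RAID 
controllers):

```
# /etc/fstab (illustrative only):
/dev/sdX  /mnt/data  xfs  nobarrier  0  0
```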

As I understand them, barriers help to not lose blocks that the 
storage has already received, so it doesn't matter how it's connected, 
because the packets must already have arrived there. Can someone 
confirm?

Kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531

