sw and su for hardware RAID10 (w/ LVM)

To: xfs@xxxxxxxxxxx
Subject: sw and su for hardware RAID10 (w/ LVM)
From: Ray Van Dolson <rvandolson@xxxxxxxx>
Date: Mon, 10 Mar 2014 21:56:39 -0700
Delivered-to: xfs@xxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)

We're running RHEL 6.x with the XFS that comes with Red Hat's Scalable
File System add-on.  We have two PowerVault MD3260e arrays, each
configured with a 30-disk RAID10 (15 RAID groups) exposed to our
server.  Segment size is 128K (in Dell's terminology; I'm not sure
whether this means my stripe width is 128K*15).
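
To spell out the arithmetic I have in mind (assuming the 128K segment
really is the per-disk chunk):

# echo $((128 * 15))
1920

i.e. a 1920K full stripe per virtual disk, if that reading is correct.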

I've set up an LVM volume striped across these two "virtual disks"
(created with lvcreate -i 2).

By default LVM says it has used a stripe size of 64K:

# lvs -o path,size,stripes,stripe_size
  Path                           LSize   #Str Stripe
  /dev/agsfac_vg00/lv00          100.00t    2 64.00k

I'm unsure whether these defaults should be adjusted.
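
If the LVM stripe size ought to match the array's 128K segment
instead, I assume I'd have to recreate the LV along these lines (same
VG and LV names as above; -I is in kilobytes):

# lvcreate -i 2 -I 128 -l 100%FREE -n lv00 agsfac_vg00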

I'm trying to figure out the appropriate sw/su values to use per:

  http://xfs.org/index.php/XFS_FAQ#Q:_How_to_calculate_the_correct_sunit.2Cswidth_values_for_optimal_performance

I'm considering either just going with the defaults (XFS should pull
the geometry from LVM, I think) or doing something like sw=2,su=128K.
However, maybe I should be doing sw=2,su=1920K?  And perhaps my LVM
stripe size should be adjusted as well?
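
For concreteness, the two non-default mkfs invocations I'm weighing
look like this (I'm not claiming either geometry is right; that's the
question):

# mkfs.xfs -d su=128k,sw=2 /dev/agsfac_vg00/lv00
# mkfs.xfs -d su=1920k,sw=2 /dev/agsfac_vg00/lv00

Either way I'd sanity-check the sunit/swidth values that mkfs.xfs
reports (and that xfs_info shows once the filesystem is mounted).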

Thanks,
Ray
