
Alignment: XFS + LVM2

To: xfs@xxxxxxxxxxx
Subject: Alignment: XFS + LVM2
From: Marc Caubet <mcaubet@xxxxxx>
Date: Wed, 7 May 2014 14:43:47 +0200
Delivered-to: xfs@xxxxxxxxxxx
Hi all,

I am trying to set up a storage pool with correct disk alignment, and I hope somebody can help me understand some parts that are still unclear to me when configuring XFS on top of LVM2.

We currently have a few storage pools, each with the following settings:

- LSI Controller with 3xRAID6
- Each RAID6 is configured with 10 data disks + 2 for double-parity.
- Each disk has a capacity of 4TB, 512e and physical sector size of 4K.
- The 3x(10+2) configuration was chosen for the best balance of performance and data safety (fewer disks per RAID means a lower probability of data corruption).

From the O.S. side we see:

[root@stgpool01 ~]# fdisk -l /dev/sda /dev/sdb /dev/sdc

Disk /dev/sda: 40000.0 GB, 39999997214720 bytes
255 heads, 63 sectors/track, 4863055 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 40000.0 GB, 39999997214720 bytes
255 heads, 63 sectors/track, 4863055 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/sdc: 40000.0 GB, 39999997214720 bytes
255 heads, 63 sectors/track, 4863055 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

The idea is to aggregate the above devices and present them as a single storage space. We did this as follows:

vgcreate dcvg_a /dev/sda /dev/sdb /dev/sdc
lvcreate -i 3 -I 4096 -n dcpool -l 100%FREE -v dcvg_a

That is, the three RAID6 volumes are striped together in a single LV.

And here is my first question: How can I check if the storage and the LV are correctly aligned?
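For context, what I have gathered so far is that LVM's reporting options (e.g. `pvs -o +pe_start` and `lvs -o +stripes,stripe_size`) show where the data area starts and how the LV is striped, and that the data start offset should be a whole multiple of the RAID full stripe. Below is a rough arithmetic sketch of the check I have in mind, assuming a 256 KiB controller chunk and LVM's default 1 MiB pe_start (both assumptions, not verified on our hardware):

```shell
# Inspection commands (to be run on the storage host; shown for reference):
#   pvs -o +pe_start /dev/sda /dev/sdb /dev/sdc
#   lvs -o +stripes,stripe_size dcvg_a/dcpool

# Assumed geometry: 256 KiB controller chunk x 10 data disks per RAID6
chunk_kib=256
data_disks=10
full_stripe_kib=$((chunk_kib * data_disks))   # 2560 KiB per RAID6 full stripe

# LVM's default pe_start is 1 MiB (1024 KiB)
pe_start_kib=1024
if [ $((pe_start_kib % full_stripe_kib)) -eq 0 ]; then
    echo "pe_start is full-stripe aligned"
else
    echo "pe_start is NOT full-stripe aligned"
fi
```

If my assumptions are right, the default pe_start would not line up with a 2560 KiB full stripe, which is exactly the kind of thing I would like to confirm or rule out.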

I then formatted the volume with XFS as follows:

mkfs.xfs -d su=256k,sw=10 -l size=128m,lazy-count=1 /dev/dcvg_a/dcpool

So my second question is: are the above 'su' and 'sw' values correct for the current LV configuration? If not, which values should I use, and why? AFAIK 'su' is the stripe size configured on the controller side, but in this case we have an LV on top of the RAIDs. Likewise, 'sw' is the number of data disks in a RAID, but again we have an LV with 3 stripes, and I am not sure whether the number of data disks should be 30 instead.
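To make my doubt concrete, here is the arithmetic as I understand it, assuming a 256 KiB controller chunk (the 'su' I passed to mkfs.xfs; an assumption about our LSI configuration). For the XFS geometry to carry through the LV, I would expect the LVM stripe size to be a whole multiple of each RAID6's full stripe:

```shell
# Assumed controller chunk, matching the su=256k passed to mkfs.xfs
su_kib=256
data_disks=10                                    # data disks per RAID6
raid_full_stripe_kib=$((su_kib * data_disks))    # 2560 KiB full stripe

# LVM stripe size from 'lvcreate -I 4096' (KiB)
lvm_stripe_kib=4096

if [ $((lvm_stripe_kib % raid_full_stripe_kib)) -eq 0 ]; then
    echo "LVM stripe is a multiple of the RAID full stripe"
else
    echo "mismatch: ${lvm_stripe_kib} KiB LVM stripe vs ${raid_full_stripe_kib} KiB RAID full stripe"
fi
```

By this reckoning 4096 KiB is not a multiple of 2560 KiB, so either my assumed chunk size is wrong or the -I value should be reconsidered; I would appreciate confirmation either way. (mkfs.xfs -N can also be used to print the geometry it would choose without actually formatting.)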

Thanks a lot,
--
Marc Caubet Serrabou
PIC (Port d'Informació Científica)
Campus UAB, Edificio D
E-08193 Bellaterra, Barcelona
Tel: +34 93 581 33 22
Fax: +34 93 581 41 10
http://www.pic.es
Avis - Aviso - Legal Notice: http://www.ifae.es/legal.html