
Re: 4k sector drives

To: xfs@xxxxxxxxxxx
Subject: Re: 4k sector drives
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Sun, 25 Jul 2010 00:35:21 +0200
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxxxxxxxxx>
In-reply-to: <4C4B55F7.2090501@xxxxxxxxxxx>
Organization: it-management http://it-management.at
References: <201007211333.48363.eye.of.the.8eholder@xxxxxxxxx> <20100724084751.GA32006@xxxxxxxxxxxxx> <4C4B55F7.2090501@xxxxxxxxxxx>
User-agent: KMail/1.12.4 (Linux/2.6.34.1-zmi; KDE/4.3.5; x86_64; ; )
On Saturday, 24 July 2010 Eric Sandeen wrote:
> > If it really is one using -s size=4096 is the right thing to do.
> 
> Haven't read the whole thread so maybe this is redundant, but make
> sure all partitions (if any) are 4k aligned as well (unless it has
> the secret-handshake sector 63 offset...)
 
Thank you Christoph for the clarification, and thanks to Eric as well. I have 
been aligning all partitions to a multiple of 512 sectors for a long time, 
which fits all stripe set sizes up to 256k. Too bad that even Linux fdisk and 
parted still don't do something like that by default: today I set up a Novell 
SLES 11 with a XEN VM, and everywhere parted would align the first partition 
(when using GPT) to sector 34... what a number. Good that 4K sector drives are 
arriving now, so the tools will hopefully switch to a better default.
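
Checking that alignment is easy to script. Here is a quick sketch, nothing 
more - the 512-sector boundary is just my own convention, and it assumes the 
kernel exposes each partition's start sector (in 512-byte units) under 
/sys/block/<disk>/<partition>/start:

#!/usr/bin/env python
# Rough sketch: check whether every partition of a disk starts on a
# 512-sector (256 KiB) boundary, using the start sectors the kernel
# exposes under /sys/block/<disk>/<partition>/start.
import glob
import os
import sys

ALIGN_SECTORS = 512  # 512 sectors * 512 bytes = 256 KiB

def check_disk(disk):
    pattern = os.path.join("/sys/block", disk, disk + "*", "start")
    for path in sorted(glob.glob(pattern)):
        part = os.path.basename(os.path.dirname(path))
        start = int(open(path).read())
        ok = (start % ALIGN_SECTORS == 0)
        print("%s starts at sector %d: %s" %
              (part, start, "aligned" if ok else "NOT aligned"))

if __name__ == "__main__":
    check_disk(sys.argv[1] if len(sys.argv) > 1 else "sda")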

BTW: AFAIK SSD drives have an internal alignment (sometimes a very large 
number of bytes) which should be used as a "stripe size" to meet the optimal 
I/O size. Is there a way to find out yet how big that is?
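
My guess would be to look at what the kernel exports under 
/sys/block/<dev>/queue/ (minimum_io_size, optimal_io_size, 
physical_block_size), if the drive's firmware reports anything useful there 
at all. A quick sketch of dumping those values - the attribute names are my 
assumption about recent kernels:

#!/usr/bin/env python
# Rough sketch: print the I/O size hints the kernel exposes for a block
# device under /sys/block/<dev>/queue/.  Whether an SSD reports a useful
# optimal_io_size there depends entirely on its firmware.
import os
import sys

ATTRS = ("logical_block_size", "physical_block_size",
         "minimum_io_size", "optimal_io_size")

def dump_hints(dev):
    for attr in ATTRS:
        path = os.path.join("/sys/block", dev, "queue", attr)
        try:
            value = open(path).read().strip()
        except IOError:
            value = "n/a"
        print("%-20s %s bytes" % (attr, value))

if __name__ == "__main__":
    dump_hints(sys.argv[1] if len(sys.argv) > 1 else "sda")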

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services
http://proteger.at [pronounced: Prot-e-schee]
Tel: 0660 / 415 65 31

****** Current radio interview! ******
http://www.it-podcast.at/aktuelle-sendung.html

// We currently have two houses for sale:
// http://zmi.at/langegg/
// http://zmi.at/haus2009/

