
To: xfs@xxxxxxxxxxx
Subject: Re: SL's kickstart corrupting XFS by "repairing" GPT labels?
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 06 Apr 2011 12:19:59 -0500
In-reply-to: <20110406114146.GF31057@dastard>
References: <4D9C3E14.6030009@xxxxxx> <20110406114146.GF31057@dastard>
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.2.15) Gecko/20110303 Thunderbird/3.1.9
Dave Chinner put forth on 4/6/2011 6:41 AM:
> On Wed, Apr 06, 2011 at 12:19:00PM +0200, Jan Kundrát wrote:

>> zerombr yes
>> clearpart --all --initlabel
>> part swap --size=1024 --asprimary
>> part / --fstype ext3 --size=0 --grow --asprimary
> 
> There's your problem - hello random crap....

Yep, most likely.
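
If those boxes have to be kickstarted again before the zoning/masking
is sorted out, one mitigation is to tell anaconda to ignore every disk
except the local OS drive.  A minimal sketch, assuming the OS disk
shows up as sda (the device name here is only an example) and that
this anaconda honors ignoredisk for zerombr/clearpart:

  # only touch the local OS disk; the SAN LUNs are left alone
  ignoredisk --only-use=sda
  zerombr yes
  clearpart --drives=sda --initlabel
  part swap --size=1024 --asprimary
  part / --fstype ext3 --size=0 --grow --asprimary

Still, keeping the SAN LUNs invisible to the installer in the first
place is the safer fix.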

> Given the nature of the problem, I have to assume you aren't using
> FC zoning to prevent hosts from seeing disks that don't belong to
> them?

Switch soft zoning isn't even required to avoid this kind of mess.  The
Nexsan arrays (as with many/most others) have built-in LUN security,
allowing you to unmask a LUN only to the WWNs of specific HBAs.  With
Nexsan products, no LUNs are unmasked by default after you create a
virtual disk and assign a LUN to it.  One must then manually unmask the
LUN to one or more HBA WWNs.
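
On the Linux side, the WWPNs you'd grant access to can be read straight
out of sysfs once the FC transport class is loaded (a quick sketch,
nothing Nexsan-specific about it):

  # print the port WWN (WWPN) of every FC HBA port on this host
  cat /sys/class/fc_host/host*/port_name

Feed those values into the array's LUN masking config and nothing else
on the fabric will ever see the volume.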

There are basically only three scenarios in which you would assign more
than one HBA WWN to a given LUN:

1.  multi-pathing between two (or more) FC ports on one host
2.  a shared-disk cluster filesystem such as CXFS, GFS2, or OCFS
3.  passive failover for high availability

Some folks assign "everything to everything" on their FC SAN for various
not-so-well-considered reasons, without realizing the consequences.

-- 
Stan
