<chapter id="configexample">
<title>Configuration Examples</title>
<para>This chapter provides an example of a Linux FailSafe configuration that
uses a three-node cluster, along with some variations of that configuration.
It includes the following sections:<itemizedlist>
<listitem><para><xref linkend="threenode-example"></para>
</listitem>
<listitem><para><xref linkend="threenode-script"></para>
</listitem>
<listitem><para><xref linkend="localfailover-of-ip"></para>
</listitem>
</itemizedlist></para>
<sect1 id="threenode-example">
<title>Linux FailSafe Example with Three-Node Cluster</title>
<para><indexterm><primary>three-node cluster, example</primary></indexterm>The
following illustration shows a three-node Linux FailSafe cluster. This configuration
consists of a pool containing nodes <literal>N1</literal>, <literal>N2</literal>,
<literal>N3</literal>, and <literal>N4</literal>. Nodes <literal>N1</literal>,
<literal>N2</literal>, and <literal>N3</literal> make up the Linux FailSafe cluster.
The nodes in this cluster share disks and are connected to a serial port
multiplexer, which is also connected to the private control network.</para>
<figure>
<title>Configuration Example</title>
<graphic entityref="n1n4"></graphic>
</figure>
</sect1>
<sect1 id="threenode-script">
<title>cmgr Script</title>
<para>This section provides an example <literal>cmgr</literal> script that
defines the three-node Linux FailSafe cluster shown in <xref linkend="threenode-example">.
For general information on CLI scripts, see <xref linkend="LE41514-PARENT">.
For information on the CLI template files that you can use to create your
own configuration script, see <xref linkend="LE10673-PARENT">.</para>
<para>This cluster has two resource groups, <literal>RG1</literal> and <literal>
RG2</literal>.</para>
<para>Resource group <literal>RG1</literal> contains the following resources:
</para>
<variablelist>
<varlistentry><term><literal>IP_address</literal></term>
<listitem>
<para>192.26.50.1</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>filesystem</literal></term>
<listitem>
<para><literal>/ha1</literal></para>
</listitem>
</varlistentry>
<varlistentry><term><literal>volume</literal></term>
<listitem>
<para><literal>ha1_vol</literal></para>
</listitem>
</varlistentry>
<varlistentry><term><literal>NFS</literal></term>
<listitem>
<para><literal>/ha1/export</literal></para>
</listitem>
</varlistentry>
</variablelist>
<para>Resource group <literal>RG1</literal> has a failover policy of <literal>fp1</literal>.
<literal>fp1</literal> has the following components:</para>
<variablelist>
<varlistentry><term>script</term>
<listitem>
<para><literal>ordered</literal></para>
</listitem>
</varlistentry>
<varlistentry><term>attributes</term>
<listitem>
<para><literal>Auto_Failback</literal></para>
<para><literal>Auto_Recovery</literal></para>
</listitem>
</varlistentry>
<varlistentry><term>failover domain</term>
<listitem>
<para><literal>N1, N2, N3</literal></para>
</listitem>
</varlistentry>
</variablelist>
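<para>These components map directly to <literal>cmgr</literal> commands. For
reference, the definition of <literal>fp1</literal>, excerpted from the script
later in this section, reads:</para>
<programlisting>define failover_policy fp1
set attribute to Auto_Failback
set attribute to Auto_Recovery
set script to ordered
set domain to N1 N2 N3
done</programlisting>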
<para>Resource group <literal>RG2</literal> contains the following resources:
</para>
<variablelist>
<varlistentry><term><literal>IP_address</literal></term>
<listitem>
<para>192.26.50.2</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>filesystem</literal></term>
<listitem>
<para><literal>/ha2</literal></para>
</listitem>
</varlistentry>
<varlistentry><term><literal>volume</literal></term>
<listitem>
<para><literal>ha2_vol</literal></para>
</listitem>
</varlistentry>
<varlistentry><term><literal>NFS</literal></term>
<listitem>
<para><literal>/ha2/export</literal></para>
</listitem>
</varlistentry>
</variablelist>
<para>Resource group <literal>RG2</literal> has a failover policy of <literal>fp2</literal>.
<literal>fp2</literal> has the following components:</para>
<variablelist>
<varlistentry><term>script</term>
<listitem>
<para><literal>round-robin</literal></para>
</listitem>
</varlistentry>
<varlistentry><term>attributes</term>
<listitem>
<para><literal>Controlled_Failback</literal></para>
<para><literal>Inplace_Recovery</literal></para>
</listitem>
</varlistentry>
<varlistentry><term>failover domain</term>
<listitem>
<para><literal>N2, N3</literal></para>
</listitem>
</varlistentry>
</variablelist>
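<para>In addition to the resources listed above, the script records the
dependencies between the resources in each group: an NFS export depends on
the filesystem that contains it, and a filesystem depends on the volume it
is built on. These dependencies appear in the script as <literal>modify
resource</literal> blocks; for example:</para>
<programlisting>modify resource /ha1 of resource_type filesystem in cluster TEST
add dependency ha1_vol of type volume
done</programlisting>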
<para>The <literal>cmgr</literal> script to define this configuration is as
follows:</para>
<programlisting>#!/usr/cluster/bin/cluster_mgr -f
define node N1
set hostname to N1
set sysctrl_type to msc
set sysctrl_status to enabled
set sysctrl_password to none
set sysctrl_owner to N4
set sysctrl_device to /dev/ttydn001
set sysctrl_owner_type to tty
add nic eth2-N1
set heartbeat to true
set ctrl_msgs to true
set priority to 1
done
add nic eth0-N1
set heartbeat to true
set ctrl_msgs to true
set priority to 2
done
add nic eth1-N1
set heartbeat to true
set ctrl_msgs to true
set priority to 3
done
done
define node N2
set hostname to N2
set sysctrl_type to msc
set sysctrl_status to enabled
set sysctrl_password to none
set sysctrl_owner to N4
set sysctrl_device to /dev/ttydn002
set sysctrl_owner_type to tty
add nic eth2-N2
set heartbeat to true
set ctrl_msgs to true
set priority to 1
done
add nic eth0-N2
set heartbeat to true
set ctrl_msgs to true
set priority to 2
done
add nic eth1-N2
set heartbeat to true
set ctrl_msgs to true
set priority to 3
done
done
define node N3
set hostname to N3
set sysctrl_type to msc
set sysctrl_status to enabled
set sysctrl_password to none
set sysctrl_owner to N4
set sysctrl_device to /dev/ttydn003
set sysctrl_owner_type to tty
add nic eth2-N3
set heartbeat to true
set ctrl_msgs to true
set priority to 1
done
add nic eth0-N3
set heartbeat to true
set ctrl_msgs to true
set priority to 2
done
add nic eth1-N3
set heartbeat to true
set ctrl_msgs to true
set priority to 3
done
done
define node N4
set hostname to N4
add nic eth2-N4
set heartbeat to true
set ctrl_msgs to true
set priority to 1
done
add nic eth0-N4
set heartbeat to true
set ctrl_msgs to true
set priority to 2
done
done
define cluster TEST
set notify_cmd to /usr/bin/mail
set notify_addr to failsafe_sysadm@company.com
add node N1
add node N2
add node N3
done
define failover_policy fp1
set attribute to Auto_Failback
set attribute to Auto_Recovery
set script to ordered
set domain to N1 N2 N3
done
define failover_policy fp2
set attribute to Controlled_Failback
set attribute to Inplace_Recovery
set script to round-robin
set domain to N2 N3
done
define resource 192.26.50.1 of resource_type IP_address in cluster TEST
set NetworkMask to 0xffffff00
set interfaces to eth0,eth1
set BroadcastAddress to 192.26.50.255
done
define resource ha1_vol of resource_type volume in cluster TEST
set devname-owner to root
set devname-group to sys
set devname-mode to 666
done
define resource /ha1 of resource_type filesystem in cluster TEST
set volume-name to ha1_vol
set mount-options to rw,noauto
set monitoring-level to 2
done
modify resource /ha1 of resource_type filesystem in cluster TEST
add dependency ha1_vol of type volume
done
define resource /ha1/export of resource_type NFS in cluster TEST
set export-info to rw,wsync
set filesystem to /ha1
done
modify resource /ha1/export of resource_type NFS in cluster TEST
add dependency /ha1 of type filesystem
done
define resource_group RG1 in cluster TEST
set failover_policy to fp1
add resource 192.26.50.1 of resource_type IP_address
add resource ha1_vol of resource_type volume
add resource /ha1 of resource_type filesystem
add resource /ha1/export of resource_type NFS
done
define resource 192.26.50.2 of resource_type IP_address in cluster TEST
set NetworkMask to 0xffffff00
set interfaces to eth0
set BroadcastAddress to 192.26.50.255
done
define resource ha2_vol of resource_type volume in cluster TEST
set devname-owner to root
set devname-group to sys
set devname-mode to 666
done
define resource /ha2 of resource_type filesystem in cluster TEST
set volume-name to ha2_vol
set mount-options to rw,noauto
set monitoring-level to 2
done
modify resource /ha2 of resource_type filesystem in cluster TEST
add dependency ha2_vol of type volume
done
define resource /ha2/export of resource_type NFS in cluster TEST
set export-info to rw,wsync
set filesystem to /ha2
done
modify resource /ha2/export of resource_type NFS in cluster TEST
add dependency /ha2 of type filesystem
done
define resource_group RG2 in cluster TEST
set failover_policy to fp2
add resource 192.26.50.2 of resource_type IP_address
add resource ha2_vol of resource_type volume
add resource /ha2 of resource_type filesystem
add resource /ha2/export of resource_type NFS
done
quit</programlisting>
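<para>Because the script's first line is <literal>#!/usr/cluster/bin/cluster_mgr
-f</literal>, you can run it directly once it is executable. The following shell
commands are a minimal sketch; the file name <filename>threenode.cmgr</filename>
is a placeholder for whatever name you save the script under:</para>
<programlisting># Make the script executable, then run it to load the
# definitions into the cluster configuration database.
chmod +x threenode.cmgr
./threenode.cmgr</programlisting>
<para>You can then inspect the result in an interactive <literal>cmgr</literal>
session, for example with <literal>show clusters</literal>, which should list
the cluster <literal>TEST</literal> (assuming the <literal>show</literal>
commands behave as described elsewhere in this guide).</para>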
</sect1>
<sect1 id="localfailover-of-ip">
<title>Local Failover of an IP Address</title>
<para><indexterm><primary>IP address</primary><secondary>local failover</secondary>
</indexterm><indexterm><primary>local failover, IP address</primary></indexterm>You
can configure a Linux FailSafe system to fail over an IP address to a second
interface within the same host. To do this, specify multiple interfaces for
resources of the <literal>IP_address</literal> resource type. You can also specify
different interfaces to support a heterogeneous cluster. For information
on specifying IP address resources, see <xref linkend="ipattributes">.</para>
<para>The following example configures local failover of an IP address. It
uses the configuration illustrated in <xref linkend="threenode-example">.
</para>
<orderedlist>
<listitem><para>Define an IP address resource with two interfaces:</para>
<programlisting>define resource 192.26.50.1 of resource_type IP_address in cluster TEST
set NetworkMask to 0xffffff00
set interfaces to eth0,eth1
set BroadcastAddress to 192.26.50.255
done</programlisting>
<para>IP address 192.26.50.1 fails over locally from interface <literal>eth0</literal>
to interface <literal>eth1</literal> when the <literal>eth0</literal> interface
fails. (A way to check which interface currently carries the address is shown
after this list.)</para>
<para>On nodes <literal>N1</literal>, <literal>N2</literal>, and <literal>N3</literal>,
either <literal>eth0</literal> or <literal>eth1</literal> should be configured
up automatically when the node boots. Both <literal>eth0</literal> and <literal>
eth1</literal> are physically connected to the same subnet, 192.26.50.0.
Only one network interface connected to a given network should be configured
up on a node at any time.</para>
</listitem>
<listitem><para>Modify the <filename>/etc/conf/netif.options</filename> file
to configure the <literal>eth0</literal> and <literal>eth1</literal> interfaces:
</para>
<programlisting>if1name-eth0 if1addr=192.26.50.10 if2name=eth1 if2addr=192.26.50.11
</programlisting>
</listitem>
<listitem><para>The <filename>/etc/init.d/network</filename> script should
configure the network interface <literal>eth1</literal> down on nodes <literal>
N1</literal>, <literal>N2</literal>, and <literal>N3</literal>. Add the following
line to the file:</para>
<programlisting>ifconfig eth1 down</programlisting>
</listitem>
</orderedlist>
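<para>With this configuration, a failure of <literal>eth0</literal> moves
192.26.50.1 to <literal>eth1</literal> on the same node, without relocating
resource group <literal>RG1</literal> to another node. One way to check which
interface currently carries the address is to list all configured interfaces;
this is a minimal sketch using a standard system command rather than a
FailSafe-specific tool:</para>
<programlisting># List every configured interface (including aliases) and
# look for 192.26.50.1 in the output.
/sbin/ifconfig -a</programlisting>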
</sect1>
</chapter>