
<!-- Fragment document type declaration subset:
ArborText, Inc., 1988-1997, v.4001
<!DOCTYPE SET PUBLIC "-//Davenport//DTD DocBook V3.0//EN" [
<!ENTITY ha.cluster.messages SYSTEM "figures/ha.cluster.messages.eps" NDATA eps>
<!ENTITY machine.not.in.ha.cluster SYSTEM "figures/machine.not.in.ha.cluster.eps" NDATA eps>
<!ENTITY ha.cluster.config.info.flow SYSTEM "figures/ha.cluster.config.info.flow.eps" NDATA eps>
<!ENTITY software.layers SYSTEM "figures/software.layers.eps" NDATA eps>
<!ENTITY n1n4 SYSTEM "figures/n1n4.eps" NDATA eps>
<!ENTITY example.sgml SYSTEM "example.sgml">
<!ENTITY appupgrade.sgml SYSTEM "appupgrade.sgml">
<!ENTITY a1-1.failsafe.components SYSTEM "figures/a1-1.failsafe.components.eps" NDATA eps>
<!ENTITY a1-6.disk.storage.takeover SYSTEM "figures/a1-6.disk.storage.takeover.eps" NDATA eps>
<!ENTITY a2-3.non.shared.disk.config SYSTEM "figures/a2-3.non.shared.disk.config.eps" NDATA eps>
<!ENTITY a2-4.shared.disk.config SYSTEM "figures/a2-4.shared.disk.config.eps" NDATA eps>
<!ENTITY a2-5.shred.disk.2active.cnfig SYSTEM "figures/a2-5.shred.disk.2active.cnfig.eps" NDATA eps>
<!ENTITY a2-1.examp.interface.config SYSTEM "figures/a2-1.examp.interface.config.eps" NDATA eps>
<!ENTITY intro.sgml SYSTEM "intro.sgml">
<!ENTITY overview.sgml SYSTEM "overview.sgml">
<!ENTITY planning.sgml SYSTEM "planning.sgml">
<!ENTITY nodeconfig.sgml SYSTEM "nodeconfig.sgml">
<!ENTITY admintools.sgml SYSTEM "admintools.sgml">
<!ENTITY operate.sgml SYSTEM "operate.sgml">
<!ENTITY diag.sgml SYSTEM "diag.sgml">
<!ENTITY recover.sgml SYSTEM "recover.sgml">
<!ENTITY clustproc.sgml SYSTEM "clustproc.sgml">
<!ENTITY appfiles.sgml SYSTEM "appfiles.sgml">
<!ENTITY gloss.sgml SYSTEM "gloss.sgml">
<!ENTITY preface.sgml SYSTEM "preface.sgml">
<!ENTITY index.sgml SYSTEM "index.sgml">
]>
-->
<chapter id="LE94219-PARENT">
<title id="LE94219-TITLE">Linux FailSafe Cluster Configuration</title>
<para>This chapter describes administrative tasks you perform to configure
the components of a Linux FailSafe system. It describes how to perform tasks
using the FailSafe Cluster Manager Graphical User Interface (GUI) and the
FailSafe Cluster Manager Command Line Interface (CLI). The major sections
in this chapter are as follows:</para>
<itemizedlist>
<listitem><para><xref linkend="LE59477-PARENT"></para>
</listitem>
<listitem><para><xref linkend="LE28499-PARENT"></para>
</listitem>
<listitem><para><xref linkend="tv"></para>
</listitem>
<listitem><para><xref linkend="Z957104627glen"></para>
</listitem>
<listitem><para><xref linkend="LE53159-PARENT"></para>
</listitem>
<listitem><para><xref linkend="fs-setlogparams"></para>
</listitem>
<listitem><para><xref linkend="LE40511-PARENT"></para>
</listitem>
<listitem><para><xref linkend="LE40790-PARENT"></para>
</listitem>
</itemizedlist>
<sect1 id="LE59477-PARENT">
<title id="LE59477-TITLE">Setting Configuration Defaults</title>
<para><indexterm id="ITconfig-0"><primary>defaults</primary></indexterm> <indexterm
id="ITconfig-1"><primary>system configuration defaults</primary></indexterm>Before
you configure the components of a FailSafe system, you can set default values
for some of the components that Linux FailSafe will use when defining the
components.</para>
<variablelist>
<varlistentry><term>Default cluster</term>
<listitem>
<para>Certain cluster manager commands require you to specify a cluster. You
can specify a default cluster to use as the default if you do not specify
a cluster explicitly.</para>
</listitem>
</varlistentry>
<varlistentry><term>Default node</term>
<listitem>
<para>Certain cluster manager commands require you to specify a node. With
the Cluster Manager CLI, you can specify a default node to use as the default
if you do not specify a node explicitly.</para>
</listitem>
</varlistentry>
<varlistentry><term>Default resource type</term>
<listitem>
<para>Certain cluster manager commands require you to specify a resource type.
With the Cluster Manager CLI, you can specify a default resource type to use
as the default if you do not specify a resource type explicitly.</para>
</listitem>
</varlistentry>
</variablelist>
<sect2>
<title>Setting Default Cluster with the Cluster Manager GUI</title>
<para>The GUI prompts you to enter the name of the default cluster when you
have not specified one. Alternately, you can set the default cluster by clicking
the &ldquo;Select Cluster...&rdquo; button at the bottom of the FailSafe Manager
window.</para>
<para>When using the GUI, there is no need to set a default node or resource
type.</para>
</sect2>
<sect2>
<title>Setting and Viewing Configuration Defaults with the Cluster Manager
CLI</title>
<para>When you are using the Cluster Manager CLI, you can use the following
commands to specify default values. The default values are in effect only
for the current session of the Cluster Manager CLI.</para>
<para>Use the following command to specify a default cluster:</para>
<screen>cmgr> <userinput>set cluster </userinput><replaceable>A</replaceable></screen>
<para>Use the following command to specify a default node:</para>
<screen>cmgr> <userinput>set node </userinput><replaceable>A</replaceable></screen>
<para>Use the following command to specify a default resource type:</para>
<screen>cmgr> <userinput>set resource_type </userinput><replaceable>A</replaceable></screen>
<para>You can view the current default configuration values of the Cluster
Manager CLI with the following command:</para>
<screen>cmgr> <userinput>show set defaults</userinput></screen>
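<para>For example, the following hypothetical session sets a default cluster
named test-cluster and a default node named cm1, and then displays the current
defaults (the cluster and node names are for illustration only):</para>
<screen>cmgr> <userinput>set cluster test-cluster</userinput>
cmgr> <userinput>set node cm1</userinput>
cmgr> <userinput>show set defaults</userinput></screen>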
</sect2>
</sect1>
<sect1 id="LE28499-PARENT">
<title id="LE28499-TITLE">Name Restrictions</title>
<para><indexterm id="ITconfig-2"><primary>name restrictions</primary></indexterm>When
you specify the names of the various components of a Linux FailSafe system,
the name cannot begin with an underscore (_) or include any whitespace characters.
In addition, the name of any Linux FailSafe component cannot contain a space,
an unprintable character, or a *, ?, \, or #.</para>
<para>The following is the list of permitted characters for the name of a
Linux FailSafe component:</para>
<itemizedlist>
<listitem><para>alphanumeric characters</para>
</listitem>
<listitem><para>/</para>
</listitem>
<listitem><para>.</para>
</listitem>
<listitem><para>- (hyphen)</para>
</listitem>
<listitem><para>_ (underscore)</para>
</listitem>
<listitem><para>:</para>
</listitem>
<listitem><para>&ldquo;</para>
</listitem>
<listitem><para>=</para>
</listitem>
<listitem><para>@</para>
</listitem>
<listitem><para>'</para>
</listitem>
</itemizedlist>
<para>These character restrictions hold true whether you are configuring your
system with the Cluster Manager GUI or the Cluster Manager CLI.</para>
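<para>For example, <filename>nfs-cluster</filename> and <filename>node_1</filename>
are valid component names, whereas <filename>_node1</filename> (which begins with
an underscore) and any name containing a space or a # character are not.</para>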
</sect1>
<sect1 id="tv">
<title>Configuring Timeout Values and Monitoring Intervals</title>
<para><indexterm><primary>timeout values</primary></indexterm><indexterm>
<primary>monitoring interval</primary></indexterm>When you configure the components
of a Linux FailSafe system, you configure various timeout values and monitoring
intervals that determine the application downtime of a highly available system
when there is a failure. To determine reasonable values to set for your system,
consider the following equation:</para>
<literallayout><replaceable>application downtime</replaceable> = <replaceable>
failure detection</replaceable> + <replaceable>time to handle failure</replaceable> + <replaceable>
failure recovery</replaceable></literallayout>
<para>Failure detection depends on the type of failure that is detected:</para>
<itemizedlist>
<listitem><para>When a node goes down, there will be a node failure detection
after the node timeout; this is an HA parameter that you can modify. All failures
that translate into a node failure (such as heartbeat failure and OS failure)
fall into this failure category. Node timeout has a default value of 15 seconds.
For information on modifying the node timeout value, see <xref linkend="fs-setfsparameters">.
</para>
</listitem>
<listitem><para>When there is a resource failure, there is a monitor failure
of a resource. The amount of time this will take is determined by the following:
</para>
<itemizedlist>
<listitem><para>The monitoring interval for the resource type</para>
</listitem>
<listitem><para>The monitor timeout for the resource type</para>
</listitem>
<listitem><para>The number of restarts defined for the resource type, if the
restart mode is configured on</para>
</listitem>
</itemizedlist>
<para>For information on setting values for a resource type, see <xref linkend="fs-defineresourcetype">.
</para>
</listitem>
</itemizedlist>
<para>Reducing these values results in a shorter failover time, but it can also
significantly increase the Linux FailSafe overhead on system performance and can
lead to false failovers.</para>
<para>The time to handle a failure is something that the user cannot
control. In general, this should take a few seconds.</para>
<para>The failure recovery time is determined by the total time it takes for
Linux FailSafe to perform the following:</para>
<itemizedlist>
<listitem><para>Execute the failover policy script (approximately five seconds).
</para>
</listitem>
<listitem><para>Run the stop action script for all resources in the resource
group. This is not required for node failure; the failing node will be reset.
</para>
</listitem>
<listitem><para>Run the start action script for all resources in the resource
group.</para>
</listitem>
</itemizedlist>
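<para>As a rough worked example, consider a node failure with the default node
timeout of 15 seconds, a few seconds (say 3) to handle the failure, approximately
5 seconds to execute the failover policy script, and a hypothetical 10 seconds
to run the start action scripts for the resource group (the stop action scripts
are not required because the failing node is reset). The handling and script
times here are illustrative assumptions only; actual times depend on your
resources:</para>
<literallayout><replaceable>application downtime</replaceable> = 15 + 3 + (5 + 10) = approximately 33 seconds</literallayout>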
</sect1>
<sect1 id="Z957104627glen">
<title>Cluster Configuration</title>
<para>To set up a Linux FailSafe system, you configure the cluster that will
support the highly available services. This requires the following steps:
</para>
<itemizedlist>
<listitem><para>Defining the local host</para>
</listitem>
<listitem><para>Defining any additional nodes that are eligible to be included
in the cluster</para>
</listitem>
<listitem><para>Defining the cluster</para>
</listitem>
</itemizedlist>
<para>The following subsections describe these tasks.</para>
<sect2 id="fs-definemachine" role="fs-definemachine">
<title id="LE78024-TITLE">Defining Cluster Nodes</title>
<para><indexterm id="ITconfig-3"><primary>cluster node</primary><secondary><emphasis>
See</emphasis>  node</secondary></indexterm> <indexterm id="ITconfig-4"><primary>
node</primary><secondary>definition</secondary></indexterm> <indexterm id="ITconfig-5">
<primary>node</primary><secondary>creation</secondary></indexterm>A  <glossterm>
cluster node</glossterm> is a single Linux image. Usually, a cluster node
is an individual computer. The term <glossterm>node</glossterm> is also used
in this guide for brevity.</para>
<para>The <glossterm>pool</glossterm> is the entire set of nodes available
for clustering.</para>
<para>The first node you define must be the local host, which is the host
you have logged into to perform cluster administration.</para>
<para>When you are defining multiple nodes, it is advisable to wait for a
minute or so between each node definition. When nodes are added to the configuration
database, the contents of the configuration database are also copied to the
node being added. The node definition operation is completed when the new
node configuration is added to the database, at which point the database configuration
is synchronized. If you define two nodes one after another, the second operation
might fail because the first database synchronization is not complete.</para>
<para>To add a logical node definition to the pool of nodes that are eligible
to be included in a cluster, you must provide the following information about
the node:</para>
<itemizedlist>
<listitem><para>Logical name: This name can contain letters and numbers but
not spaces or pound signs. The name must be composed of no more than 255 characters.
Any legal hostname is also a legal node name. For example, for a node whose
hostname is &ldquo;venus.eng.company.com&rdquo; you can use a node name of &ldquo;venus&rdquo;, &ldquo;node1&rdquo;,
or whatever is most convenient.</para>
</listitem>
<listitem><para><indexterm id="ITconfig-7"><primary>hostname</primary></indexterm>
Hostname: The fully qualified name of the host, such as &ldquo;server1.company.com&rdquo;.
Hostnames cannot begin with an underscore, include any whitespace, or be longer
than 255 characters. This hostname should be the same as the output of the
hostname command on the node you are defining. The IP address associated with
this hostname should not be the same as any IP address you define as highly
available when you define a Linux FailSafe IP address resource. Linux FailSafe
will not accept an IP address (such as &ldquo;192.0.2.22&rdquo;) for this
input.</para>
</listitem>
<listitem><para>Node ID: This number must be unique for each node in the pool
and be in the range 1 through 32767.</para>
</listitem>
<listitem><para><indexterm id="ITconfig-9"><primary>system controller</primary>
<secondary>defining for node</secondary></indexterm>System controller information.
If the node has a system controller and you want Linux FailSafe to use the
controller to reset the node, you must provide the following information about
the system controller:</para>
<itemizedlist>
<listitem><para>Type of system controller: <filename>chalL</filename>, <filename>
msc</filename>, <filename>mmsc</filename></para>
</listitem>
<listitem><para>System controller port password (optional)</para>
</listitem>
<listitem><para>Administrative status, which you can set to determine whether
Linux FailSafe can use the port: <filename>enabled</filename>, <filename>
disabled</filename></para>
</listitem>
<listitem><para>Logical node name of system controller owner (i.e. the system
that is physically attached to the system controller)</para>
</listitem>
<listitem><para>Device name of port on owner node that is attached to the
system controller</para>
</listitem>
<listitem><para>Type of owner device: <filename>tty</filename></para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para><indexterm id="ITconfig-10"><primary>control network</primary>
<secondary>defining for node</secondary></indexterm> <indexterm id="ITconfig-11">
<primary>IP address</primary><secondary>control network</secondary></indexterm><indexterm
id="ITconfig-12"><primary>hostname</primary><secondary>control network</secondary>
</indexterm>A list of control networks, which are the networks used for heartbeats,
reset messages, and other Linux FailSafe messages. For each network, provide
the following:</para>
<itemizedlist>
<listitem><para>Hostname or IP address. This address must not be the same
as any IP address you define as highly available when you define a Linux FailSafe
IP address resource, and it must be resolved in the <filename>/etc/hosts</filename>
file.</para>
</listitem>
<listitem><para><indexterm id="ITconfig-13"><primary>heartbeat network</primary>
</indexterm>Flags (<filename>hb</filename> for heartbeats, <filename>ctrl
</filename> for control messages, <filename>priority</filename>). At least
two control networks must use heartbeats, and at least one must use control
messages.</para>
<para>Linux FailSafe requires multiple heartbeat networks. Usually a node
sends heartbeat messages to another node on only one network at a time. However,
there are times when a node might send heartbeat messages to another node
on multiple networks simultaneously. This happens when the sender node does
not know which networks are up and which others are down. This is a transient
state and eventually the heartbeat network converges towards the highest priority
network that is up. </para>
<para>Note that at any time different pairs of nodes might be using different
networks for heartbeats.</para>
<para>Although all nodes in the Linux FailSafe cluster should have two control
networks, it is possible to define a node to add to the pool with one control
network.</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<sect3>
<title>Defining a Node with the Cluster Manager GUI</title>
<para>To define a node with the Cluster Manager GUI, perform the following
steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Cluster&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Define
a Node&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs on this screen. Click on &ldquo;Next&rdquo;
at the bottom of the screen and continue inputting information on the second
screen.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3 id="LE15937-PARENT">
<title id="LE15937-TITLE">Defining a Node with the Cluster Manager CLI</title>
<para>Use the following command to add a logical node definition:</para>
<screen>cmgr> <userinput>define node </userinput><replaceable>A</replaceable></screen>
<para>Entering this command specifies the name of the node you are defining
and puts you in a mode that enables you to define the parameters of the node.
These parameters correspond to the items defined in <xref linkend="fs-definemachine">.
The following prompts appear:</para>
<screen>Enter commands, when finished enter either "done" or "cancel"</screen>
<para><replaceable>A</replaceable>?</para>
<para>When the node name prompt appears, you enter the node parameters
in the following format:</para>
<screen>set hostname to <replaceable>B</replaceable>
set nodeid to <replaceable>C</replaceable>
set sysctrl_type to <replaceable>D</replaceable>
set sysctrl_password to <replaceable>E</replaceable>
set sysctrl_status to <replaceable>F</replaceable>
set sysctrl_owner to <replaceable>G</replaceable>
set sysctrl_device to <replaceable>H</replaceable>
set sysctrl_owner_type to <replaceable>I</replaceable>
add nic <replaceable>J</replaceable></screen>
<para>You use the <command>add nic</command>&ensp;<replaceable>J</replaceable>
command to define the network interfaces. You use this command once for each
network interface you define. When you enter this command, the following prompt appears:
</para>
<screen>Enter network interface commands, when finished enter "done" or "cancel"
NIC - <replaceable>J</replaceable>?</screen>
<para>When this prompt appears, you use the following commands to specify
the flags for the control network:</para>
<screen>set heartbeat to <replaceable>K</replaceable>
set ctrl_msgs to <replaceable>L</replaceable>
set priority to <replaceable>M</replaceable></screen>
<para>After you have defined a network interface, you can use the following
command from the node name prompt to remove it:</para>
<screen>cmgr> <userinput>remove nic</userinput>&ensp;<replaceable>N</replaceable></screen>
<para>When you have finished defining a node, enter <command>done</command>.
</para>
<para>The following example defines a node called cm1a, with one controller:
</para>
<screen>cmgr> <userinput>define node cm1a</userinput>
Enter commands, when finished enter either "done" or "cancel"</screen>
<screen>cm1a? <userinput>set hostname to cm1a</userinput>
cm1a? <userinput>set nodeid to 1</userinput>
cm1a? <userinput>set sysctrl_type to msc</userinput>
cm1a? <userinput>set sysctrl_password to </userinput><replaceable>[ ]</replaceable>
cm1a? <userinput>set sysctrl_status to enabled</userinput>
cm1a? <userinput>set sysctrl_owner to cm2</userinput>
cm1a? <userinput>set sysctrl_device to /dev/ttyd2</userinput>
cm1a? <userinput>set sysctrl_owner_type to tty</userinput>
cm1a? <userinput>add nic cm1</userinput>
Enter network interface commands, when finished enter &ldquo;done&rdquo; 
or &ldquo;cancel&rdquo;

NIC - cm1 > <userinput>set heartbeat to true</userinput>
NIC - cm1 > <userinput>set ctrl_msgs to true</userinput>
NIC - cm1 > <userinput>set priority to 0</userinput>
NIC - cm1 > <userinput>done</userinput>
cm1a? <userinput>done</userinput>
cmgr></screen>
<para>If you have invoked the Cluster Manager CLI with the <command>-p</command>
option, or you entered the <command>set prompting on</command> command, the
display appears as in the following example:</para>
<screen>cmgr> <userinput>define node cm1a</userinput>
Enter commands, when finished enter either "done" or "cancel"</screen>
<screen>Nodename [optional]? <userinput>cm1a</userinput></screen>
<screen>Node ID? <userinput>1</userinput>
Do you wish to define system controller info[y/n]:y
Sysctrl Type &lt;null>?<userinput>&ensp;(null)</userinput>
Sysctrl Password[optional]? ( )
Sysctrl Status &lt;enabled|disabled>? <userinput>enabled</userinput>
Sysctrl Owner? <userinput>cm2</userinput>
Sysctrl Device? <userinput>/dev/ttyd2</userinput>
Sysctrl Owner Type &lt;tty>? (tty) 
Number of Network Interfaces ? (1)
NIC 1 - IP Address? <userinput>cm1</userinput>
NIC 1 - Heartbeat HB (use network for heartbeats) &lt;true|false>? <userinput>
true</userinput>
NIC 1 - Priority &lt;1,2,...>? <userinput>0</userinput>
NIC 2 - IP Address? <userinput>cm2</userinput>
NIC 2 - Heartbeat HB (use network for heartbeats) &lt;true|false>? <userinput>
true</userinput>
NIC 2 - (use network for control messages) &lt;true|false>? <userinput>false
</userinput>
NIC 2 - Priority &lt;1,2,...>? <userinput>1</userinput></screen>
</sect3>
</sect2>
<sect2 id="fs-modifydelmachine" role="fs-modifydelmachine">
<title>Modifying and Deleting Cluster Nodes</title>
<para><indexterm id="ITconfig-14"><primary>node</primary><secondary>deleting
</secondary></indexterm> <indexterm id="ITconfig-15"><primary>node</primary>
<secondary>modifying</secondary></indexterm>After you have defined a cluster
node, you can modify or delete the node with the Cluster Manager GUI or
the Cluster Manager CLI. You must remove a node from a cluster before you
can delete the node.</para>
<sect3>
<title>Modifying a Node with the Cluster Manager GUI</title>
<para>To modify a node with the Cluster Manager GUI, perform the following
steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Cluster&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Modify
a Node Definition&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Modify the node parameters.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Modifying a Node with the Cluster Manager CLI</title>
<para>You can use the following command to modify an existing node. After
entering this command, you can execute any of the commands you use to define
a node.</para>
<screen>cmgr> <userinput>modify node </userinput><replaceable>A</replaceable></screen>
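<para>For example, the following hypothetical session disables the system
controller port on the node cm1a defined earlier (the node name and setting
are for illustration only):</para>
<screen>cmgr> <userinput>modify node cm1a</userinput>
Enter commands, when finished enter either "done" or "cancel"

cm1a? <userinput>set sysctrl_status to disabled</userinput>
cm1a? <userinput>done</userinput>
cmgr></screen>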
</sect3>
<sect3>
<title>Deleting a Node with the Cluster Manager GUI</title>
<para>To delete a node with the Cluster Manager GUI, perform the following
steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Cluster&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Delete
a Node&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the name of the node to delete.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Deleting a Node with the Cluster Manager CLI</title>
<para>After defining a node, you can delete it with the following command:
</para>
<screen>cmgr> <userinput>delete node </userinput><replaceable>A</replaceable></screen>
<para>You can delete a node only if the node is not currently part of a cluster.
This means that before you can delete a node, you must first modify the cluster
that contains it so that the cluster no longer includes that node.</para>
</sect3>
</sect2>
<sect2>
<title>Displaying Cluster Nodes</title>
<para><indexterm id="ITconfig-16"><primary>node</primary><secondary>displaying
</secondary></indexterm>After you define cluster nodes, you can perform the
following display tasks:</para>
<itemizedlist>
<listitem><para>display the attributes of a node</para>
</listitem>
<listitem><para>display the nodes that are members of a specific cluster</para>
</listitem>
<listitem><para>display all the nodes that have been defined</para>
</listitem>
</itemizedlist>
<para>You can perform any of these tasks with the FailSafe Cluster Manager
GUI or the Linux FailSafe Cluster Manager CLI.</para>
<sect3>
<title>Displaying Nodes with the Cluster Manager GUI</title>
<para>The Cluster Manager GUI provides a convenient graphic display of the
defined nodes of a cluster and the attributes of those nodes through the 
FailSafe Cluster View. You can launch the FailSafe Cluster View directly,
or you can bring it up at any time by clicking on &ldquo;FailSafe Cluster
View&rdquo; at the bottom of the &ldquo;FailSafe Manager&rdquo; display.</para>
<para>From the View menu of the FailSafe Cluster View, you can select &ldquo;Nodes
in Pool&rdquo; to view all nodes defined in the Linux FailSafe pool. You can
also select &ldquo;Nodes In Cluster&rdquo; to view all nodes that belong to
the default cluster. Click any node's name or icon to view detailed status
and configuration information about the node.</para>
</sect3>
<sect3>
<title>Displaying Nodes with the Cluster Manager CLI</title>
<para>After you have defined a node, you can display the node's parameters
with the following command:</para>
<screen>cmgr> <userinput>show node </userinput><replaceable>A </replaceable></screen>
<para>A <command>show node</command> command on node cm1 would yield the
following display:</para>
<screen>cmgr> <userinput>show node cm1</userinput>
Logical Node Name: cm1
Hostname: cm1
Nodeid: 1
Reset type: reset
System Controller: msc
System Controller status: enabled
System Controller owner: cm2
System Controller owner device: /dev/ttyd2
System Controller owner type: tty
ControlNet Ipaddr: cm1
ControlNet HB: true
ControlNet Control: true
ControlNet Priority: 0</screen>
<para>You can see a list of all of the nodes that have been defined with the
following command:</para>
<screen>cmgr> <userinput>show nodes in pool</userinput></screen>
<para>You can see a list of all of the nodes that have been defined for a specified
cluster with the following command:</para>
<screen>cmgr> <userinput>show nodes </userinput>[<userinput>in cluster </userinput><replaceable>
A</replaceable>]</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster when you use this command and it will display the nodes defined
in the default cluster.</para>
</sect3>
</sect2>
<sect2 id="fs-setfsparameters" role="fs-setfsparameters">
<title id="LE39603-TITLE">Linux FailSafe HA Parameters</title>
<para>There are several parameters that determine the behavior of the nodes
in a cluster of a Linux FailSafe system.</para>
<para>The Linux FailSafe parameters are as follows:</para>
<itemizedlist>
<listitem><para>The tie-breaker node, which is the logical name of a machine
used to compute node membership in situations where exactly 50% of the nodes in
a cluster can communicate with each other. If you do not specify a tie-breaker node,
the node with the lowest node ID number is used.</para>
<para>The tie-breaker node is a cluster-wide parameter.</para>
<para>It is recommended that you configure a tie-breaker node even if there
is an odd number of nodes in the cluster, since one node may be deactivated,
leaving an even number of nodes to determine membership.</para>
<para>In a heterogeneous cluster, where the nodes are of different sizes and
capabilities, the largest node in the cluster with the most important application
or the maximum number of resource groups should be configured as the tie-breaker
node.</para>
</listitem>
<listitem><para>Node timeout, which is the timeout period, in milliseconds.
If no heartbeat is received from a node in this period of time, the node is
considered to be dead and is not considered part of the cluster membership.
</para>
<para>The node timeout must be at least 5 seconds. In addition, the node timeout
must be at least 10 times the heartbeat interval for proper Linux FailSafe
operation; otherwise, false failovers may be triggered.</para>
<para>Node timeout is a cluster-wide parameter.</para>
</listitem>
<listitem><para>The interval, in milliseconds, between heartbeat messages.
This interval must be greater than 500 milliseconds and it must not be greater
than one-tenth the value of the node timeout period. This interval is set
to one second by default. The heartbeat interval is a cluster-wide parameter.
</para>
<para>The higher the number of heartbeats (smaller heartbeat interval), the
greater the potential for slowing down the network. Conversely, the fewer
the number of heartbeats (larger heartbeat interval), the greater the potential
for reducing availability of resources.</para>
</listitem>
<listitem><para>The node wait time<indexterm><primary>node</primary><secondary>
wait time</secondary></indexterm>, in milliseconds, which is the time a node
waits for other nodes to join the cluster before declaring a new cluster membership.
If the value is not set for the cluster, Linux FailSafe assumes the value
to be the node timeout times the number of nodes.</para>
</listitem>
<listitem><para>The powerfail mode, which indicates whether a special power
failure algorithm should be run when no response is received from a system
controller after a reset request. This can be set to <literal>ON</literal>
or <literal>OFF</literal>. Powerfail is a node-specific parameter, and should
be defined for the machine that performs the reset operation.</para>
</listitem>
</itemizedlist>
<sect3>
<title>Resetting Linux FailSafe Parameters with the Cluster Manager GUI </title>
<para>To set Linux FailSafe parameters with the Cluster Manager GUI, perform
the following steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager from a menu or the command line.
</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Cluster&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Set Linux
FailSafe HA Parameters&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Resetting Linux FailSafe Parameters with the Cluster Manager CLI</title>
<para>You can modify the Linux FailSafe parameters with the following command:
</para>
<screen>cmgr> <userinput>modify ha_parameters </userinput>[<userinput>on node 
</userinput><replaceable>A</replaceable>] [<userinput>in cluster</userinput><replaceable>
&ensp;B</replaceable>]</screen>
<para>If you have specified a default node or a default cluster, you do not
have to specify a node or a cluster in this command. Linux FailSafe will use
the default.</para>
<screen>Enter commands, when finished enter either "done" or "cancel"</screen>
<para><replaceable>A</replaceable>?</para>
<para>When the node name prompt appears, you enter the Linux FailSafe
parameters you wish to modify in the following format:</para>
<screen>set node_timeout to <replaceable>A</replaceable>
set heartbeat to <replaceable>B</replaceable>
set run_pwrfail to <replaceable>C</replaceable>
set tie_breaker to <replaceable>D</replaceable></screen>
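<para>For example, the following hypothetical session sets the node timeout to
30 seconds and designates cm2 as the tie-breaker node (the node and cluster
names and values are for illustration only; note that node_timeout is specified
in milliseconds):</para>
<screen>cmgr> <userinput>modify ha_parameters on node cm1 in cluster test-cluster</userinput>
Enter commands, when finished enter either "done" or "cancel"

cm1? <userinput>set node_timeout to 30000</userinput>
cm1? <userinput>set tie_breaker to cm2</userinput>
cm1? <userinput>done</userinput>
cmgr></screen>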
</sect3>
</sect2>
<sect2 id="fs-definecluster" role="fs-definecluster">
<title id="LE29161-TITLE">Defining a Cluster</title>
<para>A <glossterm>cluster</glossterm> is a collection of one or more nodes
coupled with each other by networks or other similar interconnects. In Linux
FailSafe, a cluster is identified by a simple name. A given node may be a
member of only one cluster.</para>
<para>To define a cluster, you must provide the following information:</para>
<itemizedlist>
<listitem><para>The logical name of the cluster, with a maximum length of
255 characters.</para>
</listitem>
<listitem><para>The mode of operation: <literal>normal</literal> (the default)
or <literal>experimental</literal>. Experimental mode allows you to configure
a Linux FailSafe cluster in which resource groups do not fail over when a
node failure is detected. This mode can be useful when you are tuning node
timeouts or heartbeat values. When a cluster is configured in normal mode,
Linux FailSafe fails over resource groups when it detects failure in a node
or resource group.</para>
</listitem>
<listitem><para>(Optional) The email address to use to notify the system administrator
when problems occur in the cluster (for example, <command>root@system</command>)
</para>
</listitem>
<listitem><para>(Optional) The email program to use to notify the system administrator
when problems occur in the cluster (for example, <command>/usr/bin/mail</command>).
</para>
<para>Specifying the email program is optional and you can specify only the
notification address in order to receive notifications by mail. If an address
is not specified, notification will not be sent.</para>
</listitem>
</itemizedlist>
<sect3 id="fs-addmachtocluster" role="fs-addmachtocluster">
<title>Adding Nodes to a Cluster</title>
<para>After you have added nodes to the pool and defined a cluster, you must
provide the names of the nodes to include in the cluster.</para>
</sect3>
<sect3>
<title>Defining a Cluster with the Cluster Manager GUI</title>
<para>To define a cluster with the Cluster Manager GUI, perform the following
steps:</para>
<orderedlist>
<listitem><para>Launch the Linux FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on &ldquo;Guided Configuration&rdquo;.
</para>
</listitem>
<listitem><para>On the right side of the display click on &ldquo;Set Up a
New Cluster&rdquo; to launch the task link.</para>
</listitem>
<listitem><para>In the resulting window, click each task link in turn, as
it becomes available. Enter the selected inputs for each task.</para>
</listitem>
<listitem><para>When finished, click &ldquo;OK&rdquo; to close the taskset
window.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Defining a Cluster with the Cluster Manager CLI</title>
<para>When you define a cluster with the CLI, you define the cluster and add
nodes to it with the same command.</para>
<para>Use the following cluster manager CLI command to define a cluster:</para>
<screen>cmgr> <userinput>define cluster </userinput><replaceable>A</replaceable></screen>
<para>Entering this command specifies the name of the cluster you are defining
and puts you in a mode that allows you to add nodes to the cluster. The following
prompt appears:</para>
<screen>cluster A?</screen>
<para>When this prompt appears during cluster creation, you can specify nodes
to include in the cluster and you can specify an email address to direct messages
that originate in this cluster.</para>
<para>You specify nodes to include in the cluster with the following command:
</para>
<screen>cluster A? <userinput>add node </userinput><emphasis>C</emphasis>
cluster A? </screen>
<para>You can add as many nodes as you want to include in the cluster.</para>
<para>You specify an email program to use to direct messages with the following
command:</para>
<screen>cluster A? <userinput>set notify_cmd to </userinput><emphasis>B</emphasis>
cluster A? </screen>
<para>You specify an email address to direct messages with the following command:
</para>
<screen>cluster A? <userinput>set notify_addr to </userinput><emphasis>B</emphasis>
cluster A? </screen>
<para>You specify a mode for the cluster (normal or experimental) with the
following command:</para>
<screen>cluster A? <userinput>set ha_mode to </userinput><emphasis>D</emphasis>
cluster A? </screen>
<para>When you are finished defining the cluster, enter <filename>done</filename>
to return to the <filename>cmgr</filename> prompt.</para>
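<para>The following hypothetical session defines a cluster named test-cluster
containing the nodes cm1 and cm2, with email notification sent to root (the
cluster and node names are for illustration only):</para>
<screen>cmgr> <userinput>define cluster test-cluster</userinput>
Enter commands, when finished enter either "done" or "cancel"

cluster test-cluster? <userinput>add node cm1</userinput>
cluster test-cluster? <userinput>add node cm2</userinput>
cluster test-cluster? <userinput>set notify_cmd to /usr/bin/mail</userinput>
cluster test-cluster? <userinput>set notify_addr to root@system</userinput>
cluster test-cluster? <userinput>set ha_mode to normal</userinput>
cluster test-cluster? <userinput>done</userinput>
cmgr></screen>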
</sect3>
</sect2>
<sect2 id="fs-modifydelcluster" role="fs-modifydelcluster">
<title>Modifying and Deleting Clusters</title>
<para>After you have defined a cluster, you can modify the attributes of the
cluster or you can delete the cluster. You cannot delete a cluster that contains
nodes; you must move those nodes out of the cluster first.</para>
<sect3>
<title>Modifying and Deleting a Cluster with the Cluster Manager GUI</title>
<para>To modify a cluster with the Cluster Manager GUI, perform the following
procedure:</para>
<orderedlist>
<listitem><para>Launch the Linux FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Cluster&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Modify
a Cluster Definition&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
<para>To delete a cluster with the Cluster Manager GUI, perform the following
procedure:</para>
<orderedlist>
<listitem><para>Launch the Linux FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Cluster&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Delete
a Cluster&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Modifying and Deleting a Cluster with the Cluster Manager CLI</title>
<para>To modify an existing cluster, enter the following command:</para>
<screen>cmgr> <userinput>modify cluster </userinput><replaceable>A</replaceable></screen>
<para>Entering this command specifies the name of the cluster you are modifying
and puts you in a mode that allows you to modify the cluster. The following
prompt appears:</para>
<screen>cluster <replaceable>A</replaceable>?</screen>
<para>When this prompt appears, you can modify the cluster definition with
the following commands:</para>
<screen>cluster <replaceable>A</replaceable>? <userinput>set notify_addr to 
</userinput><replaceable>B</replaceable>
cluster <replaceable>A</replaceable>? <userinput>set notify_cmd to </userinput><replaceable>
B</replaceable>
cluster <replaceable>A</replaceable>? <userinput>add node </userinput><emphasis>
C</emphasis>
cluster <replaceable>A</replaceable>? <userinput>remove node </userinput><replaceable>
D</replaceable>
cluster <replaceable>A</replaceable>? </screen>
<para>When you are finished modifying the cluster, enter <filename>done</filename>
to return to the <filename>cmgr</filename> prompt.</para>
<para>You can delete a defined cluster with the following command:</para>
<screen>cmgr> <userinput>delete cluster </userinput><replaceable>A</replaceable></screen>
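<para>For example, the following hypothetical session removes the nodes from
the cluster test-cluster and then deletes the now-empty cluster (the cluster
and node names are for illustration only):</para>
<screen>cmgr> <userinput>modify cluster test-cluster</userinput>
Enter commands, when finished enter either "done" or "cancel"

cluster test-cluster? <userinput>remove node cm1</userinput>
cluster test-cluster? <userinput>remove node cm2</userinput>
cluster test-cluster? <userinput>done</userinput>
cmgr> <userinput>delete cluster test-cluster</userinput></screen>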
</sect3>
</sect2>
<sect2>
<title>Displaying Clusters</title>
<para>You can display defined clusters with the Cluster Manager GUI or the
Cluster Manager CLI.</para>
<sect3>
<title>Displaying a Cluster with the Cluster Manager GUI</title>
<para>The Cluster Manager GUI provides a convenient display of a cluster and
its components through the FailSafe Cluster View. You can launch the FailSafe
Cluster View directly, or you can bring it up at any time by clicking on the &ldquo;FailSafe
Cluster View&rdquo; prompt at the bottom of the &ldquo;FailSafe Manager&rdquo;
display.</para>
<para>From the View menu of the FailSafe Cluster View, you can choose elements
within the cluster to examine. To view details of the cluster, click on the
cluster name or icon. Status and configuration information will appear in
a new window. To view this information within the FailSafe Cluster View window,
select Options. When you then click on the Show Details option, the status
details will appear in the right side of the window.</para>
</sect3>
<sect3>
<title>Displaying a Cluster with the Cluster Manager CLI</title>
<para>After you have defined a cluster, you can display the nodes in that
cluster with the following command:</para>
<screen>cmgr> <userinput>show cluster</userinput> <replaceable>A</replaceable></screen>
<para>You can see a list of the clusters that have been defined with the following
command:</para>
<screen>cmgr> <userinput>show clusters</userinput></screen>
</sect3>
</sect2>
</sect1>
<sect1 id="LE53159-PARENT">
<title id="LE53159-TITLE">Resource Configuration</title>
<para><indexterm id="ITconfig-17"><primary>resource</primary><secondary>configuration
overview</secondary></indexterm>A <glossterm>resource</glossterm> is a single
physical or logical entity that provides a service to clients or other resources.
A resource is generally available for use on two or more nodes in a cluster,
although only one node controls the resource at any given time. For example,
a resource can be a single disk volume, a particular network address, or an
application such as a web server.</para>
<sect2 id="fs-defineresource" role="fs-defineresource">
<title id="LE95071-TITLE">Defining Resources</title>
<para><indexterm id="ITconfig-18"><primary>resource</primary><secondary>definition
overview</secondary></indexterm>Resources are identified by a resource name
and a resource type. A <firstterm>resource name</firstterm> identifies a specific
instance of a resource type. A <firstterm>resource type</firstterm> is a particular
class of resource. All of the resources in a given resource type can be handled
in the same way for the purposes of failover. Every resource is an instance
of exactly one resource type.</para>
<para>A resource type is identified with a simple name. A resource type can
be defined for a specific logical node, or it can be defined for an entire
cluster. A resource type that is defined for a node will override a clusterwide
resource type definition of the same name; this allows an individual node
to override global settings from a clusterwide resource type definition.</para>
<para>The Linux FailSafe software includes many predefined resource types.
If these types fit the application you want to make into a highly available
service, you can reuse them. If none fit, you can define additional resource
types.</para>
<para>To define a resource, you provide the following information:</para>
<itemizedlist>
<listitem><para>The name of the resource to define, with a maximum length
of 255 characters.</para>
</listitem>
<listitem><para>The type of resource to define. The Linux FailSafe system
contains some pre-defined resource types (template and <filename>IP_Address
</filename>). You can define your own resource type as well.</para>
</listitem>
<listitem><para>The name of the cluster that contains the resource.</para>
</listitem>
<listitem><para>The logical name of the node that contains the resource (optional).
If you specify a node, a local version of the resource will be defined on
that node.</para>
</listitem>
<listitem><para>Resource type-specific attributes for the resource. Each resource
type may require specific parameters to define for the resource, as described
in the following subsections.</para>
</listitem>
</itemizedlist>
<para>You can define up to 100 resources in a Linux FailSafe configuration.
</para>
<sect3 id="ipattributes">
<title>IP Address Resource Attributes</title>
<para><indexterm id="ITconfig-23"><primary>resource</primary><secondary>IP
address</secondary></indexterm> <indexterm id="ITconfig-24"><primary>IP address
</primary><secondary>resource</secondary></indexterm>The IP Address resources
are the IP addresses used by clients to access the highly available services
within the resource group. These IP addresses are moved from one node to another
along with the other resources in the resource group when a failure is detected.
</para>
<para>You specify the resource name of an IP address in dotted decimal notation.
 IP names that require name resolution should not be used. For example, 192.26.50.1
is a valid resource name of the IP Address resource type.</para>
<para>The IP address you define as a Linux FailSafe resource must not be the
same as the IP address of a node hostname or the IP address of a node's control
network.</para>
<para>When you define an IP address, you can optionally specify the following
parameters. If you specify any of these parameters, you must specify all of
them.</para>
<itemizedlist>
<listitem><para>The broadcast address for the IP address.</para>
</listitem>
<listitem><para>The network mask of the IP address.</para>
</listitem>
<listitem><para>A comma-separated list of interfaces on which the IP address
can be configured. This ordered list is a superset of all the interfaces on
all nodes where this IP address might be allocated. Hence, in a mixed cluster
with different ethernet drivers, an IP address might be placed on eth0 on
one system and ln0 on a another. In this case the <filename>interfaces</filename>
field would be  <filename>eth0,ln0</filename> or <filename>ln0,eth0</filename>.
</para>
<para>The order of the list of interfaces determines the priority order for
determining which interface is used when the IP address is restarted locally
on a node.</para>
</listitem>
</itemizedlist>
</sect3>
</sect2>
<sect2 id="fs-adddeptoresource" role="fs-adddeptoresource">
<title>Adding Dependency to a Resource</title>
<para><indexterm id="ITconfig-33"><primary>resource</primary><secondary>dependencies
</secondary></indexterm>One resource can be dependent on one or more other
resources; if so, it will not be able to start (that is, be made available
for use) unless the resources it depends on are started as well. A resource and
the resources it depends on must be part of the same resource group.</para>
<para>Like resources, a resource type can be dependent on one or more other
resource types. If such a dependency exists, at least one instance of each
of the dependent resource types must be defined. For example, a resource type
named <command>Netscape_web</command> might have resource type dependencies
on resource types named <command>IP_address</command> and <command>volume
</command>. If a resource named <command>ws1</command> is defined with the <command>
Netscape_web</command> resource type, then the resource group containing <command>
ws1</command> must also contain at least one resource of the type <command>
IP_address</command> and one resource of the type <command>volume</command>.
</para>
<para>You cannot make resources mutually dependent. For example, if resource
A is dependent on resource B, then you cannot make resource B dependent on
resource A. In addition, you cannot define cyclic dependencies. For example,
if resource A is dependent on resource B, and resource B is dependent on resource
C, then resource C cannot be dependent on resource A.</para>
<para>When you add a dependency to a resource definition, you provide the
following information:</para>
<itemizedlist>
<listitem><para>The name of the existing resource to which you are adding
a dependency.</para>
</listitem>
<listitem><para>The resource type of the existing resource to which you are
adding a dependency.</para>
</listitem>
<listitem><para>The name of the cluster that contains the resource.</para>
</listitem>
<listitem><para>Optionally, the logical node name of the node in the cluster
that contains the resource. If specified, resource dependencies are added
to the node's definition of the resource. If this is not specified, resource
dependencies are added to the cluster-wide resource definition.</para>
</listitem>
<listitem><para>The resource name of the resource dependency.</para>
</listitem>
<listitem><para>The resource type of the resource dependency.</para>
</listitem>
</itemizedlist>
<sect3>
<title>Defining a Resource with the Cluster Manager GUI</title>
<para>To define a resource with the Cluster Manager GUI, perform the following
steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Define
a New Resource&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
<listitem><para>On the right side of the display, click on the &ldquo;Add/Remove
Dependencies for a Resource Definition&rdquo; to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
</orderedlist>
<para>When you use this task to define a resource, you define a cluster-wide
resource that is not specific to a node. For information on defining a node-specific
resource, see <xref linkend="fs-definemachspecresource">.</para>
</sect3>
<sect3 id="LE42004-PARENT">
<title id="LE42004-TITLE">Defining a Resource with the Cluster Manager CLI
</title>
<para>Use the following CLI command to define a clusterwide resource:</para>
<screen>cmgr> <userinput>define resource</userinput>&ensp;<replaceable>A</replaceable> [<userinput>
of resource_type</userinput>&ensp;<replaceable>B</replaceable>] [<userinput>
in cluster</userinput>&ensp;<replaceable>C</replaceable>]</screen>
<para>Entering this command specifies the name and resource type of the resource
you are defining within a specified cluster. If you have specified a default
cluster or a default resource type, you do not need to specify a resource
type or a cluster in this command and the CLI will use the default.</para>
<para>When you use this command to define a resource, you define a clusterwide
resource that is not specific to a node. For information on defining a node-specific
resource, see <xref linkend="fs-definemachspecresource">.</para>
<para>The following prompt appears:</para>
<screen>resource A?</screen>
<para>When this prompt appears during resource creation, you can enter the
following commands to specify the attributes of the resource you are defining
and to add and remove dependencies from the resource:</para>
<screen>resource A? <userinput>set</userinput>&ensp;<replaceable>key</replaceable>&ensp;<userinput>
to</userinput>&ensp;<replaceable>value</replaceable>
resource A? <userinput>add dependency</userinput>&ensp;<replaceable>E</replaceable>&ensp;<userinput>
of type</userinput>&ensp;<replaceable>F</replaceable>
resource A? <userinput>remove dependency</userinput>&ensp;<replaceable>E</replaceable>&ensp;<userinput>
of type</userinput>&ensp;<replaceable>F</replaceable></screen>
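<para>For example, if a resource named <command>ws1</command> of the hypothetical <command>
Netscape_web</command> resource type described earlier depends on an IP address
resource named 192.26.50.1, you could add that dependency as follows (the resource
names are for illustration only):</para>
<screen>resource ws1? <userinput>add dependency 192.26.50.1 of type IP_address</userinput></screen>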
<para>The attributes you define with the <command>set </command><replaceable>
key</replaceable><command>&ensp;to </command><replaceable>value</replaceable>
command will depend on the type of resource you are defining, as described
in <xref linkend="fs-defineresource">.</para>
<para>For detailed information on how to determine the format for defining
resource attributes, see <xref linkend="LE20812-PARENT">.</para>
<para>When you are finished defining the resource and its dependencies, enter <filename>
done</filename> to return to the <filename>cmgr</filename> prompt.</para>
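<para>The following hypothetical session defines an IP_address resource named
192.26.50.1 in a cluster named test-cluster and sets its type-specific attributes
(the cluster name, interface names, and address values are for illustration only;
the attribute keys are described in <xref linkend="LE20812-PARENT">):</para>
<screen>cmgr> <userinput>define resource 192.26.50.1 of resource_type IP_address in cluster test-cluster</userinput>

resource 192.26.50.1? <userinput>set NetworkMask to 255.255.255.0</userinput>
resource 192.26.50.1? <userinput>set BroadcastAddress to 192.26.50.255</userinput>
resource 192.26.50.1? <userinput>set interfaces to eth0,eth1</userinput>
resource 192.26.50.1? <userinput>done</userinput>
cmgr></screen>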
</sect3>
<sect3 id="LE20812-PARENT">
<title id="LE20812-TITLE">Specifying Resource Attributes with Cluster Manager
CLI</title>
<para>To see the format in which you can specify the user-specific attributes
that you need to set for a particular resource type, you can enter the following
command to see the full definition of that resource type:</para>
<screen>cmgr> <userinput>show resource_type </userinput><replaceable>A</replaceable>&ensp;<userinput>
in cluster</userinput>&ensp;<replaceable>B</replaceable></screen>
<para>For example, to see the <replaceable>key</replaceable> attributes you
define for a resource of a defined resource type <filename>IP_address</filename>,
you would enter the following command:</para>
<screen>cmgr>  <userinput>show resource_type IP_address in cluster nfs-cluster
</userinput>

Name: IP_address
Predefined: true
Order: 401
Restart mode: 1
Restart count: 2

Action name: stop
        Executable: /usr/lib/failsafe/resource_types/IP_address/stop
        Maximum execution time: 80000ms
        Monitoring interval: 0ms
        Start monitoring time: 0ms
Action name: exclusive
        Executable: /usr/lib/failsafe/resource_types/IP_address/exclusive
        Maximum execution time: 100000ms
        Monitoring interval: 0ms
        Start monitoring time: 0ms
Action name: start
        Executable: /usr/lib/failsafe/resource_types/IP_address/start
        Maximum execution time: 80000ms
        Monitoring interval: 0ms
        Start monitoring time: 0ms
Action name: restart
        Executable: /usr/lib/failsafe/resource_types/IP_address/restart
        Maximum execution time: 80000ms
        Monitoring interval: 0ms
        Start monitoring time: 0ms
Action name: monitor
        Executable: /usr/lib/failsafe/resource_types/IP_address/monitor
        Maximum execution time: 40000ms
        Monitoring interval: 20000ms
        Start monitoring time: 50000ms

Type specific attribute: NetworkMask
        Data type: string
Type specific attribute: interfaces
        Data type: string
Type specific attribute: BroadcastAddress
        Data type: string

No resource type dependencies</screen>
<para>The display reflects the format in which you can specify the type-specific
attributes of the resource. In this case, the <filename>NetworkMask</filename>
key specifies the subnet mask of the IP address, the <filename>interfaces</filename>
key specifies the list of interfaces on which the IP address can be configured,
and the <filename>BroadcastAddress</filename> key specifies the broadcast address
for the IP address.</para>
<para>For example, to set the network mask to <filename>255.255.255.0</filename>,
enter the following command:</para>
<screen>resource A? <userinput>set NetworkMask to 255.255.255.0</userinput></screen>
<para>The remainder of this section summarizes the attributes you specify
for the predefined Linux FailSafe resource types with the <replaceable>set
key to value</replaceable> command of the Cluster Manager CLI.</para>
<para><indexterm id="ITconfig-38"><primary>IP address</primary><secondary>
resource</secondary></indexterm> <indexterm id="ITconfig-39"><primary>resource
</primary><secondary>IP address</secondary></indexterm>When you define an
IP address, you specify the following attributes:</para>
<variablelist>
<varlistentry><term><literal>NetworkMask</literal></term>
<listitem>
<para>The subnet mask of the IP address</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>interfaces</literal></term>
<listitem>
<para>A comma-separated list of interfaces on which the IP address can be
configured</para>
</listitem>
</varlistentry>
<varlistentry><term><literal>BroadcastAddress</literal></term>
<listitem>
<para>The broadcast address for the IP address</para>
</listitem>
</varlistentry>
</variablelist>
</sect3>
</sect2>
<sect2 id="fs-definemachspecresource" role="fs-definemachspecresource">
<title id="LE41493-TITLE">Defining a Node-Specific Resource</title>
<para><indexterm id="ITconfig-44"><primary>resource</primary><secondary>node-specific
</secondary></indexterm> <indexterm id="ITconfig-45"><primary>node-specific
resource</primary></indexterm>You can redefine an existing resource with a
resource definition that applies only to a particular node. Only existing
clusterwide resources can be redefined; resources already defined for a specific
cluster node cannot be redefined.</para>
<para>You use this feature when you configure heterogeneous clusters for an <literal>
IP_address</literal> resource. For example, <literal>IP_address</literal>
192.26.50.2 can be configured on et0 on an SGI Challenge node and on eth0
on all other Linux servers. The clusterwide resource definition for 192.26.50.2
will have the <literal>interfaces</literal> field set to eth0 and the node-specific
definition for the Challenge node will have et0 as the <literal>interfaces
</literal> field.</para>
<sect3>
<title>Defining a Node-Specific Resource with the Cluster Manager GUI</title>
<para>Using the Cluster Manager GUI, you can take an existing clusterwide
resource definition and redefine it for use on a specific node in the cluster:
</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Redefine
a Resource For a Specific Node&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
</orderedlist>
</sect3>
<sect3 id="LE39339-PARENT">
<title id="LE39339-TITLE">Defining a Node-Specific Resource with the Cluster
Manager CLI</title>
<para>You can use the Cluster Manager CLI to redefine a clusterwide resource
to be specific to a node just as you define a clusterwide resource, except
that you specify a node on the <command>define resource</command> command.
</para>
<para>Use the following CLI command to define a node-specific resource:</para>
<screen>cmgr> <userinput>define resource</userinput>&ensp;<replaceable>A</replaceable>&ensp;<userinput>
of resource_type</userinput>&ensp;<replaceable>B</replaceable>&ensp;<userinput>
on node</userinput>&ensp;<replaceable>C</replaceable> [<userinput>in cluster
</userinput>&ensp;<replaceable>D</replaceable>]</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
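<para>For example, continuing the <literal>IP_address</literal> scenario described
above, a node-specific definition might look like the following sketch (the node,
cluster, and interface names are illustrative):</para>
<screen>cmgr> <userinput>define resource 192.26.50.2 of resource_type IP_address on node node-a in cluster test</userinput>
resource 192.26.50.2? <userinput>set interfaces to eth1</userinput>
resource 192.26.50.2? <userinput>done</userinput>
cmgr></screen>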
</sect3>
</sect2>
<sect2 id="fs-modifydelresource" role="fs-modifydelresource">
<title>Modifying and Deleting Resources</title>
<para><indexterm id="ITconfig-46"><primary>resource</primary><secondary>modifying
</secondary></indexterm> <indexterm id="ITconfig-47"><primary>resource</primary>
<secondary>deleting</secondary></indexterm>After you have defined resources,
you can modify and delete them.</para>
<para>You can modify only the type-specific attributes for a resource. You
cannot rename a resource once it has been defined.</para>
<note>
<para>There are some resource attributes whose modification does not take
effect until the resource group containing that resource is brought online
again. For example, if you modify the export options of a resource of type
NFS, the modifications do not take effect immediately; they take effect when
the resource is brought online.</para>
</note>
<sect3>
<title>Modifying and Deleting Resources with the Cluster Manager GUI</title>
<para>To modify a resource with the Cluster Manager GUI, perform the following
procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Modify
a Resource Definition&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
<para>To delete a resource with the Cluster Manager GUI, perform the following
procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Delete
a Resource&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Modifying and Deleting Resources with the Cluster Manager CLI</title>
<para>Use the following CLI command to modify a resource:</para>
<screen>cmgr> <userinput>modify resource</userinput>&ensp;<replaceable>A</replaceable>&ensp;<userinput>
of resource_type</userinput>&ensp;<replaceable>B</replaceable> [<userinput>
in cluster</userinput>&ensp;<replaceable>C</replaceable>]</screen>
<para>Entering this command specifies the name and resource type of the resource
you are modifying within a specified cluster. If you have specified a default
cluster, you do not need to specify a cluster in this command and the CLI
will use the default.</para>
<para>You modify a resource using the same commands you use to define a resource.
</para>
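<para>For example, the following sketch changes the export options of a hypothetical
NFS resource named <filename>/this_disk</filename>; the option values are illustrative:</para>
<screen>cmgr> <userinput>modify resource /this_disk of resource_type NFS in cluster test</userinput>
resource /this_disk? <userinput>set export-info to rw,sync</userinput>
resource /this_disk? <userinput>done</userinput>
cmgr></screen>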
<para>You can use the following command to delete a resource definition:</para>
<screen>cmgr> <userinput>delete resource</userinput>&ensp;<replaceable>A</replaceable>&ensp;<userinput>
of resource_type</userinput>&ensp;<replaceable>B</replaceable> [<userinput>
in cluster</userinput>&ensp;<replaceable>D</replaceable>]</screen>
</sect3>
</sect2>
<sect2>
<title>Displaying Resources</title>
<para><indexterm id="ITconfig-48"><primary>resource</primary><secondary>displaying
</secondary></indexterm>You can display resources in various ways. You can
display the attributes of a particular defined resource, you can display all
of the defined resources in a specified resource group, or you can display
all the defined resources of a specified resource type.</para>
<sect3>
<title>Displaying Resources with the Cluster Manager GUI</title>
<para>The Cluster Manager GUI provides a convenient display of resources through
the FailSafe Cluster View. You can launch the FailSafe Cluster View directly,
or you can bring it up at any time by clicking on the &ldquo;FailSafe Cluster
View&rdquo; button at the bottom of the &ldquo;FailSafe Manager&rdquo; display.
</para>
<para>From the View menu of the FailSafe Cluster View, select Resources to
see all defined resources. The status of these resources will be shown in
the icon (green indicates online, grey indicates offline). Alternately, you
can select &ldquo;Resources of Type&rdquo; from the View menu to see resources
organized by resource type, or you can select &ldquo;Resources by Group&rdquo;
to see resources organized by resource group.</para>
</sect3>
<sect3>
<title>Displaying Resources with the Cluster Manager CLI</title>
<para>Use the following command to view the parameters of a defined resource:
</para>
<screen>cmgr> <userinput>show resource </userinput><replaceable>A</replaceable><userinput>
&ensp;of resource_type </userinput><replaceable>B</replaceable></screen>
<para>Use the following command to view all of the defined resources in a
resource group:</para>
<screen>cmgr> <userinput>show resources in resource_group</userinput>&ensp;<replaceable>
A</replaceable> [<userinput>in cluster</userinput>&ensp;<replaceable>B</replaceable>]
</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
<para>Use the following command to view all of the defined resources of a
particular resource type in a specified cluster:</para>
<screen>cmgr> <userinput>show resources of resource_type</userinput>&ensp;<replaceable>
A</replaceable> [<userinput>in cluster</userinput>&ensp;<replaceable>B</replaceable>]
</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
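<para>For example, assuming a cluster named <literal>test</literal> and a hypothetical
resource group named <literal>nfs-group</literal>, you might enter commands such as
the following (output not shown):</para>
<screen>cmgr> <userinput>show resource 192.26.50.1 of resource_type IP_address</userinput>
cmgr> <userinput>show resources in resource_group nfs-group in cluster test</userinput>
cmgr> <userinput>show resources of resource_type NFS in cluster test</userinput>
</screen>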
</sect3>
</sect2>
<sect2 id="fs-defineresourcetype" role="fs-defineresourcetype">
<title>Defining a Resource Type</title>
<para><indexterm id="ITconfig-49"><primary>resource type</primary><secondary>
definition</secondary></indexterm>The Linux FailSafe software includes many
predefined resource types. If these types fit the application you want to
make into a highly available service, you can reuse them. If none fits, you
can define additional resource types.</para>
<para>Complete information on defining resource types is provided in the <citetitle>
Linux FailSafe Programmer's Guide</citetitle>. This manual provides a summary
of that information.</para>
<para>To define a new resource type, you must have the following information:
</para>
<itemizedlist>
<listitem><para>Name of the resource type, with a maximum length of 255 characters.
</para>
</listitem>
<listitem><para>Name of the cluster to which the resource type will apply.
</para>
</listitem>
<listitem><para>Node on which the resource type will apply, if the resource
type is to be restricted to a specific node.</para>
</listitem>
<listitem><para>Order of performing the action scripts for resources of this
type in relation to resources of other types:</para>
<itemizedlist>
<listitem><para>Resources are started in the increasing order of this value
</para>
</listitem>
<listitem><para>Resources are stopped in the decreasing order of this value
</para>
<para>See the <citetitle>Linux FailSafe Programmer's Guide</citetitle> for
a full description of the order ranges available.</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Restart mode, which can be one of the following values:</para>
<itemizedlist>
<listitem><para>0 = Do not restart on monitoring failures</para>
</listitem>
<listitem><para>1 = Restart a fixed number of times</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Number of local restarts (when restart mode is 1).</para>
</listitem>
<listitem><para>Location of the executable script. This is always  <?Pub _nolinebreak><filename>
/usr/lib/failsafe/resource_types/</filename><replaceable>rtname</replaceable><?Pub /_nolinebreak>,
where <replaceable>rtname</replaceable> is the resource type name.</para>
</listitem>
<listitem><para>Monitoring interval, which is the time period (in milliseconds)
between successive executions of the <command>monitor</command> action script;
this is only valid for the <command>monitor</command> action script.</para>
</listitem>
<listitem><para>Starting time for monitoring. When the resource group is brought
online on a cluster node, Linux FailSafe will start monitoring the resources
after the specified time period (in milliseconds).</para>
</listitem>
<listitem><para>Action scripts to be defined for this resource type. You must
specify scripts for <command>start</command>, <command>stop</command>, <command>
exclusive</command>, and <command>monitor</command>, although the <command>
monitor</command> script may contain only a return-success function if you
wish. If you specify 1 for the restart mode, you must specify a <command>
restart</command> script. </para>
</listitem>
<listitem><para>Type-specific attributes to be defined for this resource type.
The action scripts use this information to start, stop, and monitor a resource
of this resource type. For example, NFS requires the following resource keys:
</para>
<itemizedlist>
<listitem><para><filename>export-point</filename>, which takes a value that
defines the export disk name. This name is used as input to the <command>
exportfs</command> command. For example:</para>
<screen>export-point = /this_disk</screen>
</listitem>
<listitem><para><filename>export-info</filename>, which takes a value that
defines the export options for the filesystem. These options are used in the <command>
exportfs</command> command. For example:</para>
<screen>export-info = rw,sync,no_root_squash</screen>
</listitem>
<listitem><para><filename>filesystem</filename>, which takes a value that
defines the raw filesystem. This name is used as input to the <command>mount
</command> command. For example:</para>
<screen>filesystem = /dev/sda1</screen>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<para>To define a new resource type, you use the Cluster Manager GUI or the
Cluster Manager CLI.</para>
<sect3>
<title>Defining a Resource Type with the Cluster Manager GUI</title>
<para>To define a resource type with the Cluster Manager GUI, perform the
following steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Define
a Resource Type&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Defining a Resource Type with the Cluster Manager CLI</title>
<para>The following steps show the use of <command>cluster_mgr</command> interactively
to define a resource type called <literal>test_rt</literal>.</para>
<orderedlist>
<listitem><para>Log in as <command>root</command>.</para>
</listitem>
<listitem><para>Execute the <command>cluster_mgr </command>command using the <command>
-p</command> option to prompt you for information (the command name can be
abbreviated to <command>cmgr</command>):</para>
<screen># <userinput>/usr/lib/failsafe/bin/cluster_mgr -p</userinput>
Welcome to Linux FailSafe Cluster Manager Command-Line Interface

cmgr></screen>
</listitem>
<listitem><para>Use the <command>set</command> subcommand to specify the default
cluster used for <command>cluster_mgr</command> operations. In this example,
we use a cluster named <literal>test</literal>:</para>
<screen>cmgr> <userinput>set cluster test</userinput></screen>
<note>
<para>If you prefer, you can specify the cluster name as needed with each
subcommand.</para>
</note>
</listitem>
<listitem><para>Use the <command>define resource_type</command> subcommand.
By default, the resource type will apply across the cluster; if you wish to
limit the resource_type to a specific node, enter the node name when prompted.
If you wish to enable restart mode, enter 1 when prompted.</para>
<note>
<para>&ensp;The following example only shows the prompts and answers for two
action scripts (<command>start</command> and <command>stop</command>) for
a new resource type named <command>test_rt</command>.</para>
</note>
<screen>cmgr> <userinput>define resource_type test_rt</userinput>

(Enter "cancel" at any time to abort)

Node[optional]?
Order ? <userinput>300</userinput>
Restart Mode ? (0)

DEFINE RESOURCE TYPE OPTIONS

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>1</userinput>

No current resource type actions

Action name ? <userinput>start</userinput>
Executable Time? <userinput>40000</userinput>
Monitoring Interval? <userinput>0</userinput>
Start Monitoring Time? <userinput>0</userinput>

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>1</userinput>

Current resource type actions:
&ensp;       Action - 1: start

Action name ? <userinput>stop</userinput>
Executable Time? <userinput>40000</userinput>
Monitoring Interval? <userinput>0</userinput>
Start Monitoring Time? <userinput>0</userinput>&ensp;

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>3</userinput>

No current type specific attributes

Type Specific Attribute ? <userinput>integer-att</userinput>
Datatype ? <userinput>integer</userinput>
Default value[optional] ? <userinput>33</userinput>

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>3</userinput>

Current type specific attributes:
&ensp;       Type Specific Attribute - 1: integer-att

Type Specific Attribute ? <userinput>string-att</userinput>
Datatype ? <userinput>string</userinput>
Default value[optional] ? <userinput>rw</userinput>

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>5</userinput>

No current resource type dependencies

Dependency name ? <userinput>filesystem</userinput>

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>7</userinput>

Current resource type actions:
&ensp;       Action - 1: start
&ensp;       Action - 2: stop

Current type specific attributes:
&ensp;       Type Specific Attribute - 1: integer-att
&ensp;       Type Specific Attribute - 2: string-att

No current resource type dependencies

Resource dependencies to be added:
&ensp;       Resource dependency - 1: filesystem

&ensp;       0) Modify Action Script.
&ensp;       1) Add Action Script.
&ensp;       2) Remove Action Script.
&ensp;       3) Add Type Specific Attribute.
&ensp;       4) Remove Type Specific Attribute.
&ensp;       5) Add Dependency.
&ensp;       6) Remove Dependency.
&ensp;       7) Show Current Information.
&ensp;       8) Cancel. (Aborts command)
&ensp;       9) Done. (Exits and runs command)

Enter option:<userinput>9</userinput>
Successfully created resource_type test_rt

cmgr> <userinput>show resource_types</userinput>

NFS
template
Netscape_web
test_rt
statd
Oracle_DB
MAC_address
IP_address
INFORMIX_DB
filesystem
volume

cmgr> <userinput>exit</userinput>
#</screen>
</listitem>
</orderedlist>
</sect3>
</sect2>
<sect2 id="fs-definemachspecrestype" role="fs-definemachspecrestype">
<title>Defining a Node-Specific Resource Type</title>
<para><indexterm id="ITconfig-50"><primary>resource type</primary><secondary>
node-specific</secondary></indexterm> <indexterm id="ITconfig-51"><primary>
node-specific resource type</primary></indexterm>You can redefine an existing
resource type with a resource definition that applies only to a particular
node. Only existing clusterwide resource types can be redefined; resource
types already defined for a specific cluster node cannot be redefined.</para>
<para>A resource type that is defined for a node overrides a cluster-wide
resource type definition with the same name; this allows an individual node
to override global settings from a clusterwide resource type definition. You
can use this feature if you want to have different script timeouts for a node
or you want to restart a resource on only one node in the cluster.</para>
<para>For example, the <literal>IP_address</literal> resource has local restart
enabled by default. If you would like to have an IP address type without local
restart for a particular node, you can make a copy of the <literal>IP_address
</literal> clusterwide resource type with all of the parameters the same except
for restart mode, which you set to 0.</para>
<sect3>
<title>Defining a Node-Specific Resource Type with the Cluster Manager GUI
</title>
<para>Using the Cluster Manager GUI, you can take an existing clusterwide
resource type definition and redefine it for use on a specific node in the
cluster. Perform the following tasks:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Redefine
a Resource Type For a Specific Node&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Defining a Node-Specific Resource Type with the Cluster Manager CLI
</title>
<para>With the Cluster Manager CLI, you redefine a node-specific resource
type just as you define a cluster-wide resource type, except that you specify
a node on the <command>define resource_type</command> command.</para>
<para>Use the following CLI command to define a node-specific resource type:
</para>
<screen>cmgr> <userinput>define resource_type</userinput>&ensp;<replaceable>
A</replaceable>&ensp;<userinput>on node</userinput>&ensp;<replaceable>B</replaceable> [<userinput>
in cluster</userinput>&ensp;<replaceable>C</replaceable>]</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
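<para>For example, to create the node-specific <literal>IP_address</literal> variant
without local restart that was described above, you might begin with a command such
as the following and then enter 0 for the restart mode when prompted (the node and
cluster names are illustrative):</para>
<screen>cmgr> <userinput>define resource_type IP_address on node node-a in cluster test</userinput></screen>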
</sect3>
</sect2>
<sect2 id="fs-adddeptorestype" role="fs-adddeptorestype">
<title>Adding Dependencies to a Resource Type</title>
<para><indexterm id="ITconfig-52"><primary>resource type</primary><secondary>
dependencies</secondary></indexterm>Like resources, a resource type can be
dependent on one or more other resource types. If such a dependency exists,
at least one instance of each of the dependent resource types must be defined.
For example, a resource type named <command>Netscape_web</command> might have
resource type dependencies on a resource type named <command>IP_address</command>
and <command>volume</command>. If a resource named <command>ws1</command>
is defined with the <command>Netscape_web</command> resource type, then the
resource group containing <command>ws1</command> must also contain at least
one resource of the type <command>IP_address</command> and one resource of
the type <command>volume</command>.</para>
<para>When using the Cluster Manager GUI, you add or remove dependencies for
a resource type by selecting the &ldquo;Add/Remove Dependencies for a Resource
Type&rdquo; from the &ldquo;Resources &amp; Resource Types&rdquo; display
and providing the indicated input. When using the Cluster Manager CLI, you
add or remove dependencies when you define or modify the resource type.</para>
</sect2>
<sect2 id="fs-modifyrestype" role="fs-modifyrestype">
<title>Modifying and Deleting Resource Types</title>
<para><indexterm id="ITconfig-53"><primary>resource type</primary><secondary>
modifying</secondary></indexterm> <indexterm id="ITconfig-54"><primary>resource
type</primary><secondary>deleting</secondary></indexterm>After you have defined
resource types, you can modify and delete them.</para>
<sect3>
<title>Modifying and Deleting Resource Types with the Cluster Manager GUI
</title>
<para>To modify a resource type with the Cluster Manager GUI, perform the
following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Modify
a Resource Type Definition&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
<para>To delete a resource type with the Cluster Manager GUI, perform the
following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Resources
&amp; Resource Types&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Delete
a Resource Type&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Modifying and Deleting Resource Types with the Cluster Manager CLI
</title>
<para>Use the following CLI command to modify a resource type:</para>
<screen>cmgr> <userinput>modify resource_type</userinput>&ensp;<replaceable>
A</replaceable> [<userinput>in cluster</userinput>&ensp;<replaceable>B</replaceable>]
</screen>
<para>Entering this command specifies the resource type you are modifying
within a specified cluster. If you have specified a default cluster, you do
not need to specify a cluster in this command and the CLI will use the default.
</para>
<para>You modify a resource type using the same commands you use to define
a resource type.</para>
<para>You can use the following command to delete a resource type:</para>
<screen>cmgr> <userinput>delete resource_type</userinput>&ensp;<replaceable>
A</replaceable> [<userinput>in cluster</userinput>&ensp;<replaceable>B</replaceable>]
</screen>
</sect3>
</sect2>
<sect2 id="fs-loadresourcetype" role="fs-loadresourcetype">
<title>Installing (Loading) a Resource Type on a Cluster</title>
<para><indexterm id="ITconfig-55"><primary>installing resource type</primary>
</indexterm> <indexterm id="ITconfig-56"><primary>resource type</primary>
<secondary>installing</secondary></indexterm>When you define a cluster, Linux
FailSafe installs a set of resource type definitions that you can use that
include default values. If you need to install additional standard Silicon
Graphics-supplied resource type definitions on the cluster, or if you delete
a standard resource type definition and wish to reinstall it, you can load
that resource type definition on the cluster.</para>
<para>The resource type definition you are installing must not already exist
on the cluster.</para>
<sect3>
<title>Installing a Resource Type with the Cluster Manager GUI</title>
<para>To install a resource type using the GUI, select the &ldquo;Load a Resource Type&rdquo;
task from the &ldquo;Resources &amp; Resource Types&rdquo; task page and enter
the resource type to load.</para>
</sect3>
<sect3>
<title>Installing a Resource Type with the Cluster Manager CLI</title>
<para>Use the following CLI command to install a resource type on a cluster:
</para>
<screen>cmgr> <userinput>install resource_type</userinput>&ensp;<replaceable>
A</replaceable> [<userinput>in cluster</userinput>&ensp;<replaceable>B</replaceable>]
</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
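<para>For example, to reinstall the standard <literal>NFS</literal> resource type
definition on a cluster named <literal>test</literal>:</para>
<screen>cmgr> <userinput>install resource_type NFS in cluster test</userinput></screen>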
</sect3>
</sect2>
<sect2>
<title>Displaying Resource Types</title>
<para><indexterm id="ITconfig-57"><primary>resource type</primary><secondary>
displaying</secondary></indexterm>After you have defined resource types,
you can display them.</para>
<sect3>
<title>Displaying Resource Types with the Cluster Manager GUI</title>
<para>The Cluster Manager GUI provides a convenient display of resource types
through the FailSafe Cluster View. You can launch the FailSafe Cluster View
directly, or you can bring it up at any time by clicking on the &ldquo;FailSafe
Cluster View&rdquo; button at the bottom of the &ldquo;FailSafe Manager&rdquo;
display.</para>
<para>From the View menu of the FailSafe Cluster View, select Types to see
all defined resource types. You can then click on any of the resource type
icons to view the parameters of the resource type.</para>
</sect3>
<sect3>
<title>Displaying Resource Types with the Cluster Manager CLI</title>
<para>Use the following command to view the parameters of a defined resource
type in a specified cluster:</para>
<screen>cmgr> <userinput>show resource_type </userinput><replaceable>A </replaceable>[<userinput>
in cluster</userinput>&ensp;<replaceable>B</replaceable>]</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
<para>Use the following command to view all of the defined resource types
in a cluster:</para>
<screen>cmgr> <userinput>show resource_types</userinput> [<userinput>in cluster
</userinput>&ensp;<replaceable>A</replaceable>]</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
<para>Use the following command to view all of the defined resource types
that have been installed:</para>
<screen>cmgr> <userinput>show resource_types</userinput>&ensp;<userinput>
installed</userinput></screen>
</sect3>
</sect2>
<sect2 id="fs-definefailover" role="fs-definefailover">
<title id="LE75286-TITLE"> Defining a Failover Policy</title>
<para>Before you can configure your resources into a resource group, you must
determine which failover policy to apply to the resource group. To define
a failover policy, you provide the following information:<indexterm id="ITconfig-58">
<primary>failover policy</primary><secondary>definition</secondary></indexterm></para>
<itemizedlist>
<listitem><para>The name of the failover policy, with a maximum length of
63 characters, which must be unique within the pool.</para>
</listitem>
<listitem><para>The name of an existing failover script.</para>
</listitem>
<listitem><para>The initial failover domain, which is an ordered list of the
nodes on which the resource group may execute. The administrator supplies
the initial failover domain when configuring the failover policy; this is
input to the failover script, which generates the runtime failover domain.
</para>
</listitem>
<listitem><para>The failover attributes, which modify the behavior of the
failover script.</para>
</listitem>
</itemizedlist>
<para>Complete information on failover policies and failover scripts, with
an emphasis on writing your own failover policies and scripts, is provided
in the <citetitle>Linux FailSafe Programmer's Guide</citetitle>.</para>
<sect3>
<title>Failover Scripts</title>
<para><indexterm id="ITconfig-59"><primary>failover policy</primary><secondary>
failover script</secondary></indexterm> <indexterm id="ITconfig-60"><primary>
failover script</primary></indexterm>A <glossterm>failover script</glossterm>
helps determine the node that is chosen for a failed resource group. The failover
script takes the initial failover domain and transforms it into the runtime
failover domain. Depending upon the contents of the script, the initial and
the runtime domains may be identical.</para>
<para>The <filename>ordered</filename> failover script is provided with the
Linux FailSafe release. The <filename>ordered</filename> script never changes
the initial domain; when using this script, the initial and runtime domains
are equivalent.</para>
<para>The <filename>round-robin</filename> failover script is also provided
with the Linux FailSafe release. The <filename>round-robin</filename> script
selects the resource group owner in a round-robin (circular) fashion. This
policy can be used for resource groups that can be run on any node in the
cluster.</para>
<para>Failover scripts are stored in the <filename>/usr/lib/failsafe/policies
</filename> directory. If neither the <filename>ordered</filename> nor the <filename>
round-robin</filename> script meets your needs, you can define a new failover
script and place it in the <filename>/usr/lib/failsafe/policies</filename> directory.
The FailSafe GUI automatically detects your script and presents it to you as a
choice for you to use. You can configure the Linux FailSafe database to use
your new failover script for the required resource groups. For information
on defining failover scripts, see the <citetitle>Linux FailSafe Programmer's
Guide</citetitle>.</para>
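<para>Because each failover script is an ordinary file in this directory, listing
the directory shows which scripts are available on a node; the output below is
illustrative and will vary with your installation:</para>
<screen># <userinput>ls /usr/lib/failsafe/policies</userinput>
ordered    round-robin</screen>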
</sect3>
<sect3>
<title>Failover Domain</title>
<para>A <glossterm>failover domain</glossterm> is the ordered list of nodes
on which a given resource group can be allocated. The nodes listed in the
failover domain must be within the same cluster; however, the failover domain
does not have to include every node in the cluster. The failover domain can
be used to statically load balance the resource groups in a cluster.</para>
<para>Examples:</para>
<itemizedlist>
<listitem><para>In a four-node cluster, two nodes might share a volume. The
failover domain of the resource group containing the volume will be the two
nodes that share the volume.</para>
</listitem>
<listitem><para>If you have a cluster of nodes named venus, mercury, and pluto,
you could configure the following initial failover domains for resource groups
RG1 and RG2:<indexterm id="ITconfig-61"><primary>failover policy</primary>
<secondary>failover domain</secondary></indexterm> <indexterm id="ITconfig-62">
<primary>domain</primary></indexterm> <indexterm id="ITconfig-63"><primary>
failover domain</primary></indexterm></para>
<itemizedlist>
<listitem><para>venus, mercury, pluto for RG1</para>
</listitem>
<listitem><para>pluto, mercury for RG2</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
<para>When you define a failover policy, you specify the <glossterm>initial
failover domain</glossterm>. The initial failover domain is used when a cluster
is first booted. The ordered list specified by the initial failover domain
is transformed into a <glossterm>runtime failover domain</glossterm><indexterm
id="ITconfig-64"><primary>run-time failover domain</primary></indexterm> by
the failover script. With each failure, the failover script takes the current
run-time failover domain and potentially modifies it; the initial failover
domain is never used again. Depending on the run-time conditions and contents
of the failover script, the initial and run-time failover domains may be identical.
</para>
<para>Linux FailSafe stores the run-time failover domain and uses it as input
to the next failover script invocation.</para>
</sect3>
<sect3>
<title>Failover Attributes</title>
<para><indexterm id="ITconfig-65"><primary>failover policy</primary><secondary>
failover attributes</secondary></indexterm> <indexterm id="ITconfig-66"><primary>
failover attributes</primary></indexterm>A failover attribute is a value that
is passed to the failover script and used by Linux FailSafe for the purpose
of modifying the run-time failover domain used for a specific resource group.
You can specify a failover attribute of <filename>Auto_Failback</filename>, <filename>
Controlled_Failback</filename>, <filename>Auto_Recovery</filename>, or <filename>
InPlace_Recovery</filename>. <filename>Auto_Failback</filename> and <?Pub _nolinebreak><filename>Controlled_Failback
</filename><?Pub /_nolinebreak> are mutually exclusive, but you must specify
one or the other. <filename>Auto_Recovery</filename> and <filename>InPlace_Recovery
</filename> are mutually exclusive, but whether you specify one or the other
is optional.</para>
<para>A failover attribute of <filename>Auto_Failback</filename> specifies
that the resource group will be run on the first available node in the runtime
failover domain. If the first node fails, the next available node will be
used; when the first node reboots, the resource group will return to it. This
attribute is best used when some type of load balancing is required.</para>
<para>A failover attribute of <filename>Controlled_Failback</filename> specifies
that the resource group will be run on the first available node in the runtime
failover domain, and will remain running on that node until it fails. If the
first node fails, the next available node will be used; the resource group
will remain on this new node even after the first node reboots. This attribute
is best used when client/server applications have expensive recovery mechanisms,
such as databases or any application that uses TCP to communicate.
</para>
<para>The recovery attributes <filename>Auto_Recovery</filename> and <filename>
InPlace_Recovery</filename> determine the node on which a resource group will
be allocated when its state changes to online and a member of the group is
already allocated (such as when volumes are present). <filename>Auto_Recovery
</filename> specifies that the failover policy will be used to allocate the
resource group; this is the default recovery attribute if you have specified
the <filename>Auto_Failback</filename> attribute. <filename>InPlace_Recovery
</filename> specifies that the resource group will be allocated on the node
that already contains part of the resource group; this is the default recovery
attribute if you have specified the <filename>Controlled_Failback</filename>
attribute.</para>
<para>See the <citetitle>Linux FailSafe Programmer's Guide</citetitle> for
a full discussion of example failover policies.</para>
</sect3>
<sect3>
<title>Defining a Failover Policy with the Cluster Manager GUI</title>
<para>To define a failover policy using the GUI, perform the following steps:
</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Failover
Policies &amp; Resource Groups&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Define
a Failover Policy&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Defining a Failover Policy with the Cluster Manager CLI</title>
<para>To define a failover policy, enter the following command at the <filename>
cmgr</filename> prompt to specify the name of the failover policy:</para>
<screen>cmgr> <userinput>define failover_policy </userinput><replaceable>
A</replaceable></screen>
<para>The following prompt appears:</para>
<screen>failover_policy <replaceable>A</replaceable>?</screen>
<para>When this prompt appears you can use the following commands to specify
the components of a failover policy:</para>
<screen>failover_policy <replaceable>A</replaceable>? <userinput>set attribute to 
</userinput><replaceable>B</replaceable>
failover_policy <replaceable>A</replaceable>? <userinput>set script to </userinput><replaceable>
C</replaceable>
failover_policy <replaceable>A</replaceable>? <userinput>set domain to </userinput><replaceable>
D</replaceable>
failover_policy <replaceable>A</replaceable>? </screen>
<para>When you define a failover policy, you can set as many attributes and
domains as your setup requires by executing the <command>set attribute</command>
and <command>set domain</command> commands with different values. The CLI
also allows you to specify multiple domains in one command of the following
format:</para>
<screen>failover_policy <replaceable>A</replaceable>? <userinput>set domain to 
</userinput><replaceable>A B C</replaceable> ..<replaceable>.</replaceable></screen>
<para>The components of a failover policy are described in detail in the <citetitle>
Linux FailSafe Programmer's Guide</citetitle> and in summary in <xref linkend="fs-definefailover">.
</para>
<para>When you are finished defining the failover policy, enter <filename>
done</filename> to return to the <filename>cmgr</filename> prompt.</para>
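<para>As a sketch, the following session defines a hypothetical failover policy
named <literal>fp1</literal> that uses the <filename>ordered</filename> script, the <filename>
Auto_Failback</filename> attribute, and an initial failover domain of node-a followed
by node-b (all names are illustrative):</para>
<screen>cmgr> <userinput>define failover_policy fp1</userinput>
failover_policy fp1? <userinput>set attribute to Auto_Failback</userinput>
failover_policy fp1? <userinput>set script to ordered</userinput>
failover_policy fp1? <userinput>set domain to node-a node-b</userinput>
failover_policy fp1? <userinput>done</userinput>
cmgr></screen>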
</sect3>
</sect2>
<sect2 id="fs-modifydelfailoverpolicy" role="fs-modifydelfailoverpolicy">
<title>Modifying and Deleting Failover Policies</title>
<para>After you have defined a failover policy, you can modify or delete it.
</para>
<sect3>
<title>Modifying and Deleting Failover Policies with the Cluster Manager GUI
</title>
<para>To modify a failover policy with the Cluster Manager GUI, perform the
following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Failover
Policies &amp; Resource Groups&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Modify
a Failover Policy Definition&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
<para>To delete a failover policy with the Cluster Manager GUI, perform the
following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Failover
Policies &amp; Resource Groups&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Delete
a Failover Policy&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Modifying and Deleting Failover Policies with the Cluster Manager CLI
</title>
<para>Use the following CLI command to modify a failover policy:</para>
<screen>cmgr> <userinput>modify failover_policy</userinput>&ensp;<replaceable>
A</replaceable>&ensp;</screen>
<para>You modify a failover policy using the same commands you use to define
a failover policy.</para>
<para>You can use the following command to delete a failover policy definition:
</para>
<screen>cmgr> <userinput>delete failover_policy</userinput>&ensp;<replaceable>
A</replaceable></screen>
</sect3>
</sect2>
<sect2>
<title>Displaying Failover Policies</title>
<para>You can use Linux FailSafe to display any of the following:</para>
<itemizedlist>
<listitem><para>The components of a specified failover policy</para>
</listitem>
<listitem><para>All of the failover policies that have been defined</para>
</listitem>
<listitem><para>All of the failover policy attributes that have been defined
</para>
</listitem>
<listitem><para>All of the failover policy scripts that have been defined
</para>
</listitem>
</itemizedlist>
<sect3>
<title>Displaying Failover Policies with the Cluster Manager GUI</title>
<para>The Cluster Manager GUI provides a convenient display of failover policies
through the FailSafe Cluster View. You can launch the FailSafe Cluster View
directly, or you can bring it up at any time by clicking on the &ldquo;FailSafe
Cluster View&rdquo; button at the bottom of the &ldquo;FailSafe Manager&rdquo;
display.</para>
<para>From the View menu of the FailSafe Cluster View, select Failover Policies
to see all defined failover policies.</para>
</sect3>
<sect3>
<title>Displaying Failover Policies with the Cluster Manager CLI</title>
<para>Use the following command to view the parameters of a defined failover
policy:</para>
<screen>cmgr> <userinput>show failover_policy </userinput><replaceable>A</replaceable></screen>
<para>Use the following command to view all of the defined failover policies:
</para>
<screen>cmgr> <userinput>show failover_policies</userinput></screen>
<para>Use the following command to view all of the defined failover policy
attributes:</para>
<screen>cmgr> <userinput>show failover_policy attributes</userinput></screen>
<para>Use the following command to view all of the defined failover policy
scripts:</para>
<screen>cmgr> <userinput>show failover_policy scripts</userinput></screen>
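<para>For example, assuming the hypothetical <literal>fp1</literal> policy sketched
earlier in this chapter, you might enter the following commands (output not shown):</para>
<screen>cmgr> <userinput>show failover_policy fp1</userinput>
cmgr> <userinput>show failover_policy attributes</userinput>
cmgr> <userinput>show failover_policy scripts</userinput>
</screen>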
</sect3>
</sect2>
<sect2 id="fs-defineresgroup" role="fs-defineresgroup">
<title id="LE25366-TITLE">Defining Resource Groups</title>
<para><indexterm id="ITconfig-67"><primary>resource group</primary><secondary>
definition</secondary></indexterm>Resources are configured together into <firstterm>
resource group</firstterm>s. A resource group is a collection of interdependent
resources. If any individual resource in a resource group becomes unavailable
for its intended use, then the entire resource group is considered unavailable.
Therefore, a resource group is the unit of failover for Linux FailSafe.</para>
<para>For example, a resource group could contain all of the resources that
are required for the operation of a web server, such as the web server itself,
the IP address with which it communicates to the outside world, and the disk
volumes containing the content that it serves.</para>
<para>When you define a resource group, you specify a <firstterm>failover
policy</firstterm>. A failover policy controls the behavior of a resource
group in failure situations.</para>
<para>To define a resource group, you provide the following information:</para>
<itemizedlist>
<listitem><para>The name of the resource group, with a maximum length of 63
characters.</para>
</listitem>
<listitem><para>The name of the cluster to which the resource group is available
</para>
</listitem>
<listitem><para>The resources to include in the resource group, and their
resource types</para>
</listitem>
<listitem><para>The name of the failover policy that determines which node
will take over the services of the resource group on failure</para>
</listitem>
</itemizedlist>
<para>Linux FailSafe does not allow resource groups that do not contain any
resources to be brought online.</para>
<para>You can define up to 100 resources configured in any number of resource
groups.</para>
<sect3 id="fs-addresourcestoresgroup" role="fs-addresourcestoresgroup">
<title>Defining a Resource Group with the Cluster Manager GUI</title>
<para>To define a resource group with the Cluster Manager GUI, perform the
following steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on &ldquo;Guided Configuration&rdquo;.
</para>
</listitem>
<listitem><para>On the right side of the display click on &ldquo;Set Up Highly
Available Resource Groups&rdquo; to launch the task link.</para>
</listitem>
<listitem><para>In the resulting window, click each task link in turn, as
it becomes available. Enter the selected inputs for each task.</para>
</listitem>
<listitem><para>When finished, click &ldquo;OK&rdquo; to close the taskset
window.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Defining a Resource Group with the Cluster Manager CLI</title>
<para>To configure a resource group, enter the following command at the <filename>
cmgr</filename> prompt to specify the name of a resource group and the cluster
to which the resource group is available:</para>
<screen>cmgr> <userinput>define resource_group </userinput><replaceable>A 
</replaceable>[<userinput>in cluster</userinput><replaceable>&ensp;B</replaceable>]
</screen>
<para>Entering this command specifies the name of the resource group you are
defining within a specified cluster. If you have specified a default cluster,
you do not need to specify a cluster in this command and the CLI will use
the default.</para>
<para>The following prompt appears:</para>
<screen>Enter commands, when finished enter either "done" or "cancel"
resource_group <replaceable>A</replaceable>?</screen>
<para>When this prompt appears you can use the following commands to specify
the resources to include in the resource group and the failover policy to
apply to the resource group:</para>
<screen>resource_group <replaceable>A</replaceable>? <userinput>add resource
</userinput>&ensp;<replaceable>B</replaceable>&ensp;<userinput>of resource_type
</userinput>&ensp;<replaceable>C</replaceable>
resource_group <replaceable>A</replaceable>? <userinput>set failover_policy to 
</userinput><replaceable>D</replaceable></screen>
<para>After you have set the failover policy and you have finished adding
resources to the resource group, enter <filename>done</filename> to return
to the <filename>cmgr</filename> prompt.</para>
<para>For a full example of resource group creation using the Cluster Manager
CLI, see <xref linkend="LE40511-PARENT">.</para>
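<para>As a brief sketch (the referenced section contains a complete example), the
following session creates a hypothetical resource group named <literal>nfs-group
</literal>, adds two resources to it, and assigns it the <literal>fp1</literal>
failover policy; all names are illustrative:</para>
<screen>cmgr> <userinput>define resource_group nfs-group in cluster test</userinput>
Enter commands, when finished enter either "done" or "cancel"
resource_group nfs-group? <userinput>add resource 192.26.50.1 of resource_type IP_address</userinput>
resource_group nfs-group? <userinput>add resource /this_disk of resource_type NFS</userinput>
resource_group nfs-group? <userinput>set failover_policy to fp1</userinput>
resource_group nfs-group? <userinput>done</userinput>
cmgr></screen>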
</sect3>
</sect2>
<sect2 id="fs-modifyresgroup" role="fs-modifyresgroup">
<title>Modifying and Deleting Resource Groups</title>
<para><indexterm id="ITconfig-68"><primary>resource group</primary><secondary>
modifying</secondary></indexterm> <indexterm id="ITconfig-69"><primary>resource
group</primary><secondary>deleting</secondary></indexterm>After you have defined
resource groups, you can modify and delete the resource groups. You can change
the failover policy of a resource group by specifying a new failover policy
associated with that resource group, and you can add or delete resources to
the existing resource group. Note, however, that since you cannot have a resource
group online that does not contain any resources, Linux FailSafe does not
allow you to delete all resources from a resource group once the resource
group is online. Likewise, Linux FailSafe does not allow you to bring a resource
group online if it has no resources. Also, resources must be added and deleted
in atomic units; this means that resources which are interdependent must be
added and deleted together.</para>
<sect3>
<title>Modifying and Deleting Resource Groups with the Cluster Manager GUI
</title>
<para>To modify a resource group with the Cluster Manager GUI, perform the
following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Failover
Policies &amp; Resource Groups&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Modify
a Resource Group Definition&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
<para>To add or delete resources to a resource group definition with the Cluster
Manager GUI, perform the following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Failover
Policies &amp; Resource Groups&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Add/Remove
Resources in Resource Group&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
<para>To delete a resource group with the Cluster Manager GUI, perform the
following procedure:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Failover
Policies &amp; Resource Groups&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Delete
a Resource Group&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task, or click on &ldquo;Cancel&rdquo; to cancel.</para>
</listitem>
</orderedlist>
</sect3>
<sect3>
<title>Modifying and Deleting Resource Groups with the Cluster Manager CLI
</title>
<para>Use the following CLI command to modify a resource group:</para>
<screen>cmgr> <userinput>modify resource_group </userinput><replaceable>A 
</replaceable>[<userinput>in cluster</userinput><replaceable>&ensp;B</replaceable>]
</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default. You modify a resource
group using the same commands you use to define a failover policy:</para>
<screen>resource_group <replaceable>A</replaceable>? <userinput>add resource
</userinput>&ensp;<replaceable>B</replaceable>&ensp;<userinput>of resource_type
</userinput>&ensp;<replaceable>C</replaceable>
resource_group <replaceable>A</replaceable>? <userinput>set failover_policy to 
</userinput><replaceable>D</replaceable></screen>
<para>You can use the following command to delete a resource group definition:
</para>
<screen>cmgr> <userinput>delete resource_group </userinput><replaceable>A 
</replaceable>[<userinput>in cluster</userinput><replaceable>&ensp;B</replaceable>]
</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
</sect3>
</sect2>
<sect2>
<title>Displaying Resource Groups</title>
<para><indexterm id="ITconfig-70"><primary>resource group</primary><secondary>
displaying</secondary></indexterm>You can display the parameters of a defined
resource group, and you can display all of the resource groups defined for
a cluster.</para>
<sect3>
<title>Displaying Resource Groups with the Cluster Manager GUI</title>
<para>The Cluster Manager GUI provides a convenient display of resource groups
through the FailSafe Cluster View. You can launch the FailSafe Cluster View
directly, or you can bring it up at any time by clicking on the &ldquo;FailSafe
Cluster View&rdquo; button at the bottom of the &ldquo;FailSafe Manager&rdquo;
display.</para>
<para>From the View menu of the FailSafe Cluster View, select Groups to see
all defined resource groups.</para>
<para>To display which nodes are currently running which groups, select &ldquo;Groups
owned by Nodes.&rdquo; To display which groups are running which failover
policies, select &ldquo;Groups by Failover Policies.&rdquo;</para>
</sect3>
<sect3>
<title>Displaying Resource Groups with the Cluster Manager CLI</title>
<para>Use the following command to view the parameters of a defined resource
group:</para>
<screen>cmgr> <userinput>show resource_group </userinput><replaceable>A </replaceable>[<userinput>
in cluster</userinput><replaceable>&ensp;B</replaceable>]</screen>
<para>If you have specified a default cluster, you do not need to specify
a cluster in this command and the CLI will use the default.</para>
<para>Use the following command to view all of the defined resource groups:
</para>
<screen>cmgr> <userinput>show resource_groups</userinput> [<userinput>in cluster
</userinput>&ensp;<replaceable>A</replaceable>]</screen>
</sect3>
</sect2>
</sect1>
<sect1 id="fs-setlogparams" role="fs-setlogparams">
<title id="LE22037-TITLE">Linux FailSafe System Log Configuration</title>
<para><indexterm id="ITconfig-71"><primary>log groups</primary></indexterm>Linux
FailSafe maintains system logs for each of the Linux FailSafe daemons. You
can customize the system logs according to the level of logging you wish to
maintain.</para>
<para>A log group is a set of processes that log to the same log file according
to the same logging configuration. Each Linux FailSafe daemon has its own log
group. Linux FailSafe maintains the following log groups:</para>
<variablelist>
<varlistentry><term><filename><indexterm id="ITconfig-72"><primary>cli log
</primary></indexterm>cli</filename></term>
<listitem>
<para>Commands log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-73"><primary>crsd log
</primary></indexterm>crsd</filename></term>
<listitem>
<para>Cluster reset services (<filename>crsd</filename>) log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-74"><primary>diags log
</primary></indexterm>diags</filename></term>
<listitem>
<para>Diagnostics log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-75"><primary>ha_agent
log</primary></indexterm>ha_agent</filename></term>
<listitem>
<para>HA monitoring agents (<filename>ha_ifmx2</filename>) log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-76"><primary>ha_cmsd
log</primary></indexterm>ha_cmsd</filename></term>
<listitem>
<para>Cluster membership daemon (<filename>ha_cmsd</filename>) log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-77"><primary>ha_fsd
log</primary></indexterm>ha_fsd</filename></term>
<listitem>
<para>Linux FailSafe daemon (<filename>ha_fsd</filename>) log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-78"><primary>ha_gcd
log</primary></indexterm>ha_gcd</filename></term>
<listitem>
<para>Group communication daemon (<filename>ha_gcd</filename>) log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-79"><primary>ha_ifd
log</primary></indexterm>ha_ifd</filename></term>
<listitem>
<para>Network interface monitoring daemon (<filename>ha_ifd</filename>) log
</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-80"><primary>ha_script
log</primary></indexterm>ha_script</filename></term>
<listitem>
<para>Action and Failover policy scripts log</para>
</listitem>
</varlistentry>
<varlistentry><term><filename><indexterm id="ITconfig-81"><primary>ha_srmd
log</primary></indexterm>ha_srmd</filename></term>
<listitem>
<para>System resource manager (<filename>ha_srmd</filename>) log</para>
</listitem>
</varlistentry>
</variablelist>
<para>Log group configuration information is maintained for all nodes in the
pool for the <filename>cli</filename> and <filename>crsd</filename> log groups
or for all nodes in the cluster for all other log groups. You can also customize
the log group configuration for a specific node in the cluster or pool.</para>
<para>When you configure a log group, you specify the following information:
</para>
<itemizedlist>
<listitem><para>The log level, specified as character strings with the GUI
and numerically (0 to 19) with the CLI, as described below</para>
</listitem>
<listitem><para>The log file to log to</para>
</listitem>
<listitem><para>The node whose specified log group you are customizing (optional)
</para>
</listitem>
</itemizedlist>
<para><indexterm id="ITconfig-82"><primary>log level</primary></indexterm>
The log level specifies the verbosity of the logging, controlling the amount
of log messages that Linux FailSafe will write into an associated log group's
file. There are 10 debug level. <xref linkend="LE32420-PARENT">, shows the
logging levels as you specify them with the GUI and the CLI.</para>
<table frame="topbot" id="LE32420-PARENT">
<title id="LE32420-TITLE">Log Levels</title>
<tgroup cols="3" colsep="0" rowsep="0">
<colspec colwidth="92*">
<colspec colwidth="92*">
<colspec colwidth="302*">
<thead>
<row rowsep="1"><entry align="left" valign="bottom"><para>GUI level</para></entry>
<entry align="left" valign="bottom"><para>CLI level</para></entry><entry align="left"
valign="bottom"><para>Meaning</para></entry></row>
</thead>
<tbody>
<row>
<entry align="left" valign="top"><para><literal>Off</literal></para></entry>
<entry align="left" valign="top"><para>0</para></entry>
<entry align="left" valign="top"><para>No logging</para></entry>
</row>
<row>
<entry align="left" valign="top"><para><literal>Minimal</literal></para></entry>
<entry align="left" valign="top"><para>1</para></entry>
<entry align="left" valign="top"><para>Logs notification of critical errors
and normal operation</para></entry>
</row>
<row>
<entry align="left" valign="top"><para><literal>Info</literal></para></entry>
<entry align="left" valign="top"><para>2</para></entry>
<entry align="left" valign="top"><para>Logs minimal notification plus warning
</para></entry>
</row>
<row>
<entry align="left" valign="top"><para><literal>Default</literal></para></entry>
<entry align="left" valign="top"><para>5</para></entry>
<entry align="left" valign="top"><para>Logs all Info messages plus additional
notifications</para></entry>
</row>
<row>
<entry align="left" valign="top"><para><literal>Debug0</literal></para></entry>
<entry align="left" valign="top"><para>10</para></entry>
<entry align="left" valign="top"><para></para></entry>
</row>
<row>
<entry align="left" valign="top"><para>...</para></entry>
<entry align="left" valign="top"><para></para></entry>
<entry align="left" valign="top"><para><literal>Debug0</literal> through <literal>
Debug9</literal> (10 through 19 in the CLI) log increasingly detailed debug information,
including data structures. Many megabytes of disk space can be consumed on
the server when debug levels are used in a log configuration.</para></entry>
</row>
<row>
<entry align="left" valign="top"><para><literal>Debug9</literal></para></entry>
<entry align="left" valign="top"><para>19</para></entry>
<entry align="left" valign="top"><para></para></entry>
</row>
</tbody>
</tgroup>
</table>
<note>
<para>Notifications of critical errors and normal operations are always sent
to <filename>/var/log/failsafe/</filename>. Changes you make to the log level
for a log group do not affect <filename>SYSLOG</filename>.</para>
</note>
<para>The Linux FailSafe software appends the node name to the name of the
log file you specify. For example, when you specify the log file name for
a log group as <filename>/var/log/failsafe/cli</filename>, the file name will
be <filename>/var/log/failsafe/cli_</filename><command>nodename</command>.
</para>
<para><indexterm id="ITconfig-83"><primary>log files</primary></indexterm>The
default log file names are as follows.</para>
<variablelist>
<varlistentry><term><filename>/var/log/failsafe/cmsd_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for cluster membership services daemon in node <replaceable>
nodename</replaceable></para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/gcd_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for group communication daemon in node <replaceable>nodename
</replaceable></para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/srmd_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for system resource manager daemon in node <replaceable>nodename
</replaceable></para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/failsafe_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for Linux FailSafe daemon, a policy implementor for resource
groups, in node  <replaceable>nodename</replaceable></para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/</filename><replaceable>agent_nodename
</replaceable></term>
<listitem>
<para>log file for monitoring agent named <replaceable>agent</replaceable>
in node <replaceable>nodename</replaceable>. For example, <filename>ifd_</filename><replaceable>
nodename</replaceable> is the log file for the interface daemon monitoring
agent that monitors interfaces and IP addresses and performs local failover
of IP addresses.</para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/crsd_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for reset daemon in node <replaceable>nodename</replaceable></para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/script_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for scripts in node <replaceable>nodename</replaceable></para>
</listitem>
</varlistentry>
<varlistentry><term><filename>/var/log/failsafe/cli_</filename><replaceable>
nodename</replaceable></term>
<listitem>
<para>log file for internal administrative commands in node <replaceable>nodename
</replaceable> invoked by the Cluster Manager GUI and Cluster Manager CLI
</para>
</listitem>
</varlistentry>
</variablelist>
<para>For information on using log groups in system recovery, see <xref linkend="LE28716-PARENT">.
</para>
<sect2>
<title>Configuring Log Groups with the Cluster Manager GUI</title>
<para>To configure a log group with the Cluster Manager GUI, perform the following
steps:</para>
<orderedlist>
<listitem><para>Launch the FailSafe Manager.</para>
</listitem>
<listitem><para>On the left side of the display, click on the &ldquo;Nodes
&amp; Clusters&rdquo; category.</para>
</listitem>
<listitem><para>On the right side of the display click on the &ldquo;Set Log
Configuration&rdquo; task link to launch the task.</para>
</listitem>
<listitem><para>Enter the selected inputs.</para>
</listitem>
<listitem><para>Click on &ldquo;OK&rdquo; at the bottom of the screen to complete
the task.</para>
</listitem>
</orderedlist>
</sect2>
<sect2>
<title>Configuring Log Groups with the Cluster Manager CLI</title>
<para>You can configure a log group with the following CLI command:</para>
<screen> cmgr> <userinput>define log_group </userinput><replaceable>A</replaceable> [<userinput>
on node </userinput><replaceable>B</replaceable>] [<userinput>in cluster 
</userinput><replaceable>C</replaceable>]</screen>
<para>You specify the node if you wish to customize the log group configuration
for a specific node only. If you have specified a default cluster, you do
not have to specify a cluster in this command; Linux FailSafe will use the
default.</para>
<para>The following prompt appears:</para>
<screen>Enter commands, when finished enter either "done" or "cancel"
log_group<replaceable>&ensp;A</replaceable>?</screen>
<para>When this prompt appears, enter the log group parameters you wish to
modify, using the following format:</para>
<screen>log_group<replaceable>&ensp;A</replaceable>? <userinput>set log_level to 
</userinput><replaceable>B</replaceable>
log_group<replaceable>&ensp;A</replaceable>? <userinput>add log_file</userinput>&ensp;<replaceable>
C</replaceable>
log_group<replaceable>&ensp;A</replaceable>? <userinput>remove log_file</userinput>&ensp;<replaceable>
C</replaceable></screen>
<para>When you are finished configuring the log group, enter <filename>done
</filename> to return to the <filename>cmgr</filename> prompt.</para>
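<para>For example, the following session raises the log level of the <filename>
ha_cmsd</filename> log group to a debug level and adds a log file. The cluster
name, log level, and file name shown here are hypothetical values; the node
name is appended to the file name as described above:</para>
<screen>cmgr> define log_group ha_cmsd in cluster HA-cluster
Enter commands, when finished enter either "done" or "cancel"
log_group ha_cmsd? set log_level to 11
log_group ha_cmsd? add log_file /var/log/failsafe/cmsd_debug
log_group ha_cmsd? done</screen>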
</sect2>
<sect2>
<title>Modifying Log Groups with the Cluster Manager CLI</title>
<para>Use the following CLI command to modify a log group:</para>
<screen>cmgr> <userinput>modify log_group </userinput><replaceable>A</replaceable> [<userinput>
on node </userinput><replaceable>B</replaceable>] [<userinput>
in cluster </userinput><replaceable>C</replaceable>]</screen>
<para>You modify a log group using the same commands you use to define a log
group.</para>
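<para>For example, the following session (using hypothetical node and cluster
names) sets the <filename>cli</filename> log group back to the default log
level on a single node:</para>
<screen>cmgr> modify log_group cli on node node1 in cluster HA-cluster
log_group cli? set log_level to 5
log_group cli? done</screen>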
</sect2>
<sect2>
<title>Displaying Log Group Definitions with the Cluster Manager GUI</title>
<para>To display log group definitions with the Cluster Manager GUI, run &ldquo;Set
Log Configuration&rdquo; and choose the log group to display from the rollover
menu. The current log level and log file for that log group will be displayed
in the task window, where you can change those settings if you desire.</para>
</sect2>
<sect2>
<title>Displaying Log Group Definitions with the Cluster Manager CLI</title>
<para>Use the following command to view the defined log groups:
</para>
<screen>cmgr> <userinput>show log_groups</userinput></screen>
<para>This command shows all of the log groups currently defined, with the
log group name, the logging levels and the log files.</para>
<para>For information on viewing the contents of the log file, see <xref linkend="LE28716-PARENT">.
</para>
</sect2>
</sect1>
<sect1 id="LE40511-PARENT">
<title id="LE40511-TITLE">Resource Group Creation Example</title>
<para><indexterm id="ITconfig-84"><primary>resource group</primary><secondary>
creation example</secondary></indexterm>Use the following procedure to create
a resource group using the Cluster Manager CLI:</para>
<orderedlist>
<listitem><para>Determine the list of resources that belong to the resource
group you are defining. The resources that belong to a resource group move
from one node to another as a single unit.</para>
<para><indexterm id="ITconfig-85"><primary>resource</primary><secondary>NFS
</secondary></indexterm> <indexterm id="ITconfig-86"><primary>resource type
</primary><secondary>NFS</secondary></indexterm>A resource group that provides
NFS services would contain a resource of each of the following types:</para>
<itemizedlist>
<listitem><para><filename>IP_address</filename></para>
</listitem>
<listitem><para><filename>volume</filename></para>
</listitem>
<listitem><para><filename>filesystem</filename></para>
</listitem>
<listitem><para><filename>NFS</filename></para>
<para>All resource and resource type dependencies of resources in a resource
group must be satisfied. For example, the <filename>NFS</filename> resource
type depends on the <filename>filesystem</filename> resource type, so a resource
group containing a resource of <filename>NFS</filename> resource type should
also contain a resource of <filename>filesystem</filename> resource type.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Determine the failover policy to be used by the resource group.
</para>
</listitem>
<listitem><para>Use the <filename>cluster_mgr</filename> template script available
in the <?Pub _nolinebreak><filename>/usr/lib/failsafe/cmgr-templates/cmgr-create-resource_group
</filename><?Pub /_nolinebreak> file.</para>
<para>This example shows a script that creates a resource group with the following
characteristics:</para>
<itemizedlist>
<listitem><para>The resource group is named <filename>nfs-group</filename></para>
</listitem>
<listitem><para>The resource group is in cluster <filename>HA-cluster</filename></para>
</listitem>
<listitem><para>The resource group uses the failover policy <filename>n1_n2_ordered</filename></para>
</listitem>
<listitem><para>The resource group contains <filename>IP_address</filename>, <filename>
volume</filename>, <filename>filesystem</filename>, and <filename>NFS</filename>
resources</para>
<para>The following script can be used to create this resource group:</para>
<programlisting>define resource_group nfs-group in cluster HA-cluster
&ensp;       set failover_policy to n1_n2_ordered
&ensp;       add resource 192.0.2.34 of resource_type IP_address
&ensp;       add resource havol1 of resource_type volume
&ensp;       add resource /hafs1 of resource_type filesystem
&ensp;       add resource /hafs1 of resource_type NFS
done</programlisting>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>Run this script using the <command>-f</command> option of
the <command>cluster_mgr</command> command, as shown in the example that
follows this procedure.</para>
</listitem>
</orderedlist>
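<para>As a minimal sketch, assuming the script above was saved in a file named
<filename>create_nfs_group</filename> (a hypothetical file name), you could
run it as follows:</para>
<screen># /usr/lib/failsafe/bin/cluster_mgr -f create_nfs_group</screen>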
</sect1>
<sect1 id="LE40790-PARENT">
<title id="LE40790-TITLE">Linux FailSafe Configuration Example CLI Script
</title>
<para>The following Cluster Manager CLI script provides an example which shows
how to configure a cluster in the cluster database. The script illustrates
the CLI commands that you execute when you define a cluster. You will use
the parameters of your own system when you configure your cluster. After you
create a CLI script, you can set the execute permissions and execute the script
directly.</para>
<para>For general information on CLI scripts, see <xref linkend="LE41514-PARENT">.
For information on the CLI template files that you can use to create your
own configuration script, see <xref linkend="LE10673-PARENT">.</para>
<programlisting>#!/usr/lib/failsafe/bin/cluster_mgr -f


#################################################################
#                                                               #
# Sample cmgr script to create a 2-node cluster in the cluster  #
# database (cdb).                                               #
# This script is created using cmgr template files under        #
#          /usr/lib/failsafe/cmgr-templates directory.        #
# The cluster has 2 resource groups:                            #
# 1. nfs-group - Has 2 NFS, 2 filesystem, 2 volume, 1 statd and #
#                1 IP_address resources.                        #
# 2. web-group - Has 1 Netscape_web and 1 IP_address resources. #
#                                                               #
# NOTE: After running this script to define the cluster in the  #
# cdb, the user has to enable the two resource groups using the #
# cmgr admin online resource_group command.                     #
#                                                               #
#################################################################

#
# Create the first node.
# Information to create a node is obtained from template script:
#	/usr/lib/failsafe/cmgr-templates/cmgr-create-node
#

#
#
# logical name of the node. It is recommended that logical name of
# the node be output of hostname(1) command.
#
define node sleepy
#
# Hostname of the node. This is optional. If this field is not
# specified, logical name of the node is assumed to be hostname.
# This value has to be
# the output of hostname(1) command.
#
&ensp;      set hostname to sleepy
# 
# Node identifier. Node identifier is a 16 bit integer that uniquely
# identifies the node. This field is optional. If value is
# not provided, cluster software generates node identifier.
# Example value: 1
&ensp;      set nodeid to 101
#
# Description of the system controller of this node.
# System controller can be &ldquo;chalL&rdquo; or &ldquo;msc&rdquo; or &ldquo;mmsc&rdquo;. If the node is a
# Challenge DM/L/XL, then system controller type is &ldquo;chalL&rdquo;. If the
# node is Origin 200 or deskside Origin 2000, then the system
# controller type is &ldquo;msc&rdquo;. If the node is rackmount Origin 2000, the
# system controller type is &ldquo;mmsc&rdquo;.
# Possible values: msc, mmsc, chalL
#
&ensp;       set sysctrl_type to msc
#
# You can enable or disable system controller definition. Users are 
# expected to enable system controller definition after verifying the
# serial reset cables connected to this node.
# Possible values: enabled, disabled
#
&ensp;       set sysctrl_status to enabled
# 
# The system controller password for doing privileged system controller
# commands.
# This field is optional.
#
&ensp;       set sysctrl_password to none
#
# System controller owner. The node name of the machine that is 
# connected using serial cables to system controller of this node.
# System controller node also has to be defined in the CDB.
#
&ensp;       set sysctrl_owner to grumpy
#
# System controller device. The absolute device path name of the tty
# to which the serial cable is connected in this node.
# Example value: /dev/ttyd2
#
&ensp;       set sysctrl_device to /dev/ttyd2
#
# Currently, the system controller owner can be connected to the system
# controller on this node using &ldquo;tty&rdquo; device.  
# Possible value: tty
#
&ensp;       set sysctrl_owner_type to tty
#
# List of control networks. There can be multiple control networks
# specified for a node. HA cluster software uses these control 
# networks for communication between nodes.  At least two control
# networks should be specified for heartbeat messages and one
# control network for failsafe control messages.
# For each control network for the node, please add one more
# control network section.
#
# Name of control network IP address. This IP address must
# be configured on the network interface in /etc/rc.config
# file in the node.
# It is recommended that the IP address in internet dot notation
# is provided.
# Example value: 192.26.50.3
#
&ensp;       add nic 192.26.50.14
#
# Flag to indicate if the control network can be used for sending 
# heartbeat messages.
# Possible values: true, false
#
&ensp;           set heartbeat to true
#
# Flag to indicate if the control network can be used for sending 
# failsafe control messages.
# Possible values: true, false
#
&ensp;           set ctrl_msgs to true
#
# Priority of the control network. The higher the priority value, the
# lower the priority of the control network.
# Example value: 1
#
&ensp;           set priority to 1
#
# Control network information complete
#
&ensp;       done
#
# Add more control networks information here.
#

# Name of control network IP address. This IP address must be
# configured on the network interface in /etc/rc.config
# file in the node.
# It is recommended that the IP address in internet dot
# notation is provided.
# Example value: 192.26.50.3
#
&ensp;       add nic 150.166.41.60
#
# Flag to indicate if the control network can be used for sending 
# heartbeat messages.
# Possible values: true, false
#
&ensp;           set heartbeat to true
#
# Flag to indicate if the control network can be used for sending 
# failsafe control messages.
# Possible values: true, false
#
&ensp;           set ctrl_msgs to false
#
# Priority of the control network. The higher the priority value, the
# lower the priority of the control network.
# Example value: 1
#
&ensp;           set priority to 2
#
# Control network information complete
#
&ensp;       done
#
# Node definition complete
#
done


#
# Create the second node.
# Information to create a node is obtained from template script:
# 	/usr/lib/failsafe/cmgr-templates/cmgr-create-node
#

#
#
# logical name of the node. It is recommended that logical name of 
# the node be output of hostname(1) command.
#
define node grumpy
#
# Hostname of the node. This is optional. If this field is not
# specified, logical name of the node is assumed to be hostname.
# This value has to be
# the output of hostname(1) command.
#
&ensp;      set hostname to grumpy
# 
# Node identifier. Node identifier is a 16 bit integer that uniquely
# identifies the node. This field is optional. If value is 
# not provided, cluster software generates node identifier.
# Example value: 1
&ensp;      set nodeid to 102
#
# Description of the system controller of this node.
# System controller can be &ldquo;chalL&rdquo; or &ldquo;msc&rdquo; or &ldquo;mmsc&rdquo;. If the node is a
# Challenge DM/L/XL, then system controller type is &ldquo;chalL&rdquo;. If the
# node is Origin 200 or deskside Origin 2000, then the system
# controller type is &ldquo;msc&rdquo;. If the node is rackmount Origin 2000,
# the system controller type is &ldquo;mmsc&rdquo;.
# Possible values: msc, mmsc, chalL
#
&ensp;       set sysctrl_type to msc
#
# You can enable or disable system controller definition. Users are 
# expected to enable system controller definition after verifying the
# serial reset cables connected to this node.
# Possible values: enabled, disabled
#
&ensp;       set sysctrl_status to enabled
# 
# The system controller password for doing privileged system controller
# commands.
# This field is optional.
#
&ensp;       set sysctrl_password to none
#
# System controller owner. The node name of the machine that is 
# connected using serial cables to system controller of this node.
# System controller node also has to be defined in the CDB.
#
&ensp;       set sysctrl_owner to sleepy
#
# System controller device. The absolute device path name of the tty
# to which the serial cable is connected in this node.
# Example value: /dev/ttyd2
#
&ensp;       set sysctrl_device to /dev/ttyd2
#
# Currently, the system controller owner can be connected to the system
# controller on this node using &ldquo;tty&rdquo; device.  
# Possible value: tty
#
&ensp;       set sysctrl_owner_type to tty
#
# List of control networks. There can be multiple control networks
# specified for a node. HA cluster software uses these control
# networks for communication between nodes.  At least two control
# networks should be specified for heartbeat messages and one
# control network for failsafe control messages.
# For each control network for the node, please add one more 
# control network section.
#
# Name of control network IP address. This IP address must be
# configured on the network interface in /etc/rc.config
# file in the node.
# It is recommended that the IP address in internet dot notation
# is provided.
# Example value: 192.26.50.3
#
&ensp;       add nic 192.26.50.15
#
# Flag to indicate if the control network can be used for sending 
# heartbeat messages.
# Possible values: true, false
#
&ensp;           set heartbeat to true
#
# Flag to indicate if the control network can be used for sending 
# failsafe control messages.
# Possible values: true, false
#
&ensp;           set ctrl_msgs to true
#
# Priority of the control network. The higher the priority value, the
# lower the priority of the control network.
# Example value: 1
#
&ensp;           set priority to 1
#
# Control network information complete
#
&ensp;       done
#
# Add more control networks information here.
#

# Name of control network IP address. This IP address must be
# configured on the network interface in /etc/rc.config
# file in the node.
# It is recommended that the IP address in internet dot notation
# is provided.
# Example value: 192.26.50.3
#
&ensp;       add nic 150.166.41.61
#
# Flag to indicate if the control network can be used for sending 
# heartbeat messages.
# Possible values: true, false
#
&ensp;           set heartbeat to true
#
# Flag to indicate if the control network can be used for sending 
# failsafe control messages.
# Possible values: true, false
#
&ensp;           set ctrl_msgs to false
#
# Priority of the control network. The higher the priority value, the
# lower the priority of the control network.
# Example value: 1
#
&ensp;           set priority to 2
#
# Control network information complete
#
&ensp;       done
#
# Node definition complete
#
done


#
# Define (create) the cluster.
# Information to create the cluster is obtained from template script:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-cluster
#

#
# Name of the cluster.  
#
define cluster failsafe-cluster
#
# Notification command for the cluster. This is optional. If this 
# field is not specified,  /usr/bin/mail command is used for
# notification. Notification is sent when there is change in status of
# cluster, node and resource group.
#
&ensp;      set notify_cmd to /usr/bin/mail
# 
# Notification address for the cluster. This field value is passed as
# argument to the notification command. Specifying the notification
# command is optional and user can specify only the notification
# address in order to receive notifications by mail. If address is
# not specified, notification will not be sent.
# Example value: failsafe_alias@sysadm.company.com
&ensp;      set notify_addr to robinhood@sgi.com princejohn@sgi.com
#
# List of nodes added to the cluster.
# Repeat the following line for each node to be added to the cluster.
# Node should be already defined in the CDB and logical name of the
# node has to be specified.
&ensp;       add node sleepy
#
# Add more nodes to the cluster here.
#
&ensp;       add node grumpy

#
# Cluster definition complete
#
done


#
# Create failover policies
# Information to create the failover policies is obtained from
# template script:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-cluster
#

#
# Create the first failover policy.
#

#
# Name of the failover policy.  
#
define failover_policy sleepy-primary
#
# Failover policy attribute. This field is mandatory.
# Possible values: Auto_Failback, Controlled_Failback, Auto_Recovery,
# InPlace_Recovery
#

&ensp;       set attribute to Auto_Failback

&ensp;       set attribute to Auto_Recovery

# 
# Failover policy script. The failover policy scripts have to
# be present in
# /usr/lib/failsafe/policies directory. This field is mandatory.
# Example value: ordered (file name not the full path name).
&ensp;       set script to ordered
#
# Failover policy domain. Ordered list of nodes in the cluster
# separated by spaces. This field is mandatory.
#
&ensp;       set domain to sleepy grumpy
#
# Failover policy definition complete
#
done

#
# Create the second failover policy.
#

#
# Name of the failover policy.  
#
define failover_policy grumpy-primary
#
# Failover policy attribute. This field is mandatory. 
# Possible values: Auto_Failback, Controlled_Failback, Auto_Recovery,
# InPlace_Recovery
#

&ensp;       set attribute to Auto_Failback

&ensp;       set attribute to InPlace_Recovery

# 
# Failover policy script. The failover policy scripts have
# to be present in
# /usr/lib/failsafe/policies directory. This field is mandatory.
# Example value: ordered (file name not the full path name).
&ensp;       set script to ordered
#
# Failover policy domain. Ordered list of nodes in the cluster
# separated by spaces. This field is mandatory.
#
&ensp;       set domain to  grumpy sleepy
#
# Failover policy definition complete
#
done


#
# Create the IP_address resources.
# Information to create an IP_address resource is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource-IP_address
#

#
# If multiple resources of resource type IP_address have to be created,
# repeat the following IP_address definition template.
#
# Name of the IP_address resource.  The name of the resource has to
# be IP address in the internet &ldquo;.&rdquo; notation. This IP address is used
# by clients to access highly available resources.
# Example value: 192.26.50.140
#
define resource 150.166.41.179 of resource_type IP_address in cluster failsafe-cluster

#
# The network mask for the IP address. The network mask value is used
# to configure the IP address on the network interface.
# Example value: 0xffffff00
&ensp;       set NetworkMask to 0xffffff00
# 
# The ordered list of interfaces that can be used to configure the IP
# address. The list of interface names is separated by commas.
# Example value: eth0, eth1
&ensp;       set interfaces to eth1
#
# The broadcast address for the IP address.
# Example value: 192.26.50.255
&ensp;       set BroadcastAddress to 150.166.41.255

#
# IP_address resource definition for the cluster complete
#
done

#
# Name of the IP_address resource.  The name of the resource has to be
# IP address in the internet &ldquo;.&rdquo; notation. This IP address is used by 
# clients to access highly available resources.
# Example value: 192.26.50.140
#
define resource 150.166.41.99 of resource_type IP_address in cluster failsafe-cluster

#
# The network mask for the IP address. The network mask value is used
# to configure the IP address on the network interface.
# Example value: 0xffffff00
&ensp;       set NetworkMask to 0xffffff00
# 
# The ordered list of interfaces that can be used to configure the IP
# address.
# The list of interface names is separated by commas.
# Example value: eth0, eth1
&ensp;       set interfaces to eth1
#
# The broadcast address for the IP address.
# Example value: 192.26.50.255
&ensp;       set BroadcastAddress to 150.166.41.255

#
# IP_address resource definition for the cluster complete
#
done


#
# Create the volume resources.
# Information to create a volume resource is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource-volume
#

#
# If multiple resources of resource type volume have to be created,
# repeat the following volume definition template.
#
# Name of the volume.  The name of the volume has to be the volume
# name only, not the device path.
# Example value: HA_vol (not /dev/xlv/HA_vol)
#
define resource bagheera of resource_type volume in cluster failsafe-cluster

#
# The user name of the device file name. This field is optional. If
# this field is not specified, value &ldquo;root&rdquo; is used.
# Example value: oracle
&ensp;       set devname-owner to root
# 
# The group name of the device file name. This field is optional.
# If this field is not specified, value &ldquo;sys&rdquo; is used.
# Example value: oracle
&ensp;       set devname-group to sys
#
# The device file permissions. This field is optional. If this
# field is not specified, value &ldquo;666&rdquo; is used. The file permissions
# have to be specified in octal notation. See chmod(1) for more
# information.
# Example value: 666
&ensp;       set devname-mode to 666

#
# Volume resource definition for the cluster complete
#
done

#
# Name of the volume.  The name of the volume has to be the volume
# name only, not the device path.
# Example value: HA_vol (not /dev/xlv/HA_vol)
#
define resource bhaloo of resource_type volume in cluster failsafe-cluster

#
# The user name of the device file name. This field is optional. If this
# field is not specified, value &ldquo;root&rdquo; is used.
# Example value: oracle
&ensp;       set devname-owner to root
# 
# The group name of the device file name. This field is optional.
# If this field is not specified, value &ldquo;sys&rdquo; is used.
# Example value: oracle
&ensp;       set devname-group to sys
#
# The device file permissions. This field is optional. If this field is
# not specified, value &ldquo;666&rdquo; is used. The file permissions
# have to be specified in octal notation. See chmod(1) for more
# information.
# Example value: 666
&ensp;       set devname-mode to 666

#
# Volume resource definition for the cluster complete
#
done


#
# Create the filesystem resources.
# Information to create a filesystem resource is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource-filesystem
#

#
# filesystem resource type is for XFS filesystem only.

# If multiple resources of resource type filesystem have to be created,
# repeat the following filesystem definition template.
#
# Name of the filesystem.  The name of the filesystem resource has 
# to be absolute path name of the filesystem mount point.
# Example value: /shared_vol 
#
define resource /haathi of resource_type filesystem in cluster failsafe-cluster

#
# The name of the volume resource corresponding to the filesystem. This 
# resource should be the same as the volume dependency, see below.
# This field is mandatory.
# Example value: HA_vol
&ensp;       set volume-name to bagheera
# 
# The options to be used when mounting the filesystem. This field is
# mandatory. For the list of mount options, see fstab(4).
# Example value: &ldquo;rw&rdquo;
&ensp;       set mount-options to rw
#
# The monitoring level for the filesystem. This field is optional. If
# this field is not specified, value &ldquo;1&rdquo; is used. 
# Monitoring level can be 
# 1 - Checks if filesystem exists in the mtab file (see mtab(4)). This
# is a lightweight check compared to monitoring level 2.
# 2 - Checks if the filesystem is mounted using stat(1m) command.
#
&ensp;       set monitoring-level to 2
done

#
# Add filesystem resource type dependency
#
modify resource /haathi of resource_type filesystem in cluster failsafe-cluster
#
# The filesystem resource type definition also contains a resource
# dependency on a volume resource.
# This field is mandatory.
# Example value: HA_vol
&ensp;       add dependency bagheera of type volume
#
# filesystem resource definition for the cluster complete
#
done

#
# Name of the filesystem.  The name of the filesystem resource has 
# to be absolute path name of the filesystem mount point.
# Example value: /shared_vol 
#
define resource /sherkhan of resource_type filesystem in cluster failsafe-cluster

#
# The name of the volume resource corresponding to the filesystem. This 
# resource should be the same as the volume dependency, see below.
# This field is mandatory.
# Example value: HA_vol
&ensp;       set volume-name to bhaloo
# 
# The options to be used when mounting the filesystem. This field is
# mandatory. For the list of mount options, see fstab(4).
# Example value: &ldquo;rw&rdquo;
&ensp;       set mount-options to rw
#
# The monitoring level for the filesystem. This field is optional. If
# this field is not specified, value &ldquo;1&rdquo; is used. 
# Monitoring level can be 
# 1 - Checks if filesystem exists in the mtab file (see mtab(4)). This
# is a lightweight check compared to monitoring level 2.
# 2 - Checks if the filesystem is mounted using stat(1m) command.
#
&ensp;       set monitoring-level to 2
done

#
# Add filesystem resource type dependency
#
modify resource /sherkhan of resource_type filesystem in cluster failsafe-cluster
#
# The filesystem resource type definition also contains a resource
# dependency on a volume resource.
# This field is mandatory.
# Example value: HA_vol
&ensp;       add dependency bhaloo of type volume
#
# filesystem resource definition for the cluster complete
#
done


#
# Create the statd resource.
# Information to create a statd resource is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource-statd
#

#
# If multiple resources of resource type statd have to be created,
# repeat the following statd definition template.
#
# Name of the statd.  The name of the resource has to be the location
# of the NFS/lockd directory.
# Example value: /disk1/statmon
#

define resource /haathi/statmon of resource_type statd in cluster failsafe-cluster

#
# The IP address on which the NFS clients connect, this resource should
# be the same as the IP_address dependency, see below.
# This field is mandatory.
# Example value: 128.1.2.3
&ensp;       set InterfaceAddress to 150.166.41.99
done


#
# Add the statd resource type dependencies
#
modify resource /haathi/statmon of resource_type statd in cluster failsafe-cluster
#
# The statd resource type definition also contains a resource
# dependency on a IP_address resource.
# This field is mandatory.
# Example value: 128.1.2.3
&ensp;       add dependency 150.166.41.99 of type IP_address
#
# The statd resource type definition also contains a resource
# dependency on a filesystem resource. It defines the location of
# the NFS lock directory filesystem.
# This field is mandatory.
# Example value: /disk1
&ensp;       add dependency /haathi of type filesystem
#
# statd resource definition for the cluster complete
#
done


#
# Create the NFS resources.
# Information to create a NFS resource is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource-NFS
#

#
# If multiple resources of resource type NFS have to be created, repeat
# the following NFS definition template.
#
# Name of the NFS export point. The name of the NFS resource has to be
# export path name of the filesystem mount point.
# Example value: /disk1
#
define resource /haathi of resource_type NFS in cluster failsafe-cluster

#
# The export options to be used when exporting the filesystem. For the
# list of export options, see exportfs(1M).
# This field is mandatory.
# Example value: &ldquo;rw,wsync,anon=root&rdquo;
&ensp;       set export-info to rw
#
# The name of the filesystem resource corresponding to the export
# point. This resource should be the same as the filesystem dependency,
# see below.
# This field is mandatory.
# Example value: /disk1
&ensp;       set filesystem to /haathi
done

#
# Add the resource type dependency
#
modify resource /haathi of resource_type NFS in cluster failsafe-cluster
#
# The NFS resource type definition also contains a resource dependency
# on a filesystem resource.
# This field is mandatory.
# Example value: /disk1
&ensp;       add dependency /haathi of type filesystem
#
# The NFS resource type also contains a pseudo resource dependency
# on a statd resource. You really must have a statd resource associated
# with an NFS resource, so the NFS locks can be failed over.
# This field is mandatory.
# Example value: /disk1/statmon
&ensp;       add dependency /haathi/statmon of type statd

#
# NFS resource definition for the cluster complete
#
done

#
# Name of the NFS export point. The name of the NFS resource has to be 
# export path name of the filesystem mount point.
# Example value: /disk1
#
define resource /sherkhan of resource_type NFS in cluster failsafe-cluster

# 
# The export options to be used when exporting the filesystem. For the
# list of export options, see exportfs(1M).
# This field is mandatory.
# Example value: &ldquo;rw,wsync,anon=root&rdquo;
&ensp;       set export-info to rw
#
# The name of the filesystem resource corresponding to the export
# point. This 
# resource should be the same as the filesystem dependency, see below.
# This field is mandatory.
# Example value: /disk1
&ensp;       set filesystem to /sherkhan
done

#
# Add the resource type dependency
#
modify resource /sherkhan of resource_type NFS in cluster failsafe-cluster
#
# The NFS resource type definition also contains a resource dependency
# on a filesystem resource.
# This field is mandatory.
# Example value: /disk1
&ensp;       add dependency /sherkhan of type filesystem
#
# The NFS resource type also contains a pseudo resource dependency
# on a statd resource. You really must have a statd resource associated
# with an NFS resource, so the NFS locks can be failed over.
# This field is mandatory.
# Example value: /disk1/statmon
&ensp;       add dependency /haathi/statmon of type statd

#
# NFS resource definition for the cluster complete
#
done


#
# Create the Netscape_web resource.
# Information to create a Netscape_web resource is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource-Netscape_web
#

#
# If multiple resources of resource type Netscape_web have to be
# created, repeat the following Netscape_web definition template.
#
# Name of the Netscape WEB server.  The name of the resource has to be
# a unique identifier.
# Example value: ha80 
#
define resource web-server of resource_type Netscape_web in cluster failsafe-cluster

#
# The location of the server's startup and stop scripts.
# This field is mandatory.
# Example value: /usr/ns-home/ha86
&ensp;       set admin-scripts to /var/netscape/suitespot/https-control3
#
# The TCP port number on which the server listens.
# This field is mandatory.
# Example value: 80
&ensp;       set port-number to 80
#
# The desired monitoring level. The user can specify either:
#       1 - checks for process existence
#       2 - issues an HTML query to the server.
# This field is mandatory.
# Example value: 2
&ensp;       set monitor-level to 2
#
# The location of the Web server's initial HTML page.
# This field is mandatory.
# Example value: /var/www/htdocs
&ensp;       set default-page-location to /var/www/htdocs
#
# The Web server's IP address. This must be a configured IP_address
# resource.
# This resource should be the same as the IP_address dependency, see
# below.
# This field is mandatory.
# Example value: 28.12.9.5
&ensp;       set web-ipaddr to 150.166.41.179
done

#
# Add the resource dependency
#
modify resource web-server of resource_type Netscape_web in cluster failsafe-cluster
#
# The Netscape_web resource type definition also contains a resource
# dependency on a IP_address resource.
# This field is mandatory.
# Example value: 28.12.9.5
&ensp;       add dependency 150.166.41.179 of type IP_address
#
# Netscape_web resource definition for the cluster complete
#
done


#
# Create the resource groups.
# Information to create a resource group is obtained from:
#       /usr/lib/failsafe/cmgr-templates/cmgr-create-resource_group
#

#
# Name of the resource group. Name of the resource group must be unique
# in the cluster.
#
define resource_group nfs-group in cluster failsafe-cluster
#
# Failover policy for the resource group. This field is mandatory. 
# Failover policy should be already defined in the CDB.
#
&ensp;       set failover_policy to sleepy-primary
#
# List of resources in the resource group.
# Repeat the following line for each resource to be added to the
# resource group.
&ensp;       add resource 150.166.41.99 of resource_type IP_address
#
# Add more resources to the resource group here.
#
&ensp;       add resource bagheera of resource_type volume

&ensp;       add resource bhaloo of resource_type volume

&ensp;       add resource /haathi of resource_type filesystem

&ensp;       add resource /sherkhan of resource_type filesystem

&ensp;       add resource /haathi/statmon of resource_type statd

&ensp;       add resource /haathi of resource_type NFS

&ensp;       add resource /sherkhan of resource_type NFS

#
# Resource group definition complete
#
done

#
# Name of the resource group. Name of the resource group must be unique
# in the cluster.
#
define resource_group web-group in cluster failsafe-cluster
#
# Failover policy for the resource group. This field is mandatory. 
# Failover policy should be already defined in the CDB.
#
&ensp;       set failover_policy to grumpy-primary
#
# List of resources in the resource group.
# Repeat the following line for each resource to be added to the
# resource group.
&ensp;       add resource 150.166.41.179 of resource_type IP_address

#
# Add more resources to the resource group here.
#

&ensp;       add resource web-server of resource_type Netscape_web

#
# Resource group definition complete
#
done


#
# Script complete. This should be last line of the script
#
quit</programlisting>
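<para>As noted in the comments at the top of this script, running it only
defines the cluster in the cluster database; the two resource groups must
still be brought online afterwards. The following sketch assumes the script
was saved in a file named <filename>create_cluster</filename> (a hypothetical
file name). Because the script begins with a <command>cluster_mgr -f</command>
interpreter line, you can set the execute permissions and run it directly,
and then bring the resource groups online with the <command>admin online
resource_group</command> command referred to in the script header (the form
shown below is a sketch and its exact syntax may differ):</para>
<screen># chmod +x create_cluster
# ./create_cluster
# /usr/lib/failsafe/bin/cluster_mgr
cmgr> admin online resource_group nfs-group in cluster failsafe-cluster
cmgr> admin online resource_group web-group in cluster failsafe-cluster
cmgr> quit</screen>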
</sect1>
</chapter>
<?Pub *0000158852>