Linux FailSafe Recovery
This chapter provides information on FailSafe system recovery, and includes
sections on the following topics:
Overview of FailSafe System Recovery
When a FailSafe system experiences problems, you can
use some of the FailSafe features and commands to determine where the problem
is.
FailSafe provides the following tools to evaluate and recover from system
failure:
Log files
Commands to monitor status of system components
Commands to start, stop, and fail over highly available services
Keep in mind that the FailSafe logs may not detect system problems that
do not translate into FailSafe problems. For example, if a CPU goes bad, or
hardware maintenance is required, FailSafe may not be able to detect and log
these failures.
In general, when evaluating system problems of any nature on a FailSafe
configuration, you should determine whether you need to shut down a node to
address those problems. When you shut down a node, perform the following steps:
Stop FailSafe services on that node
Shut down the node to perform needed maintenance and repair
Start up the node
Start FailSafe services on that node
It is important that you explicitly stop FailSafe services before shutting
down a node, where possible, so that FailSafe does not interpret the node
shutdown as node failure. If FailSafe interprets the service interruption
as node failure, there could be unexpected ramifications, depending on how
you have configured your resource groups and your application failover domain.
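For example, assuming a node named web-node3 in a cluster named web-cluster
(the example names used throughout this chapter), the stop and start steps
might be performed with the following cluster_mgr commands:
cmgr> stop ha_services in node web-node3 for cluster web-cluster
cmgr> start ha_services in node web-node3 for cluster web-cluster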
When you shut down a node to perform maintenance, you may need to change
your FailSafe configuration to keep your system running.
FailSafe Log Files
FailSafe maintains system logs for each of the FailSafe daemons. You can customize
the system logs according to the level of logging you wish to maintain.
For information on setting up log configurations, see .
Log messages can be of the following types:
Normal
Normal messages report on the successful completion of a task. An example
of a normal message is as follows:
Wed Sep 2 11:57:25.284 <N ha_gcd cms 10185:0>
Delivering TOTAL membership (S# 1, GS# 1)
Error/Warning
Error or warning messages indicate that an error has occurred or may
occur soon. These messages may result from using the wrong command or improper
syntax. An example of a warning message is as follows:
Wed Sep 2 13:45:47.199 <W crsd crs 9908:0
crs_config.c:634> CI_ERR_NOTFOUND, safer - no
such node
Syslog Messages
All normal and error messages are also logged to syslog.
Syslog messages include the symbol <CI> in the header
to indicate they are cluster-related messages. An example of a syslog message
is as follows:
Wed Sep 2 12:22:57 6X:safe syslog: <<CI>
ha_cmsd misc 10435:0> CI_FAILURE, I am not part
of the enabled cluster anymore
Debug
Debug messages appear in the log group file when the logging level is
set to debug0 or higher (using the GUI) or 10 or higher (using the CLI).
Many megabytes of disk space can be consumed on the server when debug
levels are used in a log configuration.
Examining the log files should enable you to see the nature of the system
error. By noting the time of the error and reviewing the activity of the various
daemons in the log files immediately before the error occurred, you may be able
to determine what situation existed that caused the failure.
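For example, a minimal sketch of such a search, assuming the default log
directory /var/log/failsafe noted later in this chapter and a hypothetical
node named web-node3, is to grep the daemon logs for the CI_ error identifiers
shown in the message examples above:
# grep CI_ /var/log/failsafe/cmsd_web-node3
# grep CI_ /var/log/failsafe/srmd_web-node3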
Node Membership and Resets
When examining the actions of a FailSafe system after a failure to determine
what has gone wrong and how processes have transferred, it is important to
consider the concept of node membership. When failover occurs, the runtime
failover domain can include only those nodes that are in the cluster membership.
Node Membership and Tie-Breaker Node
Nodes can enter into the cluster membership only when they are
not disabled and they are in a known state. This ensures that data integrity
is maintained because only nodes within the cluster membership can access
the shared storage. If nodes outside the membership and not controlled by
FailSafe were able to access the shared storage, two nodes might try to access
the same data at the same time, a situation that would result in data corruption.
For this reason, disabled nodes do not participate in the membership computation.
Note that no attempt is made to reset nodes that are configured disabled before
confirming the cluster membership.
Node membership in a cluster is based on a quorum majority.
For a cluster to be enabled, more than 50% of the nodes in the cluster must
be in a known state, able to talk to each other, using heartbeat control networks.
This quorum determines which nodes are part of the cluster membership that
is formed.
If there is an even number of nodes in the cluster, it is possible that there
will be no majority quorum; there could be two sets of nodes, each consisting
of 50% of the total number of nodes, unable to communicate with the other set
of nodes. In this case, FailSafe uses the node that was configured as
the tie-breaker node when you configured your FailSafe parameters. If no tie-breaker
node was configured, FailSafe uses the enabled node with the lowest node ID
number.
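For example, in a hypothetical four-node cluster that splits into two sets of
two nodes that cannot communicate with each other, neither set holds more than
50% of the nodes; the set that contains the tie-breaker node (or, if no
tie-breaker node was configured, the enabled node with the lowest node ID
number) is the one that can form the cluster membership.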
For information on setting tie-breaker nodes, see .
The nodes in a quorum attempt to reset the nodes
that are not in the quorum. Nodes that can be reset are declared
DOWN in the membership; nodes that could not be reset are declared
UNKNOWN. Nodes in the quorum are UP.
If a new majority quorum is computed, a new membership is
declared whether any node could be reset or not.
If at least one node in the current quorum has a current membership,
the nodes will proceed to declare a new membership if they can reset at least
one node.
If all nodes in the new tied quorum are coming up for the
first time, they will try to reset and proceed with a new membership only
if the quorum includes the tie-breaker node.
If a tied subset of nodes in the cluster had no previous membership,
then the subset of nodes in the cluster with the tie-breaker node attempts
to reset nodes in the other subset of nodes in the cluster. If at least one
node reset succeeds, a new membership is confirmed.
If a tied subset of nodes in the cluster had previous membership,
the nodes in one subset of nodes in the cluster attempt to reset nodes in
the other subset of nodes in the cluster. If at least one node reset succeeds,
a new membership is confirmed. The subset of nodes in the cluster that contains
the tie-breaker node attempts the reset immediately; the other subset of nodes
in the cluster attempts the reset only after a delay.
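For example, if a four-node cluster that had a previous membership splits into
two sets of two nodes, both sets attempt to reset nodes in the other set: the
set containing the tie-breaker node attempts its resets immediately, the other
set attempts its resets only after a delay, and whichever set succeeds in
resetting at least one node confirms the new membership.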
Resets are done through system controllers connected to tty
ports through serial lines. Periodic serial line monitoring never stops. If
the estimated serial line monitoring failure interval and the estimated heartbeat
loss interval overlap, FailSafe suspects a power failure at the node being reset.
No Membership Formed
When no cluster membership is formed, you should check the following areas
for possible problems:
Is the cluster membership daemon, ha_cmsd,
running? Is the database daemon, cdbd, running?
Can the nodes communicate with each other?
Are the control networks configured as heartbeat networks?
Can the control network addresses be pinged from peer nodes?
Are the quorum majority or tie rules satisfied?
Look at the cmsd log to determine membership status.
If a reset is required, are the following conditions met?
Is the node control daemon, crsd,
up and running?
Is the reset serial line in good health?
You can look at the crsd log for the node you are
concerned with, or execute an admin ping and
admin reset command on the node to check this.
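For example, a minimal sketch of these checks from a shell on one of the nodes
(web-node3 and the peer control network address are hypothetical) might be:
# ps -ef | egrep 'ha_cmsd|cdbd|crsd'
# ping -c 3 peer-control-network-address
cmgr> admin ping node web-node3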
Status Monitoring
FailSafe allows you to monitor and check the status of specified clusters,
nodes, resources, and resource groups. You can use this feature to isolate
where your system is encountering problems.
With the FailSafe Cluster Manager GUI Cluster View, you can monitor
the status of the FailSafe components continuously through their visual representation.
Using the FailSafe Cluster Manager CLI, you can display the status of the individual
components by using the show command.
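For example, a CLI status query for a resource group might take the following
form; the exact show subcommand syntax is assumed here and may differ between
releases:
cmgr> show status of resource_group web-rg in cluster web-cluster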
For information on status monitoring and on the meaning of the states
of the FailSafe components, see .
Dynamic Control of FailSafe Services
FailSafe allows you to perform a variety of administrative tasks that
can help you troubleshoot a system with problems without bringing down the
entire system. These tasks include the following:
You can add or delete nodes from a cluster without affecting
the FailSafe services and the applications running in the cluster
You can add or delete a resource group without affecting other
online resource groups
You can add or delete resources from a resource group while
it is still online
You can change FailSafe parameters such as the heartbeat interval
and the node timeout and have those values take immediate effect while the
services are up and running
You can start and stop FailSafe services on specified nodes
You can move a resource group online, or take it offline
You can stop the monitoring of a resource group by putting
the resource group into maintenance mode. This is not an expensive operation,
as it does not stop and start the resource group; it just puts the resource
group in a state where it is not available to FailSafe.
You can reset individual nodes
For information on how to perform these tasks, see ,
and .
Recovery Procedures
The following sections describe various recovery procedures
you can perform when different FailSafe components fail. Procedures for the
following situations are provided:
Cluster Error Recovery
Node Error Recovery
Resource Group Maintenance and Error Recovery
Resource Error Recovery
Control Network Failure Recovery
Serial Cable Failure Recovery
CDB Maintenance and Recovery
FailSafe Cluster Manager GUI and CLI Inconsistencies
Cluster Error Recovery
Follow this procedure if the status of the cluster is UNKNOWN in all nodes
in the cluster:
Check to see if there are control networks that have failed
(see ).
At least 50% of the nodes in the cluster must be able to communicate
with each other to have an active cluster (Quorum requirement). If there are
not sufficient nodes in the cluster that can communicate with each other using
control networks, stop HA services on some of the nodes so that the quorum
requirement is satisfied.
If there are no hardware configuration problems, detach all
resource groups that are online in the cluster (if any), stop HA services
in the cluster, and restart HA services in the cluster.
The following cluster_mgr command detaches the resource
group web-rg in
cluster web-cluster:
cmgr> admin detach resource_group web-rg in cluster web-cluster
To stop HA services in the cluster web-cluster
and ignore errors (force option), use the following
cluster_mgr command:
cmgr> stop ha_services for cluster web-cluster force
To start HA services in the cluster web-cluster,
use the following cluster_mgr command:
cmgr> start ha_services for cluster web-cluster
Node Error Recovery
Follow this procedure if the status of a node is UNKNOWN in an active
cluster:
Check to see if the control networks in the node are working
(see ).
Check to see if the serial reset cables to reset the node
are working (see ).
If there are no hardware configuration problems, stop HA services
in the node and restart HA services.
To stop HA services in the node web-node3 in the
cluster web-cluster, ignoring errors (force
option), use the following cluster_mgr command:
cmgr> stop ha_services in node web-node3 for cluster web-cluster
force
To start HA services in the node web-node3 in the
cluster web-cluster, use the following cluster_mgr
command:
cmgr> start ha_services in node web-node3 for cluster web-cluster
Resource Group Maintenance and Error Recovery
To do simple maintenance on an application that is part of the resource
group, use the following procedure. This procedure stops monitoring the resources
in the resource group when maintenance mode is on. You need to turn maintenance
mode off when application maintenance is done.
If there is a node failure on the node where resource group maintenance
is being performed, the resource group is moved to another node in the failover
policy domain.
To put a resource group web-rg in maintenance
mode, use the following cluster_mgr command:
cmgr> admin maintenance_on resource_group web-rg in cluster web-cluster
The resource group state changes to ONLINE_MAINTENANCE.
Do whatever application maintenance is required. (Rotating application
logs is an example of simple application maintenance.)
To remove a resource group web-rg from
maintenance mode, use the following cluster_mgr
command:
cmgr> admin maintenance_off resource_group web-rg in cluster
web-cluster
The resource group state changes back to ONLINE.
You perform the following procedure when a resource group is in an
ONLINE state and has an SRMD EXECUTABLE ERROR.
Look at the SRM logs (default location:
/var/log/failsafe/srmd_nodename) to determine the cause of failure
and the resource that has failed.
Fix the cause of failure. This might require changes to resource
configuration or changes to resource type stop/start/failover action timeouts.
After fixing the problem, move the resource group offline
with the force option and then move the resource group
online.
The following cluster_mgr command moves the resource
group web-rg in the cluster web-cluster
offline and ignores any errors:
cmgr> admin offline resource_group web-rg in cluster web-cluster
force
The following cluster_mgr command moves the resource
group web-rg in the cluster web-cluster
online:
cmgr> admin online resource_group web-rg in cluster web-cluster
The resource group web-rg should be in an
ONLINE state with no error.
You use the following procedure when a resource group is not online
but is in an error state. Most of these errors occur as a result of the exclusivity
process. This process, run when a resource group is brought online, determines
if any resources are already allocated somewhere in the failure domain of
a resource group. Note that exclusivity scripts return that a resource is
allocated on a node if the script fails in any way. In other words, unless
the script can determine that a resource is not present, it returns a value
indicating that the resource is allocated.
Some possible error states include: SPLIT RESOURCE GROUP (EXCLUSIVITY),
NODE NOT AVAILABLE (EXCLUSIVITY), and
NO AVAILABLE NODES in failure domain. See ,
for explanations of resource group error codes.
Look at the failsafe and SRM logs (default
directory: /var/log/failsafe, files: failsafe_nodename,
srmd_nodename) to determine the cause of the failure and the resource
that failed.
For example, say the task of moving a resource group online results
in a resource group with error state SPLIT RESOURCE GROUP (EXCLUSIVITY).
This means that parts of a resource group are allocated on at
least two different nodes. One of the failsafe logs will have the description
of which nodes are believed to have the resource group partially allocated.
At this point, look at the srmd logs on each of
these machines to see what resources are believed to be allocated. In some
cases, a misconfigured resource will show up as a resource which is allocated.
This is especially true for Netscape_Web resources.
Fix the cause of the failure. This might require changes to
resource configuration or changes to resource type start/stop/exclusivity
timeouts.
After fixing the problem, move the resource group offline
with the force option and then move the resource group
online.
There are a few double failures that can occur in the cluster which
will cause resource groups to remain in a non-highly-available state. At times
a resource group might get stuck in an offline state. A resource group might
also stay in an error state on a node even when a new node joins the cluster,
although the resource group could migrate to that node to clear the error.
When these circumstances arise, the correct action should be as follows:
Try to move the resource group online if it is offline.
If the resource group is stuck on a node, detach the resource
group, then bring it online again. This should clear many errors.
If detaching the resource group does not work, force the resource
group offline, then bring it back online.
If commands appear to be hanging or not working properly,
detach all resource groups, then shut down the cluster and bring all resource
groups back online.
See , for information on detaching
resource groups and forcing resource groups offline.
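For example, using the cluster_mgr commands shown earlier in this chapter, the
sequence for a resource group that is stuck on a node might be to detach it and
bring it back online, falling back to a forced offline if that does not clear
the error:
cmgr> admin detach resource_group web-rg in cluster web-cluster
cmgr> admin online resource_group web-rg in cluster web-cluster
cmgr> admin offline resource_group web-rg in cluster web-cluster force
cmgr> admin online resource_group web-rg in cluster web-cluster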
Resource Error Recovery
You use this procedure when a resource that is not part of a resource
group is in an ONLINE state with error. This can happen
when the addition or removal of resources from a resource group fails.
Look at the SRM logs (default location:
/var/log/failsafe/srmd_nodename) to determine the cause of
failure and the resource that has failed.
Fix the cause of failure. This might require changes to resource
configuration or changes to resource type stop/start/failover action timeouts.
After fixing the problem, move the resource offline with the
force option of the Cluster Manager CLI admin offline
command:
cmgr> admin offline_force resource web-srvr of resource_type
Netscape_Web in cluster web-cluster
Executing this command removes the error state of resource
web-srvr of type Netscape_Web, making it available
to be added to a resource group.
You can also use the Cluster Manager GUI to clear the error state for
the resource. To do this, you select the “Recover a Resource”
task from the “Resources and Resource Types” category of the FailSafe
Manager.
Control Network Failure Recovery
Control network failures are reported in cmsd logs.
The default location of the cmsd log is
/var/log/failsafe/cmsd_nodename.
Follow this procedure when the control network fails:
Use the ping command to check whether the
control network IP address is configured in the node.
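For example, where 192.26.50.14 is assumed to be one of the control network
IP addresses of web-node3:
# ping -c 3 192.26.50.14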
Check node configuration to see whether the control network
IP addresses are correctly specified.
The following cluster_mgr command displays node configuration
for web-node3:
cmgr> show node web-node3
If IP names are specified for control networks instead of
IP addresses in XX.XX.XX.XX notation, check to see whether IP names can be
resolved using DNS. It is recommended that IP addresses be used instead of
IP names.
Check whether the heartbeat interval and node timeouts are
correctly set for the cluster. These HA parameters can be seen using the
cluster_mgr show ha_parameters command.
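For example, assuming the same for cluster form used by the other cluster_mgr
commands in this chapter, the query might look like the following:
cmgr> show ha_parameters for cluster web-cluster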
Serial Cable Failure Recovery
Serial cables are used for resetting a node when there is a node failure.
Serial cable failures are reported in crsd logs. The
default location for the crsd log is
/var/log/failsafe/crsd_nodename.
Check the node configuration to see whether the serial cable connection
is correctly configured.
The following cluster_mgr command displays node configuration
for web-node3:
cmgr> show node web-node3
Use the cluster_mgr admin ping command to verify
the serial cables.
cmgr> admin ping node web-node3
The above command reports serial cable problems in node
web-node3.
CDB Maintenance and Recovery
When the entire configuration database (CDB) must be reinitialized,
execute the following command:
# /usr/cluster/bin/cdbreinit /var/cluster/cdb/cdb.db
This command will restart all cluster processes. The contents of the
configuration database will be automatically synchronized with other nodes
if other nodes in the pool are available.
Otherwise, the CDB will need to be restored from backup at this point.
For instructions on backing up and restoring the CDB, see .
FailSafe Cluster Manager GUI and CLI Inconsistencies
If the FailSafe Cluster Manager GUI is displaying information that is
inconsistent with the FailSafe cluster_mgr command, restart the
cad process on the node to which the Cluster Manager GUI is connected by executing
the following command:
# killall cad
The cluster administration daemon is restarted automatically by the
cmond process.
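To confirm that cmond has restarted the daemon, you can check for the
cad process a few seconds later, for example:
# ps -ef | grep cad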