There are two general types of configuration:
N nodes that can potentially fail over their applications to any of the other nodes in the cluster.
N primary nodes whose applications can fail over to M backup nodes. For example, you could have 3 primary nodes and 1 backup node.
This section shows examples of failover policies for the following types of configuration, each of which can have from 2 through 8 nodes:
N primary nodes and one backup node (N+1)
N primary nodes and two backup nodes (N+2)
N primary nodes and M backup nodes (N+M)
Note: The diagrams in the following sections illustrate the configuration concepts discussed here, but they do not address all required or supported elements, such as reset hubs. For configuration details, see the Linux FailSafe Installation and Maintenance Instructions.
Figure 3-1 shows a specific instance of an N+1 configuration in which there are three primary nodes and one backup node. (This is also known as a star configuration.) The disks shown could each be disk farms.
You could configure the following failover policies for load balancing:
Failover policy for RG1:
    Initial failover domain = A, D
    Failover attribute = Auto_Failback
    Failover script = ordered

Failover policy for RG2:
    Initial failover domain = B, D
    Failover attribute = Auto_Failback
    Failover script = ordered

Failover policy for RG3:
    Initial failover domain = C, D
    Failover attribute = Auto_Failback
    Failover script = ordered
If node A fails, RG1 will fail over to node D. As soon as node A reboots, RG1 will be moved back to node A.
If you change the failover attribute to Controlled_Failback for RG1 and node A fails, RG1 will fail over to node D and will remain running on node D even if node A reboots.
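To make the interaction between the ordered failover script and the failover attributes concrete, the following minimal Python sketch simulates the placement decision for RG1. The FailoverPolicy class, its place() method, and the simulation itself are illustrative assumptions for this discussion only; they are not part of FailSafe, which evaluates failover policies inside the cluster software.

    # Illustrative sketch only; FailoverPolicy and place() are invented
    # names for this example, not the FailSafe API.
    class FailoverPolicy:
        def __init__(self, domain, auto_failback):
            self.domain = domain                # failover domain, in priority order
            self.auto_failback = auto_failback  # True: Auto_Failback; False: Controlled_Failback

        def place(self, current, up_nodes):
            # Mimic the "ordered" failover script: with Controlled_Failback,
            # a group stays where it is as long as that node is up; otherwise
            # the first up node in the domain wins.
            if current in up_nodes and not self.auto_failback:
                return current
            for node in self.domain:
                if node in up_nodes:
                    return node
            return None  # no node in the domain is up; the group goes offline

    # The N+1 example: RG1 normally runs on node A, with node D as backup.
    rg1 = FailoverPolicy(domain=["A", "D"], auto_failback=True)
    up = {"A", "B", "C", "D"}
    up.discard("A")
    print(rg1.place("A", up))   # D: node A failed, so RG1 fails over
    up.add("A")
    print(rg1.place("D", up))   # A: Auto_Failback moves RG1 back to node A

    # With Controlled_Failback instead, RG1 stays on node D after A reboots.
    rg1_cf = FailoverPolicy(domain=["A", "D"], auto_failback=False)
    print(rg1_cf.place("D", up))  # D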
Figure 3-2 shows a specific instance of an N+2 configuration in which there are four primary nodes and two backup nodes. The disks shown could each be disk farms.
You could configure the following failover policies for resource groups RG7 and RG8:
Failover policy for RG7:
    Initial failover domain = A, E, F
    Failover attribute = Controlled_Failback
    Failover script = ordered

Failover policy for RG8:
    Initial failover domain = B, F, E
    Failover attribute = Auto_Failback
    Failover script = ordered
If node A fails, RG7 will fail over to node E. If node E also fails, RG7 will fail over to node F. If node A is rebooted, RG7 will remain on node F.
If node B fails, RG8 will fail over to node F. If node B is rebooted, RG8 will return to node B.
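Continuing the illustrative sketch from the N+1 example (again, these names are assumptions for this discussion, not FailSafe interfaces), the same placement logic reproduces this walk-through:

    # Reuses the hypothetical FailoverPolicy class defined in the earlier sketch.
    rg7 = FailoverPolicy(domain=["A", "E", "F"], auto_failback=False)
    rg8 = FailoverPolicy(domain=["B", "F", "E"], auto_failback=True)

    up = {"A", "B", "C", "D", "E", "F"}
    up.discard("A")
    loc7 = rg7.place("A", up)    # E: node A failed
    up.discard("E")
    loc7 = rg7.place(loc7, up)   # F: node E also failed
    up.add("A")
    print(rg7.place(loc7, up))   # F: Controlled_Failback keeps RG7 on node F

    up.discard("B")
    loc8 = rg8.place("B", up)    # F: node B failed
    up.add("B")
    print(rg8.place(loc8, up))   # B: Auto_Failback returns RG8 to node B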
Figure 3-3 shows a specific instance of an N+M configuration in which there are four primary nodes and each can serve as a backup node. The disk shown could be a disk farm.
You could configure the following failover policies for resource groups RG5 and RG6:
Failover policy for RG5:
    Initial failover domain = A, B, C, D
    Failover attribute = Controlled_Failback
    Failover script = ordered

Failover policy for RG6:
    Initial failover domain = C, A, D
    Failover attribute = Controlled_Failback
    Failover script = ordered
If node C fails, RG6 will fail over to node A. When node C reboots, RG6 will remain running on node A. If node A then fails, RG6 will return to node C and RG5 will move to node B. If node B then fails, RG5 will move to node C.
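The same sketch (with the same caveat that these names are invented for illustration) traces the N+M sequence step by step:

    # Reuses the hypothetical FailoverPolicy class from the first sketch.
    rg5 = FailoverPolicy(domain=["A", "B", "C", "D"], auto_failback=False)
    rg6 = FailoverPolicy(domain=["C", "A", "D"], auto_failback=False)

    up = {"A", "B", "C", "D"}
    up.discard("C")
    loc6 = rg6.place("C", up)    # A: node C failed
    up.add("C")
    loc6 = rg6.place(loc6, up)   # A: Controlled_Failback keeps RG6 on node A
    up.discard("A")
    loc5 = rg5.place("A", up)    # B: node A failed, RG5 moves to node B
    loc6 = rg6.place(loc6, up)   # C: node A is down, so RG6 returns to node C
    up.discard("B")
    print(rg5.place(loc5, up))   # C: node B failed, RG5 moves to node C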