Figure 1-1 shows an example of Linux FailSafe hardware components, in this case for a two-node system.
The hardware components of the Linux FailSafe system are as follows:
Up to eight Linux nodes
Two or more interfaces on each node to control networks (Ethernet, FDDI, or any other available network interface)
At least two network interfaces on each node are required for the control network heartbeat connections, over which each node monitors the state of the other nodes. The Linux FailSafe software also uses these connections to pass control messages between nodes. Each interface has a distinct IP address.
A mechanism for remote reset of nodes
A reset ensures that the failed node is not using the shared disks when the replacement node takes them over.
Disk storage and SCSI bus shared by the nodes in the cluster
The nodes in the Linux FailSafe system can share dual-hosted disk storage over a shared fast and wide SCSI bus where this is supported by the SCSI controller and Linux driver.
Note: Few Linux drivers are currently known to implement this correctly. Check hardware compatibility lists if this is a configuration you plan to use. Fibre Channel solutions should universally support this.
Note: The Linux FailSafe system is designed to survive a single point of failure. Therefore, when a system component fails, it must be restarted, repaired, or replaced as soon as possible; a second failure before the first is resolved may not be survivable.