How heartbeats and failover work in a cluster on Windows or Linux

The basic mechanism for synchronizing two servers and detecting server failures is the heartbeat, which is a monitoring data flow on a network shared by a pair of servers.

The SafeKit software supports as many heartbeats as there are networks shared by two servers. The heartbeat mechanism is used to implement Windows and Linux clusters. It is integrated within the SafeKit mirror cluster with real-time file replication and failover.

In normal operation, the two servers exchange their states (PRIM, SECOND, and the resource states) through the heartbeat channels and synchronize their application start and stop procedures. In particular, in case of an application failover caused by a software failure or a manual operation, the stop script is executed on the primary server first, before the start script is executed on the secondary server. Thus, the replicated data on the secondary server are in a safe state, corresponding to a clean stop of the application.

If all heartbeats are lost, the other server is considered down and the local server switches to the ALONE state. If it is the SECOND server that goes to the ALONE state, there is an application failover with a restart of the application on the secondary server. Although not mandatory, it is better to have two heartbeat channels on two different networks for synchronizing the two servers, in order to distinguish a network failure from a server failure.
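To make the detection concrete, the following minimal sketch shows a heartbeat monitor in Python. It illustrates the principle only and is not SafeKit code: the UDP port, the timeout value and the way the state change is handled are assumptions made for the example.

    # Illustrative heartbeat monitor (not SafeKit code): one UDP socket per
    # shared network; the peer is declared down only when ALL channels have
    # been silent for longer than the timeout.
    import select
    import socket
    import time

    HEARTBEAT_PORT = 9999   # assumed port for this example
    TIMEOUT = 5.0           # seconds of total silence before declaring failure

    def monitor(bind_addresses):
        socks = []
        for addr in bind_addresses:       # e.g. one address per network card
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind((addr, HEARTBEAT_PORT))
            socks.append(s)

        last_seen = time.monotonic()
        state = "SECOND"                  # assume this node starts as secondary
        while True:
            ready, _, _ = select.select(socks, [], [], 1.0)
            for s in ready:
                s.recvfrom(64)            # peer state message, e.g. b"PRIM"
                last_seen = time.monotonic()
            if state != "ALONE" and time.monotonic() - last_seen > TIMEOUT:
                # All heartbeat channels lost: interpret as a peer failure.
                # On a SECOND server, this is where the start script would
                # run, after which the node serves clients ALONE.
                state = "ALONE"
                print("peer lost on all channels -> ALONE")

    if __name__ == "__main__":
        monitor(["127.0.0.1"])            # assumed local test address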

Cluster quorum problem when servers are in two remote computer rooms

Most often, an HA cluster securing a critical application in a data center is implemented with two servers in two geographically remote computer rooms, in order to survive the disaster of a full room.

In a situation of transient network isolation between the two computer rooms, the split brain problem arises: both servers may start the critical application.

With a hardware failover cluster, this situation must not arise, because a double execution means concurrent access to shared storage and potential corruption of the critical application's data. That is why a cluster quorum is implemented with a third quorum server, a special quorum disk, or even a remote hardware reset when possible, to avoid this concurrent execution of the critical application.

Unfortunately, these quorum devices add cost and complexity to the overall clustering architecture. And the system is not immune to an OS freeze: when the OS resumes from the freeze, there is a double execution of the application, even with the aforementioned mechanisms, and potentially a corruption of data on the shared storage.

Simple cluster quorum with SafeKit

With the SafeKit HA software, the quorum within a Windows or Linux cluster requires no third quorum server, no quorum disk and no remote hardware reset. A simple split brain checker is sufficient for the SafeKit quorum to avoid the double execution of an application.

Upon the loss of all heartbeats between the servers, the split brain checker selects only one server to become the primary. The other server is not up-to-date anymore and goes into the WAIT state until it receives the other server's heartbeats again, at which point it automatically resynchronizes the replicated data from the other server.

The primary server election is based on the ping of an IP address, called the witness. The witness is typically a router, which is unlikely to crash. In case of network isolation, only the server with access to the witness becomes primary ALONE; the other goes to WAIT. The witness is not tested permanently, but only when the system fails over. If the witness is down at the time of failover, the cluster goes into the WAIT-WAIT state, and an administrator can choose to restart one of the nodes as primary through the SafeKit web interface.
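The following minimal sketch illustrates this witness test in Python. It shows the principle only, not SafeKit internals: the witness address and the ping options are assumptions made for the example.

    # Illustrative split brain checker (not SafeKit internals): when every
    # heartbeat channel is lost, each server pings the witness once, and
    # only the server that reaches it becomes the primary.
    import subprocess

    WITNESS_IP = "192.168.1.1"   # assumed witness address, typically a router

    def state_on_heartbeat_loss():
        """Return the state to adopt when all heartbeats are lost."""
        # Linux ping options; on Windows this would be ["ping", "-n", "1", ...].
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", WITNESS_IP],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        # Reaching the witness means the network still works on this side:
        # run the application ALONE. Otherwise, wait for the peer to return.
        return "ALONE" if result.returncode == 0 else "WAIT"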

Consider the critical case of an OS freeze or a network isolation without a split brain checker configured. A SafeKit high-availability cluster supports a double execution of the critical application without data corruption. In this case, the primary server continues to run the application in the ALONE state, and the secondary server restarts the application and also goes into the ALONE state. The replicated directories are isolated, and each application instance works on its own data in its own directory.

When the network is reconnected, a sacrifice must be made by stopping the application on one of the two servers. This sacrifice shuts down the application on one server and triggers a data reintegration from the primary one. After this reintegration, the data are once again mirrored between a primary and a secondary server.

All these operations are automatic with SafeKit. The complexity of heartbeat, failover and quorum management within the cluster is integrated inside the SafeKit product and is transparent for its users. Thus, people without specific clustering skills can deploy SafeKit on two standard servers in any configuration, local or remote. In addition, the configuration is the same for a Windows or a Linux cluster.

Important: if you choose another solution based on a shared or replicated disk, make sure that after an OS freeze, the server that comes out of the freeze can no longer access the shared or replicated disk, because two servers accessing the same disk via its file system leads to data corruption.

Other differentiators to consider when choosing a high availability cluster with heartbeat, failover and quorum

Best practices of a mirror cluster with replication and failover

Evidian SafeKit mirror cluster with real-time file replication and failover

All clustering features

Like: The solution includes all clustering features: server failure monitoring, network failure monitoring, software failure monitoring, automatic application restart with a quick recovery time, and a virtual IP address switched in case of failure to automatically reroute clients.

Dislike: This is not the case with replication-only solutions, such as replication at the database level.

Dislike: Quick application restart is not ensured with full virtual machine replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor, with an unknown recovery time.

Like: The cluster configuration is very simple and made by means of a high availability application module. There is no domain controller or Active Directory to configure on Windows. The solution works on Windows and Linux.

Synchronous replication

Like: The real-time replication is synchronous, with no data loss on failure.

Dislike: This is not the case with asynchronous replication.

Fully automated failback procedure

Like: After a failure, when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server.

Dislike: This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required to resynchronize a failed server. The application may even be stopped on the only remaining server during the resynchronization of the failed server.

Replication of any type of data

Like: The replication works for databases but also for any other files that must be replicated.

Dislike: This is not the case for replication at the database level.

File replication vs disk replication

Like: The replication is based on file directories that can be located anywhere (even on the system disk).

Dislike: This is not the case with disk replication, where a special application configuration must be made to put the application data on a dedicated disk.

File replication vs shared disk

Like: The servers can be put in two remote sites.

Dislike: This is not the case with shared disk solutions.

Remote sites

Like: All SafeKit clustering features work for 2 servers in remote sites. Replication performance depends on the interconnect latency for real-time synchronous replication, and on the bandwidth for resynchronizing data on a failed server.

Like: If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit works with rerouting at level 2.

Like: If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer. SafeKit offers a health check: the load balancer is configured with a URL managed by SafeKit which returns OK on the primary server and NOT FOUND otherwise (see the sketch below). This solution is implemented for SafeKit in the cloud, but it can also be implemented with an on-premises load balancer.
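As an illustration of such a health check, here is a minimal sketch of an HTTP endpoint in Python that answers OK only on the primary. It is not the SafeKit implementation: the port, the URL path and the way the local state is obtained are assumptions made for the example.

    # Illustrative health check (not the SafeKit implementation): the URL
    # returns OK on the primary server and NOT FOUND elsewhere, so a load
    # balancer only routes client traffic to the primary.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def local_state():
        # Assumption for the example: the node state is read from a file
        # maintained by the clustering software.
        try:
            with open("/var/run/cluster_state") as f:
                return f.read().strip()   # e.g. "PRIM" or "SECOND"
        except OSError:
            return "UNKNOWN"

    class HealthCheck(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/health" and local_state() == "PRIM":
                self.send_response(200)   # OK: send traffic to this node
                self.end_headers()
                self.wfile.write(b"OK")
            else:
                self.send_response(404)   # NOT FOUND: skip this node
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9010), HealthCheck).serve_forever()  # assumed port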

Quorum

Like: With remote sites, the solution works with only 2 servers; for the quorum (network isolation), a simple split brain checker to a router is offered to guarantee a single execution.

Dislike: This is not the case for most clustering solutions, where a 3rd server is required for the quorum.

Uniform high availability solution

Like: SafeKit implements a mirror cluster with replication and failover, but it also implements a farm cluster with load balancing and failover. Thus, an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market.

Dislike: This is not the case with an architecture mixing different technologies for load balancing, replication and failover.

High availability architectures comparison


Feature: Software clustering vs hardware clustering
SafeKit cluster: a simple software cluster with the SafeKit package just installed on two servers
Other clusters: complex hardware clustering with external shared storage, network load balancers or dedicated proxy servers

Feature: Shared-nothing vs shared disk cluster
SafeKit cluster: SafeKit is a shared-nothing cluster, easy to deploy even in remote sites
Other clusters: a shared disk cluster is complex to deploy

Feature: Application high availability vs full virtual machine high availability
SafeKit cluster: SafeKit application HA supports hardware failures, software failures and human errors, with a quick recovery time
Other clusters: full virtual machine HA supports only hardware failures, with a VM reboot and an unknown recovery time if the OS reboot does not work

Feature: Synchronous replication vs asynchronous replication
SafeKit cluster: SafeKit implements real-time synchronous replication, with no data loss in case of failure
Other clusters: with asynchronous replication, there is data loss on failure

Feature: Byte-level file replication vs block-level disk replication
SafeKit cluster: SafeKit implements real-time byte-level file replication and is simply configured with the application directories to replicate, even on the system disk
Other clusters: block-level disk replication is complex to configure and requires putting the application data on a dedicated disk

Feature: Heartbeat, failover and quorum to avoid 2 master nodes
SafeKit cluster: to avoid 2 masters, SafeKit proposes a simple split brain checker configured on a router
Other clusters: to avoid 2 masters, other clusters require a complex configuration with a third machine, a special quorum disk, a remote hardware reset or a special interconnect

Feature: Network load balancing
SafeKit cluster: no dedicated server and no special network configuration are required in a SafeKit cluster for network load balancing
Other clusters: special network configuration is required in other clusters for network load balancing

SafeKit Modules for Plug&Play High Availability Solutions

SafeKit mirror and farm high availability modules

SafeKit HA cluster architectures

Free high availability farm modules

Deploy a farm module on N servers and implement a network load balancing cluster with application failover.
The target is an application with web services to load balance between servers and to restart automatically in case of failure.


Farm modules (load balancing and failover):

  • IIS module (Windows only)
  • Apache module (Windows, Linux)
  • Generic farm module for any application (Windows, Linux)

Free high availability mirror modules

Deploy a mirror module on 2 servers and implement a mirror cluster with real-time file replication and application failover.
The target is an application with a database or flat files to replicate and to restart automatically in case of failure.


Mirror modules (replication and failover):

  • Microsoft SQL Server module (Windows only)
  • Oracle module (Windows, Linux)
  • MySQL module (Windows, Linux)
  • PostgreSQL module (Windows, Linux)
  • Firebird module (Windows, Linux)
  • Hyper-V module (Windows only)
  • Milestone XProtect module, based on Microsoft SQL Server (Windows only)
  • Hanwha SSM module, based on PostgreSQL (Windows only)
  • Generic mirror module for any application (Windows, Linux)

Demonstrations of SafeKit High Availability Software

SafeKit Webinar

This 10-minute webinar presents Evidian SafeKit.

In this webinar, you will understand:

  • mirror and farm clusters
  • cost savings against hardware clustering solutions
  • best use cases
  • the integration process for a new application

Microsoft SQL Server Cluster

This video shows a mirror module configuration with synchronous real-time replication and failover.

The file replication and the failover are configured for Microsoft SQL Server, but they work in the same manner for other databases.


Apache Cluster

This video shows a farm module configuration with load balancing and failover.

The load balancing and the failover are configured for Apache, but they work in the same manner for other web services.


Hyper-V Cluster

This video shows a Hyper-V cluster with full replication of virtual machines.

Virtual machines can run on both Hyper-V servers, and they are restarted in case of failure.
