How heartbeats and failover work in a cluster on Windows or Linux

The basic mechanism for synchronizing two servers and detecting server failures is the heartbeat, which is a monitoring data flow on a network shared by a pair of servers.

The SafeKit software supports as many heartbeats as there are networks shared by two servers. The heartbeat mechanism is used to implement Windows and Linux clusters. It is integrated within the SafeKit mirror cluster with real-time file replication and failover.

In normal operation, the two servers exchange their states (PRIM, SECOND, the resource states) through the heartbeat channels and synchronize their application start and stop procedures. In particular, during an application failover caused by a software failure or a manual operation, the stop script is executed on the primary server before the start script is executed on the secondary server. Thus, the replicated data on the secondary server are in a safe state corresponding to a clean stop of the application.

If all heartbeats are lost, this is interpreted as the other server being down, and the local server switches to the ALONE state. If it is the SECOND server that goes to the ALONE state, there is an application failover with a restart of the application on the secondary server. Although not mandatory, it is better to have two heartbeat channels on two different networks for synchronizing the two servers, in order to distinguish a network failure from a server failure.
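To make the mechanism more concrete, here is a minimal Python sketch of the logic described above (an illustration only, not SafeKit code; the timeout value and names are assumptions): a node tracks the heartbeats received on each shared network and reacts only when all of them are lost.

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # illustrative: seconds without any heartbeat before declaring the peer down

class ClusterNode:
    def __init__(self, role, heartbeat_networks):
        self.role = role                                          # "PRIM" or "SECOND"
        self.last_seen = {net: time.time() for net in heartbeat_networks}

    def on_heartbeat(self, network, peer_state):
        # A heartbeat carries the peer state (PRIM, SECOND, resource states).
        self.last_seen[network] = time.time()

    def check_peer(self):
        now = time.time()
        # The peer is considered alive as long as at least one network still delivers heartbeats.
        if not any(now - t < HEARTBEAT_TIMEOUT for t in self.last_seen.values()):
            self.go_alone()

    def go_alone(self):
        if self.role == "SECOND":
            # Failover: the secondary restarts the application locally (start script).
            print("running start script: application restarted on this server")
        self.role = "ALONE"
```

With two heartbeat channels on two different networks, a single network failure does not empty the whole set of heartbeats, so it does not by itself trigger a failover.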

Split brain problem and quorum when servers are in two remote computer rooms

Most often, an HA cluster securing a critical application in a data center is implemented with two servers in two geographically remote computer rooms, so that the cluster survives the disaster of a full room.

In a situation of transient network isolation between the two computer rooms, the split brain problem arises: both servers may start the critical application.

With a hardware failover cluster, this situation must not arise, because a double execution means concurrent access to shared storage and potential corruption of the critical application data. That is why a cluster quorum is implemented with a third quorum server, a special quorum disk, or even a remote hardware reset when possible, to avoid this concurrent execution of the critical application.

Unfortunately, these quorum devices add cost and complexity to the overall clustering architecture. And the system is not immune to an OS freeze: when the OS resumes from the freeze, there is a double execution of the application, even with the aforementioned mechanisms, potentially with corruption of data on the shared storage.

Video: Heartbeat, failover, quorum to solve the split brain problem in a cluster

Simple cluster quorum with SafeKit

With the SafeKit HA software, the quorum within a Windows or Linux cluster requires no third quorum server, no quorum disk and no remote hardware reset. A simple split brain checker is sufficient for the SafeKit quorum to avoid the double execution of an application.

The split brain checker, on the loss of all heartbeats between the servers, selects only one server to become the primary. The other server is no longer up to date and goes into the WAIT state until it receives the other server's heartbeats again, at which point it automatically resynchronizes the replicated data from the other server.

The primary server election is based on the ping of an IP address, called the witness. The witness is typically a router, which is unlikely to crash. In case of network isolation, only the server with access to the witness becomes primary ALONE; the other goes to WAIT. The witness is not tested permanently but only when the system fails over. If the witness is down at the time of failover, the cluster goes into the WAIT-WAIT state and an administrator can choose to restart one of the nodes as primary through the SafeKit web interface.
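The election based on the witness can be pictured with a short sketch (an illustration only, not the SafeKit implementation; the witness address and the Linux-style ping options are assumptions):

```python
import subprocess

WITNESS_IP = "192.168.1.1"  # hypothetical router chosen as witness

def witness_reachable(ip=WITNESS_IP):
    # Single ICMP echo request (Linux-style options); return code 0 means the witness answered.
    return subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                          capture_output=True).returncode == 0

def state_on_heartbeat_loss():
    # Called only when all heartbeats are lost: the witness is not polled permanently.
    if witness_reachable():
        return "ALONE"  # this server keeps or takes the primary role and runs the application
    return "WAIT"       # wait for the heartbeats to come back or for an administrator decision
```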

Consider the critical case of an OS freeze or a network isolation without a split brain checker configured. A SafeKit high-availability cluster tolerates a double execution of the critical application without data corruption. In this case, the primary server continues to run the application in the ALONE state, and the secondary server restarts the application and also goes into the ALONE state. The replicated directories are isolated and each application works on its own data in its own directory.

When the network is reconnected, a sacrifice must be made: the application is shut down on one of the two servers and the data of that server are reintegrated from the primary one. After this reintegration, the data are once again in mirror mode between a primary and a secondary server.

All these operations are automatic with SafeKit. The complexity of heartbeat, failover and quorum management within the cluster is integrated inside the SafeKit product and transparent for users of SafeKit. Thus, people without specific clustering skills can deploy SafeKit on two standard servers in any configuration, local or remote. In addition, the configuration is the same for a Windows or a Linux cluster.

Important: if you choose another solution based on a shared or replicated disk, make sure that after an OS freeze, the server that comes out of the freeze can no longer access the shared or replicated disk, because two servers accessing the same disk through its file system will corrupt the data.

Other differentiators to consider when choosing a high availability cluster with heartbeat, failover and quorum

Best practices of a mirror cluster with replication and failover

Evidian SafeKit mirror cluster with real-time file replication and failover

All clustering features

Like  A SafeKit cluster runs on Windows and Linux without the need for expensive shared or replicated disk bays

Like  SafeKit includes all clustering features: synchronous real-time file replication, monitoring of server/network/software failures, automatic application restart, virtual IP address switched in case of failure to reroute clients

Dislike  This is not the case with replication-only solutions, such as replication at the database level, which provide replication but none of the other clustering features

Like  The cluster configuration is very simple and made by means of application modules. There is no domain controller or Active Directory to configure as with a Microsoft cluster

Like  SafeKit implements quick application restart in case of failure: around 1 minute or less (see RTO/RPO here)

Dislike  Quick application restart is not ensured with full virtual machine replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor, with a recovery time depending on the OS reboot, as with VMware HA or a Hyper-V cluster

Synchronous replication

Like  The real-time replication is synchronous with no data loss on failure

Dislike  This is not the case with asynchronous replication

Fully automated failback procedure

Like  After a failure when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server

Dislike  This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required for resynchronizing a failed server. The application may even be stopped on the only remaining server during the resynchronization of the failed server

Replication of any type of data

Like  The replication works for databases but also for any other files that must be replicated

Dislike  This is not the case for replication at the database level

File replication vs disk replication

Like  The replication is based on file directories that can be located anywhere (even in the system disk)

Dislike  This is not the case with disk replication, where a special application configuration must be made to put the application data on a dedicated disk

File replication vs shared disk

Like  The servers can be put in two remote sites

Dislike  This is not the case with shared disk solutions

Remote sites

Like  All SafeKit clustering features work for 2 servers in remote sites. Replication performance depends on the interconnect latency for real-time synchronous replication and on the bandwidth for resynchronizing data on a failed server

Like  If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit works with rerouting at level 2

Like  If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer. SafeKit offers a health check: the load balancer is configured with a URL managed by SafeKit which returns OK on the primary server and NOT FOUND otherwise, as sketched below. This solution is implemented for SafeKit in the cloud but can also be implemented with an on-premises load balancer
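As an illustration of such a health check, here is a minimal Python sketch (the URL path and port are hypothetical, not the actual SafeKit interface): the load balancer probes a URL that answers 200 OK only on the primary server and 404 NOT FOUND elsewhere.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def node_is_primary():
    # Placeholder: in a real deployment this would reflect the local cluster state (primary or not).
    return True

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health" and node_is_primary():
            self.send_response(200)           # OK: the load balancer routes client traffic here
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)           # NOT FOUND: the load balancer skips this server
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9010), HealthCheckHandler).serve_forever()  # arbitrary port
```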

Quorum

Like  With remote sites, the solution works with only 2 servers. For the quorum (network isolation), a simple split brain checker pinging a router is offered to guarantee a single execution

Dislike  This is not the case for most clustering solutions, where a 3rd server is required for the quorum

Active/active cluster

Like  The secondary server is not dedicated to the restart of the primary server. The cluster can be active-active by running 2 different mirror modules

Dislike  This is not the case with a fault tolerant system where the secondary is dedicated to the execution of the same application synchronized at the instruction level

Uniform high availability solution

Like  SafeKit implements a mirror cluster with replication and failover. But it also implements a farm cluster with load balancing and failover. Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market

Dislike  This is not the case with an architecture mixing different technologies for load balancing, replication and failover

High availability architectures comparison

Software clustering vs hardware clustering

Like  SafeKit cluster: a simple software cluster with the SafeKit package just installed on two servers

Dislike  Other clusters: complex hardware clustering with external shared storage, network load balancers or dedicated proxy servers

Shared nothing vs a shared disk cluster

Like  SafeKit cluster: a shared-nothing cluster, easy to deploy even in remote sites

Dislike  Other clusters: a shared disk cluster is complex to deploy

Application high availability vs full virtual machine high availability

Like  SafeKit cluster: application HA supports hardware and software failures with a quick recovery time (RTO around 1 minute or less). Smooth upgrade of the application and OS is possible server by server (versions N and N+1 can coexist)

Dislike  Other clusters: full virtual machine HA supports only hardware failures, with a VM reboot and a recovery time depending on the OS reboot. Smooth upgrade is not possible

High availability vs fault tolerance

Like  SafeKit cluster: no dedicated server; each server can be the failover server of the other one. A software failure is handled with a restart in another OS environment. Smooth upgrade of the application and OS is possible server by server (versions N and N+1 can coexist)

Dislike  Other clusters: in a fault-tolerant system, the secondary server is dedicated to the execution of the same application synchronized at the instruction level. A software exception occurs on both servers at the same time. Smooth upgrade is not possible

Synchronous replication vs asynchronous replication

Like  SafeKit cluster: real-time synchronous replication with no data loss in case of failure

Dislike  Other clusters: with asynchronous replication, there is data loss on failure

Byte-level file replication vs block-level disk replication

Like  SafeKit cluster: real-time byte-level file replication, simply configured with the application directories to replicate (even in the system disk)

Dislike  Other clusters: block-level disk replication is complex to configure and requires putting the application data on a dedicated disk

Heartbeat, failover and quorum to avoid 2 master nodes

Like  SafeKit cluster: to avoid 2 masters, SafeKit proposes a simple split brain checker configured on a router

Dislike  Other clusters: to avoid 2 masters, other clusters require a complex configuration with a third machine, a special quorum disk, a remote hardware reset or a special interconnect

Network load balancing

Like  SafeKit cluster: no dedicated proxy servers and no special network configuration are required for network load balancing

Dislike  Other clusters: a special network configuration is required for network load balancing

SafeKit Modules for Plug&Play High Availability Solutions

Network load balancing and failover

Farm modules (Windows / Linux):

  • IIS
  • Apache
  • New application
  • Amazon AWS farm
  • Microsoft Azure farm
  • Google GCP farm
  • Cloud generic farm

Real-time file replication and failover

Mirror modules (Windows / Linux):

  • Microsoft SQL Server
  • Oracle
  • MySQL
  • PostgreSQL
  • Firebird
  • Hyper-V
  • Milestone XProtect
  • Hanwha Wisenet SSM
  • New application
  • Amazon AWS mirror
  • Microsoft Azure mirror
  • Google GCP mirror
  • Cloud generic mirror

Demonstrations of SafeKit High Availability Software

SafeKit Webinar

This webinar presents Evidian SafeKit in 10 minutes.

In this webinar, you will understand:

  • mirror and farm clusters
  • cost savings against hardware clustering solutions
  • best use cases
  • the integration process for a new application

Microsoft SQL Server Cluster

This video shows a mirror module configuration with synchronous real-time replication and failover.

The file replication and the failover are configured for Microsoft SQL Server, but they work in the same manner for other databases.

Free trial here

Apache Cluster

This video shows a farm module configuration with load balancing and failover.

The load balancing and the failover are configured for Apache, but they work in the same manner for other web services.

Free trial here

Hyper-V Cluster

This video shows a Hyper-V cluster with full replication of virtual machines.

Virtual machines can run on both Hyper-V servers and they are restarted in case of failure.

Free trial here