How does the Evidian SafeKit software simply implement high availability with real-time synchronous replication and failover in the Cloud?

Evidian SafeKit provides a high availability cluster with real-time replication and failover in the Cloud. This article explains how to quickly implement such a cluster in the Cloud. A free trial is offered in the installation instructions section.

How does the Evidian SafeKit mirror cluster implement real-time replication and failover in the Cloud?

This clustering solution is recognized as the simplest to implement by our customers and partners. It is also a complete solution that addresses hardware failures (20% of problems), including the complete failure of a computer room; software failures (40% of problems), with software error detection and automatic restart; and human errors (40% of problems), thanks to its simplicity of administration.

In the previous figure,

  • the servers are running in different availability zones
  • the critical application is running on the PRIM server
  • users are connected to a primary/secondary virtual IP address which is configured in the load balancer
  • SafeKit provides a generic health check for the load balancer: on the PRIM server, the health check returns OK to the load balancer; on the SECOND server, it returns NOT FOUND
  • in each server, SafeKit monitors the critical application with process checkers and custom checkers
  • SafeKit automatically restarts the critical application after a software or hardware failure, thanks to restart scripts
  • SafeKit performs synchronous real-time replication of the files containing critical data
  • a connector for the SafeKit web console is installed on each server, so the high availability cluster can be managed in a very simple way, avoiding human errors

In the previous figure, server 1/PRIM runs the critical application. Users are connected to the virtual IP address of the mirror cluster. SafeKit replicates the files opened by the critical application in real time. Only changes in the files are replicated across the network, thus limiting traffic (byte-level file replication). The names of the file directories containing critical data are simply configured in SafeKit. There are no prerequisites on disk organization for the two servers. Directories to replicate may be located on the system disk. SafeKit implements synchronous replication with no data loss on failure, unlike asynchronous replication.

In case of a server 1 failure, there is an automatic failover to server 2 with restart of the critical application. Then, when server 1 is restarted, SafeKit implements an automatic failback with reintegration of the data, without stopping the critical application on server 2. Finally, the system returns to synchronous replication between server 2 and server 1. The administrator can decide to swap the primary and secondary roles so that server 1 runs the critical application again. The swap can also be done automatically by configuration.

Manual installation in the Cloud of a high availability cluster with synchronous replication and failover (Windows or Linux)

Configuration of the Cloud load balancer

The load balancer must be configured to periodically send health check requests to the virtual machines. For that, SafeKit provides a health check which runs inside the virtual machines and which

  • returns OK when the mirror module state is PRIM (green) or ALONE (green)
  • returns NOT FOUND in all other states

You must configure the Cloud load balancer with:

  • HTTP protocol
  • port 9010, the SafeKit web server port
  • URL /var/modules/mirror/ready.txt (if mirror is the module name that you will deploy later)

For more information, see the configuration of the Cloud load balancer.
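The health check can also be verified by hand once the mirror module is deployed and started (see the configuration sections below). For instance with curl, using placeholder IP addresses to adapt; the OK answer normally corresponds to an HTTP 200 status and NOT FOUND to an HTTP 404 status:

  curl -i http://<IP address of the PRIM node>:9010/var/modules/mirror/ready.txt
  (expected: HTTP 200 OK, since the module state is PRIM or ALONE)

  curl -i http://<IP address of the SECOND node>:9010/var/modules/mirror/ready.txt
  (expected: HTTP 404 NOT FOUND, since the module is in another state)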

Configuration of the Cloud network security

The network security must be configured to enable communications for the following protocols and ports (a quick way to check reachability is sketched after the list):

  • UDP - 4800 for the safeadmin service (between SafeKit nodes)
  • UDP - 8888 for the module heartbeat (between SafeKit nodes)
  • TCP - 5600 for the module real-time file replication (between SafeKit nodes)
  • TCP - 9010 for the load-balancer health check and for the SafeKit web console running in HTTP mode
  • TCP - 9001 to configure the HTTPS mode for the console
  • TCP - 9453 for the SafeKit web console running in HTTPS mode
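Once SafeKit is installed on both nodes (see the package installation below), you can check from one node that a TCP port of the other node is reachable, for instance the SafeKit web server port 9010. The commands below are only a suggested test, with a placeholder IP address:

  nc -vz <IP address of the other node> 9010                       (Linux)
  Test-NetConnection <IP address of the other node> -Port 9010     (Windows PowerShell)

The UDP ports (4800 and 8888) cannot be reliably checked this way.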

Package installation on Windows

On both Windows servers

  • Install the free version of SafeKit for Cloud (click here) on 2 Windows nodes
  • The module mirror.safe is delivered inside the package.
  • To open the firewall, start a command line as administrator on both nodes, go to C:\safekit\private\bin and type .\firewallcfg.cmd add (see the example below)
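For example, from a command prompt opened as administrator on each node:

  cd C:\safekit\private\bin
  .\firewallcfg.cmd add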

Package installation on Linux

On both Linux servers

  • Install the free version of SafeKit for Cloud (click here) on 2 Linux nodes
  • After downloading the safekit_xx.bin package, execute it to extract the rpm and the safekitinstall script, then run the safekitinstall script (see the command sequence sketched after this list)
  • Answer yes to firewall automatic configuration
  • The module mirror.safe is delivered inside the package.
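For example, a typical command sequence on each Linux node (xx stands for the version of the downloaded package):

  chmod +x safekit_xx.bin     # make the downloaded package executable
  ./safekit_xx.bin            # extracts the rpm and the safekitinstall script
  ./safekitinstall            # installs SafeKit; answer yes to the firewall automatic configuration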

Configuration of SafeKit

The configuration is presented with the web console connected to 2 Windows servers, but it is the same with 2 Linux servers.

Important: all the configuration is made from a single browser.

It is recommended to use the web console in HTTPS mode by connecting to https://<IP address of 1 VM>:9453 (next image). In this case, you must first configure the HTTPS mode by using the wizard described in the User's Guide: see "11.1 HTTPS Quick Configuration with the Configuration Wizard".

Start the SafeKit web console in HTTPS mode for the configuration

Alternatively, you can use the web console in HTTP mode by connecting to http://<IP address of 1 VM>:9010 (next image).

Start the SafeKit web console in HTTP mode for the configuration

Note that you can also make a configuration with DNS names, especially if the IP addresses are not static.

Enter the IP address of the first node and click on Confirm (next image)

SafeKit web console - first node in the cluster

Click on New node and enter the IP address of the second node (next image)

SafeKit web console - second node in the cluster

Click on the red floppy disk to save the configuration (previous image)

In the Configuration tab, click on mirror.safe, then enter mirror as the module name and click on Confirm (next images, with mirror instead of xxx)

SafeKit web console - start the configuration of the module / SafeKit web console - enter the module name

Click on Validate (next image)

SafeKit web console - enter the module nodes

Change the path of replicated directories only if necessary (next image).

Do not configure a virtual IP address (next image) because this configuration is already made in the Cloud load balancer. This section is useful for on-premises configurations only.

If a process is defined in the Process Checker section (next image), it will be monitored on the primary server with a restart action in case of failure. The services will be stopped and restarted locally on the primary server if this process disappears from the list of running processes. After 3 unsuccessful local restarts, the module is stopped on the local server and there is a failover to the secondary server. As a consequence, the health check answers OK to the Cloud load balancer on the new primary server, and the virtual IP address traffic is switched to the new primary server.

start_prim and stop_prim (next image) contain the commands to start and stop the services.

SafeKit web console - enter the parameters

Note:

  • on Windows, set the services to a Manual startup type on both servers (SafeKit controls the start of the services in start_prim; a minimal sketch is given below).
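As an illustration, a minimal start_prim/stop_prim pair on Windows (typically delivered as .cmd scripts in the module) could simply start and stop one service. The service name "MyService" below is a placeholder to replace with your own service; real scripts may need to start several services in order:

  start_prim.cmd:
    @echo off
    rem start the application service controlled by SafeKit (placeholder name)
    net start "MyService"

  stop_prim.cmd:
    @echo off
    rem stop the application service (placeholder name)
    net stop "MyService"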

Click on Validate (previous image)

SafeKit web console - stop the module before the configuration

Click on Configure (previous image)

SafeKit web console - check the green success message of the configuration

Check the green success message on both servers and click on Next (previous image). On Linux, you may have an error at this step if the replicated directories are mount points. See this article to solve the problem.

SafeKit web console - select the node with the up-to-date database

Select the node with the most up-to-date replicated directories and click on start it to make the first resynchronization in the right direction (previous image). Before starting the cluster, we suggest making a copy of the replicated directories to avoid any errors.

SafeKit web console - the first node starts as primary and is alone

Start the second node (previous image); it becomes SECOND (green, next image) after the resynchronization of all the replicated directories (binary copy from node 1 to node 2).

SafeKit web console - the second node starts as SECOND

The cluster is operational with services running on the PRIM node and nothing running on the SECOND node (previous image). Only modifications inside files are replicated in real-time in this state.

Be careful: components that are clients of the services must be configured with the virtual IP address. The configuration can be made with a DNS name (if a DNS name has been created and associated with the virtual IP address). See the example below.
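For instance, assuming the protected service is Microsoft SQL Server and a DNS name sql.example.com has been associated with the virtual IP address of the load balancer (both names are placeholders), a client connection string would reference that address and never the address of a single node:

  Server=sql.example.com;Database=mydb;User Id=appuser;Password=*****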

Tests

Check with the Windows Microsoft Management Console (MMC) or with Linux command lines that the services are started on the primary server and stopped on the secondary server (see the example commands below).

Stop the PRIM node by opening the menu of the primary node and clicking on Stop. Check that there is a failover to the SECOND node, and check the failover of the services with the Windows Microsoft Management Console (MMC) or with Linux command lines.
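For example, with placeholder service names:

  sc query "MyService"              (Windows command prompt; or use the MMC services snap-in)
  systemctl status myservice        (Linux with systemd)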

All cloud solutions with Evidian SafeKit

SafeKit offers a real-time replication and failover cluster (mirror) and a load balancing and failover cluster (farm) for each of the following clouds:

  • Amazon AWS
  • Microsoft Azure
  • Google GCP
  • Generic architecture


Customers of SafeKit High Availability Software in all Business Activities

SafeKit High Availability Differentiators against Competition

Evidian SafeKit mirror cluster with real-time file replication and failover

All clustering features

Pro: The solution includes all clustering features: server failure monitoring, network failure monitoring, software failure monitoring, automatic application restart with a quick recovery time, and a virtual IP address switched in case of failure to automatically reroute clients.

Con: This is not the case with replication-only solutions, such as replication at the database level.

Con: Quick application restart is not ensured with full virtual machine replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor with an unknown recovery time.

Pro: The cluster configuration is very simple and made by means of a high availability application module. There is no domain controller or Active Directory to configure on Windows. The solution works on Windows and Linux.

Synchronous replication

Pro: The real-time replication is synchronous, with no data loss on failure.

Con: This is not the case with asynchronous replication.

Fully automated failback procedure

Pro: After a failure, when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server.

Con: This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required to resynchronize a failed server. The application may even have to be stopped on the only remaining server during the resynchronization of the failed server.

Replication of any type of data

Pro: The replication works for databases but also for any files which need to be replicated.

Con: This is not the case for replication at the database level.

File replication vs disk replication

Pro: The replication is based on file directories that can be located anywhere (even in the system disk).

Con: This is not the case with disk replication, where a special application configuration must be made to put the application data on a special disk.

File replication vs shared disk

Pro: The servers can be put in two remote sites.

Con: This is not the case with shared disk solutions.

Remote sites

Pro: All SafeKit clustering features work for 2 servers in remote sites. Replication performance depends on the interconnect latency for real-time synchronous replication, and on the bandwidth for resynchronizing data on a failed server.

Pro: If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit works with rerouting at level 2.

Pro: If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer. SafeKit offers a health check: the load balancer is configured with a URL managed by SafeKit which returns OK on the primary server and NOT FOUND otherwise. This solution is implemented for SafeKit in the Cloud, but it can also be implemented with an on-premises load balancer.

Quorum

Pro: With remote sites, the solution works with only 2 servers; for the quorum (network isolation), a simple split-brain checker to a router is offered to ensure a single primary execution.

Con: This is not the case for most clustering solutions, where a 3rd server is required for the quorum.

Uniform high availability solution

Pro: SafeKit implements a mirror cluster with replication and failover. It also implements a farm cluster with load balancing and failover. Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market.

Con: This is not the case with an architecture mixing different technologies for load balancing, replication and failover.

High availability architectures comparison

(click on the feature for more information)

Feature / SafeKit cluster / Other clusters

  • Software clustering vs hardware clustering
    SafeKit cluster: a simple software cluster with the SafeKit package just installed on two servers.
    Other clusters: complex hardware clustering with external shared storage, network load balancers or dedicated proxy servers.

  • Shared nothing vs a shared disk cluster
    SafeKit cluster: SafeKit is a shared-nothing cluster, easy to deploy even in remote sites.
    Other clusters: a shared disk cluster is complex to deploy.

  • Application high availability vs full virtual machine high availability
    SafeKit cluster: application HA supports hardware failures, software failures and human errors, with a quick recovery time.
    Other clusters: full virtual machine HA supports only hardware failures, with a VM reboot and an unknown recovery time if the OS reboot does not work.

  • Synchronous replication vs asynchronous replication
    SafeKit cluster: real-time synchronous replication with no data loss in case of failure.
    Other clusters: with asynchronous replication, there is data loss on failure.

  • Byte-level file replication vs block-level disk replication
    SafeKit cluster: real-time byte-level file replication, simply configured with the application directories to replicate, even in the system disk.
    Other clusters: block-level disk replication is complex to configure and requires putting the application data on a special disk.

  • Heartbeat, failover and quorum to avoid 2 master nodes
    SafeKit cluster: to avoid 2 masters, SafeKit proposes a simple split-brain checker configured on a router.
    Other clusters: to avoid 2 masters, other clusters require a complex configuration with a third machine, a special quorum disk, a special interconnect or a remote hardware reset.

  • Network load balancing
    SafeKit cluster: no dedicated server and no special network configuration are required for network load balancing.
    Other clusters: a special network configuration is required for network load balancing.

Demonstrations of SafeKit High Availability Software

SafeKit Webinar

This webinar presents Evidian SafeKit in 10 minutes.

In this webinar, you will understand:

  • mirror and farm clusters
  • cost savings against hardware clustering solutions
  • best use cases
  • the integration process for a new application

Microsoft SQL Server Cluster

This video shows a mirror module configuration with synchronous real-time replication and failover.

The file replication and the failover are configured for Microsoft SQL Server, but they work in the same manner for other databases.

Free trial here

Apache Cluster

This video shows a farm module configuration with load balancing and failover.

The load balancing and the failover are configured for Apache, but they work in the same manner for other web services.

Free trial here

Hyper-V Cluster

This video shows a Hyper-V cluster with full replication of virtual machines.

Virtual machines can run on both Hyper-V servers and they are restarted in case of failure.

Free trial here