How does the Evidian SafeKit software simply implement high availability with real-time synchronous replication and failover in Amazon AWS?

Evidian SafeKit provides a high availability cluster with real-time replication and failover in Amazon AWS, the Amazon cloud. This article explains how to quickly implement such a cluster in Amazon AWS. A free trial is offered in the installation instructions section.

How does the Evidian SafeKit mirror cluster implement real-time replication and failover in Amazon AWS?

This clustering solution is recognized as the simplest to implement by our customers and partners. It is also a complete solution: it addresses hardware failures (20% of problems), including the complete failure of a computer room; software failures (40% of problems), with software error detection and automatic restart; and human errors (40% of problems), thanks to its simplicity of administration.

In the figure above,

  • the servers are running in different availability zones
  • the critical application is running on the PRIM server
  • users are connected to a primary/secondary virtual IP address configured in the Amazon AWS load balancer
  • SafeKit provides a generic health check for the load balancer: the health check returns OK to the load balancer on the PRIM server and NOK on the SECOND server
  • in each server, SafeKit monitors the critical application with process checkers and custom checkers
  • SafeKit automatically restarts the critical application after a software or hardware failure, thanks to restart scripts (a minimal start/stop script sketch follows this list)
  • SafeKit performs synchronous real-time replication of the files containing critical data
  • a connector for the SafeKit web console is installed in each server, so the high availability cluster can be managed in a very simple way to avoid human errors
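To make the restart scripts concrete, here is a minimal sketch of what a start/stop script pair could look like for a generic Linux service. The script names start_prim and stop_prim are assumptions based on typical SafeKit mirror modules, and myapp is a hypothetical systemd service standing for the critical application; check your application module for the real names.

    #!/bin/sh
    # start_prim (assumed SafeKit script name): executed on the server that
    # takes the PRIM role; "myapp" is a hypothetical service standing for
    # the critical application.
    systemctl start myapp

    #!/bin/sh
    # stop_prim (assumed SafeKit script name): executed when the PRIM role is
    # released, so that the critical application runs on only one server at a time.
    systemctl stop myapp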

In the figure above, server 1 (PRIM) runs the critical application. Users are connected to the virtual IP address of the mirror cluster. SafeKit replicates the files opened by the critical application in real time. Only changes in the files are replicated across the network, thus limiting traffic (byte-level file replication). The names of the file directories containing critical data are simply configured in SafeKit. There are no prerequisites on disk organization for the two servers: the directories to replicate may even be located in the system disk. Unlike asynchronous replication, SafeKit implements synchronous replication, with no data loss on failure.
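As an illustration of how replicated directories are declared, here is a minimal configuration sketch. The <rfs> and <replicated> element names follow typical SafeKit userconfig.xml examples but should be checked against the SafeKit documentation, and the directory path is hypothetical:

    <!-- fragment of the module's userconfig.xml (assumed element names) -->
    <rfs>
      <!-- each <replicated> entry names a directory whose files are
           replicated in real time; /var/myapp/data is a hypothetical example -->
      <replicated dir="/var/myapp/data"/>
    </rfs>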

In case of server 1 failure, there is an automatic failover to server 2, with restart of the critical application. Then, when server 1 is restarted, SafeKit implements an automatic failback with reintegration of the data, without stopping the critical application on server 2. Finally, the system returns to synchronous replication between server 2 and server 1. The administrator can decide to swap the roles of primary and secondary and return to server 1 running the critical application. The swap can also be done automatically by configuration.
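These operations can also be checked and triggered from the command line. The sketch below assumes the usual SafeKit command line syntax (safekit <command> -m <module>) and the module name mirror from the template defaults; verify the exact commands in the SafeKit documentation.

    # Show which server is PRIM / SECOND / ALONE (assumed command names).
    safekit state -m mirror
    # Run on the current PRIM server: stopping the module triggers a failover
    # of the critical application to the other server.
    safekit stop -m mirror
    # Restart the module on the same server: it reintegrates the cluster as
    # SECOND after automatic resynchronization of the replicated directories.
    safekit start -m mirror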

Automatic installation in Amazon AWS of a high availability cluster with synchronous replication and failover (Windows or Linux)

Automatic deployment of the Amazon AWS template for a mirror cluster

To deploy the Evidian SafeKit high availability cluster with replication and failover in Amazon AWS, just click on the following button which deploys everything:

Configure the Amazon AWS template for a mirror cluster

After the click:

  • click on Next: the URL of the template is preconfigured in the page
  • change the stack name if it is not your first deployment
  • choose 2 "Availability Zones": 2 data centers in a region where the 2 VMs will be deployed. The region can be defined at the top of the page
  • Set "Allowed CIDR for SafeKit Console, SSH, RDP": allowed IP addresses with the right to access TCP ports of the SafeKit web console, SSH (on Linux), Remote Desktop (on Windows). With 0.0.0.0/0, any IP address has the right to access.
  • Set "Allowed CIDR for Virtual IP": allowed IP addresses with the right to access to the Virtual IP. With 0.0.0.0/0, any IP address has the right to access.
  • set in "Key Pair Name", AWS i.e. you should have created a key pair with the AWS name in the Amazon console (EC2 dashboard). And you should have download a AWS.pem file for accessing the VMs through SSH (Linux) or remote desktop (Windows)
  • choose the "Operating System": Windows or Linux. If you choose Windows, prefer "t2.medium" in "Instance Type" because "t2.micro" is a small machine for Windows
  • only if necessary, change the "Module Name", the "Server Name Prefix", the "Virtual IP Port". Note that if you change the Virtual IP port, the test on the virtual IP will not work
  • set the password for the SafeKit web console certificates
  • click twice on "Next", check "I acknowledge..." and then click on "Create" (no fee on SafeKit free trial, only on Amazon AWS infrastructure)
  • click on the refresh button to check the progress of deployment
  • wait the end of deployment of the real-time replication and failover cluster
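If you prefer to create the key pair from the AWS CLI rather than from the EC2 dashboard, here is a minimal sketch (the region is an example value):

    # Create the key pair named "AWS" expected by the template and save the
    # private key used later for SSH / Remote Desktop access.
    aws ec2 create-key-pair --key-name AWS --region us-east-1 \
        --query 'KeyMaterial' --output text > AWS.pem
    chmod 400 AWS.pem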

After deployment

After deployment, click on the SafeKit-Mirror stack, then go to the Outputs panel and:

  • visit the credential URL to install the client and CA certificates in your web browser. Force the loading of the unsafe page. Use 'CA_admin' as the user name and the password you entered during the template configuration. Be careful to put the second certificate in the 'Trusted Root Certification Authorities' store
  • after the certificates are installed, start the web console of the cluster
  • test the primary/secondary virtual IP address with the test URL in the output. A primary/secondary load balancing rule has been set for external port 9453 and internal port 9453. The URL returns the name of the PRIM or ALONE server (see the curl sketch after the note below)

Note: when the mirror module is stopped ("STOP" state) on both servers, no health check answers OK to the AWS load balancer. In this case, the AWS load balancer sends virtual IP/TCP sessions to both servers: if no Availability Zone contains a healthy target, the load balancer nodes route requests to all targets.
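The test URL can also be called from a terminal. A minimal sketch, assuming a test URL of the form below (the DNS name is a placeholder; copy the real URL from the stack output):

    # Call the virtual IP test URL; the full URL (scheme, DNS name, path) is
    # given in the stack output - the value below is a placeholder.
    TEST_URL="https://safekit-lb-1234567890.elb.us-east-1.amazonaws.com:9453/"
    curl -k "$TEST_URL"   # should return the name of the PRIM or ALONE server
                          # (-k because the trial certificate may not be trusted)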

Video of the Amazon AWS mirror template deployment

Accessing the VMs through SSH (Linux) or remote desktop (Windows)

If you want to connect to the virtual machines through SSH (Linux) or Remote Desktop (Windows), you can use the SafeKit web console to find the IP addresses or DNS names of the VMs (next images). Follow the Amazon AWS documentation to connect to the VMs with the AWS.pem key pair: SSH (Linux) with the AWS key pair, Remote Desktop (Windows) with the AWS key pair.

Step 1. Where are the IP addresses of servers in the console

Step 2. Where are the IP addresses of servers in the console
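Once you have the addresses, here is a minimal connection sketch; the DNS name and instance id below are example placeholders to be replaced with the values shown in the console:

    # Example values; take the real DNS name / instance id from the SafeKit
    # console or the EC2 dashboard.
    VM=ec2-3-84-10-20.compute-1.amazonaws.com
    # Linux VM: SSH with the AWS.pem key pair (the login user depends on the
    # AMI: ec2-user, ubuntu, ...).
    ssh -i AWS.pem ec2-user@"$VM"
    # Windows VM: retrieve the Administrator password decrypted with the same
    # key pair, then connect with your usual Remote Desktop client.
    aws ec2 get-password-data --instance-id i-0123456789abcdef0 --priv-launch-key AWS.pem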

Deployed resources for a mirror cluster

In terms of VMs, this template deploys:

  • 2 VMs (Windows or Linux)
  • each VM has an elastic IP address
  • the SafeKit free trial is installed in both VMs
  • a SafeKit mirror module is configured in both VMs

In terms of load balancer, this template deploys:

  • an elastic network load balancer
  • a public DNS name is associated with the load balancer and plays the role of the virtual IP
  • both VMs are in the target group of the load balancer
  • a health checker checks the mirror module state on both VMs
  • a primary/secondary load balancing rule for external port 9453 and internal port 9453; the test URL in the output returns the name of the PRIM or ALONE server (the target health can also be checked with the AWS CLI, as in the sketch below)
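To see which VM the load balancer currently considers healthy (normally the PRIM or ALONE server only), here is a sketch with the AWS CLI; the target group ARN is a placeholder to be copied from the EC2 console:

    # Show the health of both VMs in the load balancer target group
    # (the ARN below is a placeholder).
    aws elbv2 describe-target-health \
      --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/safekit/0123456789abcdef
    # Expected result: one target "healthy" (the PRIM server) and one
    # "unhealthy" (the SECOND server).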

All cloud solutions with Evidian SafeKit

Two solutions are available for each environment: a real-time replication and failover cluster and a load balancing and failover cluster.

  • Amazon AWS
  • Microsoft Azure
  • Google GCP
  • Generic Architecture


Customers of SafeKit High Availability Software in all Business Activities

SafeKit High Availability Differentiators against Competition

Evidian SafeKit mirror cluster with real-time file replication and failover

All clustering features

Like  The solution includes all clustering features: server failure monitoring, network failure monitoring, software failure monitoring, automatic application restart with a quick recovery time, a virtual IP address switched in case of failure to automatically reroute clients

Dislike  This is not the case with replication-only solutions like replication at the database level

Dislike  Quick application restart is not ensured with full virtual machine replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor with an unknown recovery time

Like   The cluster configuration is very simple and made by means of a high availability application module. There is no domain controller or active directory to configure on Windows. The solution works on Windows and Linux

Synchronous replication

Like  The real-time replication is synchronous with no data loss on failure

Dislike  This is not the case with asynchronous replication

Fully automated failback procedure

Like  After a failure, when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server

Dislike  This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required for resynchronizing a failed server. The application may even be stopped on the only remaining server during the resynchronization of the failed server

Replication of any type of data

Like  The replication works for databases but also for any other files that must be replicated

Dislike  This is not the case for replication at the database level

File replication vs disk replication

Like  The replication is based on file directories that can be located anywhere (even in the system disk)

Dislike  This is not the case with disk replication, where a special application configuration must be made to put the application data on a dedicated disk

File replication vs shared disk

Like  The servers can be put in two remote sites

Dislike  This is not the case with shared disk solutions

Remote sites

Like  All SafeKit clustering features work for 2 servers in remote sites. Replication performance depends on the interconnect latency for real-time synchronous replication and on the bandwidth for resynchronizing data on a failed server

Like  If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit works with rerouting at level 2

Like  If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer. SafeKit offers a health check: the load balancer is configured with a URL managed by SafeKit which returns OK on the primary server and NOT FOUND otherwise. This solution is implemented for SafeKit in the cloud, but it can also be implemented with an on-premises load balancer

Quorum

Like  With remote sites, the solution works with only 2 servers; for the quorum (network isolation), a simple split brain checker pinging a router is offered to ensure that only one server runs as primary

Dislike  This is not the case for most clustering solutions, where a 3rd server is required for the quorum

Uniform high availability solution

Like  SafeKit implements a mirror cluster with replication and failover. But it also implements a farm cluster with load balancing and failover. Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market

Dislike  This is not the case with an architecture mixing different technologies for load balancing, replication and failover

High availability architectures comparison

(click on the feature for more information)

Feature: Software clustering vs hardware clustering
  • SafeKit cluster: a simple software cluster with the SafeKit package just installed on two servers
  • Other clusters: complex hardware clustering with external shared storage, network load balancers or dedicated proxy servers

Feature: Shared nothing vs a shared disk cluster
  • SafeKit cluster: a shared-nothing cluster, easy to deploy even in remote sites
  • Other clusters: a shared disk cluster, complex to deploy

Feature: Application high availability vs full virtual machine high availability
  • SafeKit cluster: application HA supports hardware failures, software failures and human errors with a quick recovery time
  • Other clusters: full virtual machine HA supports only hardware failures, with a VM reboot and an unknown recovery time if the OS reboot does not work

Feature: Synchronous replication vs asynchronous replication
  • SafeKit cluster: real-time synchronous replication with no data loss in case of failure
  • Other clusters: asynchronous replication, with data loss on failure

Feature: Byte-level file replication vs block-level disk replication
  • SafeKit cluster: real-time byte-level file replication, simply configured with the application directories to replicate, even in the system disk
  • Other clusters: block-level disk replication, complex to configure and requiring the application data to be placed on a dedicated disk

Feature: Heartbeat, failover and quorum to avoid 2 master nodes
  • SafeKit cluster: to avoid 2 masters, a simple split brain checker configured on a router
  • Other clusters: to avoid 2 masters, a complex quorum configuration with a third machine, a special quorum disk, a special interconnect or remote hardware reset

Feature: Network load balancing
  • SafeKit cluster: no dedicated server and no special network configuration required
  • Other clusters: a special network configuration required

Demonstrations of SafeKit High Availability Software

SafeKit Webinar

This webinar presents Evidian SafeKit in 10 minutes.

In this webinar, you will understand:

  • mirror and farm clusters
  • cost savings against hardware clustering solutions
  • best use cases
  • the integration process for a new application

Microsoft SQL Server Cluster

This video shows a mirror module configuration with synchronous real-time replication and failover.

The file replication and the failover are configured for Microsoft SQL Server, but they work in the same manner for other databases.

Free trial here

Apache Cluster

This video shows a farm module configuration with load balancing and failover.

The load balancing and the failover are configured for Apache, but they work in the same manner for other web services.

Free trial here

Hyper-V Cluster

This video shows a Hyper-V cluster with full replication of virtual machines.

Virtual machines can run on both Hyper-V servers and they are restarted in case of failure.

Free trial here