Amazon AWS: The Simplest High Availability Cluster with Synchronous Real-Time Replication and Failover

Evidian SafeKit

How does the Evidian SafeKit software simply implement high availability with synchronous real-time replication and failover in Amazon AWS, between two redundant Windows or Linux servers and without shared storage?

Evidian SafeKit provides a high availability cluster with real-time replication and failover in Amazon AWS, the Amazon cloud. This article explains how to quickly implement such a cluster in Amazon AWS. A free trial is offered below, with a quick start template for an easy deployment in the cloud.

How does the Evidian SafeKit mirror cluster implement real-time replication and failover in Amazon AWS?

Note that SafeKit is a generic product. With the same product, you can implement real-time replication and failover of directories and services, databases, Docker containers, full Hyper-V or KVM virtual machines, and cloud applications. See other examples of mirror modules here.

This clustering solution is recognized as the simplest to implement by our customers and partners. It is also a complete solution that solves

  • hardware failures (20% of problems), including the complete failure of a computer room,
  • software failures (40% of problems), including smooth upgrades server by server,
  • and human errors (40% of problems) thanks to its ease of use, including a very simple administration web console to configure, control and monitor clusters.

On the previous figure,

  • the servers are running in different availability zones
  • the critical application is running on the PRIM server
  • users are connected to a primary/secondary virtual IP address which is configured in the Amazon AWS load balancer
  • SafeKit provides a generic health check for the load balancer. On the PRIM server, the health check returns OK to the load balancer; on the SECOND server, it returns an error (NOT FOUND).
  • in each server, SafeKit monitors the critical application with process checkers and custom checkers
  • SafeKit automatically restarts the critical application, thanks to restart scripts, when there is a software failure or a hardware failure
  • SafeKit makes synchronous real-time replication of files containing critical data
  • a connector for the SafeKit web console is installed in each server. Thus, the high availability cluster can be managed in a very simple way to avoid human errors

On the previous figure, the server 1/PRIM runs the critical application. Users are connected to the virtual IP address of the mirror cluster. SafeKit replicates files opened by the critical application in real time. Only changes in the files are replicated across the network, thus limiting traffic (byte-level file replication). Names of file directories containing critical data are simply configured in SafeKit. There are no pre-requisites on disk organization for the two servers. Directories to replicate may be located in the system disk. SafeKit implements synchronous replication with no data loss on failure contrary to asynchronous replication.

In case of server 1 failure, there is an automatic failover on server 2 with restart of the critical application. Then, when server 1 is restarted, SafeKit implements automatic failback with reintegration of data without stopping the critical application on server 2. Finally, the system returns to synchronous replication between server 2 and server 1. The administrator can decide to swap the role of primary and secondary and return to a server 1 running the critical application. The swap can also be done automatically by configuration.

Free trial + AWS quick start template to deploy a SafeKit mirror cluster between two redundant Windows or Linux servers

Evidian SafeKit Quick Start in the Amazon AWS Cloud

AWS Quick Start of a mirror cluster on Windows or Linux

To deploy the Evidian SafeKit high availability cluster with real-time replication and failover in Amazon AWS, just click on the following button to access the AWS Quick Start page and follow the deployment instructions:

Manual installation in Amazon AWS of a high availability cluster with synchronous replication and failover (Windows or Linux)

Configuration of the Amazon AWS load balancer

The load balancer must be configured to periodically send health check requests to the virtual machines. For that, SafeKit provides a health check which runs inside the virtual machines and which

  • returns OK when the mirror module state is PRIM (green) or ALONE (green)
  • returns NOT FOUND in all other states
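
Once the mirror module is deployed and running (see the configuration steps below), you can verify this behavior yourself with a simple HTTP request. This is only a sketch: the node address 10.0.0.10 and the module name mirror are placeholders to adapt to your deployment.

# Query the health check URL used by the load balancer.
# Expected result: HTTP 200 on the PRIM (or ALONE) node, 404 on the other node.
curl -s -o /dev/null -w "%{http_code}\n" http://10.0.0.10:9010/var/modules/mirror/ready.txt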

You must configure the Amazon AWS load balancer with:

  • HTTP protocol
  • port 9010, the SafeKit web server port
  • URL /var/modules/mirror/ready.txt (if mirror is the module name that you will deploy later)

For more information, see the configuration of the Amazon AWS load balancer.
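
If you prefer to script this part of the deployment, the same health check can be declared with the AWS CLI when creating the target group used by the load balancer. The sketch below is only an illustration: the target group name, VPC ID, traffic port and instance IDs are placeholders, and your load balancer type and listener configuration may differ.

# Create a target group whose health check calls the SafeKit ready.txt URL
# (placeholder name, VPC ID and traffic port; adjust to your application)
aws elbv2 create-target-group \
  --name safekit-mirror-tg \
  --protocol TCP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-protocol HTTP \
  --health-check-port 9010 \
  --health-check-path /var/modules/mirror/ready.txt

# Register the two SafeKit virtual machines as targets (placeholder IDs)
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb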

Configuration of the Amazon AWS network security

The network security must be configured to enable communications for the following protocols and ports (a CLI sketch follows the list):

  • UDP - 4800 for the safeadmin service (between SafeKit nodes)
  • UDP - 8888 for the module heartbeat (between SafeKit nodes)
  • TCP - 5600 for the module real-time file replication (between SafeKit nodes)
  • TCP - 9010 for the load balancer health check and for the SafeKit web console running in the http mode
  • TCP - 9001 to configure the https mode for the console
  • TCP - 9453 for the SafeKit web console running in the https mode
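
If the security group is managed with the AWS CLI, the rules above can be opened as in the following sketch. The security group ID and the source CIDR are placeholders; in a real deployment, restrict the sources to your own subnets or security groups.

# Placeholders: replace the security group ID and the source CIDR
SG=sg-0123456789abcdef0
CIDR=10.0.0.0/16

# SafeKit internal traffic between the two nodes
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 4800 --cidr $CIDR
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 8888 --cidr $CIDR
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 5600 --cidr $CIDR

# Load balancer health check and SafeKit web console
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 9010 --cidr $CIDR
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 9001 --cidr $CIDR
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 9453 --cidr $CIDR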

Package installation on Windows

On both Windows servers

  • Install the free version of SafeKit for Cloud (click here) on 2 Windows nodes
  • The module mirror.safe is delivered inside the package.
  • To open the firewall, start a command prompt as administrator, go to C:\safekit\private\bin and type .\firewallcfg.cmd add on both nodes

Package installation on Linux

On both Linux servers

  • Install the free version of SafeKit for Cloud (click here) on 2 Linux nodes
  • After downloading the safekit_xx.bin package, execute it to extract the rpm and the safekitinstall script, and then execute the safekitinstall script (the steps are summarized in a sketch after this list)
  • Answer yes to firewall automatic configuration
  • The module mirror.safe is delivered inside the package.
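
The Linux installation steps above can be summarized in a few shell commands (a sketch only; the exact package file name depends on the version you download):

# Run on both Linux nodes, as root (package name depends on the downloaded version)
chmod +x safekit_xx.bin
./safekit_xx.bin        # extracts the rpm and the safekitinstall script
./safekitinstall        # answer yes to the automatic firewall configuration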

Configuration of SafeKit

The configuration is presented with the web console connected to 2 Windows servers, but it is the same with 2 Linux servers.

Important: all the configuration is made from a single browser.

It is recommended to configure the web console in the https mode by connecting to https://<IP address of 1 VM>:9453 (next image). In this case, you must first configure the https mode by using the wizard described in the User's Guide: see "11. Securing the SafeKit web console".

Start the https SafeKit web console for configuring

Or you can use the web console in the http mode by connecting to http://<IP address of 1 VM>:9010 (next image).

Start the SafeKit web console for configuring

Note that you can also make a configuration with DNS names, especially if the IP addresses are not static.

Enter the IP address of the first node and click on Confirm (next image)

SafeKit web console - first node in the  cluster

Click on New node and enter the IP address of the second node (next image)

SafeKit web console - second node in the  cluster

Click on the red floppy disk to save the configuration (previous image)

In the Configuration tab, click on mirror.safe, then enter mirror as the module name and click on Confirm: next images with mirror instead of xxx

SafeKit web console - start configuration of module
SafeKit web console - enter module name

Click on Validate (next image)

SafeKit web console - enter  module nodes

Change the path of replicated directories only if necessary (next image).

Do not configure a virtual IP address (next image) because this configuration is already made in the Amazon AWS load balancer. This section is useful for on-premise configuration only.

If a process is defined in the Process Checker section (next image), it will be monitored on the primary server with the restart action in case of failure. The services will be stopped and restarted locally on the primary server if this process disappears from the list of running processes. After 3 unsuccessful local restarts, the module is stopped on the local server and there is a failover to the secondary server. As a consequence, the health check answers OK to the Amazon AWS load balancer on the new primary server, and the virtual IP address traffic is switched to the new primary server.

start_prim and stop_prim (next image) contain the start and stop commands of the services.

SafeKit web console - enter  parameters

Note:

  • on Windows, set the services to Boot Startup Type = Manual on both servers (SafeKit controls the start of the services in start_prim).

Click on Validate (previous image)

SafeKit web console - stop the module before applying the configuration

Click on Configure (previous image)

SafeKit web console - check the success green message of the  configuration

Check the success green message on both servers and click on Next (previous image). On Linux, you may have an error at this step if replicated directories are mount points. See this article to solve the problem.

SafeKit web console - select the  node with the up-to-date database

Select the node with the most up-to-date replicated directories and start it to make the first resynchronization in the right direction (previous image). Before this operation, we suggest making a copy of the replicated directories to avoid any errors.

SafeKit web console - the first  node starts as primary and is alone

Start the second node (previous image), which becomes SECOND green (next image) after resynchronization of all replicated directories (binary copy from node 1 to node 2).

SafeKit web console - the second  node starts as SECOND

The cluster is operational with services running on the PRIM node and nothing running on the SECOND node (previous image). Only modifications inside files are replicated in real-time in this state.

Be careful, components which are clients of the services must be configured with the virtual IP address. The configuration can be made with a DNS name (if a DNS name has been created and associated with the virtual IP address).

Tests

Check with Windows Microsoft Management Console (MMC) or with Linux command lines that the services are started on the primary server and stopped on the secondary server.

Stop the PRIM node by scrolling down the menu of the primary node and by clicking on Stop. Check that there is a failover on the SECOND node. And check the failover of services with Windows Microsoft Management Console (MMC) or with Linux command lines.
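
On Linux, for instance, the checks before and after the failover can be done from a shell on each node; the service name below is a placeholder for the services started in your own start_prim script.

# On each node, check whether the application service is running
# (expected: active on the PRIM node, inactive on the SECOND node)
systemctl is-active myservice
pgrep -fl myservice     # the monitored process should appear only on the PRIM node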

More information on tests in the User's Guide

Automatic start of the module at boot

Configure boot start (next image, on the right side) sets the module to start automatically when the server boots. Do this configuration on both servers once the high availability solution is running correctly.

SafeKit web console - Automatic boot of  module

Note that on Windows, in the Windows services manager, we assume that the services have Boot Startup Type = Manual on both nodes. SafeKit controls the start of the services in start_prim when starting the module.

Note that for synchronizing SafeKit at boot and at shutdown on Windows, we assume that the following command line has been run as administrator on both nodes during installation: .\addStartupShutdown.cmd in C:\safekit\private\bin (otherwise do it now).

For reading the SafeKit logs, go to the Troubleshooting tab

For editing userconfig.xml, start_prim and stop_prim, go to the Advanced Configuration tab

Troubleshooting of a SafeKit / high availability cluster with real-time synchronous replication and failover between two redundant servers

Module log

Read the module log to understand the reasons for a failover, for a waiting state on the availability of a resource, etc.
To see the module log of the primary server (next image):

  • click on the Control tab
  • click on node 1/PRIM (it becomes blue) on the left side to select the server
  • click on Module Log
  • click on the Refresh icon (green arrows) to update the console
  • click on the floppy disk to save the module log in a .txt file and to analyze in a text editor

Repeat the same operation to see the module log of the secondary server.

SafeKit web console - Module Log of the PRIM  server

Application log

Read the application log to see the output messages of the start_prim and stop_prim restart scripts.
To see the application log of the primary server (next image):

  • click on the Control tab
  • click on node 1/PRIM (it becomes blue) on the left side to select the server
  • click on Application Log to see messages when starting and stopping services
  • click on the Refresh icon (green arrows) to update the console
  • click on the floppy disk to save the application log in a .txt file and to analyze in a text editor

Repeat the same operation to see the application log of the secondary server.

SafeKit web console - Application Log of the PRIM  server

More information on troubleshooting in the User's Guide

For support, go to the Support tab

Advanced configuration of a SafeKit / high availability cluster with real-time synchronous replication and failover between two redundant servers

In the Advanced Configuration tab (next image), you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml (next image, on the left side). If you make changes in the internal files here, you must apply the new configuration by right-clicking on the icon/xxx on the left side (next image): the interface will allow you to redeploy the modified files on both servers.

SafeKit web console - Advanced configuration of  module

More information on userconfig.xml in the User's Guide

For an example of userconfig.xml, start_prim and stop_prim, go to the Internals tab

Support of a SafeKit /  high availability cluster with real-time synchronous replication and failover between two redundant servers

To get support on the call desk of https://support.evidian.com, take 2 snapshots (2 .zip files), one for each server, and upload them in the call desk tool (next image).

SafeKit web console - snapshots for support

More information on support in the User's Guide

Internals of a SafeKit / Amazon AWS high availability cluster with synchronous replication and failover

Go to the Advanced Configuration tab to edit these files

Internal files of the Windows mirror.safe module

userconfig.xml on Windows (description in the User's Guide)
<!DOCTYPE safe>
<safe>
   <service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
      <!-- Server Configuration -->
      <!-- Names or IP addresses on the default network are set during initialization in the console -->
      <heart pulse="700" timeout="30000">
         <heartbeat name="default" ident="flow"/>
      </heart>
      <!-- Software Error Detection Configuration -->
      <!-- Replace
         * PROCESS_NAME by the name of the process to monitor
      -->
      <errd polltimer="10">
        <proc name="PROCESS_NAME" atleast="1" action="restart" class="prim" />
      </errd>
      <!-- File Replication Configuration -->
      <rfs async="second" acl="off" nbrei="3">
         <replicated dir="c:\test1replicated" mode="read_only"/>
         <replicated dir="c:\test2replicated" mode="read_only"/>
      </rfs>
      <!-- User scripts activation -->
      <user nicestoptimeout="300" forcestoptimeout="300" logging="userlog"/>
   </service>
</safe>
start_prim.cmd on Windows
@echo off

rem Script called on the primary server for starting application services

rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"

rem stdout goes into Application log
echo "Running start_prim %*"

set res=0

rem Fill with your services start call

set res=%errorlevel%

if %res% == 0 goto end

:stop
"%SAFE%\safekit" printe "start_prim failed"

rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_prim"

:end
stop_prim.cmd on Windows
@echo off

rem Script called on the primary server for stopping application services

rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"

rem ----------------------------------------------------------
rem
rem 2 stop modes:
rem
rem - graceful stop
rem call standard application stop with net stop
rem
rem - force stop (%1=force)
rem kill application's processes
rem
rem ----------------------------------------------------------

rem stdout goes into Application log
echo "Running stop_prim %*"

set res=0

rem default: no action on forcestop
if "%1" == "force" goto end

rem Fill with your services stop call

rem If necessary, uncomment to wait for the stop of the services
rem "%SAFEBIN%\sleep" 10

if %res% == 0 goto end

"%SAFE%\safekit" printe "stop_prim failed"

:end

Internal files of the Linux mirror.safe module

userconfig.xml on Linux (description in the User's Guide)
<!DOCTYPE safe>
<safe>
   <service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
      <!-- Server Configuration -->
      <!-- Names or IP addresses on the default network are set during initialization in the console -->
      <heart pulse="700" timeout="30000">
         <heartbeat name="default" ident="flow"/>
      </heart>
      <!-- Software Error Detection Configuration -->
      <!-- Replace
         * PROCESS_NAME by the name of the process to monitor
      -->
      <errd polltimer="10">
        <proc name="PROCESS_NAME" atleast="1" action="restart" class="prim" />
      </errd>
      <!-- File Replication Configuration -->
      <rfs mountover="off" async="second" acl="off" nbrei="3" >
         <replicated dir="/test1replicated" mode="read_only"/>
         <replicated dir="/test2replicated" mode="read_only"/>
      </rfs>
      <!-- User scripts activation -->
      <user nicestoptimeout="300" forcestoptimeout="300" logging="userlog"/>
   </service>
</safe>
start_prim on Linux
#!/bin/sh
# Script called on the primary server for starting application

# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message" 

# stdout goes into Application log
echo "Running start_prim $*" 

res=0

# Fill with your application start call

if [ $res -ne 0 ] ; then
  $SAFE/safekit printe "start_prim failed"

  # uncomment to stop SafeKit when critical
  # $SAFE/safekit stop -i "start_prim"
fi
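
As an illustration, the sketch below shows how the "application start call" section of start_prim might be filled in for a hypothetical systemd service named myservice (the service name is a placeholder; replace it with your own start commands):

#!/bin/sh
# Hypothetical filled-in start_prim for a placeholder service named "myservice"
echo "Running start_prim $*"

res=0

# start the application service and capture its status
systemctl start myservice
res=$?

if [ $res -ne 0 ] ; then
  $SAFE/safekit printe "start_prim failed"
fi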
stop_prim on Linux
#!/bin/sh
# Script called on the primary server for stopping application

# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message" 

#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
#   call standard application stop
#
# - force stop ($1=force)
#   kill application's processes
#
#----------------------------------------------------------

# stdout goes into Application log
echo "Running stop_prim $*" 

res=0

# default: no action on forcestop
[ "$1" = "force" ] && exit 0

# Fill with your application stop call

[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"
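
Similarly, a hypothetical filled-in stop_prim could stop the same placeholder service gracefully and kill its processes on a force stop (names are placeholders; adapt them to your application):

#!/bin/sh
# Hypothetical filled-in stop_prim for a placeholder service named "myservice"
echo "Running stop_prim $*"

res=0

# force stop: kill the application's processes
if [ "$1" = "force" ] ; then
  pkill -f myservice
  exit 0
fi

# graceful stop
systemctl stop myservice
res=$?

[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"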

All SafeKit Quick Start Templates for Plug&Play High Availability Solutions in the Cloud

Click on the blue button to access the Evidian SafeKit quick start template

The quick start templates cover, for each cloud, a mirror cluster with real-time replication and failover and a farm cluster with load balancing and failover:

  • Amazon AWS: Evidian SafeKit in the Amazon AWS Cloud
  • Microsoft Azure: Evidian SafeKit in the Microsoft Azure Cloud
  • Google GCP: Evidian SafeKit in the Google Cloud marketplace

SafeKit Modules for Plug&Play High Availability Solutions

Customers of SafeKit High Availability Software in all Business Activities

  • Best high availability use cases with SafeKit

    Best use cases

    • OEM Software - Application Clustering Software for a Software Publisher: a software publisher uses SafeKit as an OEM software for high availability of its application
    • Distributed Enterprise - High Availability Software in a Distributed Enterprise: a distributed enterprise deploys SafeKit in many branches without specific IT skills
    • Remote Sites - Business Continuity and Disaster Recovery without a replicated SAN: SafeKit is deployed in two remote sites without the need for replicated bays of disks through a SAN

    Testimonials

    Like  The ideal product for a software publisher

    “SafeKit is the ideal application clustering solution for a software publisher. We currently have deployed more than 80 SafeKit clusters worldwide with our critical TV broadcasting application.”

    Like  The product very easy to deploy for a reseller

    “Noemis, a value added distributor of Milestone Video Surveillance, has assisted integrators to deploy the SafeKit redundancy solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides a great support.”

    Like  The product to gain time for a system integrator

    “Thanks to a simple and powerful product, we gained time in the integration and validation of our critical projects like the supervision of Paris metro lines (the control rooms).”


  • High availability of Video Surveillance Platforms with SafeKit

    Video surveillance and access control

    In video surveillance and access control systems, Evidian SafeKit implements high availability with synchronous replication and failover.

    Sebastien Temoin, Technical and Innovation Director, NOEMIS, value added distributor of Milestone solutions:

    "SafeKit by Evidian is a professional solution making easy the redundancy of Milestone Management Server, Event Server, Log Server. The solution is easy to deploy, easy to maintain and can be added on existing installation. We have assisted integrators to deploy the solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides great support. Happy to help if you have any questions."



  • Harmonic has deployed more than 80 SafeKit clusters for high availability of its TV broadcasting application over satellites, terrestrials, cable, IPTV.

    TV broadcasting

    Harmonic is using SafeKit as a software OEM high availability solution and deploys it with its TV broadcasting solutions over satellites, terrestrials, cable, IPTV.

    Over 80 SafeKit clusters are deployed on Windows for replication of Harmonic database and automatic failover of the critical application.

    Philippe Vidal, Product Manager, Harmonic says:

    “SafeKit is the ideal application clustering solution for a software publisher looking for a simple and economical high availability software. We are deploying SafeKit worldwide and we currently have more than 80 SafeKit clusters on Windows with our critical TV broadcasting application through terrestrial, satellite, cable and IP-TV. SafeKit implements the continuous and real-time replication of our database as well as the automatic failover of our application for software and hardware failures. Without modifying our application, it was possible for us to customize the installation of SafeKit. Since then, the time of preparation and implementation has been significantly reduced.”


  • The European Society of Warranties and Guarantees in Natixis uses SafeKit as a high availability solution for its applications.

    Finance

    Over 30 SafeKit clusters are deployed on Unix and Windows in Natixis.


  • Fives Syleps implements high availability of its ERP with SafeKit and deploys the solution in the food industry.

    Industry

    Over 20 SafeKit clusters are deployed on Linux and Windows with Oracle.

    Testimonial of Fives Syleps:

    "The automated factories that we equip rely on our ERP. It is not possible that our ERP is out of service due to a computer failure. Otherwise, the whole activity of the factory stops.

    We chose the Evidian SafeKit high availability product because it is an easy to use solution. It is implemented on standard servers and does not require the use of shared disks on a SAN and load balancing network boxes.

    It allows servers to be put in remote computer rooms. In addition, the solution is homogeneous for Linux and Windows platforms. And it provides 3 functionalities: load balancing between servers, automatic failover and real-time data replication.”


  • Air traffic control systems supplier, Copperchase, deploys SafeKit high availability in airports.

    Air traffic control

    Over 20 SafeKit clusters are deployed on Windows.

    Tony Myers, Director of Business Development says:

    "By developing applications for air traffic control, Copperchase is in one of the most critical business activities. We absolutely need our applications to be available all the time. We have found with SafeKit a simple and complete clustering solution for our needs. This software combines in a single product load balancing, real time data replication with no data loss and automatic failover. This is why, Copperchase deploys SafeKit for air traffic control in airports in the UK and the 30 countries where we are present."


  • Software vendor Wellington IT deploys SafeKit high availability with its banking application for Credit Unions in Ireland and UK.

    Bank

    Over 25 SafeKit clusters are deployed on Linux with Oracle.

    Peter Knight, Sales Manager says:

    "Business continuity and disaster recovery are a major concern for our Locus banking application deployed in numerous Credit Unions around Ireland and the UK. We have found with SafeKit a simple and robust solution for high availability and synchronous replication between two servers with no data loss. With this software solution, we are not dependent on a specific and costly hardware clustering solution. It is a perfect tool to provide a software high availability option to an application of a software vendor."


  • Paris transport company (RATP) chose the SafeKit high availability and load balancing solution for the centralized control room of line 1 of the Paris subway.

    Transport

    20 SafeKit clusters are deployed on Windows and Linux.

    Stéphane Guilmin, RATP, Project manager says:

    "Automation of line 1 of the Paris subway is a major project for RATP, requiring a centralized command room (CCR) designed to resist IT failures. With SafeKit, we have three distinct advantages to meet this need. Firstly, SafeKit is a purely software solution that does not demand the use of shared disks on a SAN and network boxes for load balancing. It is very simple to separate our servers into separate machine rooms. Moreover, this clustering solution is homogeneous for our Windows and Linuxplatforms. SafeKit provides the three functions that we needed: load balancing between servers, automatic failover after an incident and real time data replication."

    And also, Philippe Marsol,  Atos BU Transport, Integration Manager says:

    “SafeKit is a simple and powerful product for application high availability. We have integrated SafeKit in our critical projects like the supervision of Paris metro Line 4 (the control room) or Marseille Line 1 and Line 2 (the operations center). Thanks to the simplicity of the product, we gained time for the integration and validation of the solution and we had also quick answers to our questions with a responsive Evidian team.”


  • The software integrator Systel deploys SafeKit high-availability solution in firefighter and emergency medical call centers.

    Healthcare

    Over 30 SafeKit clusters are deployed on Windows with SQL Server.

    Marc Pellas, CEO says:

    "SafeKit perfectly meets the needs of a software vendor. Its main advantage is that it brings in high availability through a software option that is added to our own multi-platform software suite. This way, we are not dependent on a specific and costly hardware clustering solution that is not only difficult to install and maintain, but also differs according to client environments. With SafeKit, our firefighter call centers are run with an integrated software clustering solution, which is the same for all our customers, is user friendly and for which we master the installation up to after-sales support."


  • ERP high availability and load balancing of the French army (DGA) are made with SafeKit.

    Government

    14 SafeKit clusters are deployed on Windows and Linux.

    Alexandre Barth, Systems administrator says:

    "Our production team implemented the SafeKit solution without any difficulty on 14 Windows and Linux clusters. Our critical activity is thus secure, with high-availability and load balancing functions. The advantages of this product are easy deployment and administration of clusters, on the one hand, and uniformity of the solution in the face of heterogeneous operating systems, on the other hand."

     


SafeKit High Availability Differentiators against Competition

Evidian SafeKit mirror cluster with real-time file replication and failover

3 products in 1 >

3 products in 1

Like  SafeKit high availability software on Windows and Linux saves on 1/ costly external shared or replicated storage, 2/ load balancing boxes, 3/ enterprise editions of OS and databases

Like  SafeKit includes all clustering features: synchronous real-time file replication, monitoring of server / network / software failures, automatic application restart, virtual IP address switched in case of failure to reroute clients

Very simple configuration >

Simple configuration with a web console

Like   The cluster configuration is very simple and made by means of application modules. New services and new replicated directories can be added to an existing application module to complete a high availability solution

Like   All the configuration of clusters is made using a simple centralized web administration console

Like   There is no domain controller or active directory to configure as with Microsoft cluster

Synchronous replication >

Synchronous replication

Like  The real-time replication is synchronous with no data loss on failure

Dislike  This is not the case with asynchronous replication

Fully automated failback procedure >

Automatic failback

Like  After a failure when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server

Dislike  This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required for resynchronizing a failed server. The application may even be stopped on the only remaining server during the resynchronization of the failed server

Replication of any type of data >

Any replicated data

Like  The replication works for databases but also for any other files which must be replicated

Dislike  This is not the case for replication at the database level

File replication vs disk replication >

File replication vs disk replication

Like  The replication is based on file directories that can be located anywhere (even in the system disk)

Dislike  This is not the case with disk replication where special application configuration must be made to put the application data in a special disk

File replication vs shared disk >

File replication vs shared disk

Like  The servers can be put in two remote sites

Dislike  This is not the case with shared disk solutions

Remote sites and virtual IP address >

Remote sites

Like  All SafeKit clustering features are working for 2 servers in remote sites. Replication requires an extended LAN type network (latency = performance of synchronous replication, bandwidth = performance of resynchronization after failure).

Like  If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit is working with rerouting at level 2

Like  If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer with the "health check" of SafeKit.

Quorum >

Quorum

Like  The solution works with only 2 servers and for the quorum (network isolation between both sites), a simple split brain checker to a router is offered to support a single execution of the critical application

Dislike  This is not the case for most clustering solutions where a 3rd server is required for the quorum

Active/active cluster >

Active active mirror cluster

Like  The secondary server is not dedicated to the restart of the primary server. The cluster can be active-active by running 2 different mirror modules

Dislike  This is not the case with a fault-tolerant system where the secondary is dedicated to the execution of the same application synchronized at the instruction level

Uniform high availability solution >

Uniform high availability solution

Like  SafeKit implements a mirror cluster with replication and failover. But it also implements a farm cluster with load balancing and failover.

Like  Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration, administration with the SafeKit console or with the command line interface). This is unique on the market

Dislike  This is not the case with an architecture mixing different technologies for load balancing, replication and failover

RTO / RPO >

Simple configuration with a web console

Like  SafeKit implements quick application restart in case of failure: around 1 minute or less (see RTO/RPO here)

Dislike  Quick application restart is not ensured with full virtual machines replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor with a recovery time depending on the OS reboot as with VMware HA or Hyper-V cluster

Evidian SafeKit farm cluster with load balancing and failover

All clustering features >

All clustering features

Like  The solution includes all clustering features: virtual IP address, load balancing on client IP address or on sessions, monitoring of server / network / software failures, automatic application restart with a quick recovery time and a replication option with a mirror module

Dislike  This is not the case with other load balancing solutions. They are able to make load balancing but they do not include a full clustering solution with restart scripts and automatic application restart in case of failure. They do not offer a replication option

Like   The cluster configuration is very simple and made by means of application modules. There is no domain controller or active directory to configure on Windows. The solution works on Windows and Linux

Remote sites and virtual IP address >

Remote sites

Like   If servers are connected to the same IP network through an extended LAN between remote sites, the virtual IP address of SafeKit is working with load balancing at level 2

Like   If servers are connected to different IP networks between remote sites, the virtual IP address can be configured at the level of a load balancer with the help of the SafeKit health check. Thus you can implement load balancing but also all the clustering features of SafeKit, in particular monitoring and automatic recovery of the critical application on application servers

Uniform high availability solution >

Uniform high availability solution

Like  SafeKit implements a farm cluster with load balancing and failover. But it also implements a mirror cluster with replication and failover.

Like  Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration, administration with the SafeKit console or with the command line interface). This is unique on the market

Dislike  This is not the case with an architecture mixing different technologies for load balancing, replication and failover

Software clustering vs hardware clustering >

A software cluster with SafeKit installed on two servers

Like  A simple software cluster with the SafeKit package just installed on two servers

Hardware clustering with external shared storage

Dislike  Complex hardware clustering with external storage or network load balancers

Shared nothing vs a shared disk cluster >

SafeKit shared-nothing cluster: easy to deploy even in remote sites

Like  SafeKit is a shared-nothing cluster: easy to deploy even in remote sites

Shared disk cluster: complex to deploy

Dislike  A shared disk cluster is complex to deploy

Application High Availability vs Full Virtual Machine High Availability >

SafeKit application high availability supports hardware failure, software failure, human errors

Like  Application HA supports hardware failure and software failure with a quick recovery time (RTO around 1 minute or less).

Dislike  Application HA requires defining restart scripts per application and the folders to replicate (SafeKit application modules).

Virtual machines high availability supports only hardware failure with a recovery time depending on the OS reboot

Dislike  Full virtual machines HA supports only hardware failure with a VM reboot and a recovery time depending on the OS reboot.

Like  No restart scripts to define with full virtual machines HA (SafeKit hyperv.safe or kvm.safe modules). Hypervisors are active/active with just multiple virtual machines.

High availability vs fault tolerance >

SafeKit high availability vs fault-tolerance

Like  No dedicated server with SafeKit. Each server can be the failover server of the other one.
Software failure with restart in another OS environment.
Smooth upgrade of application and OS possible server by server (version N and N+1 can coexist)

Fault tolerance system

Dislike  Secondary server dedicated to the execution of the same application synchronized at the instruction level.
Software exception on both servers at the same time.
Smooth upgrade not possible

Synchronous replication vs asynchronous replication >

SafeKit synchronous replication with no data loss in case of failure

Like  SafeKit implements real-time synchronous replication with no data loss in case of failure

Asynchronous replication with data loss on failure

Dislike  With asynchronous replication, there is data loss on failure

Byte-level file replication vs block-level disk replication >

SafeKit cluster with byte-level file replication: simply replicates directories even in the system disk

Like  SafeKit implements real-time byte-level file replication and is simply configured with application directories to replicate even in the system disk

Cluster with block-level disk replication: complex and requires putting application data in a special disk

Dislike  Block-level disk replication is complex to configure and requires putting application data in a special disk

Heartbeat, failover and quorum to avoid 2 master nodes >

Simple quorum in a SafeKit cluster with a split brain checker configured on a router

Like  To avoid 2 masters, SafeKit proposes a simple split brain checker configured on a router

Complex quorum in other clusters: third machine, special quorum disk, remote hardware reset

Dislike  To avoid 2 masters, other clusters require a complex configuration with a third machine, a special quorum disk, a special interconnect

Virtual IP address
primary/secondary, network load balancing, failover >

No special network configuration in a SafeKit cluster

Like  No dedicated proxy servers and no special network configuration are required in a SafeKit cluster for virtual IP addresses

Special network configuration in other clusters

Dislike  Special network configuration is required in other clusters for virtual IP addresses. Note that SafeKit offers a health check adapted to load balancers

Demonstrations of SafeKit High Availability Software

SafeKit Webinar

This webinar presents Evidian SafeKit in 10 minutes.

In this webinar, you will understand:

  • mirror and farm clusters
  • cost savings against hardware clustering solutions
  • best use cases
  • the integration process for a new application

Microsoft SQL Server Cluster

This video shows a mirror module configuration with synchronous real-time replication and failover.

The file replication and the failover are configured for Microsoft SQL Server, but they work in the same manner for other databases.

Free trial here

Apache Cluster

This video shows a farm module configuration with load balancing and failover.

The load balancing and the failover are configured for Apache, but they work in the same manner for other web services.

Free trial here

Hyper-V Cluster

This video shows a Hyper-V cluster with full replication of virtual machines.

Virtual machines can run on both Hyper-V servers and they are restarted in case of failure.

Free trial here

SafeKit Training

Introduction

  1. Overview / pptx
  2. Competition / pptx

Installation, Set-up, Administration

  1. Install and setup / pptx
  2. Web console / pptx
  3. Command line / pptx