Linux: The Simplest High Availability Cluster with Synchronous Replication and Failover

Evidian SafeKit brings high availability to Linux. This article explains how to quickly implement a Linux cluster without shared disk and without specific skills. The high availability module mirror.safe and a free trial are offered in the installation instructions section.

Note that you can implement real-time replication and failover of other applications (databases or others) with the same product: see examples of mirror modules here.

This clustering solution is recognized as the simplest to implement by our customers and partners. It is also a complete solution that addresses hardware failures (20% of problems), including the complete failure of a computer room; software failures (40% of problems), including smooth server-by-server upgrades; and human errors (40% of problems), thanks to its simplicity of administration.

How the Evidian SafeKit software simply implements Linux high availability with real-time synchronous replication and failover without shared disk

How does the Evidian SafeKit mirror cluster implement Linux high availability with synchronous replication and failover?

In the previous figure, server 1/PRIM runs the Linux services (any edition). Users are connected to the virtual IP address of the mirror cluster. SafeKit replicates in real time the files opened by the Linux services. Only changes inside the files are replicated across the network, thus limiting traffic (byte-level file replication). The names of the file directories containing the Linux services' data are simply configured in SafeKit. There are no prerequisites on disk organization for the two servers; directories to replicate may even be located on the system disk. SafeKit implements synchronous replication with no data loss on failure, contrary to asynchronous replication.
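For illustration, the replicated directories are simply declared in the <rfs> section of the module's userconfig.xml file (the full file is reproduced in the last section of this article):

<rfs>
   <replicated dir="/test1replicated" mode="read_only"/>
   <replicated dir="/test2replicated" mode="read_only"/>
</rfs>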

In case of server 1 failure, there is an automatic failover to server 2, with virtual IP failover and restart of the Linux services. Then, when server 1 is restarted, SafeKit implements an automatic failback with reintegration of the data, without stopping the Linux services on server 2. Finally, the system returns to synchronous replication between server 2 and server 1. The administrator can decide to swap the roles of primary and secondary and return to server 1 running the Linux services. The swap can also be done automatically by configuration.

Key differentiators of Linux replication and failover with the Evidian SafeKit mirror cluster

Evidian SafeKit mirror cluster with real-time file replication and failover

Synchronous replication

Pro: The real-time replication is synchronous, with no data loss on failure.

Con: This is not the case with asynchronous replication.

Automatic failback

Pro: After a server failure and a failover, the replication failback procedure is fully automatic on the failed server, without stopping the application on the only remaining server.

Con: This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required to resynchronize a failed server. The application may even have to be stopped on the only remaining server during the resynchronization of the failed server.

All clustering features

Pro: The solution includes all clustering features: server failure monitoring, network failure monitoring, software failure monitoring, automatic application restart with a quick recovery time, and a virtual IP address switched in case of failure to automatically reroute clients. A clustering configuration is simply made by means of a high availability application module. There is no domain controller or Active Directory to configure on Windows. The solution works on Windows and Linux.

Con: This is not the case with replication-only solutions such as replication at the database level.

Con: A quick application restart is not ensured with replication of full virtual machines. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor, with an unknown recovery time.

Replication of any type of data

Pro: The replication works for databases but also for any other files that must be replicated.

Con: This is not the case with replication at the database level.

File replication vs disk replication

Pro: The replication is based on file directories that can be located anywhere, even on the system disk.

Con: This is not the case with disk replication, where the application must be specially configured to put its data on a dedicated disk.

File replication vs shared disk

Pro: The servers can be put in two remote sites.

Con: This is not the case with shared disk solutions.

Remote sites

Pro: With remote sites, the solution works with only 2 servers; for the quorum (network isolation), a simple split-brain checker is offered.

Con: This is not the case with most clustering solutions, where a 3rd server is required for the quorum.

Pro: If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit works with rerouting at level 2.

Pro: If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer. SafeKit offers a health check: the load balancer is configured with a URL managed by SafeKit, which returns OK on the primary server and NOT FOUND otherwise. This solution is implemented for SafeKit in the cloud, but it can also be implemented with a load balancer on premises.

Con: A virtual IP address is not offered by replication-only solutions.

Uniform high availability solution

Pro: SafeKit implements a mirror cluster with replication and failover. It also implements a farm cluster with load balancing and failover. Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market.

Con: This is not the case with an architecture mixing different technologies for load balancing, replication and failover.


Installation of SafeKit for Linux high availability with synchronous replication and failover

Package installation on Linux

Install the SafeKit package on both Linux servers.
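A minimal installation sketch, assuming the package has been downloaded from the Evidian site (the .bin file name below is hypothetical; use the file you actually downloaded):

# On both Linux servers, as root
chmod +x safekitlinux_x_y_z.bin
./safekitlinux_x_y_z.bin
# SafeKit is installed under /opt/safekit by default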

Configuration instructions

The configuration is presented with the web console connected to 2 Windows servers, but the procedure is the same with 2 Linux servers.

Important: all the configuration is made from a single browser.

Launch the web console in a browser by connecting to http://localhost:9010 (next image)

Start the SafeKit web console for configuring the Linux module

Enter the IP address of the first node and click on Confirm (next image)

SafeKit web console - first node in the Linux cluster

Click on New node and enter the IP address of the second node (next image)

SafeKit web console - second node in the Linux cluster

Click on the red floppy disk to save the configuration (previous image)

In the Configuration tab, click on mirror.safe (xxx.safe in the next image), then enter mirror as the module name and click on Confirm

SafeKit web console - start configuration of Linux module

SafeKit web console - enter Linux module name

Click on Validate (next image)

SafeKit web console - enter Linux module nodes

Change the path of the replicated directories only if necessary (next image) and enter a virtual IP address. A virtual IP address is a new unused IP address in the same IP network as the IP addresses of the two nodes. The virtual IP address automatically switches in case of failover.

SafeKit web console - enter Linux parameters
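For illustration, if the two nodes were at 10.0.0.1 and 10.0.0.2 (example addresses, not from this article), 10.0.0.10 would be a valid virtual IP address. The value ends up in the <vip> section of the module's userconfig.xml, reproduced in the last section of this article:

<vip>
   <interface_list>
      <interface check="on" arpreroute="on">
         <real_interface>
            <virtual_addr addr="10.0.0.10" where="one_side_alias"/>
         </real_interface>
      </interface>
   </interface_list>
</vip>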

Click on Validate (previous image)

SafeKit web console - stop the Linux module before applying the configuration

Click on Configure (previous image)

SafeKit web console - check the green success message of the Linux configuration

Check the green success message on both servers and click on Next (previous image). On Linux, you may get an error at this step if the replicated directories are mount points. See this article to solve the problem.

SafeKit web console - select the Linux node with the up-to-date database

Select the node with the most up-to-date replicated directories and start it, so that the first resynchronization runs in the right direction (previous image). Before starting the cluster, we suggest you make a copy of the replicated directories to avoid any errors.

SafeKit web console - the first Linux node starts as primary and is alone

Start the second node (previous image), which becomes SECOND (green in the next image) after the resynchronization of all replicated directories (binary copy from node 1 to node 2).

SafeKit web console - the second Linux node starts as SECOND

The cluster is operational, with the Linux services running on the PRIM node and nothing running on the SECOND node (previous image). Only modifications inside files are replicated in real time in this state.

Be careful: components that are clients of the Linux services must be configured with the virtual IP address. The configuration can be made with a DNS name (if a DNS name has been created and associated with the virtual IP address).
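For example, a DNS name pointing to the virtual IP address can be simulated on a client machine with an /etc/hosts entry (the name and address below are assumptions for this example):

# /etc/hosts on a client machine
10.0.0.10   linux-cluster.example.com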

Tests

Check with Linux command lines that the Linux services are started on the primary server and stopped on the secondary server.
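A minimal check, assuming the Linux services are managed by systemd (the unit name myservice is hypothetical; substitute your own services):

# On the PRIM node: the service should be active
systemctl status myservice

# On the SECOND node: the same service should be inactive
systemctl status myservice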

Stop the PRIM node by scrolling down the menu of the primary node and clicking on Stop. Check that there is a failover to the SECOND node. And check the failover of the Linux services with Linux command lines.
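The same test can also be run with the SafeKit command line interface; a sketch assuming the module is named mirror as configured above and the default Linux installation path /opt/safekit:

# On the primary node: stop the module to trigger a failover
/opt/safekit/safekit stop -m mirror

# On either node: display the state of the module
/opt/safekit/safekit state -m mirror

# Restart the stopped node; it resynchronizes and becomes SECOND
/opt/safekit/safekit start -m mirror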

To understand what happens in the cluster, check the SafeKit logs of the primary server and the secondary server.

To see the module log of the primary server (next image):

SafeKit web console - Module Log of the PRIM Linux server

To see the application log of the primary server (next image):

SafeKit web console - Application Log of the PRIM Linux server

To see the logs of the secondary server (previous image), click on W12R2server75/SECOND (it will become blue) on the left side and repeat the same operations. In the secondary module log, you will find the volume and the reintegration time of replicated data.

Advanced configuration

In the Advanced Configuration tab (next image), you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml (next image, on the left side). If you change these internal files here, you must apply the new configuration by right-clicking on the blue icon/xxx on the left side (next image): the interface will let you redeploy the modified files on both servers.

SafeKit web console - Advanced configuration of Linux module

Configure boot start (next image, on the right side) configures the automatic start of the module when the server boots. Do this configuration on both servers once the high availability solution is correctly running.

SafeKit web console - Automatic boot of Linux module

Support

To get support on the call desk of https://support.evidian.com, take 2 snapshots (2 .zip files), one for each server, and upload them in the call desk tool (next image).

SafeKit web console - Linux snapshots for support
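The snapshots may also be taken from the command line; a sketch assuming the module is named mirror, the default installation path /opt/safekit, and that your SafeKit version provides the snapshot command (check the safekit command usage):

# On each server
/opt/safekit/safekit snapshot -m mirror /tmp/snapshot_mirror.zip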

Internal files of the Linux mirror.safe module

userconfig.xml

<!DOCTYPE safe>
<safe>
   <service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
      <!-- Server Configuration -->
      <!-- Names or IP addresses on the default network are set during initialization in the console -->
      <heart pulse="700" timeout="30000">
         <heartbeat name="default" ident="flow"/>
      </heart>
      <!-- Virtual IP Configuration -->
      <!-- Replace
         * VIRTUAL_TO_BE_DEFINED by the name/IP of your virtual server 
      -->
      <vip>
         <interface_list>
            <interface check="on" arpreroute="on">
               <real_interface>
                  <virtual_addr addr="VIRTUAL_TO_BE_DEFINED" where="one_side_alias"/>
               </real_interface>
            </interface>
         </interface_list>
      </vip>
      <!-- Software Error Detection Configuration -->
      <!-- Replace
         * PROCESS_NAME by the name of the process to monitor
      -->
      <errd polltimer="10">
        <proc name="PROCESS_NAME" atleast="1" action="restart" class="prim" />
      </errd>
      <!-- File Replication Configuration -->
      <rfs mountover="off" async="second" acl="off" nbrei="3">
         <replicated dir="/test1replicated" mode="read_only"/>
         <replicated dir="/test2replicated" mode="read_only"/>
      </rfs>
      <!-- User scripts activation -->
      <user nicestoptimeout="300" forcestoptimeout="300" logging="userlog"/>
   </service>
</safe>

start_prim

#!/bin/sh
# Script called on the primary server for starting application

# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message" 

# stdout goes into Application log
echo "Running start_prim $*" 

res=0

# Fill with your application start call
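# Example (assumption: the application runs as a systemd service and
# the unit name "myservice" is hypothetical):
#
# systemctl start myservice
# res=$?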

if [ $res -ne 0 ] ; then
  $SAFE/safekit printe "start_prim failed"

  # uncomment to stop SafeKit when critical
  # $SAFE/safekit stop -i "start_prim"
fi

stop_prim

#!/bin/sh
# Script called on the primary server for stopping application

# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message" 

#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
#   call standard application stop
#
# - force stop ($1=force)
#   kill application's processes
#
#----------------------------------------------------------

# stdout goes into Application log
echo "Running stop_prim $*" 

res=0

# default: no action on forcestop
[ "$1" = "force" ] && exit 0

# Fill with your application stop call
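# Example (assumption: the application runs as a systemd service and
# the unit name "myservice" is hypothetical):
#
# systemctl stop myservice
# res=$?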

[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"