Evidian SafeKit provides a high availability cluster with real-time replication and failover in Microsoft Azure, the Microsoft cloud. This article explains how to quickly implement such a cluster in Microsoft Azure. A free trial is offered in the installation instructions section.
This clustering solution is recognized as the simplest to implement by our customers and partners. It is also a complete solution: it addresses hardware failures (20% of problems), including the complete failure of a computer room; software failures (40% of problems), with software error detection and automatic restart; and human errors (40% of problems), thanks to its simplicity of administration.
In the previous figure, server 1/PRIM runs the critical application. Users are connected to the virtual IP address of the mirror cluster. SafeKit replicates the files opened by the critical application in real time. Only changes in the files are replicated across the network, thus limiting traffic (byte-level file replication). The names of the file directories containing critical data are simply configured in SafeKit. There are no prerequisites on disk organization for the two servers; the directories to replicate may even be located on the system disk. SafeKit implements synchronous replication with no data loss on failure, contrary to asynchronous replication.
In case of server 1 failure, there is an automatic failover to server 2 with restart of the critical application. Then, when server 1 is restarted, SafeKit implements an automatic failback with reintegration of the data, without stopping the critical application on server 2. Finally, the system returns to synchronous replication between server 2 and server 1. The administrator can decide to swap the roles of primary and secondary and return to server 1 running the critical application. The swap can also be done automatically by configuration.
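The roles can also be checked from a shell on each node. A minimal sketch, assuming the state query of the safekit command-line interface, a Linux installation in /opt/safekit and a module named mirror as in the rest of this article (check the SafeKit User's Guide for the exact syntax of your version):

# Display the state of the mirror module on the local node
# (e.g. PRIM on the primary, SECOND on the secondary)
/opt/safekit/safekit state -m mirror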
The Evidian SafeKit mirror cluster is registered in the Microsoft Azure quickstart templates.
To deploy the Evidian SafeKit high availability cluster with replication and failover in Microsoft Azure, just click on the following button which deploys everything:
After the click:
After deployment, click on 'Microsoft.Template' (previous image), then go to the output panel and retrieve the information required to connect to the cluster.
If you want to connect to the virtual machines through SSH (Linux) or Remote Desktop (Windows), you can use the SafeKit web console to find the IP addresses or DNS names of the VMs (next images). Use the user/password entered during the template configuration to access the VMs.
In terms of VMs, this template deploys:
In terms of load balancer, this template deploys:
The load balancer must be configured to periodically send health packets to the virtual machines. For that, SafeKit provides a health probe which runs inside the virtual machines and which answers OK only on the primary server, where the critical application is running.
You must configure the Microsoft Azure load balancer with:
For more information, see the configuration of the Microsoft Azure load balancer.
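As an illustration, the health probe and the load-balancing rule could be created with the Azure CLI as in the sketch below. The resource names (MyResourceGroup, MyLoadBalancer, MyBackendPool) and the ports are hypothetical placeholders; in particular, replace the probe port with the port actually used by the SafeKit health probe in your deployment:

# Health probe: the Azure load balancer periodically tests this TCP port;
# inside the VMs, the SafeKit health probe answers only on the primary server
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name safekit-probe \
  --protocol Tcp \
  --port 9010

# Load-balancing rule: forwards the virtual IP traffic of the application
# port to the backend pool, gated by the probe above (hypothetical ports)
az network lb rule create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name safekit-rule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --backend-pool-name MyBackendPool \
  --probe-name safekit-probe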
The network security must be configured to enable communications for the following protocols and ports:
On both Windows servers
On both Linux servers
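As a sketch, the web console ports used later in this article (9010 for http, 9453 for https) could be opened with the Azure CLI as follows. The resource names are hypothetical placeholders, and the other SafeKit ports of the lists above (replication, heartbeats) must be added to the rule:

# Allow inbound traffic to the SafeKit web console ports; complete
# --destination-port-ranges with the other SafeKit ports listed above
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyNetworkSecurityGroup \
  --name safekit-console \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 9010 9453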
The configuration is presented with the web console connected to two Windows servers, but it is the same with two Linux servers.
Important: all the configuration is made from a single browser.
It is recommended to configure the web console in the https mode by connecting to https://<IP address of 1 VM>:9453 (next image). In this case, you must first configure the https mode by using the wizard described in the User's Guide: see "11.1 HTTPS Quick Configuration with the Configuration Wizard".
Or you can use the web console in the http mode by connecting to http://<IP address of 1 VM>:9010 (next image).
Note that you can also make the configuration with DNS names, especially if the IP addresses are not static.
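Before opening a browser, you can check that the console answers on a VM; a minimal sketch assuming curl is installed on your workstation:

# https mode (-k accepts the self-signed certificate of a default setup)
curl -k https://<IP address of 1 VM>:9453
# http mode
curl http://<IP address of 1 VM>:9010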
Enter the IP address of the first node and click on Confirm (next image).
Click on New node and enter the IP address of the second node (next image).
Click on the red floppy disk to save the configuration (previous image).
In the Configuration tab, click on mirror.safe, then enter mirror as the module name and click on Confirm (next images, with mirror instead of xxx).
Click on Validate (next image).
Change the path of the replicated directories only if necessary (next image).
Do not configure a virtual IP address (next image): this configuration is already made in the Microsoft Azure load balancer. This section is useful for on-premises configuration only.
If a process is defined in the Process Checker section (next image), it will be monitored on the primary server with a restart action in case of failure. The services will be stopped and restarted locally on the primary server if this process disappears from the list of running processes. After 3 unsuccessful local restarts, the module is stopped on the local server and there is a failover to the secondary server. As a consequence, the health probe answers OK to the Microsoft Azure load balancer on the new primary server, and the virtual IP traffic is switched to the new primary server.
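Conceptually, the test made by the Process Checker on the primary server is equivalent to the following shell sketch (an illustration of the logic only, not SafeKit's implementation):

# The module is healthy while at least one instance of PROCESS_NAME
# (the name configured in the Process Checker section) is running
if pgrep -x "PROCESS_NAME" > /dev/null
then
  echo "process present: the health probe answers OK to the load balancer"
else
  echo "process missing: services restarted locally (failover after 3 failed restarts)"
fi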
start_prim and stop_prim (next image) contain the start and stop of the services.
Click on Validate (previous image).
Click on Configure (previous image).
Check the green success message on both servers and click on Next (previous image). On Linux, you may get an error at this step if the replicated directories are mount points. See this article to solve the problem.
Select the node with the most up-to-date replicated directories and click on 'start it' to make the first resynchronization in the right direction (previous image). Before starting the cluster, we suggest making a copy of the replicated directories to avoid any errors.
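On Linux, for example, the copy can be a simple recursive backup of the replicated directories (the paths below come from the sample userconfig.xml shown later in this article; adapt them to your own directories):

# Backup of the replicated directories before the first start of the cluster
cp -a /test1replicated /test1replicated.bak
cp -a /test2replicated /test2replicated.bak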
Start the second node (previous image), which becomes SECOND green (next image) after the resynchronization of all replicated directories (binary copy from node 1 to node 2).
The cluster is operational, with the services running on the PRIM node and nothing running on the SECOND node (previous image). Only modifications inside files are replicated in real time in this state.
Be careful: components that are clients of the services must be configured with the virtual IP address. The configuration can be made with a DNS name (if a DNS name has been created and associated with the virtual IP address).
Check with the Microsoft Management Console (MMC) on Windows or with command lines on Linux that the services are started on the primary server and stopped on the secondary server.
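On Linux, for instance, the check can be made from a shell on each node; a sketch assuming systemd and a hypothetical service name myservice:

# Expected result: "active" on the PRIM node, "inactive" on the SECOND node
systemctl is-active myservice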
Stop the PRIM node by scrolling down the menu of the primary node and clicking on Stop. Check that there is a failover to the SECOND node. Then check the failover of the services with the Microsoft Management Console (MMC) on Windows or with command lines on Linux.
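The same failover test can be made from a command line. A minimal sketch, assuming the module-level stop/start commands of the safekit command-line interface, a Linux installation in /opt/safekit and the module name mirror used in this article (check the SafeKit User's Guide for the exact syntax of your version):

# On the PRIM node: stop the mirror module to force a failover;
# the other node restarts the critical application
/opt/safekit/safekit stop -m mirror
# Later, restart the module on this node to resynchronize it
/opt/safekit/safekit start -m mirror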
Read the module log to understand the reasons for a failover, for a waiting state on the availability of a resource, etc.
To see the module log of the primary server (next image):
Repeat the same operation to see the module log of the secondary server.
Read the application log to see the output messages of the start_prim and stop_prim scripts.
To see the application log of the primary server (next image):
Repeat the same operation to see the application log of the secondary server.
In the Advanced Configuration tab (next image), you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml (next image on the left side). If you make changes in the internal files here, you must apply the new configuration by a right click on the blue icon/xxx on the left side (next image): the interface will allow you to redeploy the modified files on both servers.
Configure boot start (next image on the right side) configures the automatic boot of the module when the server boots. Do this configuration on both servers once the high availability solution is correctly running. Note that, for synchronizing SafeKit at boot and at shutdown on both nodes, you must first start a command line as administrator and run .\addStartupShutdown.cmd in C:\safekit\private\bin.
To get support on the call desk of https://support.evidian.com, take two snapshots (two .zip files), one for each server, and upload them in the call desk tool (next image).
<!DOCTYPE safe>
<safe>
<service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
<!-- Server Configuration -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<heart pulse="700" timeout="30000">
<heartbeat name="default" ident="flow"/>
</heart>
<!-- Software Error Detection Configuration -->
<!-- Replace
* PROCESS_NAME by the name of the process to monitor
-->
<errd polltimer="10">
<proc name="PROCESS_NAME" atleast="1" action="restart" class="prim" />
</errd>
<!-- File Replication Configuration -->
<rfs async="second" acl="off" nbrei="3">
<replicated dir="c:\test1replicated" mode="read_only"/>
<replicated dir="c:\test2replicated" mode="read_only"/>
</rfs>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog"/>
</service>
</safe>
start_prim.cmd
@echo off
rem Script called on the primary server for starting application services
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem stdout goes into Application log
echo "Running start_prim %*"
set res=0
rem Fill with your services start call
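rem Example (hypothetical service name; adapt to your application):
rem net start "MyService"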
set res=%errorlevel%
if %res% == 0 goto end
:stop
"%SAFE%\safekit" printe "start_prim failed"
rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_prim"
:end
stop_prim.cmd
@echo off
rem Script called on the primary server for stopping application services
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem ----------------------------------------------------------
rem
rem 2 stop modes:
rem
rem - graceful stop
rem call standard application stop with net stop
rem
rem - force stop (%1=force)
rem kill application's processes
rem
rem ----------------------------------------------------------
rem stdout goes into Application log
echo "Running stop_prim %*"
set res=0
rem default: no action on forcestop
if "%1" == "force" goto end
rem Fill with your services stop call
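rem Example (hypothetical service name; adapt to your application):
rem net stop "MyService"
rem set res=%errorlevel%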
rem If necessary, uncomment to wait for the stop of the services
rem "%SAFEBIN%\sleep" 10
if %res% == 0 goto end
"%SAFE%\safekit" printe "stop_prim failed"
:end
<!DOCTYPE safe>
<safe>
<service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
<!-- Server Configuration -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<heart pulse="700" timeout="30000">
<heartbeat name="default" ident="flow"/>
</heart>
<!-- Software Error Detection Configuration -->
<!-- Replace
* PROCESS_NAME by the name of the process to monitor
-->
<errd polltimer="10">
<proc name="PROCESS_NAME" atleast="1" action="restart" class="prim" />
</errd>
<!-- File Replication Configuration -->
<rfs mountover="off" async="second" acl="off" nbrei="3" >
<replicated dir="/test1replicated" mode="read_only"/>
<replicated dir="/test2replicated" mode="read_only"/>
</rfs>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog"/>
</service>
</safe>
start_prim
#!/bin/sh
# Script called on the primary server for starting application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
# stdout goes into Application log
echo "Running start_prim $*"
res=0
# Fill with your application start call
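# Example (hypothetical commands; adapt to your application):
# systemctl start myservice
# res=$?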
if [ $res -ne 0 ] ; then
$SAFE/safekit printe "start_prim failed"
# uncomment to stop SafeKit when critical
# $SAFE/safekit stop -i "start_prim"
fi
stop_prim
#!/bin/sh
# Script called on the primary server for stopping application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
# call standard application stop
#
# - force stop ($1=force)
# kill application's processes
#
#----------------------------------------------------------
# stdout goes into Application log
echo "Running stop_prim $*"
res=0
# default: no action on forcestop
[ "$1" = "force" ] && exit 0
# Fill with your application stop call
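# Example (hypothetical commands; adapt to your application):
# systemctl stop myservice
# res=$?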
[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"
| 1 - OEM Software | 2 - Distributed Enterprise | 3 - Remote Sites |
|---|---|---|
| A software publisher uses SafeKit as an OEM software for high availability of its application | A distributed enterprise deploys SafeKit in many branches without specific IT skills | SafeKit is deployed in two remote sites without the need for replicated bays of disks through a SAN |
| “SafeKit is the ideal application clustering solution for a software publisher. We currently have deployed more than 80 SafeKit clusters worldwide with our critical TV broadcasting application.” | “WithNCompany has deployed in South Korea many SafeKit high availability solutions with the Hanwha Video Surveillance Platform. SafeKit is appreciated because the product is easy to install and very quickly deployed.” | “Thanks to a simple and powerful product, we gained time in the integration and validation of our critical projects like the supervision of Paris and Marseille metro lines (the control rooms).” |
In video surveillance systems, Evidian SafeKit implements high availability with synchronous replication and failover of the video surveillance platform.
Harmonic uses SafeKit as a software OEM high availability solution and deploys it with its TV broadcasting solutions over satellite, terrestrial, cable and IPTV.
Over 80 SafeKit clusters are deployed on Windows for the replication of the Harmonic database and the automatic failover of the critical application.
Philippe Vidal, Product Manager at Harmonic, says:
“SafeKit is the ideal application clustering solution for a software publisher looking for a simple and economical high availability software. We are deploying SafeKit worldwide and we currently have more than 80 SafeKit clusters on Windows with our critical TV broadcasting application through terrestrial, satellite, cable and IP-TV. SafeKit implements the continuous and real-time replication of our database as well as the automatic failover of our application for software and hardware failures. Without modifying our application, it was possible for us to customize the installation of SafeKit. Since then, the time of preparation and implementation has been significantly reduced.”
The European Society of Warranties and Guarantees in Natixis uses SafeKit as a high availability solution for its applications.
Over 30 SafeKit clusters are deployed on Unix and Windows in Natixis.
Fives Syleps, the Sydel software editor, implements high availability of its ERP with SafeKit and deploys the solution in the food industry.
Over 20 SafeKit clusters are deployed on Unix with Oracle.
The air traffic control systems supplier Copperchase deploys SafeKit high availability in airports.
Over 20 SafeKit clusters are deployed on Windows.
Tony Myers, Director of Business Development, says:
"By developing applications for air traffic control, Copperchase is in one of the most critical business activities. We absolutely need our applications to be available all the time. We have found with SafeKit a simple and complete clustering solution for our needs. This software combines in a single product load balancing, real-time data replication with no data loss and automatic failover. This is why Copperchase deploys SafeKit for air traffic control in airports in the UK and the 30 countries where we are present."
The software vendor Wellington IT deploys SafeKit high availability with its banking application for Credit Unions in Ireland and the UK.
Over 25 SafeKit clusters are deployed on Linux with Oracle.
Peter Knight, Sales Manager, says:
"Business continuity and disaster recovery are a major concern for our Locus banking application deployed in numerous Credit Unions around Ireland and the UK. We have found with SafeKit a simple and robust solution for high availability and synchronous replication between two servers with no data loss. With this software solution, we are not dependent on a specific and costly hardware clustering solution. It is a perfect tool to provide a software high availability option to an application of a software vendor."
The Paris transport company (RATP) chose the SafeKit high availability and load balancing solution for the centralized control room of line 1 of the Paris subway.
20 SafeKit clusters are deployed on Windows and Linux.
Stéphane Guilmin, Project Manager at RATP, says:
"Automation of line 1 of the Paris subway is a major project for RATP, requiring a centralized command room (CCR) designed to resist IT failures. With SafeKit, we have three distinct advantages to meet this need. Firstly, SafeKit is a purely software solution that does not demand the use of shared disks on a SAN and network boxes for load balancing. It is very simple to separate our servers into separate machine rooms. Moreover, this clustering solution is homogeneous for our Windows and Unix platforms. SafeKit provides the three functions that we needed: load balancing between servers, automatic failover after an incident and real time data replication."
And also, Philippe Marsol, Integration Manager at Atos BU Transport, says:
“SafeKit is a simple and powerful product for application high availability. We have integrated SafeKit in our critical projects like the supervision of Paris metro Line 4 (the control room) or Marseille Line 1 and Line 2 (the operations center). Thanks to the simplicity of the product, we gained time for the integration and validation of the solution and we had also quick answers to our questions with a responsive Evidian team.”
The software integrator Systel deploys the SafeKit high availability solution in firefighter and emergency medical call centers.
Over 30 SafeKit clusters are deployed on Windows with SQL Server.
Marc Pellas, CEO, says:
"SafeKit perfectly meets the needs of a software vendor. Its main advantage is that it brings in high availability through a software option that is added to our own multi-platform software suite. This way, we are not dependent on a specific and costly hardware clustering solution that is not only difficult to install and maintain, but also differs according to client environments. With SafeKit, our firefighter call centers are run with an integrated software clustering solution, which is the same for all our customers, is user friendly and for which we master the installation up to after-sales support."
The ERP high availability and load balancing of the French army (DGA) are implemented with SafeKit.
14 SafeKit clusters are deployed on Windows and Linux.
Alexandre Barth, Systems Administrator, says:
"Our production team implemented the SafeKit solution without any difficulty on 14 Windows and Linux clusters. Our critical activity is thus secure, with high-availability and load balancing functions. The advantages of this product are easy deployment and administration of clusters, on the one hand, and uniformity of the solution in the face of heterogeneous operating systems, on the other hand."
Evidian SafeKit mirror cluster with real-time file replication and failover:
- All clustering features
- Synchronous replication
- Fully automated failback procedure
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites
- Quorum
- Active/active cluster
- Uniform high availability solution
Evidian SafeKit farm cluster with load balancing and failover:
- All clustering features
- Remote sites
- Uniform high availability solution
High availability architectures comparison

| Feature | SafeKit cluster | Other clusters |
|---|---|---|
| Software clustering vs hardware clustering (more information...) | | |
| Shared nothing vs a shared disk cluster (more information...) | | |
| Application high availability vs full virtual machine high availability (more information...) | Smooth upgrade of application and OS possible server by server (versions N and N+1 can coexist) | Smooth upgrade not possible |
| High availability vs fault tolerance | Software failure with restart in another OS environment; smooth upgrade of application and OS possible server by server (versions N and N+1 can coexist) | Software exception on both servers at the same time; smooth upgrade not possible |
| Synchronous replication vs asynchronous replication (more information...) | | |
| Byte-level file replication vs block-level disk replication (more information...) | | |
| Heartbeat, failover and quorum to avoid 2 master nodes (more information...) | | |
| Network load balancing (more information...) | | |