How to implement a Linux KVM cluster with the SafeKit software between two redundant servers without a shared disk?
Replication and failover of virtual machines (VM)
The solution for KVM
Evidian SafeKit brings high availability to KVM, the free hypervisor included in Linux, between two redundant servers.
This article explains how to quickly implement a KVM cluster without a shared disk and without specific skills.
Several virtual machines can be replicated and can run on both Linux hypervisors with crossed replication and mutual takeover.
With this solution, there is no need to configure restart scripts or to define virtual IP addresses for each application.
A generic product
Note that SafeKit is a generic product on Windows and Linux.
With the same product, you can implement real-time replication and failover of any file directory and service, databases, complete Hyper-V or KVM virtual machines, Docker, Kubernetes, and Cloud applications.
This platform-agnostic solution is ideal for a partner reselling a critical application who wants to offer an easy-to-deploy redundancy and high availability option to many customers.
With many references won by partners in many countries, SafeKit has proven to be the easiest solution to implement for the redundancy and high availability of building management, video management, access control and SCADA software...
The following steps are described for one virtual machine (VM) inside one mirror module. Each replicated VM runs in an independent mirror module with a primary server that can be either the KVM server 1 or the KVM server 2.
Step 1. Real-time replication
Server 1 (PRIM) runs one VM. SafeKit replicates in real time the VM files (virtual hard disk, VM configuration). Only changes made in the files are replicated across the network.
The replication is synchronous: unlike asynchronous replication, no data is lost on failure.
You just have to configure the VM directory name in SafeKit. There are no prerequisites on disk organization; the directory may even be located on the system disk.
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit restarts the VM on Server 2. KVM finds the files replicated by SafeKit up to date on Server 2.
The VM continues to run on Server 2 by locally modifying its files that are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (set to 30 seconds by default) plus the VM reboot time.
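For example, with the default 30-second detection time and a VM that boots in about 60 seconds, the application is available again roughly 90 seconds after the failure; these durations are only illustrative and depend on your VM and OS.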
Step 3. Automatic failback
Failback involves restarting Server 1 after fixing the problem that caused it to fail. SafeKit automatically resynchronizes the VM files.
This reintegration takes place without disturbing the VM, which can continue running on Server 2.
Step 4. Back to normal
After reintegration, the VM files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the VM running on Server 2 and SafeKit replicating updates to Server 1.
If the administrator wishes the VM to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
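For example, assuming the module is named kvm, the swap can also be requested from the command line on a node:
$SAFE/safekit swap -m kvm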
Choose between redundancy at the application level or at the virtual machine level
Redundancy at the application level
In this type of solution, only the application data are replicated, and only the application is restarted in case of failure.
With this solution, restart scripts must be written to restart the application (a minimal example is sketched at the end of this section).
We deliver application modules to implement redundancy at the application level. They are preconfigured for well known applications and databases. You can customize them with your own services, data to replicate, application checkers. And you can combine application modules to build advanced multi-level architectures.
This solution is platform agnostic and works with applications inside physical machines, virtual machines, in the Cloud. Any hypervisor is supported (VMware, Hyper-V...).
Solution for a new application (restart scripts to write): Windows, Linux
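As an illustration of such restart scripts, here is a minimal start_prim sketch for a hypothetical myapp service managed by systemd (myapp is an assumption for the example, not part of a delivered module):
#!/bin/sh
# minimal start_prim sketch: start the hypothetical myapp service on the primary node
echo "Running start_prim $*"
systemctl start myapp || $SAFE/safekit printe "myapp start failed"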
Redundancy at the virtual machine level
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure.
The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.
The KVM configuration is presented with a virtual machine named vm1 and containing the application to restart in case of failure.
You will have to repeat this configuration for every VM that you want to replicate and restart. SafeKit supports up to 25 virtual machines.
1. Prerequisites
The vm1 virtual machine image is in the file /var/lib/libvirt/images/vm1.qcow2. Before configuring SafeKit, you have to apply the following changes.
On node 1:
Stop vm1:
virsh shutdown vm1
Create a vm1/ directory:
mkdir -p /var/lib/libvirt/images/vm1/
Copy the vm1 image to the new location:
cp -a /var/lib/libvirt/images/vm1.qcow2 /var/lib/libvirt/images/vm1/
The original vm1 image can be deleted as soon as tests with the new location are successful.
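Before deleting the original image, you may want to test the VM at its new location; the following commands are an illustrative check, including the update of the disk path in the domain definition if it still references the old file:
virsh edit vm1      # if needed, change <source file=.../> to /var/lib/libvirt/images/vm1/vm1.qcow2
virsh start vm1     # test the VM with the new location
virsh shutdown vm1  # stop it again: vm1 must be stopped before the SafeKit configuration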
3. Configure node addresses
After entering the IP addresses of the two nodes, click on the red floppy disk to save the configuration.
If the node1 or node2 background color is red, check the connectivity of the browser to both nodes and check the firewall on both nodes for troubleshooting.
This operation writes the IP addresses into the cluster.xml file on both nodes (more information in the training with the command line).
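As an illustration, the saved cluster.xml file has the following form (the node names and IP addresses below are placeholders to adapt to your network):
<cluster>
  <lans>
    <lan name="default">
      <node name="node1" addr="192.168.1.10"/>
      <node name="node2" addr="192.168.1.11"/>
    </lan>
  </lans>
</cluster>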
4. Choose the module
In the Configuration tab, click on the kvm.safe module.
The console finds xxx.safe in the 'Application_Modules/demo/' directory on the server side if you dropped a module there during installation.
5. Configure the module
Set VM_PATH to the root path of the replicated directory (/var/lib/libvirt/images).
Set VM_NAME to the name of the virtual machine (vm1).
We assume that the vm1 files are in /var/lib/libvirt/images/vm1/ (see prerequisites). This directory will be replicated in real time by SafeKit.
This operation writes the configuration into the userconfig.xml file on both nodes (more information in the training with the command line).
6. Verify successful configuration
Check the success message (green) on both nodes and click Next.
7. Start the node with up-to-date data
If node 1 has the up-to-date vm1/ replicated directory, select it and start it.
When node 2 is started, all the data in vm1/ will be copied from node 1 to node 2.
If you make the wrong choice, you run the risk of synchronizing outdated data on both nodes.
It is also assumed that the vm1 virtual machine is stopped on node 1 so that SafeKit installs the replication mechanisms and then starts vm1 in the start_prim script.
8. Wait for the transition to ALONE (green)
Node 1 should reach the ALONE (green) state, which means that the start_prim script has been executed on node 1.
If the status is ALONE (green) and vm1 is not started, check output messages of start_prim in the Application Log of node 1.
If node 1 does not reach ALONE (green) state, analyze why with the Module Log of node 1.
If the cluster is in the WAIT (red) not uptodate - STOP (red) not uptodate state, stop the WAIT node and force its start as primary.
9. Start node 2
Start node 2 with its contextual menu.
Wait for the SECOND (green) state.
Node 2 stays in the SECOND (magenta) state while resynchronizing the vm1/ replicated directory (copy from node 1 to node 2).
This may take a while depending on the size of files to resynchronize in vm1/ and the network bandwidth.
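As an order of magnitude, resynchronizing a 20 GB image over a dedicated 1 Gb/s link takes at least 20 × 8 ≈ 160 seconds at full throughput; real durations are usually longer.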
To see the progress of the copy, check the Module Log of node 2 with the verbose option, without forgetting to refresh the window.
10. Verify that the cluster is operational
Check that the cluster is green/green with vm1 running on the PRIM node and not running on the SECOND node.
Only changes inside files are replicated in real time in this state.
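To observe the replication, you can for example create a test file in the replicated directory on the PRIM node and check that it appears on the SECOND node (an illustrative check assuming the default paths of this article; delete the file afterwards):
echo test > /var/lib/libvirt/images/vm1/replication_check.txt   # on the PRIM node
ls -l /var/lib/libvirt/images/vm1/replication_check.txt         # on the SECOND node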
11. Automatically start the module at boot
Apply 'Configure boot start' on node 1 to configure the automatic start of the module at boot.
Redo the same configuration on node 2.
Do this configuration once the high availability solution is working properly.
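The same setting should also be available from the command line; assuming the module is named kvm, the command is of the form:
$SAFE/safekit boot -m kvm on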
12. Testing
Stop the PRIM node by scrolling down its contextual menu and clicking Stop.
Verify that there is a failover on the SECOND node which should become ALONE (green).
Check the restart of vm1 with the KVM tools (see the commands below).
If vm1 is not started on node 2 while the state is ALONE (green), check the output messages of the start_prim script in the Application Log of node 2.
If ALONE (green) is not reached, analyze why with the Module Log of node 2.
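On node 2, the following virsh commands are a simple way to check the restart of vm1:
virsh list --all     # vm1 should be listed as running
virsh domstate vm1   # should print: running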
In the Advanced Configuration tab, you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml.
If you change these internal files, you must apply the new configuration with a right click on the module icon on the left side: the interface will let you redeploy the modified files on both servers.
userconfig.xml
<!-- Mirror Architecture with Real Time File Replication and Failover for KVM -->
<!DOCTYPE safe>
<safe>
  <!-- Set value to the path of the virtual machines repository -->
  <macro name="VM_PATH" value="/var/lib/libvirt/images" />
  <!-- Set value to the name of the virtual machine -->
  <macro name="VM_NAME" value="vm1" />
  <service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
    <!-- Heartbeat Configuration -->
    <heart>
      <heartbeat name="">
      </heartbeat>
    </heart>
    <!-- File Mirroring Configuration -->
    <rfs mountover="off" async="second" locktimeout="200" nbrei="3">
      <replicated dir="%VM_PATH%/%VM_NAME%" mode="read_only">
      </replicated>
      <!-- Uncomment for replicating the directory that contains the snapshot xml files of the virtual machine
      <replicated dir="/var/lib/libvirt/qemu/snapshot/%VM_NAME%" mode="read_only">
      </replicated>
      -->
    </rfs>
    <!-- User scripts Configuration -->
    <user>
      <var name="VM_PATH" value="%VM_PATH%/%VM_NAME%" />
      <var name="VM_NAME" value="%VM_NAME%" />
    </user>
  </service>
</safe>
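With the values above, %VM_PATH%/%VM_NAME% expands to /var/lib/libvirt/images/vm1, the directory replicated in real time. The variables defined in the <user> section are made available as environment variables to the start_prim and stop_prim scripts below; this is how the scripts get $VM_NAME.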
start_prim
#!/bin/sh
# Script called on the primary server for starting the application
# For logging into the SafeKit log use:
# $SAFE/safekit printi | printe "message"
# stdout goes into the Application log
echo "Running start_prim $*"
res=0
# Start VM_NAME
virsh start $VM_NAME
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
if [ "x$state" = "x" ] ; then
  res=1
  $SAFE/safekit printe "$VM_NAME not found"
else
  i=1
  # wait up to 25 seconds for the VM to reach the running state
  while [ $i -le 5 ] && [ "x$state" != "xrunning" ]; do
    sleep 5
    state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
    i=$((i+1))
  done
  if [ "x$state" != "xrunning" ] ; then
    res=1
    $SAFE/safekit printe "$VM_NAME start failed"
  fi
fi
if [ $res -ne 0 ] ; then
  $SAFE/safekit printe "start_prim failed"
  # the VM start is considered critical: stop the module so that a failover can occur
  # (comment out the following line if it is not critical)
  $SAFE/safekit stop -i "start_prim"
fi
stop_prim
#!/bin/sh
# Script called on the primary server for stopping the application
# For logging into the SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
#   call standard application stop
#
# - force stop ($1=force)
#   kill application's processes
#
#----------------------------------------------------------
# stdout goes into the Application log
echo "Running stop_prim $*"
res=0
# Stop VM_NAME (graceful shutdown, retried below while the VM is still running)
virsh shutdown $VM_NAME
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
if [ "x$state" = "x" ] ; then
  res=1
  $SAFE/safekit printe "$VM_NAME not found"
else
  i=1
  while [ $i -le 5 ] && [ "x$state" = "xrunning" ]; do
    # Stop VM_NAME
    virsh shutdown $VM_NAME
    sleep 5
    state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
    i=$((i+1))
  done
  if [ "x$state" = "xrunning" ] ; then
    res=1
    $SAFE/safekit printe "$VM_NAME stop failed"
  fi
fi
# default: no additional action on force stop
[ "$1" = "force" ] && exit 0
[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"
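If a hard stop of the VM is acceptable in force mode, the default no-action can be replaced by an immediate power-off. The following sketch assumes that losing the VM state is acceptable (virsh destroy is equivalent to pulling the plug); it would be placed right after the echo line so that force mode skips the graceful shutdown:
# force stop ($1=force): hard power-off of the VM instead of a graceful shutdown
if [ "$1" = "force" ] ; then
  virsh destroy $VM_NAME
  exit 0
fi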
SafeKit Modules for Plug&Play Redundancy and High Availability Solutions
Video management, access control, building management [+]
Life safety directly depends on the proper execution of security software. That's why it needs redundancy and high availability. SafeKit is recognized as the simplest redundancy solution by our partners, who have deployed it in:
"SafeKit by Evidian is a professional solution making easy the redundancy of Milestone video management software. The solution is easy to deploy, easy to maintain and can be added on existing installation. We have assisted integrators to deploy the solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides great support."
TV broadcasting [+]
Harmonic is using SafeKit as a software OEM high availability solution and deploys it with its TV broadcasting solutions over satellite, terrestrial, cable and IPTV networks.
“SafeKit is the ideal application clustering solution for a software publisher looking for a simple and economical high availability software. We are deploying SafeKit worldwide and we currently have more than 80 SafeKit clusters on Windows with our critical TV broadcasting application through terrestrial, satellite, cable and IP-TV. SafeKit implements the continuous and real-time replication of our database as well as the automatic failover of our application for software and hardware failures. Without modifying our application, it was possible for us to customize the installation of SafeKit. Since then, the time of preparation and implementation has been significantly reduced.”
Finance [+]
The European Society of Warranties and Guarantees in Natixis uses SafeKit as a high availability solution for its applications.
Over 30 SafeKit clusters are deployed on Unix and Windows in Natixis.
Over 20 SafeKit clusters are deployed on Linux and Windows with Oracle.
Testimonial of Fives Syleps:
"The automated factories that we equip rely on our ERP. It is not possible that our ERP is out of service due to a computer failure. Otherwise, the whole activity of the factory stops.
We chose the Evidian SafeKit high availability product because it is an easy to use solution. It is implemented on standard servers and does not require the use of shared disks on a SAN and load balancing network boxes.
It allows servers to be put in remote computer rooms. In addition, the solution is homogeneous for Linux and Windows platforms. And it provides 3 functionalities: load balancing between servers, automatic failover and real-time data replication.”
Tony Myers, Director of Business Development says:
"By developing applications for air traffic control, Copperchase is in one of the most critical business activities. We absolutely need our applications to be available all the time. We have found with SafeKit a simple and complete clustering solution for our needs. This software combines in a single product load balancing, real time data replication with no data loss and automatic failover. This is why, Copperchase deploys SafeKit for air traffic control in airports in the UK and the 30 countries where we are present."
Over 25 SafeKit clusters are deployed on Linux with Oracle.
Peter Knight, Sales Manager says:
"Business continuity and disaster recovery are a major concern for our Locus banking application deployed in numerous Credit Unions around Ireland and the UK. We have found with SafeKit a simple and robust solution for high availability and synchronous replication between two servers with no data loss. With this software solution, we are not dependent on a specific and costly hardware clustering solution. It is a perfect tool to provide a software high availability option to an application of a software vendor."
20 SafeKit clusters are deployed on Windows and Linux.
Stéphane Guilmin, RATP, Project manager says:
"Automation of line 1 of the Paris subway is a major project for RATP, requiring a centralized command room (CCR) designed to resist IT failures. With SafeKit, we have three distinct advantages to meet this need. Firstly, SafeKit is a purely software solution that does not demand the use of shared disks on a SAN and network boxes for load balancing. It is very simple to separate our servers into separate machine rooms. Moreover, this clustering solution is homogeneous for our Windows and Linuxplatforms. SafeKit provides the three functions that we needed: load balancing between servers, automatic failover after an incident and real time data replication."
And also, Philippe Marsol, Atos BU Transport, Integration Manager says:
“SafeKit is a simple and powerful product for application high availability. We have integrated SafeKit in our critical projects like the supervision of Paris metro Line 4 (the control room) or Marseille Line 1 and Line 2 (the operations center). Thanks to the simplicity of the product, we gained time for the integration and validation of the solution and we had also quick answers to our questions with a responsive Evidian team.”
Over 30 SafeKit clusters are deployed on Windows with SQL Server.
Marc Pellas, CEO says:
"SafeKit perfectly meets the needs of a software vendor. Its main advantage is that it brings in high availability through a software option that is added to our own multi-platform software suite. This way, we are not dependent on a specific and costly hardware clustering solution that is not only difficult to install and maintain, but also differs according to client environments. With SafeKit, our firefighter call centers are run with an integrated software clustering solution, which is the same for all our customers, is user friendly and for which we master the installation up to after-sales support."
14 SafeKit clusters are deployed on Windows and Linux.
Alexandre Barth, Systems administrator says:
"Our production team implemented the SafeKit solution without any difficulty on 14 Windows and Linux clusters. Our critical activity is thus secure, with high-availability and load balancing functions. The advantages of this product are easy deployment and administration of clusters, on the one hand, and uniformity of the solution in the face of heterogeneous operating systems, on the other hand."
Redundancy and High Availability Differentiators against Competition
Key differentiators of SafeKit vs Microsoft Hyper-V cluster and VMware HA
Application HA (SafeKit application modules) supports hardware failures and software failures with a quick recovery time (RTO around 1 minute or less), but it requires restart scripts per application and folders to replicate to be defined.
Full virtual machine HA supports only hardware failures, with a VM reboot and a recovery time depending on the OS reboot; in return, there are no restart scripts to define (SafeKit hyperv.safe or kvm.safe modules), and the hypervisors are active/active with just multiple virtual machines.
Key differentiators of SafeKit vs fault-tolerant systems
With SafeKit, there is no dedicated server: each server can be the failover server of the other one. A software failure is handled with a restart in another OS environment. A smooth upgrade of the application and of the OS is possible server by server (versions N and N+1 can coexist).
With a fault-tolerant system, the secondary server is dedicated to the execution of the same application, synchronized at the instruction level. A software exception occurs on both servers at the same time. A smooth upgrade is not possible.
Key differentiators of SafeKit vs load balancing clusters
With SafeKit, no dedicated proxy servers and no special network configuration are required for virtual IP addresses. Note that SafeKit also offers a health check adapted to external load balancers.
With other clusters, a special network configuration is required for virtual IP addresses.