KVM: the simplest high availability cluster between two redundant servers without shared disk
Evidian SafeKit - synchronous replication, automatic failover and load balancing of virtual machines
Replication and failover of virtual machines (VM)
The solution for KVM
Evidian SafeKit brings high availability to KVM, the free hypervisor included in Linux, between two redundant servers.
This article explains how to quickly implement a KVM cluster without shared disk and without specific skills.
Several virtual machines can be replicated and can run on both Linux hypervisors with crossed replication and mutual takeover.
With this solution, there is no need to configure restart scripts and define virtual IP addresses for each application.
A generic product
Note that SafeKit is a generic product on Windows and Linux.
With the SafeKit product, you can implement real-time replication and failover of any file directory and service, database, complete Hyper-V or KVM virtual machines, Docker, Podman, K3S, or Cloud applications (see the module list).
Partners, the success with SafeKit
This platform-agnostic solution is ideal for a partner reselling a critical application who wants to provide an easy-to-deploy redundancy and high availability option to many customers.
With many references in many countries won by partners, SafeKit has proven to be the easiest solution to implement for redundancy and high availability of building management, video management, access control, SCADA software...
Building Management Software (BMS)
Video Management Software (VMS)
Electronic Access Control Software (EACS)
SCADA Software (Industry)
Step 1. Real-time replication
Server 1 (PRIM) runs one VM. SafeKit replicates in real time the VM files (virtual hard disk, VM configuration). Only changes made in the files are replicated across the network.
The replication is synchronous, with no data loss on failure, unlike asynchronous replication.
You just have to configure the VM directory name in SafeKit. There are no pre-requisites on disk organization. The directory may be located in the system disk.
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit restarts the VM on Server 2. KVM finds the files replicated by SafeKit up to date on Server 2.
The VM continues to run on Server 2 by locally modifying its files that are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (set to 30 seconds by default) plus the VM reboot time.
Step 3. Failback and reintegration
When Server 1 restarts after the failure, SafeKit resynchronizes the vm1 replicated directory from Server 2: only the zones modified while Server 1 was down are copied, while the VM keeps running on Server 2.
Step 4. Back to normal
After reintegration, the VM files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the VM running on Server 2 and SafeKit replicating updates to Server 1.
If the administrator wishes the VM to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
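If you prefer the command line (a sketch, assuming the module is named kvm as in this article), the swap can be requested from either node with:
/opt/safekit/safekit swap -m kvm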
Redundancy at the application level
In this type of solution, only the application data are replicated. And only the application is restarted in case of failure.
With this solution, restart scripts must be written to restart the application.
We deliver application modules to implement redundancy at the application level. They are preconfigured for well known applications and databases. You can customize them with your own services, data to replicate, application checkers. And you can combine application modules to build advanced multi-level architectures.
This solution is platform agnostic and works with applications inside physical machines, virtual machines, in the Cloud. Any hypervisor is supported (VMware, Hyper-V...).
Redundancy at the virtual machine level
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure.
The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.
- Solution for a new application (no restart script to write): Windows/Hyper-V, Linux/KVM
Why a replication of a few terabytes?
Resynchronization time after a failure (step 3):
- 1 Gb/s network ≈ 3 hours for 1 terabyte.
- 10 Gb/s network ≈ 1 hour for 1 terabyte or less, depending on disk write performance.
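As a rough check of these figures: a 1 Gb/s link carries at most about 125 MB/s, so copying 1 TB takes at least 1,000,000 MB / 125 MB/s ≈ 8,000 s, i.e. a little over 2 hours of raw transfer, before protocol overhead and disk writes; hence the ≈ 3 hours estimate.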
Alternative
- For a large volume of data, use external shared storage.
- More expensive, more complex.
Why a replication < 1,000,000 files?
- Resynchronization time performance after a failure (step 3).
- Time to check each file between both nodes.
Alternative
- Put the many files to replicate in a virtual hard disk / virtual machine.
- Only the files representing the virtual hard disk / virtual machine will be replicated and resynchronized in this case.
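As a sketch (the data.qcow2 file name, the 100G size and the vdb target are arbitrary examples), you can create an additional virtual disk inside the replicated vm1/ directory and attach it to the VM, so that a large number of small files live inside a single replicated qcow2 file:
qemu-img create -f qcow2 /var/lib/libvirt/images/vm1/data.qcow2 100G
virsh attach-disk vm1 /var/lib/libvirt/images/vm1/data.qcow2 vdb --driver qemu --subdriver qcow2 --persistent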
Why a failover < 25 replicated VMs?
- Each VM runs in an independent mirror module.
- Maximum of 25 mirror modules running on the same cluster.
Alternative
- Use an external shared storage and another VM clustering solution.
- More expensive, more complex.
Why a LAN/VLAN network between remote sites?
- Automatic failover of the virtual IP address with 2 nodes in the same subnet.
- Good bandwidth for resynchronization (step 3) and good latency for synchronous replication (a few ms).
Alternative
- Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
- Use backup solutions with asynchronous replication for high-latency networks.
Prerequisites
- You need KVM installed on 2 Linux nodes.
- You need your critical applications installed in one or more virtual machines.
Package installation on Linux
- Install the free version of SafeKit on 2 Linux nodes.
  Note: the free trial includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
- After downloading the safekit_xx.bin package, execute it to extract the rpm and the safekitinstall script, and then execute the safekitinstall script.
- Answer yes to the firewall automatic configuration.
- Set the password for the web console and the default user admin.
  - Use alphanumeric characters for the password (no special characters).
  - The password must be the same on both nodes.
Module installation on Linux
- Download the kvm.safe module.
  The module is free. It contains the files userconfig.xml and the restart scripts.
- Put kvm.safe under /opt/safekit/Application_Modules/demo/ (create the demo directory if it does not exist).
The KVM configuration is presented with a virtual machine named vm1 and containing the application to restart in case of failure.
You will have to repeat this configuration for all VMs that you want to replicate and to restart. SafeKit supports up to 25 virtual machines.
1. Prerequisites
The vm1 virtual machine image is in the file /var/lib/libvirt/images/vm1.qcow2. Before the SafeKit configuration, you have to perform the following configuration steps.
On node 1:
- Stop vm1:
  virsh shutdown vm1
- Create a vm1/ directory:
  mkdir -p /var/lib/libvirt/images/vm1/
- Copy the vm1 image to the new location:
  cp -a /var/lib/libvirt/images/vm1.qcow2 /var/lib/libvirt/images/vm1/
  The original vm1 image can be deleted as soon as tests with the new location are successful.
- Edit the vm1 configuration file:
  EDITOR=vi virsh edit vm1
  and change the line:
  <source file='/var/lib/libvirt/images/vm1.qcow2'>
  to:
  <source file='/var/lib/libvirt/images/vm1/vm1.qcow2'>
- Set the cache option to 'none' in the same file, for data integrity in case of crash:
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
- Close the vm1 configuration file.
- Disable vm1 automatic start:
  virsh autostart vm1 --disable
- Create a vm1.xml configuration file for vm1:
  virsh dumpxml vm1 > vm1.xml
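Optionally (a sanity check that is not part of the SafeKit procedure itself), you can verify that vm1 still boots from its new disk location before going further:
virsh start vm1
virsh domblklist vm1
# the source path should now be /var/lib/libvirt/images/vm1/vm1.qcow2
virsh shutdown vm1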
On node 2:
- Copy the vm1.xml configuration file from node 1.
  Note: whenever the vm1 configuration is changed on node 1, you must reapply the new configuration on node 2.
- Create vm1 but do not start it:
  virsh define vm1.xml
- Disable vm1 automatic start:
  virsh autostart vm1 --disable
- Create the directory for the image location:
  mkdir -p /var/lib/libvirt/images/vm1/
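Optionally (a quick check, not part of the official procedure), verify on node 2 that vm1 is defined but neither running nor set to autostart:
virsh list --all
# vm1 should appear as shut off
virsh dominfo vm1
# the Autostart field should be set to disable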
2. Launch the SafeKit console
- Launch the web console in a browser on one node by connecting to http://localhost:9010.
You can also run the console in a browser on a workstation external to the cluster.
The configuration of SafeKit is done on both nodes from a single browser.
To secure the web console, see 11. Securing the SafeKit web console in the User's Guide.
3. Configure node addresses
- Enter the node IP addresses.
- Then, click on the red floppy disk to save the configuration.
If node1 or node2 background color is red, check connectivity of the browser to both nodes and check firewall on both nodes for troubleshooting.
This operation will place the IP addresses in the cluster.xml file on both nodes (more information in the training with the command line).
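As an illustration only (a simplified sketch with example names and addresses; the exact syntax is described in the User's Guide), cluster.xml associates each node name with its IP address on the network used for heartbeats and replication:
<cluster>
  <lans>
    <lan name="default">
      <node name="node1" addr="192.168.1.10"/>
      <node name="node2" addr="192.168.1.11"/>
    </lan>
  </lans>
</cluster>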
5. Configure the module
- Put in VM_PATH the root path of the replicated directory (/var/lib/libvirt/images).
- Enter in VM_NAME the name of the virtual machine (vm1).
We assume that the vm1 files are in /var/lib/libvirt/images/vm1/ (see prerequisites). This directory will be replicated in real time by SafeKit.
This operation will report the configuration in the userconfig.xml file on both nodes (more information in the training with the command line).
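Concretely, these two values become the macros at the top of the module's userconfig.xml (the full file is reproduced at the end of this page):
<macro name="VM_PATH" value="/var/lib/libvirt/images" />
<macro name="VM_NAME" value="vm1" />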
7. Start the node with up-to-date data
- If node 1 has the up-to-date vm1/ replicated directory, select it and start it.
When node 2 is started, all data in vm1/ will be copied from node 1 to node 2.
If you make the wrong choice, you run the risk of synchronizing outdated data on both nodes.
It is also assumed that the vm1 virtual machine is stopped on node 1, so that SafeKit installs the replication mechanisms and then starts vm1 in the start_prim script.
8. Wait for the transition to ALONE (green)
- Node 1 should reach the ALONE (green) state, which means that the start_prim script has been executed on node 1.
If the status is ALONE (green) and vm1 is not started, check the output messages of start_prim in the Application Log of node 1.
If node 1 does not reach the ALONE (green) state, analyze why with the Module Log of node 1.
If the cluster is in the WAIT (red) not uptodate - STOP (red) not uptodate state, stop the WAIT node and force its start as primary.
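If you prefer the command line (a sketch, assuming the module is named kvm), stopping the WAIT node and forcing its start as primary can be done on that node with:
/opt/safekit/safekit stop -m kvm
/opt/safekit/safekit prim -m kvm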
9. Start node 2
- Start node 2 with its contextual menu.
- Wait for the SECOND (green) state.
Node 2 stays in the SECOND (magenta) state while resynchronizing the vm1/ replicated directory (copy from node 1 to node 2).
This may take a while depending on the size of the files to resynchronize in vm1/ and the network bandwidth.
To see the progress of the copy, open the Module Log of node 2 with the verbose option, and remember to refresh the window.
12. Testing
- Stop the PRIM node by scrolling down its contextual menu and clicking Stop.
- Verify that there is a failover on the SECOND node which should become ALONE (green).
- Check the restart of vm1 with KVM tools.
If vm1 is not started on node 2 while the state is ALONE (green), check the output messages of the start_prim script in the Application Log of node 2.
If ALONE (green) is not reached, analyze why with the Module Log of node 2.
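For example, on node 2 (standard libvirt commands, independent of SafeKit):
virsh domstate vm1
# should print "running" once start_prim has started the VM
virsh list --all
# vm1 should appear in the list of running domains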
13. Replicating snapshots
The directory that contains the snapshot xml files is /var/lib/libvirt/qemu/snapshot/%VM_NAME%, where VM_NAME is the name of the virtual machine (vm1).
Note: If no snapshot has been created, create one to generate the directory (else the SafeKit configuration will fail).
To replicate it:
- Go to 'Advanced Configuration' in the SafeKit console.
- Edit the conf/userconfig.xml file.
- Insert the lines below into the <rfs> section:
  <replicated dir="/var/lib/libvirt/qemu/snapshot/%VM_NAME%" mode="read_only"> </replicated>
- Save userconfig.xml.
- Use Apply the configuration in the console.
- Create snapshots on the PRIM node, either through virt-manager or with a command line:
  virsh snapshot-create-as vm1 snapshot-name
  Note: when creating a snapshot with a command line, you have to refresh the snapshot view in virt-manager.
  Snapshots created on the PRIM node are operational on node 2 after a failover, but not listed on node 2.
- To import a snapshot on node 2, run the command:
  virsh snapshot-create --redefine vm1 /var/lib/libvirt/qemu/snapshot/vm1/snapshot-name
- The command line for listing all snapshots of vm1 is:
  virsh snapshot-list vm1
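Once a snapshot is listed on a node, reverting vm1 to it uses the standard libvirt command (not specific to SafeKit):
virsh snapshot-revert vm1 snapshot-name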
Module log
- Read the module log to understand the reasons for a failover, a waiting state, etc.
To see the module log of node 1 (image):
- click on the Control tab
- click on node 1/PRIM on the left side to select the server (it becomes blue)
- click on Module Log
- click on the Refresh icon (green arrows) to update the console
- click on the floppy disk to save the module log in a .txt file and to analyze in a text editor
Click on node2 to see the module log of the secondary server.
Application log
- Read the application log to see the output messages of the start_prim and stop_prim restart scripts.
To see the application log of node1 (image):
- click on the Control tab
- click on node 1/PRIM on the left side to select the server (it becomes blue)
- click on Application Log to see messages when starting and stopping services
- click on the Refresh icon (green arrows) to update the console
- click on the floppy disk to save the application log in a .txt file and to analyze in a text editor
Click on node 2 to see the application log of the secondary server.
Advanced configuration
- In the Advanced Configuration tab, you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml.
If you make changes to the internal files here, you must apply the new configuration with a right click on the icon/xxx on the left side (see image): the interface will allow you to redeploy the modified files on both servers.
Support
- For getting support, take 2 SafeKit Snapshots (2 .zip files), one for each server.
If you have an account on https://support.evidian.com, upload them in the call desk tool.
Internals of a SafeKit / KVM high availability cluster with synchronous replication and failover
Go to the Advanced Configuration tab in the console to edit these files.
Internal files of the Linux kvm.safe module
userconfig.xml (description in the User's Guide)
<!-- Mirror Architecture with Real Time File Replication and Failover for KVM -->
<!DOCTYPE safe>
<safe>
<!-- Set value to the path of the virtual machines repository -->
<macro name="VM_PATH" value="/var/lib/libvirt/images" />
<!-- Set value to the name of the virtual machine -->
<macro name="VM_NAME" value="vm1" />
<service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
<!-- Heartbeat Configuration -->
<heart>
<heartbeat name="">
</heartbeat>
</heart>
<!-- File Mirroring Configuration -->
<rfs mountover="off" async="second" locktimeout="200" nbrei="3">
<replicated dir="%VM_PATH%/%VM_NAME%" mode="read_only">
</replicated>
<!-- Uncomment for replicating the directory that contains snapshot xml files of the virtual machine
<replicated dir="/var/lib/libvirt/qemu/snapshot/%VM_NAME%" mode="read_only">
</replicated>
-->
</rfs>
<!-- User scripts Configuration -->
<user>
<var name="VM_PATH" value="%VM_PATH%/%VM_NAME%" />
<var name="VM_NAME" value="%VM_NAME%" />
</user>
</service>
</safe>
start_prim
#!/bin/sh
# Script called on the primary server for starting application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
# stdout goes into Application log
echo "Running start_prim $*"
res=0
# Start VM_NAME
virsh start $VM_NAME
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
if ([ "x$state" == "x" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME not found"
else
let i=1
while ( [ $i -le 5 ] && [ "x$state" != "xrunning" ]); do
sleep 5
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
let i=i+1
done
if ([ "x$state" != "xrunning" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME start failed"
fi
fi
if [ $res -ne 0 ] ; then
$SAFE/safekit printe "start_prim failed"
# stop the module when the VM start fails (considered critical)
$SAFE/safekit stop -i "start_prim"
fi
stop_prim
#!/bin/sh
# Script called on the primary server for stopping application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
# call standard application stop
#
# - force stop ($1=force)
# kill application's processes
#
#----------------------------------------------------------
# stdout goes into Application log
echo "Running stop_prim $*"
# Stop VM_NAME
virsh shutdown $VM_NAME
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
if ([ "x$state" == "x" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME not found"
else
let i=1
while ( [ $i -le 5 ] && [ "x$state" == "xrunning" ]); do
# Stop VM_NAME
virsh shutdown $VM_NAME
sleep 5
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
let i=i+1
done
if ([ "x$state" == "xrunning" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME stop failed"
fi
fi
# default: no action on forcestop
[ "$1" = "force" ] && exit 0
[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"
Network load balancing and failover modules:
- Windows farm: Generic Windows farm >, Microsoft IIS >
- Linux farm: Generic Linux farm >
- NGINX >
- Apache >
- Amazon AWS farm >
- Microsoft Azure farm >
- Google GCP farm >
- Other cloud >
Advanced clustering architectures
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
- the farm+mirror cluster built by deploying a farm module and a mirror module on the same cluster,
- the active/active cluster with replication built by deploying several mirror modules on 2 servers,
- the Hyper-V cluster or KVM cluster with real-time replication and failover of full virtual machines between 2 active hypervisors,
- the N-1 cluster built by deploying N mirror modules on N+1 servers.
SafeKit with the Hyper-V module or the KVM module vs Microsoft Hyper-V Cluster & VMware HA
Note that the Hyper-V/SafeKit and KVM/SafeKit solutions are limited to replication and failover of 25 VMs.
VM HA with the SafeKit Hyper-V or KVM module | Application HA with SafeKit application modules
SafeKit inside 2 hypervisors | SafeKit inside 2 virtual or physical machines
Replicates more data (App + OS) | Replicates only application data
Reboot of the VM on hypervisor 2 if hypervisor 1 crashes; recovery time depends on the OS reboot | Quick recovery time with restart of the application on OS 2 if server 1 crashes; around 1 minute or less (see RTO/RPO here); application checker and software failover
Generic solution for any application / OS | Restart scripts to be written in application modules
- Software clustering vs hardware clustering More info >
- Shared nothing vs a shared disk cluster More info >
- Application High Availability vs Full Virtual Machine High Availability More info >
- High availability vs fault tolerance More info >
- Synchronous replication vs asynchronous replication More info >
- Byte-level file replication vs block-level disk replication More info >
- Heartbeat, failover and quorum to avoid 2 master nodes More info >
- Virtual IP address primary/secondary, network load balancing, failover More info >
User's Guide
Application Modules
Release Notes
Presales documentation
Introduction
- Features
- Architectures
- Distinctive advantages
- Hardware vs software cluster
- Synchronous vs asynchronous replication
- File vs disk replication
- High availability vs fault tolerance
- Hardware vs software load balancing
- Virtual machine vs application HA
Installation, Console, CLI
- Install and setup / pptx
- Package installation
- Nodes setup
- Cluster configuration
- Upgrade
- Web console / pptx
- Cluster configuration
- Configuration tab
- Control tab
- Monitor tab
- Advanced Configuration tab
- Command line / pptx
- Silent installation
- Cluster administration
- Module administration
- Command line interface
Advanced configuration
- Mirror module / pptx
- userconfig.xml + restart scripts
- Heartbeat (<heartbeat>)
- Virtual IP address (<vip>)
- Real-time file replication (<rfs>)
- Farm module / pptx
- userconfig.xml + restart scripts
- Farm configuration (<farm>)
- Virtual IP address (<vip>)
- Checkers / pptx
- Failover machine (<failover>)
- Process monitoring (<errd>)
- Network and duplicate IP checkers
- Custom checker (<custom>)
- Split brain checker (<splitbrain>)
- TCP, ping, module checkers
Support
- Support tools / pptx
- Analyze snapshots
- Evidian support / pptx
- Get permanent license key
- Register on support.evidian.com
- Call desk