How does the Evidian SafeKit software simply implement Apache load balancing and failover?
The solution for Apache
Evidian SafeKit brings load balancing and failover to Apache.
This article explains how to quickly implement an Apache cluster without network load balancers, dedicated proxy servers or special MAC addresses. SafeKit is installed directly on the Apache servers.
A generic product
Note that SafeKit is a generic product on Windows and Linux.
With the same product, you can implement real-time replication and failover of directories and services, databases, full Hyper-V or KVM virtual machines, Docker, Kubernetes, and Cloud applications.
A complete solution
SafeKit solves:
hardware failures (20% of problems), including the complete failure of a computer room,
software failures (40% of problems), including restart of critical processes,
and human errors (40% of problems) thanks to its ease of use and its web console.
This platform-agnostic solution is ideal for a partner reselling a critical application who wants to provide an easy-to-deploy redundancy and high availability option to many customers.
With many references won by partners in many countries, SafeKit has proven to be the easiest solution to implement for redundancy and high availability of building management, video management, access control and SCADA software.
In the previous figure, the Apache application is running on 3 servers (3 is an example; the farm can contain 2 or more). Users are connected to a virtual IP address.
The virtual IP address is configured locally on each server in the farm cluster.
The input traffic to the virtual IP address is received by all the servers and split among them by a network filter inside each server's kernel.
SafeKit detects hardware and software failures, reconfigures network filters in the event of a failure, and offers configurable application checkers and recovery scripts.
Load balancing in a network filter
The network load balancing algorithm inside the network filter is based on the identity of the client packets (client IP address, client TCP port). Depending on the identity of an incoming client packet, the filter of only one server accepts the packet; the filters of the other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of that server are used by the Apache application that responds to the client's request. The output messages are sent directly from the application server to the client.
If a server fails, the SafeKit membership protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.
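To make the principle concrete, here is a small shell sketch of such a hash-based decision. It is an illustration only: the real SafeKit filter runs inside the kernel, and the node rank and cksum-based hash below are hypothetical, not SafeKit internals.
#!/bin/sh
# Illustration only: decide whether this node accepts a packet from a
# given client, using a stable hash of the client identity (stateful rule).
CLIENT_IP="192.168.1.57"   # client identity used by the load balancing rule
NODES=3                    # number of servers in the farm
MY_RANK=1                  # hypothetical rank of this node (0..NODES-1)
# derive a stable integer from the client IP
HASH=`printf '%s' "$CLIENT_IP" | cksum | awk '{print $1}'`
if [ $((HASH % NODES)) -eq $MY_RANK ] ; then
  echo "accept: this node serves $CLIENT_IP"
else
  echo "reject: another node in the farm serves $CLIENT_IP"
fi
Every node computes the same hash on the same packet, so exactly one node accepts it without any per-packet coordination message between servers.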
Stateful or stateless applications
With a stateful Apache application, there is session affinity: the same client must be connected to the same server across multiple TCP sessions to retrieve its context on the server. In this case, the SafeKit load balancing rule is configured on the client IP address. Thus, the same client is always connected to the same server across multiple TCP sessions, and different clients are distributed across the different servers of the farm.
With a stateless Apache application, there is no session affinity: the same client can be connected to different servers of the farm across multiple TCP sessions, because no context is stored locally on a server from one session to another. In this case, the SafeKit load balancing rule is configured on the identity of the client TCP session. This configuration gives the best distribution of sessions between servers, but it requires a TCP service without session affinity.
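In the userconfig.xml files listed later in this article, this choice is carried by the load balancing rule of the vip section. A minimal sketch of the two variants: filter="on_addr" balances on the source IP address as in the module below; the filter="on_port" value for the stateless case is an assumption to be checked against the SafeKit documentation.
<!-- stateful: the same client IP address is always directed to the same server -->
<rule port="%APACHE_PORT%" proto="tcp" filter="on_addr"/>
<!-- stateless: balanced per TCP session ("on_port" assumed here) -->
<rule port="%APACHE_PORT%" proto="tcp" filter="on_port"/>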
You need Apache installed on 2 nodes or more (virtual machines or physical servers).
Package installation on Windows
Download the free version of SafeKit on 2 Windows nodes (or more).
Note: the free version includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
To open the Windows firewall, start a PowerShell as administrator on both nodes and type
c:/safekit/private/bin/firewallcfg add
To initialize the password for the default admin user of the web console, start a PowerShell as administrator on both nodes and type the password initialization command.
2. Configure the cluster
If the background color of node1 or node2 is red, check the connectivity of the browser to both nodes and check the firewall on both nodes for troubleshooting.
This operation writes the IP addresses into the cluster.xml file on both nodes (more information in the training with the command line).
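For reference, here is a minimal sketch of what cluster.xml may look like after this step, assuming two nodes on a single default network; the node names and addresses are illustrative:
<cluster>
<lans>
<lan name="default">
<node name="node1" addr="10.0.0.10"/>
<node name="node2" addr="10.0.0.11"/>
</lan>
</lans>
</cluster>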
3. Choose the module
In the Configuration tab, click on the apache_farm.safe module.
The console finds xxx.safe in the 'Application_Modules/demo/' directory on the server side if you dropped a module there during installation.
4. Configure the module
Choose an automatic start of the module at boot without delay.
Enter a virtual IP address. A virtual IP address is a standard IP address in the same IP network (same subnet) as the IP addresses of both nodes.
Application clients must be configured with the virtual IP address (or the DNS name associated with the virtual IP address).
Set the service port to load balance (e.g., TCP 80 for HTTP, TCP 443 for HTTPS).
Set the load balancing rule (source address or source port in the client IP packet):
with the source IP address of the client, the same client will be connected to the same node in the farm on multiple TCP sessions and retrieve its context on the node.
with the source TCP port of the client, the same client will be connected to different nodes in the farm on multiple TCP sessions (without retrieving a context).
start_both and stop_both contain the start and stop commands for the Apache services. Check that the service names in these scripts match those installed on both nodes; otherwise, modify them in the scripts.
Note that if a process name is displayed in the Process Checker, it will be monitored, with a restart action in case of failure. Configuring a wrong process name will cause the module to stop right after it starts.
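For illustration, a process checker is declared in the module configuration roughly as sketched below; the errd element and its attribute names are given from memory of SafeKit application modules and should be verified against the product documentation:
<!-- sketch of a process checker (element and attribute names to be verified) -->
<errd>
<proc name="httpd.exe" atleast="1" action="restart" class="both"/>
</errd>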
This operation writes the configuration into the userconfig.xml, start_both and stop_both files on both nodes (more information in the training with the command line).
5. Verify successful configuration
Check the success message (green) on both nodes and click Next.
6. Start the farm cluster on node 1 and node 2
Start the farm cluster as shown in the image.
7. Wait for the transition to UP (green) / UP (green)
Node 1 and node 2 should reach the UP (green) state, which means that the start_both script has been executed on node 1 and node 2.
If the status is UP (green) but the Apache application is not started on node 1 or node 2, check the output messages of the start_both script in the Application Log of node 1 or node 2.
If node 1 or node 2 does not reach UP (green), analyze why with the Module Log of node 1 or node 2.
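These checks can also be scripted with the SafeKit command-line interface referred to in the training. A sketch, assuming $SAFE points to the SafeKit installation directory (/opt/safekit on Linux) and that the state command accepts the -m option like start and stop (to be verified in the product documentation):
# display the state of the apache_farm module on the local node
$SAFE/safekit state -m apache_farm
# start or stop the module locally
$SAFE/safekit start -m apache_farm
$SAFE/safekit stop -m apache_farm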
8. Testing
SafeKit includes a built-in load balancing test:
Configure a rule for TCP port 9010 with a load balancing on source TCP port.
Connect an external workstation outside the farm nodes.
Start a browser on http://virtual-ip:9010/safekit/mosaic.html.
You should see a mosaic of colors reflecting which nodes answer the HTTP requests.
Stop one UP (green) node through its contextual menu (Stop).
Check that there are no more TCP connections on the stopped node to the virtual IP address.
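A minimal command-line variant of this test from the external workstation, assuming a Linux shell with curl: each curl call opens a new TCP connection with a new source port, so successive requests may be served by different nodes when balancing on the source TCP port.
#!/bin/sh
# send 10 requests to the built-in test page through the virtual IP
# replace virtual-ip with the virtual IP address configured above
i=0
while [ $i -lt 10 ] ; do
  curl -s -o /dev/null -w "request $i -> HTTP %{http_code}\n" "http://virtual-ip:9010/safekit/mosaic.html"
  i=`expr $i + 1`
done
# on the stopped node, check that no TCP connection remains on the test port
# ss -tn | grep 9010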
In the Advanced Configuration tab, you can edit the internal files of the module: bin/start_both, bin/stop_both and conf/userconfig.xml.
If you change the internal files here, you must apply the new configuration by right-clicking the icon of the module (xxx) on the left side (see image): the interface lets you redeploy the modified files to both servers.
Internal files of the Windows apache_farm.safe module
userconfig.xml on Windows
<!DOCTYPE safe>
<safe>
<macro name="VIRTUAL_IP" value="VIRTUAL_IP_TO_BE_DEFINED" />
<macro name="APACHE_PORT" value="TCP_PORT_TO_BE_DEFINED" />
<service mode="farm" maxloop="3" loop_interval="24">
<!-- Farm topology configuration for the membership protocol -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<farm>
<lan name="default" />
</farm>
<!-- Virtual IP Configuration -->
<!-- Use VIRTUAL_IP defined in macro above -->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<virtual_interface type="vmac_directed">
<virtual_addr addr="%VIRTUAL_IP%" where="alias"/>
</virtual_interface>
</interface>
</interface_list>
<loadbalancing_list>
<group name="APACHE">
<!-- Set load-balancing rule on APACHE_PORT defined in macro above -->
<rule port="%APACHE_PORT%" proto="tcp" filter="on_addr"/>
</group>
</loadbalancing_list>
</vip>
<!-- TCP Checker Configuration -->
<!-- Use VIRTUAL_IP and APACHE_PORT defined in macros above -->
<check>
<tcp ident="HTTP_APACHE" when="both">
<to
addr="%VIRTUAL_IP%"
port="%APACHE_PORT%"
interval="10"
timeout="5"
/>
</tcp>
</check>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
start_both.cmd on Windows
@echo off
rem Script called on all servers for starting applications
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem stdout goes into Application log
echo "Running start_both %*"
set res=0
net start Apache2
rem capture the result of net start before any other command resets it
set res=%errorlevel%
if not %res% == 0 (
"%SAFE%\safekit" printi "Apache start failed"
) else (
"%SAFE%\safekit" printi "Apache started"
)
if %res% == 0 goto end
"%SAFE%\safekit" printe "start_both failed"
rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_both"
:end
stop_both.cmd on Windows
@echo off
rem Script called on all servers for stopping application
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem ----------------------------------------------------------
rem
rem 2 stop modes:
rem
rem - graceful stop
rem call standard application stop with net stop
rem
rem - force stop (%1=force)
rem kill application's processes
rem
rem ----------------------------------------------------------
rem stdout goes into Application log
echo "Running stop_both %*"
set res=0
rem default: no action on forcestop
if "%1" == "force" goto end
"%SAFE%\safekit" printi "Stopping Apache..."
net stop Apache2
rem capture the result of net stop before any other command resets it
set res=%errorlevel%
rem If necessary, uncomment to wait for the real stop of services
rem "%SAFEBIN%\sleep" 10
if %res% == 0 goto end
"%SAFE%\safekit" printe "stop_both failed"
:end
Internal files of the Linux apache_farm.safe module
userconfig.xml on Linux
<!DOCTYPE safe>
<safe>
<macro name="VIRTUAL_IP" value="VIRTUAL_IP_TO_BE_DEFINED" />
<macro name="APACHE_PORT" value="TCP_PORT_TO_BE_DEFINED" />
<service mode="farm" maxloop="3" loop_interval="24">
<!-- Farm topology configuration for the membership protocol -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<farm>
<lan name="default" />
</farm>
<!-- Virtual IP Configuration -->
<!-- Use VIRTUAL_IP defined in macro above -->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<virtual_interface type="vmac_directed">
<virtual_addr addr="%VIRTUAL_IP%" where="alias"/>
</virtual_interface>
</interface>
</interface_list>
<loadbalancing_list>
<group name="APACHE">
<!-- Set load-balancing rule on APACHE_PORT defined in macro above -->
<rule port="%APACHE_PORT%" proto="tcp" filter="on_addr"/>
</group>
</loadbalancing_list>
</vip>
<!-- TCP Checker Configuration -->
<!-- Use VIRTUAL_IP and APACHE_PORT defined in macros above -->
<check>
<tcp ident="HTTP_APACHE" when="both">
<to
addr="%VIRTUAL_IP%"
port="%APACHE_PORT%"
interval="10"
timeout="5"
/>
</tcp>
</check>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
start_both on Linux
#!/bin/sh
# Script called on all servers of the farm for starting applications
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#---------- Clean Apache residual processes
# Call this function before starting Apache
# to clean up any residual Apache processes
clean_Apache()
{
retval=0
# $SAFE/safekit printw "Cleaning Apache processes"
# example of a kill of started Apache process
# warning: this command also kills the httpd process that manages the SafeKit web console
# ps -e -o pid,comm | grep httpd | $AWK '{print "kill " $1}'| sh >/dev/null 2>&1
return $retval
}
#---------- Apache
# Call this function for starting Apache Server
start_Apache()
{
retval=0
$SAFE/safekit printw "Starting Apache Server"
# Apache - Starting
service httpd start
retval=$?
if [ $retval -ne 0 ] ; then
$SAFE/safekit printw "Apache server start failed"
else
$SAFE/safekit printw "Apache server started"
fi
return $retval
}
# stdout goes into Application log
echo "Running start_both $*"
res=0
[ -z "$OSNAME" ] && OSNAME=`uname -s`
OSNAME=`uname -s`
case "$OSNAME" in
Linux)
AWK=/bin/awk
;;
*)
AWK=/usr/bin/awk
;;
esac
# TODO: remove the Apache start at boot (SafeKit controls the start of Apache)
# Clean Apache residual processes
clean_Apache || res=$?
# Start Apache
start_Apache || res=$?
if [ $res -ne 0 ] ; then
$SAFE/safekit printi "start_both failed"
# uncomment to stop SafeKit when critical
# $SAFE/safekit stop -i "start_both"
fi
exit 0
stop_both on Linux
#!/bin/sh
# Script called on all servers of the farm for stopping application services
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
# call standard application stop
#
# - force stop ($1=force)
# kill application's processes
#
#----------------------------------------------------------
#---------- Clean Apache residual processes
# Call this function on force stop
# to clean up any residual Apache processes
clean_Apache()
{
retval=0
# $SAFE/safekit printw "Cleaning Apache processes "
# example of a kill of started Apache process
# warning: this command also kills the httpd process that manages the SafeKit web console
# ps -e -o pid,comm | grep httpd | $AWK '{print "kill " $1}'| sh >/dev/null 2>&1
return $retval
}
#---------- Apache
# Call this function for stopping Apache
stop_Apache()
{
retval=0
if [ "$1" = "force" ] ; then
# Apache force stop
clean_Apache
return $retval
fi
# Apache graceful stop
$SAFE/safekit printw "Stopping Apache server"
service httpd stop
retval=$?
if [ $retval -ne 0 ] ; then
$SAFE/safekit printw "Apache server stop failed"
else
$SAFE/safekit printw "Apache server stopped"
fi
return $retval
}
# stdout goes into Application log
echo "Running stop_both $*"
res=0
[ -z "$OSNAME" ] && OSNAME=`uname -s`
case "$OSNAME" in
Linux)
AWK=/bin/awk
;;
*)
AWK=/usr/bin/awk
;;
esac
mode=
if [ "$1" = "force" ] ; then
mode=force
shift
fi
# Stop Apache server
stop_Apache $mode || res=$?
[ $res -ne 0 ] && $SAFE/safekit printi "stop_both failed"
exit 0
How does the SafeKit mirror cluster work?
Step 1. Real-time replication
Server 1 (PRIM) runs the application. Clients are connected to a virtual IP address. SafeKit replicates in real time modifications made inside files through the network.
The replication is synchronous, with no data loss on failure, contrary to asynchronous replication.
You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization. Directories may even be located on the system disk.
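As an illustration, the replicated directories of a mirror module are declared in its userconfig.xml; the rfs and replicated element names below are given from memory and should be verified against the SafeKit mirror module documentation:
<!-- sketch of a replication section in a mirror module (names to be verified) -->
<rfs>
<replicated dir="/var/lib/myapp" mode="read_only"/>
<replicated dir="/etc/myapp" mode="read_only"/>
</rfs>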
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the application automatically on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time. For example, with the default detection time and an application that starts in 30 seconds, failover completes in about one minute.
Step 3. Automatic failback
Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
Failback takes place without disturbing the application, which can continue running on Server 2.
Step 4. Back to normal
After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the application running on Server 2 and SafeKit replicating file updates to Server 1.
If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
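For instance, assuming the swap operation is exposed in the command-line interface as sketched below (the option syntax and the myapp_mirror module name are hypothetical):
# swap the primary role back to Server 1 at an appropriate time
$SAFE/safekit swap -m myapp_mirror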
Choose between redundancy at the application level or at the virtual machine level
Redundancy at the application level
In this type of solution, only the application data are replicated, and only the application is restarted in case of failure.
With this solution, restart scripts must be written to restart the application.
We deliver application modules to implement redundancy at the application level. They are preconfigured for well known applications and databases. You can customize them with your own services, data to replicate, application checkers. And you can combine application modules to build advanced multi-level architectures.
This solution is platform agnostic and works with applications inside physical machines, virtual machines, in the Cloud. Any hypervisor is supported (VMware, Hyper-V...).
Solution for a new application (restart scripts to write): Windows, Linux
Redundancy at the virtual machine level
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure.
The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.
Video management, access control, building management
Life safety is directly associated with the proper execution of security software. That is why such software needs redundancy and high availability. SafeKit is recognized as the simplest redundancy solution by the partners who have deployed it in video management, access control and building management, such as Noemis, a value-added distributor of Milestone Video Surveillance:
“SafeKit by Evidian is a professional solution that makes the redundancy of Milestone video management software easy. The solution is easy to deploy, easy to maintain and can be added to an existing installation. We have assisted integrators in deploying the solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides great support.”
TV broadcasting
Harmonic uses SafeKit as a software OEM high availability solution and deploys it with its TV broadcasting solutions over satellite, terrestrial, cable and IPTV.
“SafeKit is the ideal application clustering solution for a software publisher looking for a simple and economical high availability software. We are deploying SafeKit worldwide and we currently have more than 80 SafeKit clusters on Windows with our critical TV broadcasting application through terrestrial, satellite, cable and IP-TV. SafeKit implements the continuous and real-time replication of our database as well as the automatic failover of our application for software and hardware failures. Without modifying our application, it was possible for us to customize the installation of SafeKit. Since then, the time of preparation and implementation has been significantly reduced.”
Finance
The European Society of Warranties and Guarantees in Natixis uses SafeKit as a high availability solution for its applications.
Over 30 SafeKit clusters are deployed on Unix and Windows in Natixis.
Over 20 SafeKit clusters are deployed on Linux and Windows with Oracle.
Testimonial of Fives Syleps:
“The automated factories that we equip rely on our ERP. It is not possible for our ERP to be out of service because of a computer failure; otherwise, the whole activity of the factory stops.
We chose the Evidian SafeKit high availability product because it is an easy to use solution. It is implemented on standard servers and does not require the use of shared disks on a SAN and load balancing network boxes.
It allows servers to be put in remote computer rooms. In addition, the solution is homogeneous for Linux and Windows platforms. And it provides 3 functionalities: load balancing between servers, automatic failover and real-time data replication.”
Tony Myers, Director of Business Development says:
“By developing applications for air traffic control, Copperchase is in one of the most critical business activities. We absolutely need our applications to be available all the time. We have found with SafeKit a simple and complete clustering solution for our needs. This software combines in a single product load balancing, real time data replication with no data loss and automatic failover. This is why, Copperchase deploys SafeKit for air traffic control in airports in the UK and the 30 countries where we are present.”
Over 25 SafeKit clusters are deployed on Linux with Oracle.
Peter Knight, Sales Manager says:
“Business continuity and disaster recovery are a major concern for our Locus banking application deployed in numerous Credit Unions around Ireland and the UK. We have found with SafeKit a simple and robust solution for high availability and synchronous replication between two servers with no data loss. With this software solution, we are not dependent on a specific and costly hardware clustering solution. It is a perfect tool to provide a software high availability option to an application of a software vendor.”
20 SafeKit clusters are deployed on Windows and Linux.
Stéphane Guilmin, RATP, Project manager says:
“Automation of line 1 of the Paris subway is a major project for RATP, requiring a centralized command room (CCR) designed to resist IT failures. With SafeKit, we have three distinct advantages to meet this need. Firstly, SafeKit is a purely software solution that does not demand the use of shared disks on a SAN and network boxes for load balancing. It is very simple to separate our servers into separate machine rooms. Moreover, this clustering solution is homogeneous for our Windows and Linux platforms. SafeKit provides the three functions that we needed: load balancing between servers, automatic failover after an incident and real-time data replication.”
And also, Philippe Marsol, Atos BU Transport, Integration Manager says:
“SafeKit is a simple and powerful product for application high availability. We have integrated SafeKit in our critical projects like the supervision of Paris metro Line 4 (the control room) or Marseille Line 1 and Line 2 (the operations center). Thanks to the simplicity of the product, we gained time for the integration and validation of the solution and we had also quick answers to our questions with a responsive Evidian team.”
Over 30 SafeKit clusters are deployed on Windows with SQL Server.
Marc Pellas, CEO says:
“SafeKit perfectly meets the needs of a software vendor. Its main advantage is that it brings in high availability through a software option that is added to our own multi-platform software suite. This way, we are not dependent on a specific and costly hardware clustering solution that is not only difficult to install and maintain, but also differs according to client environments. With SafeKit, our firefighter call centers are run with an integrated software clustering solution, which is the same for all our customers, is user friendly and for which we master the installation up to after-sales support.”
14 SafeKit clusters are deployed on Windows and Linux.
Alexandre Barth, Systems administrator says:
“Our production team implemented the SafeKit solution without any difficulty on 14 Windows and Linux clusters. Our critical activity is thus secure, with high-availability and load balancing functions. The advantages of this product are easy deployment and administration of clusters, on the one hand, and uniformity of the solution in the face of heterogeneous operating systems, on the other hand.”
SafeKit High Availability Differentiators against Competition
Key differentiators of a mirror cluster with replication and failover
Evidian SafeKit mirror cluster with real-time file replication and failover
The SafeKit high availability software saves, on Windows and Linux, the cost of:
external shared or replicated storage,
load balancing boxes,
enterprise editions of OS and databases
SafeKit includes all clustering features: synchronous real-time file replication, monitoring of server / network / software failures, automatic application restart, virtual IP address switched in case of failure to reroute clients
The cluster configuration is very simple and made by means of application modules. New services and new replicated directories can be added to an existing application module to complete a high availability solution
All the configuration of clusters is made using a simple centralized web administration console
There is no domain controller or Active Directory to configure, unlike with a Microsoft cluster
After a failure, when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server
This is not the case with most replication solutions, particularly with replication at the database level: manual operations are required for resynchronizing a failed server, and the application may even be stopped on the only remaining server during the resynchronization of the failed server
All SafeKit clustering features are working for 2 servers in remote sites. Replication requires an extended LAN type network (latency = performance of synchronous replication, bandwidth = performance of resynchronization after failure).
If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit is working with rerouting at level 2
If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer with the "health check" of SafeKit.
The solution works with only 2 servers; for the quorum (network isolation between both sites), a simple split-brain checker to a router is offered to ensure a single execution of the critical application
This is not the case for most clustering solutions where a 3rd server is required for the quorum
The secondary server is not dedicated to the restart of the primary server. The cluster can be active-active by running 2 different mirror modules
This is not the case with a fault-tolerant system where the secondary is dedicated to the execution of the same application synchronized at the instruction level
SafeKit implements a mirror cluster with replication and failover. But it also implements a farm cluster with load balancing and failover.
Thus, an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market
This is not the case with an architecture mixing different technologies for load balancing, replication and failover
SafeKit implements quick application restart in case of failure: around 1 minute or less
Quick application restart is not ensured with full virtual machine replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor, with a recovery time depending on the OS reboot, as with VMware HA or a Hyper-V cluster
Key differentiators of a farm cluster with load balancing and failover
Evidian SafeKit farm cluster with load balancing and failover
The solution does not require load balancers or dedicated proxy servers above the farm for implementing load balancing.
SafeKit is installed directly on the application servers of the farm. The load balancing is based on a standard virtual IP address/Ethernet MAC address and works with physical servers or virtual machines, on Windows and Linux, without special network configuration
This is not the case with network load balancers
This is not the case with dedicated proxies on Linux
The solution includes all clustering features: virtual IP address, load balancing on the client IP address or on sessions, monitoring of server / network / software failures, automatic application restart with a quick recovery time, and a replication option with a mirror module
This is not the case with other load balancing solutions. They are able to perform load balancing, but they do not include a full clustering solution with restart scripts and automatic application restart in case of failure, and they do not offer a replication option
The cluster configuration is very simple and made by means of application modules. There is no domain controller or active directory to configure on Windows. The solution works on Windows and Linux
If servers are connected to the same IP network through an extended LAN between remote sites, the virtual IP address of SafeKit is working with load balancing at level 2
If servers are connected to different IP networks between remote sites, the virtual IP address can be configured at the level of a load balancer with the help of the SafeKit health check. Thus, you can implement not only load balancing but also all the clustering features of SafeKit, in particular the monitoring and automatic recovery of the critical application on the application servers
Thus, an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market
This is not the case with an architecture mixing different technologies for load balancing, replication and failover
Key differentiators of the SafeKit high availability technology
Application HA supports hardware failures and software failures with a quick recovery time (RTO around 1 minute or less).
Application HA requires restart scripts to be defined per application, as well as the folders to replicate (SafeKit application modules).
Full virtual machine HA supports only hardware failures, with a VM reboot and a recovery time depending on the OS reboot.
No restart scripts to define with full virtual machine HA (SafeKit hyperv.safe or kvm.safe modules). Hypervisors are active/active with just multiple virtual machines.
With SafeKit, there is no dedicated server: each server can be the failover server of the other one.
A software failure is recovered by a restart in another OS environment.
A smooth upgrade of the application and of the OS is possible server by server (versions N and N+1 can coexist).
With a fault-tolerant system, the secondary server is dedicated to the execution of the same application, synchronized at the instruction level.
A software exception occurs on both servers at the same time.
A smooth upgrade is not possible.
No dedicated proxy servers and no special network configuration are required in a SafeKit cluster for virtual IP addresses
Special network configuration is required in other clusters for virtual IP addresses. Note that SafeKit offers a health check adapted to load balancers