How a virtual IP address works (Windows/Linux)

Evidian SafeKit

How a primary/secondary virtual IP address works in the same subnet

Case of a mirror cluster with 2 Windows or Linux servers

When both servers of a mirror cluster are in the same subnet, the virtual IP address is set on the Ethernet card of the primary server (through IP aliasing). The virtual IP address is a third IP address, in addition to the two physical IP addresses of server 1 and server 2. Note that with SafeKit, several virtual IP addresses can be set in the cluster, on the same Ethernet card or on different Ethernet cards.
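
As an illustration of IP aliasing (SafeKit sets and removes the alias automatically; the addresses and interface name below are hypothetical placeholders, not values required by SafeKit), a minimal Python sketch calling the standard Linux ip command could look like this:

    # Illustrative sketch only: SafeKit manages the IP alias itself.
    # The virtual IP, prefix and interface name are hypothetical placeholders.
    import subprocess

    VIRTUAL_IP = "10.0.0.100/24"   # third IP address, besides the two physical ones
    INTERFACE = "eth0"             # Ethernet card of the primary server

    def add_ip_alias() -> None:
        """Set the virtual IP address on the Ethernet card (IP aliasing)."""
        subprocess.run(["ip", "addr", "add", VIRTUAL_IP, "dev", INTERFACE], check=True)

    def remove_ip_alias() -> None:
        """Remove the alias, e.g. when the server leaves the primary role."""
        subprocess.run(["ip", "addr", "del", VIRTUAL_IP, "dev", INTERFACE], check=True)

    if __name__ == "__main__":
        add_ip_alias()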

If server 1 is the primary server, the virtual IP address is associated with the Ethernet MAC address of server 1 in the clients' ARP caches: mac1 in the figure. If server 1 fails and a failover to server 2 occurs, SafeKit automatically sends gratuitous ARP packets to update the clients' ARP caches with the Ethernet address mac2 of server 2. Thus, clients are reconnected to server 2, which runs the application restarted there by the SafeKit clustering mechanisms.
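
The gratuitous ARP is sent by SafeKit itself; the following Python sketch (assuming the third-party scapy package, with placeholder addresses and interface name) only illustrates what such a packet looks like: an ARP reply whose sender and target IP are both the virtual IP, broadcast with the MAC address of server 2.

    # Illustrative sketch only: SafeKit sends the gratuitous ARP at failover.
    # The virtual IP, MAC address and interface below are hypothetical placeholders.
    from scapy.all import ARP, Ether, sendp  # requires the scapy package

    VIRTUAL_IP = "10.0.0.100"
    MAC2 = "00:11:22:33:44:02"   # Ethernet MAC address of server 2
    INTERFACE = "eth0"

    def send_gratuitous_arp() -> None:
        """Broadcast a gratuitous ARP reply so clients remap the virtual IP to mac2."""
        packet = Ether(dst="ff:ff:ff:ff:ff:ff", src=MAC2) / ARP(
            op=2,                          # ARP reply
            hwsrc=MAC2,                    # new owner of the virtual IP
            psrc=VIRTUAL_IP,               # sender IP = virtual IP
            hwdst="ff:ff:ff:ff:ff:ff",
            pdst=VIRTUAL_IP,               # target IP = virtual IP (gratuitous)
        )
        sendp(packet, iface=INTERFACE, verbose=False)

    if __name__ == "__main__":
        send_gratuitous_arp()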

When the two servers are in remote sites, the previous virtual IP address mechanism still works, provided the sites are connected in the same subnet through an extended LAN/VLAN. This is the simplest use case for remote sites.

How a primary/secondary virtual IP address works in different subnets

Case of a mirror cluster with 2 Windows or Linux servers

If the servers are in different subnets, the virtual IP address can be set at the level of a load balancer. The load balancer is configured with the two physical IP addresses of the two servers in their respective subnets, and it routes the traffic to the servers according to a health check.

The health check is based on a URL managed by the SafeKit servers, which answers OK or NOT FOUND according to the status of the server. If the server is SECOND, the SafeKit health check returns NOT FOUND, so the load balancer sends no traffic to the secondary server. If the server is PRIM, the SafeKit health check returns OK, so the load balancer sends all the traffic to the primary server. In case of failover, SafeKit changes its answers to the health check and the load balancer reroutes the traffic.
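
SafeKit exposes this health-check URL itself; the Python sketch below is only a conceptual illustration of the behaviour described above, assuming a hypothetical get_local_role() helper and port number: it returns HTTP 200 "OK" when the local server is PRIM and HTTP 404 "NOT FOUND" otherwise.

    # Conceptual illustration only: SafeKit provides the real health-check URL.
    # get_local_role() and the port number are hypothetical placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def get_local_role() -> str:
        """Placeholder: return the local server status, e.g. 'PRIM' or 'SECOND'."""
        return "PRIM"

    class HealthCheckHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if get_local_role() == "PRIM":
                self.send_response(200)      # load balancer sends traffic here
                self.end_headers()
                self.wfile.write(b"OK")
            else:
                self.send_response(404)      # no traffic to the SECOND server
                self.end_headers()
                self.wfile.write(b"NOT FOUND")

    if __name__ == "__main__":
        HTTPServer(("", 8080), HealthCheckHandler).serve_forever()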

This implementation is the one used in SafeKit mirror-like solutions in the Cloud: Amazon AWS, Microsoft Azure and Google GCP.

Please note that SafeKit does not provide a load balancer; it only offers health checks. The load balancer must be supplied by the network infrastructure between the two subnets.

If needed, it can be discussed with the network team whether, instead of setting up a load balancer, an extended LAN could be configured between the two subnets. Moreover, when using a load balancer, it is essential to ensure that the application supports clients connecting via the load balancer's virtual IP address and that it properly handles connections arriving through the translated physical IP address assigned by the load balancer.

This issue does not arise with an extended LAN, which also provides sufficient bandwidth and appropriate latency for real-time synchronous replication without data loss.

How a load balanced virtual IP address works in the same subnet

Case of a farm cluster with 2 Windows or Linux servers

In a load balancing farm cluster, a virtual IP address is required to load-balance client requests and to reroute clients in case of failover. In this example, we consider only two servers, but the solution works with more than two servers.

When both servers of the cluster are in the same subnet, the virtual IP address is set on the Ethernet card of both servers (IP aliasing).

In the ARP cache of clients, the virtual IP address is associated with the Ethernet MAC address of one server: mac1 of server 1 in the figure. A filter inside the kernel of server 1 receives the traffic and splits it according to the identity of the client packets (client IP address, client TCP port).
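
The kernel filter belongs to SafeKit; the short Python sketch below (illustrative only, with an arbitrary hash function and server ranks) shows the kind of decision it makes: a deterministic hash of the client identity decides which single server in the farm accepts a given client flow.

    # Conceptual illustration of the load-balancing decision in the kernel filter.
    # The hash function and the server ranks are illustrative, not SafeKit's code.
    import hashlib

    NUM_SERVERS = 2        # size of the farm
    MY_RANK = 0            # 0 for server 1, 1 for server 2

    def accepts(client_ip: str, client_port: int, my_rank: int = MY_RANK) -> bool:
        """Return True if this server's filter should accept the packet."""
        key = f"{client_ip}:{client_port}".encode()
        bucket = int(hashlib.md5(key).hexdigest(), 16) % NUM_SERVERS
        return bucket == my_rank

    # Exactly one server accepts each client flow:
    assert accepts("192.0.2.10", 51000, my_rank=0) != accepts("192.0.2.10", 51000, my_rank=1)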

If server 1 fails, SafeKit sends gratuitous ARP packets to update the clients' ARP caches with the Ethernet address mac2 of server 2. Thus, clients are reconnected to server 2.

When the two servers are in remote sites, the previous virtual IP address mechanism still works, provided the sites are connected in the same subnet through an extended LAN/VLAN. This is the simplest use case for remote sites.

How a load balanced virtual IP address works in different subnets

Case of a farm cluster with 2 Windows or Linux servers

If the servers are in different subnets, the virtual IP address can be set at the level of a load balancer. The load balancer is configured with the two physical IP addresses of the two servers in their respective subnets, and it routes the traffic to the servers according to load balancing rules (client IP address, client TCP port) and to a health check.

The health check is based on a URL managed by the SafeKit servers, which answers OK or NOT FOUND according to the status of the server. If the server is UP, the SafeKit health check returns OK, otherwise NOT FOUND. In case of failover, the failed server no longer answers OK to the health check, and the load balancer reroutes the traffic.

This implementation is the one used in SafeKit farm-like solutions in the Cloud: Amazon AWS, Microsoft Azure and Google GCP.

Note that another solution is to reroute at the DNS level. However, this solution does not work in most cases, because it requires clients to perform a new DNS resolution after a failover in order to be rerouted to the new server. Most often, they do not, and they continue running with the IP address resolved when they started.

How the SafeKit mirror cluster works

Step 1. Real-time replication

Server 1 (PRIM) runs the application. Clients are connected to a virtual IP address. SafeKit replicates, in real time and over the network, the modifications made inside files.

File replication at byte level in a mirror cluster

The replication is synchronous, with no data loss on failure, contrary to asynchronous replication.
You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization. Directories may be located on the system disk.
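
SafeKit implements this replication at the byte level inside its own components; the Python sketch below (with a hypothetical peer address and wire format) only illustrates the synchronous-replication principle stated above: a write returns to the application only after the secondary server has acknowledged it, which is why no acknowledged data is lost on failure.

    # Conceptual illustration of synchronous replication, not SafeKit's actual code.
    # The peer address and the message format are hypothetical placeholders.
    import socket

    PEER = ("server2.example.com", 5001)   # hypothetical replication endpoint

    def replicated_write(path: str, offset: int, data: bytes) -> None:
        """Apply a modification locally and wait for the secondary's acknowledgement."""
        # 1. Apply the modification to the local file.
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(data)
        # 2. Send the same modification to the secondary and wait for its ack.
        with socket.create_connection(PEER) as conn:
            conn.sendall(f"{path}\n{offset}\n".encode() + data)
            ack = conn.recv(3)
        if ack != b"ACK":
            raise RuntimeError("secondary did not acknowledge the write")
        # Only now does the call return: acknowledged data cannot be lost on failure.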

Step 2. Automatic failover

When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the application automatically on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.

Failover in a mirror cluster

The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
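
For example, with the default 30-second detection time and an application that needs roughly 60 seconds to start (an assumed figure, not a SafeKit value), failover completes in about 30 s + 60 s = 90 seconds.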

Step 3. Automatic failback

Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.

Failback in a mirror cluster

Failback takes place without disturbing the application, which can continue running on Server 2.

Step 4. Back to normal

After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the application running on Server 2 and SafeKit replicating file updates to Server 1.

Return to normal operation in a mirror cluster

If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.

Typical usage with SafeKit

Why a replication of a few terabytes?

Resynchronization time after a failure (step 3)

  • 1 Gb/s network ≈ 3 hours for 1 terabyte.
  • 10 Gb/s network ≈ 1 hour for 1 terabyte, or less, depending on disk write performance (a rough calculation follows below).
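
As a rough order-of-magnitude check (assuming the network is the bottleneck): 1 Gb/s ≈ 125 MB/s, so transferring 1 TB takes about 1,000,000 MB / 125 MB/s ≈ 8,000 s ≈ 2.2 hours of raw transfer, consistent with the ≈ 3 hours above once protocol and disk-write overheads are included; at 10 Gb/s the network is rarely the limit, and disk write performance dominates.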

Alternative

Why a replication < 1,000,000 files?

  • Resynchronization time performance after a failure (step 3).
  • Time to check each file between both nodes.

Alternative

  • Put the many files to replicate in a virtual hard disk / virtual machine.
  • Only the files representing the virtual hard disk / virtual machine will be replicated and resynchronized in this case.

Why a failover of ≤ 32 replicated VMs?

  • Each VM runs in an independent mirror module.
  • Maximum of 32 mirror modules running on the same cluster.

Alternative

  • Use an external shared storage and another VM clustering solution.
  • More expensive, more complex.

Why a LAN/VLAN network between remote sites?

Alternative

  • Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
  • Use backup solutions with asynchronous replication for high latency network.

How the SafeKit farm cluster works

Virtual IP address in a farm cluster

How the Evidian SafeKit farm cluster implements network load balancing and failover

In the previous figure, the application runs on the 3 servers (3 is an example; it can be 2 or more). Users are connected to a virtual IP address.
The virtual IP address is configured locally on each server in the farm cluster.
The input traffic to the virtual IP address is received by all the servers and split among them by a network filter inside each server's kernel.
SafeKit detects hardware and software failures, reconfigures network filters in the event of a failure, and offers configurable application checkers and recovery scripts.

Load balancing in a network filter

The network load balancing algorithm inside the network filter is based on the identity of the client packets (client IP address, client TCP port). Depending on the identity of the client packet input, only one filter in a server accepts the packet; the other filters in other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of this server are used by the application that responds to the request of the client. The output messages are sent directly from the application server to the client.
If a server fails, the farm heartbeat protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.

Stateful or stateless applications

With a stateful application, there is session affinity. The same client must be connected to the same server on multiple TCP sessions to retrieve its context on the server. In this case, the SafeKit load balancing rule is configured on the client IP address. Thus, the same client is always connected to the same server on multiple TCP sessions. And different clients are distributed across different servers in the farm.
With a stateless application, there is no session affinity. The same client can be connected to different servers in the farm on multiple TCP sessions. There is no context stored locally on a server from one session to another. In this case, the SafeKit load balancing rule is configured on the TCP client session identity. This configuration gives the best distribution of sessions between servers, but it requires a TCP service without session affinity.
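
A minimal Python sketch of the two rules described above (illustrative only, with an arbitrary hash function and farm size): the stateful rule hashes only the client IP address, so a given client always reaches the same server, while the stateless rule hashes the full TCP session identity (client IP address, client TCP port), spreading even a single client's sessions across the farm.

    # Conceptual illustration of the two load-balancing rules, not SafeKit code.
    import hashlib

    NUM_SERVERS = 3   # farm size used in the figure above

    def _bucket(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_SERVERS

    def server_for_stateful(client_ip: str) -> int:
        """Session affinity: the rule uses only the client IP address."""
        return _bucket(client_ip)

    def server_for_stateless(client_ip: str, client_port: int) -> int:
        """No affinity: the rule uses the full TCP session identity."""
        return _bucket(f"{client_ip}:{client_port}")

    # The same client always keeps the same server with the stateful rule...
    assert server_for_stateful("192.0.2.10") == server_for_stateful("192.0.2.10")
    # ...while its different TCP sessions may be spread by the stateless rule.
    print(server_for_stateless("192.0.2.10", 51000), server_for_stateless("192.0.2.10", 51001))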

SafeKit High Availability (HA) Solutions: Quick Installation Guides for Windows and Linux Clusters

This table presents the SafeKit High Availability (HA) solutions, categorized by application and operating environment (Databases, Web Servers, VMs, Cloud). Identify the specific pre‑configured .safe module (e.g., mirror.safe, farm.safe, and others) required for real‑time replication, load balancing, and automatic failover of critical business applications on Windows or Linux. Simplify your HA cluster setup with direct links to quick installation guides, each including a download link for the corresponding .safe module.

A SafeKit .safe module is essentially a pre‑configured High Availability (HA) template that defines how a specific application will be clustered and protected by the SafeKit software. In practice, it contains a configuration file (userconfig.xml) and restart scripts.

SafeKit High Availability (HA) Solutions: Quick Installation Guides (with downloadable .safe modules)
Application Category | HA Scenario (High Availability) | Technology / Product | .safe Module | Installation Guide
New Applications | Real-Time Replication and Failover | Windows | mirror.safe | View Guide: Windows Replication
New Applications | Real-Time Replication and Failover | Linux | mirror.safe | View Guide: Linux Replication
New Applications | Network Load Balancing and Failover | Windows | farm.safe | View Guide: Windows Load Balancing
New Applications | Network Load Balancing and Failover | Linux | farm.safe | View Guide: Linux Load Balancing
Databases | Replication and Failover | Microsoft SQL Server | sqlserver.safe | View Guide: SQL Server Cluster
Databases | Replication and Failover | PostgreSQL | postgresql.safe | View Guide: PostgreSQL Replication
Databases | Replication and Failover | MySQL | mysql.safe | View Guide: MySQL Cluster
Databases | Replication and Failover | Oracle | oracle.safe | View Guide: Oracle Failover Cluster
Databases | Replication and Failover | Firebird | firebird.safe | View Guide: Firebird HA
Web Servers | Load Balancing and Failover | Apache | apache_farm.safe | View Guide: Apache Load Balancing
Web Servers | Load Balancing and Failover | IIS | iis_farm.safe | View Guide: IIS Load Balancing
Web Servers | Load Balancing and Failover | NGINX | farm.safe | View Guide: NGINX Load Balancing
VMs and Containers | Replication and Failover | Hyper-V | hyperv.safe | View Guide: Hyper-V VM Replication
VMs and Containers | Replication and Failover | KVM | kvm.safe | View Guide: KVM VM Replication
VMs and Containers | Replication and Failover | Docker | mirror.safe | View Guide: Docker Container Failover
VMs and Containers | Replication and Failover | Podman | mirror.safe | View Guide: Podman Container Failover
VMs and Containers | Replication and Failover | Kubernetes K3S | k3s.safe | View Guide: Kubernetes K3S Replication
AWS Cloud | Real-Time Replication and Failover | AWS | mirror.safe | View Guide: AWS Replication Cluster
AWS Cloud | Network Load Balancing and Failover | AWS | farm.safe | View Guide: AWS Load Balancing Cluster
GCP Cloud | Real-Time Replication and Failover | GCP | mirror.safe | View Guide: GCP Replication Cluster
GCP Cloud | Network Load Balancing and Failover | GCP | farm.safe | View Guide: GCP Load Balancing Cluster
Azure Cloud | Real-Time Replication and Failover | Azure | mirror.safe | View Guide: Azure Replication Cluster
Azure Cloud | Network Load Balancing and Failover | Azure | farm.safe | View Guide: Azure Load Balancing Cluster
Physical Security / VMS | Real-Time Replication and Failover | Milestone XProtect | milestone.safe | View Guide: Milestone XProtect Failover
Physical Security / VMS | Real-Time Replication and Failover | Nedap AEOS | nedap.safe | View Guide: Nedap AEOS Failover
Physical Security / VMS | Real-Time Replication and Failover | Genetec (SQL Server) | sqlserver.safe | View Guide: Genetec SQL Failover
Physical Security / VMS | Real-Time Replication and Failover | Bosch AMS (Hyper-V) | hyperv.safe | View Guide: Bosch AMS Hyper-V Failover
Physical Security / VMS | Real-Time Replication and Failover | Bosch BIS (Hyper-V) | hyperv.safe | View Guide: Bosch BIS Hyper-V Failover
Physical Security / VMS | Real-Time Replication and Failover | Bosch BVMS (Hyper-V) | hyperv.safe | View Guide: Bosch BVMS Hyper-V Failover
Physical Security / VMS | Real-Time Replication and Failover | Hanwha Vision (Hyper-V) | hyperv.safe | View Guide: Hanwha Vision Hyper-V Failover
Physical Security / VMS | Real-Time Replication and Failover | Hanwha Wisenet (Hyper-V) | hyperv.safe | View Guide: Hanwha Wisenet Hyper-V Failover
Siemens Products | Real-Time Replication and Failover | Siemens Siveillance suite (Hyper-V) | hyperv.safe | View Guide: Siemens Siveillance HA
Siemens Products | Real-Time Replication and Failover | Siemens Desigo CC (Hyper-V) | hyperv.safe | View Guide: Siemens Desigo CC HA
Siemens Products | Real-Time Replication and Failover | Siemens Siveillance VMS | SiveillanceVMS.safe | View Guide: Siemens Siveillance VMS HA
Siemens Products | Real-Time Replication and Failover | Siemens SiPass (Hyper-V) | hyperv.safe | View Guide: Siemens SiPass HA
Siemens Products | Real-Time Replication and Failover | Siemens SIPORT (Hyper-V) | hyperv.safe | View Guide: Siemens SIPORT HA
Siemens Products | Real-Time Replication and Failover | Siemens SIMATIC PCS 7 (Hyper-V) | hyperv.safe | View Guide: SIMATIC PCS 7 HA
Siemens Products | Real-Time Replication and Failover | Siemens SIMATIC WinCC (Hyper-V) | hyperv.safe | View Guide: SIMATIC WinCC HA

Comparison of SafeKit with Traditional High Availability (HA) Clusters

How does SafeKit compare to traditional High Availability (HA) cluster solutions?

This comparison highlights the fundamental differences between SafeKit and traditional High Availability (HA) cluster solutions like Failover Clusters, Virtualization HA, and SQL Always-On. SafeKit is designed as a low-complexity, software-only solution for generic application redundancy, contrasting with the high complexity and specific storage requirements (shared storage, SAN) typical of traditional HA mechanisms.
Comparison of SafeKit with traditional High Availability (HA) clusters
Solutions | Complexity | Comments
Failover Cluster (Microsoft) | High | Specific storage (shared storage, SAN)
Virtualization (VMware HA) | High | Specific storage (shared storage, SAN, vSAN)
SQL Always-On (Microsoft) | High | Only SQL is redundant; requires SQL Enterprise Edition
Evidian SafeKit | Low | Simplest, generic and software-only. Unsuitable for large data replication.

SafeKit's Advantage in Application Redundancy

SafeKit achieves its low-complexity High Availability through a simple, software-based mirroring mechanism that eliminates the need for expensive, dedicated hardware like a SAN (Storage Area Network). This makes it a highly accessible solution for quickly implementing application redundancy without complex infrastructure changes.