NGINX: The Simplest Load Balancing Cluster with Failover
Evidian SafeKit
The solution for NGINX
Evidian SafeKit brings load balancing and failover to NGINX.
This article explains how to quickly implement an NGINX cluster without network load balancers, dedicated proxy servers or special MAC addresses. SafeKit is installed directly on the NGINX servers.
A generic product
Note that SafeKit is a generic product on Windows and Linux.
With the SafeKit product, you can implement real-time replication and failover of any file directory and service, databases, complete Hyper-V or KVM virtual machines, Docker, Podman, K3S and cloud applications (see the module list).
A complete solution
SafeKit solves:
- hardware failures (20% of problems), including the complete failure of a computer room,
- software failures (40% of problems), including restart of critical processes,
- and human errors (40% of problems) thanks to its ease of use and its web console.
Partners: the success with SafeKit
This platform-agnostic solution is ideal for a partner reselling a critical application who wants to provide an easy-to-deploy redundancy and high availability option to many customers.
With many references won by partners in many countries, SafeKit has proven to be the easiest solution to implement redundancy and high availability for building management, video management, access control and SCADA software...
Virtual IP address in a farm cluster
In the figure above, the NGINX application runs on the 3 servers (3 is an example; there can be 2 or more). Users are connected to a virtual IP address.
The virtual IP address is configured locally on each server in the farm cluster.
The input traffic to the virtual IP address is received by all the servers and split among them by a network filter inside each server's kernel.
SafeKit detects hardware and software failures, reconfigures network filters in the event of a failure, and offers configurable application checkers and recovery scripts.
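As a quick sanity check once the farm module is started, the virtual IP alias should be visible on every node. A minimal sketch for Linux nodes, assuming for illustration a virtual IP of 10.0.0.100:

# The virtual IP (10.0.0.100 here is only an example) appears as an alias on each farm node
ip addr show | grep "10.0.0.100"
# Seeing the alias on all nodes is expected: the kernel filter decides
# which node actually answers a given client.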
Load balancing in a network filter
The network load balancing algorithm inside the network filter is based on the identity of the client packets (client IP address, client TCP port). Depending on this identity, the filter on only one server accepts the packet; the filters on the other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of this server are used by the NGINX application that responds to the request of the client. The output messages are sent directly from the application server to the client.
If a server fails, the SafeKit membership protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.
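The principle can be illustrated with a conceptual sketch (this is not the SafeKit filter code): a stable hash of the client identity designates exactly one node, so each packet is accepted by a single server and rejected by the others.

#!/bin/sh
# Conceptual illustration only - not the actual SafeKit algorithm.
CLIENT_IP="192.168.1.50"    # example client identity
NODES=3                     # number of servers in the farm
HASH=$(printf '%s' "$CLIENT_IP" | cksum | cut -d' ' -f1)
echo "client $CLIENT_IP is accepted by node $(( HASH % NODES + 1 ))"

Every node computes the same result locally, which is why no central load balancer is needed.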
Stateful or stateless applications
With a stateful NGINX application, there is session affinity: the same client must be connected to the same server across multiple TCP sessions to retrieve its context on the server. In this case, the SafeKit load balancing rule is configured on the client IP address. Thus, the same client is always connected to the same server across multiple TCP sessions, and different clients are distributed across the servers in the farm.
With a stateless NGINX application, there is no session affinity: the same client can be connected to different servers in the farm across multiple TCP sessions, because no context is stored locally on a server from one session to another. In this case, the SafeKit load balancing rule is configured on the identity of the client TCP session. This configuration gives the best distribution of sessions across the servers, but it requires a TCP service without session affinity.
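A simple way to observe the difference from a client machine is to repeat requests to the virtual IP and see which node answers. This sketch assumes each node's NGINX is configured to return something identifying the node (for example a custom X-Served-By header, which is not part of the SafeKit setup):

# With a "source address" rule, all requests below should reach the same node.
# With a "source port" rule, the answering node should vary, since each curl
# call opens a new TCP session from a different local port.
for i in 1 2 3 4 5; do
  curl -s -D - -o /dev/null http://virtual-ip/ | grep -i x-served-by
done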
Prerequisites
- You need NGINX installed on 2 nodes or more (virtual machines or physical servers).
Package installation on Windows
- Download the free version of SafeKit on 2 Windows nodes (or more).
  Note: the free version includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
- To open the Windows firewall, on both nodes start a PowerShell as administrator and type:
  c:/safekit/private/bin/firewallcfg add
- To initialize the password for the default admin user of the web console, on both nodes start a PowerShell as administrator and type:
  c:/safekit/private/bin/webservercfg.ps1 -passwd pwd
  - Use alphanumeric characters for the password (no special characters).
  - pwd must be the same on both nodes.
- Exclude from antivirus scans C:\safekit\ (the default installation directory) and all replicated folders that you are going to define.
  Antivirus software may face detection challenges with SafeKit because of its close integration with the OS, its virtual IP mechanisms, its real-time replication and its restart of critical services.
Package installation on Linux
- Install the free version of SafeKit on 2 Linux nodes (or more).
  Note: the free trial includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
- After downloading the safekit_xx.bin package, execute it to extract the rpm and the safekitinstall script, then execute the safekitinstall script (see the sketch below).
- Answer yes to the automatic firewall configuration.
- Set the password for the web console and the default admin user.
  - Use alphanumeric characters for the password (no special characters).
  - The password must be the same on both nodes.
Note: the generic farm.safe module that you are going to configure is delivered inside the package.
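For reference, the installation sequence described above typically looks like this on each Linux node (a sketch only; keep the exact file name of the package you downloaded):

chmod +x safekit_xx.bin
./safekit_xx.bin        # extracts the rpm and the safekitinstall script
./safekitinstall        # answer yes to the firewall configuration, then set the console password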
1. Launch the SafeKit console
- Launch the web console in a browser on one cluster node by connecting to http://localhost:9010.
- Enter admin as user name and the password defined during installation.
You can also run the console in a browser on a workstation external to the cluster.
The configuration of SafeKit is done on both nodes from a single browser.
To secure the web console, see 11. Securing the SafeKit web service in the User's Guide.
2. Configure node addresses
- Enter the node IP addresses, press the Tab key to check connectivity and fill node names.
- Then, click on
Save and apply
to save the configuration.
If node1 or node2 is shown in red, check the connectivity of the browser to both nodes and check the firewall on both nodes (a quick check is sketched below).
In a farm architecture, you can define more than 2 nodes.
If you want, you can add a new LAN for a second heartbeat.
This operation places the IP addresses in the cluster.xml file on both nodes (more information in the training with the command line).
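If a node stays red, a quick connectivity check can be run from the machine running the browser. A sketch, assuming the default SafeKit web port 9010 used elsewhere on this page (replace the addresses with your node IP addresses):

curl -s -o /dev/null -w "node1: HTTP %{http_code}\n" http://node1-ip:9010
curl -s -o /dev/null -w "node2: HTTP %{http_code}\n" http://node2-ip:9010
# A timeout or a connection refused usually points to the firewall on that node.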
4. Configure the module
- Choose an Automatic start of the module at boot without delay in Module startup at boot.
- Normally, you have a single Heartbeat network (except if you added a LAN at step 2).
- Enter a Virtual IP address. A virtual IP address is a standard IP address in the same IP network (same subnet) as the IP addresses of both nodes.
  Application clients must be configured with the virtual IP address (or the DNS name associated with the virtual IP address).
- Set the service port to load balance (e.g. TCP 80 for HTTP, TCP 443 for HTTPS, TCP 9010 in the example).
- Set the load balancing rule, Source address or Source port:
  - with the source IP address of the client, the same client is connected to the same node in the farm across multiple TCP sessions and retrieves its context on that node.
  - with the source TCP port of the client, the same client is connected to different nodes in the farm across multiple TCP sessions (without retrieving a context).
- Note that if a process name is displayed in Monitored processes/services, it will be monitored with a restart action in case of failure. Configuring a wrong process name will cause the module to stop right after its start (a quick check of the name is sketched below).
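Before filling Monitored processes/services, check the exact name under which NGINX runs on each node. A Linux sketch (the service name nginx is an assumption; adapt it to your distribution):

pgrep -l nginx                                  # exact process name(s)
systemctl status nginx --no-pager | head -n 3   # service name and state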
5. Edit scripts (optional)
- start_both and stop_both must contain the start and stop of the NGINX application (example provided for Microsoft IIS; a Linux sketch for NGINX is given below).
- You can add new services in these scripts.
- Check that the names of the services started in these scripts are those installed on both nodes; otherwise modify them in the scripts.
- On Windows, on both nodes, with the Windows services manager, set Startup Type = Manual for all the services registered in start_both (SafeKit controls the start of services in start_both).
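As a concrete illustration for NGINX on Linux, here is a minimal sketch of what could go into start_both and stop_both, assuming NGINX is managed by systemd under the service name nginx (adapt to your setup, and disable the automatic start of nginx at boot so that SafeKit controls it, mirroring the Manual startup type advice for Windows above):

#!/bin/sh
# start_both (sketch): start NGINX via systemd
echo "Running start_both $*"
res=0
systemctl start nginx || res=$?
if [ $res -ne 0 ] ; then
  $SAFE/safekit printe "start_both failed"
fi

#!/bin/sh
# stop_both (sketch): stop NGINX via systemd
echo "Running stop_both $*"
# force stop ($1=force): no extra action in this sketch
[ "$1" = "force" ] && exit 0
systemctl stop nginx || $SAFE/safekit printe "stop_both failed"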
8. Wait for the transition to UP (green) / UP (green)
Node 1 and node 2 should reach the UP (green) state, which means that the start_both script has been executed on both nodes.
If UP (green) is not reached or if the application is not started, analyze why with the module log of node 1 or node 2.
- Click the "log" icon of node1 or node2 to open the module log and look for error messages, such as a checker detecting an error and stopping the module.
- Click on start_both in the log: the output messages of the script are displayed on the right and errors can be detected, such as a service that did not start correctly.
9. Testing
SafeKit provides a built-in test:
- Configure a rule for TCP port 9010 with a load balancing on source TCP port.
- Connect an external workstation outside the farm nodes.
- Start a browser on http://virtual-ip:9010/safekit/mosaic.html.
- Stop one UP (green) node by scrolling down its contextual menu and clicking Stop.
- Check that there are no more TCP connections on the stopped node to the virtual IP address.
You should see a mosaic of colors, depending on which nodes answer the HTTP requests.
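To complement the mosaic page, a couple of command-line checks can also be run (a sketch; the curl loop runs on the external workstation, the ss command on the stopped Linux node; on Windows, netstat gives the equivalent information):

# From the external workstation: requests keep being answered while at least one node is UP
for i in 1 2 3 4 5 6 7 8 9 10; do
  curl -s -o /dev/null -w "%{http_code}\n" http://virtual-ip:9010/safekit/mosaic.html
done

# On the stopped node: no established connection should remain on port 9010
ss -tn state established '( sport = :9010 )'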
10. Support
- To get support, take 2 SafeKit Snapshots (2 .zip files), one for each node.
- If you have an account on https://support.evidian.com, upload them in the call desk tool.
Internal files of a SafeKit / NGINX load balancing cluster with failover
Go to the Advanced Configuration tab in the console to edit these files.
Internal files of the Windows farm.safe module
userconfig.xml (description in the User's Guide)
<!DOCTYPE safe>
<safe>
<service mode="farm" maxloop="3" loop_interval="24">
<!-- Farm topology configuration for the membership protocol -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<farm>
<lan name="default" />
</farm>
<!-- Virtual IP Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
-->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<virtual_interface type="vmac_directed">
<virtual_addr addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED" where="alias"/>
</virtual_interface>
</interface>
</interface_list>
<loadbalancing_list>
<group name="Windows_Appli">
<!-- Set load-balancing rule on the TCP port of the service to load balance -->
<rule port="TCP_PORT_TO_BE_DEFINED" proto="tcp" filter="on_addr"/>
</group>
</loadbalancing_list>
</vip>
<!-- TCP Checker Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
* TCP_PORT_TO_BE_DEFINED by the TCP port of the service to check
-->
<check>
<tcp ident="Check_Appli" when="both">
<to
addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED"
port="TCP_PORT_TO_BE_DEFINED"
interval="10"
timeout="5"
/>
</tcp>
</check>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
start_both.cmd
@echo off
rem Script called on all servers for starting applications
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem stdout goes into Application log
echo "Running start_both %*"
set res=0
rem Fill with your services start call
set res=%errorlevel%
if %res% == 0 goto end
:stop
set res=%errorlevel%
"%SAFE%\safekit" printe "start_both failed"
rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_both"
:end
stop_both.cmd
@echo off
rem Script called on all servers for stopping application
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem ----------------------------------------------------------
rem
rem 2 stop modes:
rem
rem - graceful stop
rem call standard application stop with net stop
rem
rem - force stop (%1=force)
rem kill application's processes
rem
rem ----------------------------------------------------------
rem stdout goes into Application log
echo "Running stop_both %*"
set res=0
rem default: no action on forcestop
if "%1" == "force" goto end
rem Fill with your services stop call
rem If necessary, uncomment to wait for the real stop of services
rem "%SAFEBIN%\sleep" 10
if %res% == 0 goto end
"%SAFE%\safekit" printe "stop_both failed"
:end
Internal files of the Linux farm.safe module
userconfig.xml (description in the User's Guide)
<!DOCTYPE safe>
<safe>
<service mode="farm" maxloop="3" loop_interval="24">
<!-- Farm topology configuration for the membership protocol -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<farm>
<lan name="default" />
</farm>
<!-- Virtual IP Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
-->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<virtual_interface type="vmac_directed">
<virtual_addr addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED" where="alias"/>
</virtual_interface>
</interface>
</interface_list>
<loadbalancing_list>
<group name="Windows_Appli">
<!-- Set load-balancing rule on the TCP port of the service to load balance -->
<rule port="TCP_PORT_TO_BE_DEFINED" proto="tcp" filter="on_addr"/>
</group>
</loadbalancing_list>
</vip>
<!-- TCP Checker Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
* TCP_PORT_TO_BE_DEFINED by the TCP port of the service to check
-->
<check>
<tcp ident="Check_Appli" when="both">
<to
addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED"
port="TCP_PORT_TO_BE_DEFINED"
interval="10"
timeout="5"
/>
</tcp>
</check>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
start_both
#!/bin/sh
# Script called on all servers for starting application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
# stdout goes into Application log
echo "Running start_both $*"
res=0
# Fill with your application start call
if [ $res -ne 0 ] ; then
$SAFE/safekit printe "start_both failed"
# uncomment to stop SafeKit when critical
# $SAFE/safekit stop -i "start_both"
fi
stop_both
#!/bin/sh
# Script called on all servers for stopping application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
# call standard application stop
#
# - force stop ($1=force)
# kill application's processes
#
#----------------------------------------------------------
# stdout goes into Application log
echo "Running stop_both $*"
res=0
# default: no action on forcestop
[ "$1" = "force" ] && exit 0
# Fill with your application stop call
[ $res -ne 0 ] && $SAFE/safekit printe "stop_both failed"
Step 1. Real-time replication
Server 1 (PRIM) runs the application. Clients are connected to a virtual IP address. SafeKit replicates in real time modifications made inside files through the network.
The replication is synchronous, with no data loss on failure, unlike asynchronous replication.
You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization. Directories may be located on the system disk.
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the application automatically on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
Step 3. Automatic failback
Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
Failback takes place without disturbing the application, which can continue running on Server 2.
Step 4. Back to normal
After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the application running on Server 2 and SafeKit replicating file updates to Server 1.
If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
More information on power outage and network isolation in a cluster.
Redundancy at the application level
In this type of solution, only the application data are replicated, and only the application is restarted in case of failure.
With this solution, restart scripts must be written to restart the application.
We deliver application modules to implement redundancy at the application level. They are preconfigured for well-known applications and databases. You can customize them with your own services, data to replicate and application checkers, and you can combine application modules to build advanced multi-level architectures.
This solution is platform agnostic and works with applications inside physical machines, virtual machines, or in the cloud. Any hypervisor is supported (VMware, Hyper-V...).
Redundancy at the virtual machine level
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure.
The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.
- Solution for a new application (no restart script to write): Windows/Hyper-V, Linux/KVM
More comparison between VM HA and Application HA
Why a replication limit of a few terabytes?
Resynchronization time after a failure (step 3)
- 1 Gb/s network ≈ 3 hours for 1 terabyte.
- 10 Gb/s network ≈ 1 hour for 1 terabyte, or less depending on disk write performance.
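A back-of-the-envelope check of these orders of magnitude (ideal wire speed, ignoring protocol and disk overhead):

# 1 TB = 8,000 Gb, so about 8,000 s at 1 Gb/s and 800 s at 10 Gb/s
echo "$(( 8000 / 3600 )) h $(( 8000 % 3600 / 60 )) min at 1 Gb/s (overhead stretches this towards ~3 hours)"
echo "$(( 800 / 60 )) min at 10 Gb/s (disk write performance often becomes the limit)"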
Alternative
- For a large volume of data, use external shared storage.
- More expensive, more complex.
Why a replication < 1,000,000 files?
- Resynchronization time performance after a failure (step 3).
- Time to check each file between both nodes.
Alternative
- Put the many files to replicate in a virtual hard disk / virtual machine.
- Only the files representing the virtual hard disk / virtual machine will be replicated and resynchronized in this case.
Why a failover ≤ 32 replicated VMs?
- Each VM runs in an independent mirror module.
- Maximum of 32 mirror modules running on the same cluster.
Alternative
- Use an external shared storage and another VM clustering solution.
- More expensive, more complex.
Why a LAN/VLAN network between remote sites?
- Automatic failover of the virtual IP address with 2 nodes in the same subnet.
- Good bandwidth for resynchronization (step 3) and good latency for synchronous replication (typically a round-trip of less than 2ms).
Alternative
- Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
- Use backup solutions with asynchronous replication for high latency network.
Network load balancing and failover modules
- Windows farm: Generic Windows farm, Microsoft IIS, NGINX, Apache, Amazon AWS farm, Microsoft Azure farm, Google GCP farm, Other cloud
- Linux farm: Generic Linux farm, NGINX, Apache, Amazon AWS farm, Microsoft Azure farm, Google GCP farm, Other cloud (Microsoft IIS is Windows only)
Advanced clustering architectures
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
- the farm+mirror cluster built by deploying a farm module and a mirror module on the same cluster,
- the active/active cluster with replication built by deploying several mirror modules on 2 servers,
- the Hyper-V cluster or KVM cluster with real-time replication and failover of full virtual machines between 2 active hypervisors,
- the N-1 cluster built by deploying N mirror modules on N+1 servers.
Evidian SafeKit mirror cluster with real-time file replication and failover
- 3 products in 1
- Very simple configuration
- Synchronous replication
- Fully automated failback
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Quorum and split brain
- Active/active cluster
- Uniform high availability solution
- RTO / RPO
Evidian SafeKit farm cluster with load balancing and failover
- No load balancer, dedicated proxy servers or special multicast Ethernet address
- All clustering features
- Remote sites and virtual IP address
- Uniform high availability solution
Comparisons
- Software clustering vs hardware clustering
- Shared nothing vs a shared disk cluster
- Application High Availability vs Full Virtual Machine High Availability
- High availability vs fault tolerance
- Synchronous replication vs asynchronous replication
- Byte-level file replication vs block-level disk replication
- Heartbeat, failover and quorum to avoid 2 master nodes
- Virtual IP address primary/secondary, network load balancing, failover
Evidian SafeKit 8.2
All new features compared to SafeKit 7.5 are described in the release notes.
Packages
- Windows (with Microsoft Visual C++ Redistributable)
- Windows (without Microsoft Visual C++ Redistributable)
- Linux
- Supported OS and latest fixes
One-month license key
Technical documentation
Training
Product information
New application (empty restart scripts)
- Quick installation guide for a generic Windows mirror HA solution
- Quick installation guide for a generic Linux mirror HA solution
- Quick installation guide for a generic Windows farm HA solution
- Quick installation guide for a generic Linux farm HA solution
Web (network load balancing and failover)
Database (real-time replication and failover)
- Quick installation guide for a Microsoft SQL Server HA solution
- Quick installation guide for an Oracle HA solution
- Quick installation guide for a MariaDB HA solution
- Quick installation guide for a MySQL HA solution
- Quick installation guide for a PostgreSQL HA solution
- Quick installation guide for a Firebird HA solution
Full VM or container real-time replication and failover
- Quick installation guide for a Windows Hyper-V HA solution
- Quick installation guide for a Linux KVM HA solution
- Quick installation guide for a Docker HA solution
- Quick installation guide for a Podman HA solution
- Quick installation guide for a Kubernetes K3S HA solution
- Quick installation guide for an Elasticsearch HA solution
Physical security (real-time replication and failover)
- Quick installation guide for a Milestone XProtect HA solution
- Quick installation guide for a Genetec SQL Server HA solution
- Quick installation guide for a Nedap AEOS HA solution
- Quick installation guide for a Bosch AMS HA solution
- Quick installation guide for a Bosch BIS HA solution
- Quick installation guide for a Bosch BVMS HA solution
- Quick installation guide for a Hanwha Vision HA solution
- Quick installation guide for a Hanwha Wisenet HA solution
Siemens (real-time replication and failover)
- Quick installation guide for a Siemens Siveillance suite HA solution
- Quick installation guide for a Siemens Desigo CC HA solution
- Quick installation guide for a Siemens SiPass HA solution
- Quick installation guide for a Siemens SIPORT HA solution
- Quick installation guide for a Siemens Siveillance VMS HA solution
- Quick installation guide for a Siemens SIMATIC WinCC HA solution
- Quick installation guide for a Siemens SIMATIC PCS 7 HA solution
Cloud (mirror or farm)
- Quick installation guide for a Microsoft Azure mirror HA solution
- Quick installation guide for a Google GCP mirror HA solution
- Quick installation guide for an Amazon AWS mirror HA solution
- Quick installation guide for Other cloud mirror HA solution
- Quick installation guide for a Microsoft Azure farm HA solution
- Quick installation guide for a Google GCP farm HA solution
- Quick installation guide for an Amazon AWS farm HA solution
- Quick installation guide for Other cloud farm HA solution
Introduction
- Demonstration
- Examples of redundancy and high availability solution
- Evidian SafeKit sold in many different countries with Milestone
- 2 solutions: virtual machine or application cluster
- Distinctive advantages
- More information on the web site
- Cluster of virtual machines
- Mirror cluster
- Farm cluster
Installation, Console, CLI
- Install and setup / pptx
- Package installation
- Nodes setup
- Upgrade
- Web console / pptx
- Configuration of the cluster
- Configuration of a new module
- Advanced usage
- Securing the web console
- Command line / pptx
- Configure the SafeKit cluster
- Configure a SafeKit module
- Control and monitor
Advanced configuration
- Mirror module / pptx
- start_prim / stop_prim scripts
- userconfig.xml
- Heartbeat (<heartbeat>)
- Virtual IP address (<vip>)
- Real-time file replication (<rfs>)
- How real-time file replication works
- Mirror's states in action
- Farm module / pptx
- start_both / stop_both scripts
- userconfig.xml
- Farm heartbeats (<farm>)
- Virtual IP address (<vip>)
- Farm's states in action
Troubleshooting
- Troubleshooting / pptx
- Analyze the logs yourself
- Take snapshots for support
- Boot / shutdown
- Web console / Command lines
- Mirror / Farm / Checkers
- Running an application without SafeKit
Support
- Evidian support / pptx
- Get permanent license key
- Register on support.evidian.com
- Call desk