NGINX: The Simplest Load Balancing Cluster with Failover
Evidian SafeKit
The solution for NGINX
Evidian SafeKit brings load balancing and failover to NGINX.
This article explains how to quickly implement an NGINX cluster without network load balancers, dedicated proxy servers or special MAC addresses. SafeKit is installed directly on the NGINX servers.
A generic product
Note that SafeKit is a generic product on Windows and Linux.
With the same product, you can implement real-time replication and failover of directories and services, databases, complete Hyper-V or KVM virtual machines, Docker, Kubernetes, and Cloud applications.
A complete solution
SafeKit solves:
- hardware failures (20% of problems), including the complete failure of a computer room,
- software failures (40% of problems), including restart of critical processes,
- and human errors (40% of problems) thanks to its ease of use and its web console.
SafeKit: an ideal solution for a partner
This platform-agnostic solution is ideal for a partner who resells a critical application and wants to provide an easy-to-deploy redundancy and high availability option to many customers.
With many references won by partners in many countries, SafeKit has proven to be the easiest solution to implement for redundancy and high availability of building management, video management, access control and SCADA software:
- Building Management Software (BMS)
- Video Management Software (VMS)
- Electronic Access Control Software (EACS)
- SCADA Software (Industry)
Virtual IP address in a farm cluster
In the figure above, the NGINX application runs on the 3 servers (3 is an example; it can be 2 or more). Users are connected to a virtual IP address.
The virtual IP address is configured locally on each server in the farm cluster.
The input traffic to the virtual IP address is received by all the servers and split among them by a network filter inside each server's kernel.
SafeKit detects hardware and software failures, reconfigures network filters in the event of a failure, and offers configurable application checkers and recovery scripts.
Load balancing in a network filter
The network load balancing algorithm inside the network filter is based on the identity of the client packets (client IP address, client TCP port). For a given client packet, the filter on exactly one server accepts it; the filters on the other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of this server are used by the NGINX application that responds to the request of the client. The output messages are sent directly from the application server to the client.
If a server fails, the SafeKit membership protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.
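As a minimal sketch (not SafeKit's actual kernel filter; the hashing details are assumptions for illustration), the accept/reject decision can be pictured as every node applying the same deterministic hash to the client identity and accepting the packet only when the result maps to its own rank among the UP nodes:

#!/bin/sh
# Illustration only: identity-based packet acceptance in a 3-node farm
NODES=3                          # number of UP nodes in the farm
MY_INDEX=1                       # this node's 0-based rank (assumption)
client_id="192.168.1.50:34567"   # client IP:port taken from the packet
# derive a stable integer from the client identity
hash=$(printf '%s' "$client_id" | cksum | cut -d' ' -f1)
if [ $((hash % NODES)) -eq $MY_INDEX ]; then
  echo "accept: this node serves the client"
else
  echo "reject: another node serves the client"
fi

Because every node computes the same hash, exactly one node accepts each packet without any coordination on the data path.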
Stateful or stateless applications
With a stateful NGINX application, there is session affinity. The same client must be connected to the same server on multiple TCP sessions to retrieve its context on the server. In this case, the SafeKit load balancing rule is configured on the client IP address. Thus, the same client is always connected to the same server on multiple TCP sessions. And different clients are distributed across different servers in the farm.
With a stateless NGINX application, there is no session affinity. The same client can be connected to different servers in the farm on multiple TCP sessions, because no context is stored locally on a server from one session to another. In this case, the SafeKit load balancing rule is configured on the TCP client session identity. This configuration gives the best distribution of sessions across servers, but it requires a TCP service without session affinity.
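As an illustration, the two cases map to two variants of the load-balancing rule in userconfig.xml (shown in full later in this article). The on_addr value appears in the module below; the on_port value for per-session balancing is an assumption based on the source-port rule described above:

<!-- Stateful service: balance on the client source IP address (session affinity) -->
<rule port="443" proto="tcp" filter="on_addr"/>
<!-- Stateless service: balance on the client source TCP port (assumed filter value) -->
<rule port="443" proto="tcp" filter="on_port"/>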
Prerequisites
- You need NGINX installed on 2 nodes or more (virtual machines or physical servers).
Package installation on Windows
- Download the free version of SafeKit on 2 Windows nodes (or more).
Note: the free version includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
- To open the Windows firewall, start a PowerShell as administrator on both nodes and type
c:/safekit/private/bin/firewallcfg add
- To initialize the password for the default admin user of the web console, start a PowerShell as administrator on both nodes and type
c:/safekit/private/bin/webservercfg.ps1 -passwd pwd
pwd must be the same on both nodes.
- To synchronize SafeKit at boot and at shutdown, start a PowerShell as administrator on both nodes and type (only once)
c:/safekit/private/bin/addStartupShutdown
Package installation on Linux
- Install the free version of SafeKit on 2 Linux nodes (or more).
Note: the free trial includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
- After downloading the safekit_xx.bin package, execute it to extract the rpm and the safekitinstall script, then run the safekitinstall script (a command sketch follows this list).
- Answer yes to the automatic firewall configuration.
- Set the password for the web console and the default user admin. Set the same password on both nodes.
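A minimal command sketch for the installation, assuming the package was downloaded as safekit_xx.bin (xx stands for the version and is kept as-is from the step above):

# run on both Linux nodes
chmod +x safekit_xx.bin   # make the downloaded package executable
./safekit_xx.bin          # extracts the rpm and the safekitinstall script
./safekitinstall          # answer yes to the firewall configuration and set the console password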
Note: the generic farm.safe module that you are going to configure is delivered inside the package.
The NGINX configuration is presented with 2 nodes. But you can add more nodes if necessary.
1. Launch the SafeKit console
- Launch the web console in a browser on one node by connecting to http://localhost:9010.
- Enter admin as the user name and the password defined during installation.
You can also run the console in a browser on a workstation external to the cluster.
The configuration of SafeKit is done on all nodes from a single browser.
To secure the web console, see 11. Securing the SafeKit web console in the User's Guide.
2. Configure node addresses
- Enter the node IP addresses.
- Then, click on Apply to save the configuration.
If the node1 or node2 background color is red, check the connectivity of the browser to both nodes and check the firewall on both nodes for troubleshooting.
This operation writes the IP addresses into the cluster.xml file on both nodes (more information in the training with the command line).
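As an indication only, the resulting cluster.xml has a shape similar to the following sketch (the exact layout may differ; the node names and addresses are assumptions):

<cluster>
  <lans>
    <lan name="default">
      <node name="node1" addr="192.168.1.10"/>
      <node name="node2" addr="192.168.1.11"/>
    </lan>
  </lans>
</cluster>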
4. Configure the module
- Choose an automatic start of the module at boot without delay.
- Enter a virtual IP address. A virtual IP address is a standard IP address in the same IP network (same subnet) as the IP addresses of both nodes.
Application clients must be configured with the virtual IP address (or the DNS name associated with it).
- Set the service port to load balance (e.g., TCP 80 for HTTP, TCP 443 for HTTPS).
- Set the load balancing rule (source address or source port in the client IP packet):
- with the source IP address of the client, the same client will be connected to the same node in the farm on multiple TCP sessions and retrieve its context on the node.
- with the source TCP port of the client, the same client will be connected to different nodes in the farm on multiple TCP sessions (without retrieving a context).
The start_both and stop_both scripts contain the start and the stop of the NGINX services. Check that the service names in these scripts are those installed on both nodes; otherwise, modify them in the scripts.
Note that if a process name is displayed in Process Checker, it will be monitored with a restart action in case of failure. Configuring a wrong process name will cause the module to stop right after it starts.
This operation writes the configuration into the userconfig.xml, start_both and stop_both files on both nodes (more information in the training with the command line).
7. Wait for the transition to UP (green) / UP (green)
Node 1 and node 2 should reach the UP (green) state, which means that the start_both script has been executed on both nodes.
If the status is UP (green) but the NGINX application is not started on node 1 or node 2, check the output messages of the start_both script in the Application Log of that node.
If node 1 or node 2 does not reach UP (green), analyze why with the Module Log of that node.
8. Testing
SafeKit brings a built-in test in the product:
- Configure a rule for TCP port 9010 with load balancing on the source TCP port (see the rule sketch after this list).
- Connect an external workstation outside the farm nodes.
- Start a browser on http://virtual-ip:9010/safekit/mosaic.html.
- Stop one UP (green) node by opening its contextual menu and clicking Stop.
- Check that there are no more TCP connections to the virtual IP address on the stopped node.
You should see a mosaic of colors depending on nodes answering to HTTP requests.
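For reference, a sketch of the matching load-balancing rule in userconfig.xml for this test (the on_port filter value is an assumption based on the source-port balancing described earlier):

<rule port="9010" proto="tcp" filter="on_port"/>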
Module log
- Read the module log to understand the reasons for a failover, a waiting state, etc.
To see the module log of node 1 (image):
- click on the Control tab
- click on node 1/UP on the left side to select the server (it becomes blue)
- click on Module Log
- click on the Refresh icon (green arrows) to update the console
- click on the floppy disk icon to save the module log in a .txt file and analyze it in a text editor
Click on node2 to see the module log of this server.
Application log
- Read the application log to see the output messages of the start_both and stop_both restart scripts.
To see the application log of node1 (image):
- click on the Control tab
- click on node 1/UP on the left side to select the server (it becomes blue)
- click on Application Log to see messages when starting and stopping NGINX services
- click on the Refresh icon (green arrows) to update the console
- click on the floppy disk icon to save the application log in a .txt file and analyze it in a text editor
Click on node 2 to see the application log of this server.
Advanced configuration
- In the Advanced Configuration tab, you can edit the internal files of the module: bin/start_both, bin/stop_both and conf/userconfig.xml.
If you make changes to the internal files here, you must apply the new configuration with a right click on the icon/xxx on the left side (see image): the interface lets you redeploy the modified files on both servers.
Support
- To get support, take 2 SafeKit snapshots (2 .zip files), one for each server.
If you have an account on https://support.evidian.com, upload them in the call desk tool.
Internal files of a SafeKit / NGINX load balancing cluster with failover
Go to the Advanced Configuration tab in the console to edit these files.
Internal files of the Windows farm.safe module
userconfig.xml (description in the User's Guide)
<!DOCTYPE safe>
<safe>
<service mode="farm" maxloop="3" loop_interval="24">
<!-- Farm topology configuration for the membership protocol -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<farm>
<lan name="default" />
</farm>
<!-- Virtual IP Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
-->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<virtual_interface type="vmac_directed">
<virtual_addr addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED" where="alias"/>
</virtual_interface>
</interface>
</interface_list>
<loadbalancing_list>
<group name="Windows_Appli">
<!-- Set load-balancing rule on the TCP port of the service to load balance -->
<rule port="TCP_PORT_TO_BE_DEFINED" proto="tcp" filter="on_addr"/>
</group>
</loadbalancing_list>
</vip>
<!-- TCP Checker Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
* TCP_PORT_TO_BE_DEFINED by the TCP port of the service to check
-->
<check>
<tcp ident="Check_Appli" when="both">
<to
addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED"
port="TCP_PORT_TO_BE_DEFINED"
interval="10"
timeout="5"
/>
</tcp>
</check>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
start_both.cmd
@echo off
rem Script called on all servers for starting applications
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem stdout goes into Application log
echo "Running start_both %*"
set res=0
rem Fill with your services start call
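rem Example (assumption: NGINX is registered as a Windows service named "nginx"):
rem net start nginx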
set res=%errorlevel%
if %res% == 0 goto end
:stop
set res=%errorlevel%
"%SAFE%\safekit" printe "start_both failed"
rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_both"
:end
stop_both.cmd
@echo off
rem Script called on all servers for stopping application
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem ----------------------------------------------------------
rem
rem 2 stop modes:
rem
rem - graceful stop
rem call standard application stop with net stop
rem
rem - force stop (%1=force)
rem kill application's processes
rem
rem ----------------------------------------------------------
rem stdout goes into Application log
echo "Running stop_both %*"
set res=0
rem default: no action on forcestop
if "%1" == "force" goto end
rem Fill with your services stop call
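rem Example (assumption: NGINX is registered as a Windows service named "nginx"):
rem net stop nginx
rem set res=%errorlevel%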
rem If necessary, uncomment to wait for the real stop of services
rem "%SAFEBIN%\sleep" 10
if %res% == 0 goto end
"%SAFE%\safekit" printe "stop_both failed"
:end
Internal files of the Linux farm.safe module
userconfig.xml (description in the User's Guide)
<!DOCTYPE safe>
<safe>
<service mode="farm" maxloop="3" loop_interval="24">
<!-- Farm topology configuration for the membership protocol -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<farm>
<lan name="default" />
</farm>
<!-- Virtual IP Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
-->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<virtual_interface type="vmac_directed">
<virtual_addr addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED" where="alias"/>
</virtual_interface>
</interface>
</interface_list>
<loadbalancing_list>
<group name="Linux_Appli">
<!-- Set load-balancing rule on the TCP port of the service to load balance -->
<rule port="TCP_PORT_TO_BE_DEFINED" proto="tcp" filter="on_addr"/>
</group>
</loadbalancing_list>
</vip>
<!-- TCP Checker Configuration -->
<!-- Replace
* VIRTUAL_IP_ADDR_TO_BE_DEFINED by the IP address of your virtual server
* TCP_PORT_TO_BE_DEFINED by the TCP port of the service to check
-->
<check>
<tcp ident="Check_Appli" when="both">
<to
addr="VIRTUAL_IP_ADDR_TO_BE_DEFINED"
port="TCP_PORT_TO_BE_DEFINED"
interval="10"
timeout="5"
/>
</tcp>
</check>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
start_both
#!/bin/sh
# Script called on all servers for starting applications
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
# stdout goes into Application log
echo "Running start_both $*"
res=0
# Fill with your application start call
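# Example (assumption: NGINX is managed by systemd as "nginx"):
# systemctl start nginx
# res=$?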
if [ $res -ne 0 ] ; then
$SAFE/safekit printe "start_both failed"
# uncomment to stop SafeKit when critical
# $SAFE/safekit stop -i "start_both"
fi
stop_both
#!/bin/sh
# Script called on all servers for stopping applications
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
# call standard application stop
#
# - force stop ($1=force)
# kill application's processes
#
#----------------------------------------------------------
# stdout goes into Application log
echo "Running stop_both $*"
res=0
# default: no action on forcestop
[ "$1" = "force" ] && exit 0
# Fill with your application stop call
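# Example (assumption: NGINX is managed by systemd as "nginx"):
# systemctl stop nginx
# res=$?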
[ $res -ne 0 ] && $SAFE/safekit printe "stop_both failed"
Step 1. Real-time replication
Server 1 (PRIM) runs the application. Clients are connected to a virtual IP address. SafeKit replicates in real time modifications made inside files through the network.
The replication is synchronous, with no data loss on failure, contrary to asynchronous replication.
You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization. Directories may be located on the system disk.
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the application automatically on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
Step 3. Automatic failback
Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
Failback takes place without disturbing the application, which can continue running on Server 2.
Step 4. Back to normal
After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the application running on Server 2 and SafeKit replicating file updates to Server 1.
If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
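For example, a minimal sketch of a manual swap from the command line, assuming the mirror module is named mirror and that the swap command takes the module name with -m as in other SafeKit commands:

# run on one node at an appropriate time
$SAFE/safekit swap -m mirror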
Redundancy at the application level
In this type of solution, only the application data are replicated. And only the application is restarted in case of failure.
With this solution, restart scripts must be written to restart the application.
We deliver application modules to implement redundancy at the application level. They are preconfigured for well known applications and databases. You can customize them with your own services, data to replicate, application checkers. And you can combine application modules to build advanced multi-level architectures.
This solution is platform agnostic and works with applications inside physical machines, virtual machines, in the Cloud. Any hypervisor is supported (VMware, Hyper-V...).
Redundancy at the virtual machine level
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure.
The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.
- Solution for a new application (no restart script to write): Windows/Hyper-V, Linux/KVM
Why a replication limit of a few terabytes?
Resynchronization time after a failure (step 3):
- 1 Gb/s network ≈ 3 hours for 1 terabyte.
- 10 Gb/s network ≈ 1 hour or less for 1 terabyte, depending on disk write performance.
(Roughly: 1 Gb/s ≈ 125 MB/s, so copying 1 TB takes about 8,000 seconds ≈ 2.2 hours, plus checking overhead.)
Alternative
- For a large volume of data, use external shared storage with a hardware clustering solution.
- More expensive, more complex.
Why a replication limit of 1,000,000 files?
- Resynchronization time performance after a failure (step 3).
- Time to check each file between both nodes.
Alternative
- Put the many files to replicate in a virtual hard disk / virtual machine.
- Only the files representing the virtual hard disk / virtual machine will be replicated and resynchronized in this case.
Why a failover limit of 25 replicated VMs?
- Each VM runs in an independent mirror module.
- Maximum of 25 mirror modules running on the same cluster.
Alternative
- Use an external shared storage and another VM clustering solution.
- More expensive, more complex.
Why a LAN/VLAN network between remote sites?
- Automatic failover of the virtual IP address with 2 nodes in the same subnet.
- Good bandwidth for resynchronization (step 3) and good latency for synchronous replication (a few ms).
Alternative
- Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
- Use backup solutions with asynchronous replication for high latency network.
Network load balancing and failover modules (Windows farm | Linux farm):
- Generic Windows farm | Generic Linux farm
- Microsoft IIS | -
- NGINX
- Apache
- Amazon AWS farm
- Microsoft Azure farm
- Google GCP farm
- Other cloud
Advanced clustering architectures
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
- the farm+mirror cluster built by deploying a farm module and a mirror module on the same cluster,
- the active/active cluster with replication built by deploying several mirror modules on 2 servers,
- the Hyper-V cluster or KVM cluster with real-time replication and failover of full virtual machines between 2 active hypervisors,
- the N-1 cluster built by deploying N mirror modules on N+1 servers.
Best use cases
- OEM Software: a software publisher uses SafeKit as OEM software for the high availability of its application.
- Distributed Enterprise: a distributed enterprise deploys SafeKit in many branches without specific IT skills.
- Remote Sites: SafeKit is deployed in two remote sites without the need for disk bays replicated through a SAN.
Testimonials
The ideal product for a software publisher
“SafeKit is the ideal application clustering solution for a software publisher. We currently have deployed more than 80 SafeKit clusters worldwide with our critical TV broadcasting application.”
The product very easy to deploy for a reseller
“Noemis, a value added distributor of Milestone Video Surveillance, has assisted integrators to deploy the SafeKit redundancy solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides a great support.”
The product to gain time for a system integrator
“Thanks to a simple and powerful product, we gained time in the integration and validation of our critical projects like the supervision of Paris metro lines (the control rooms).”
Video management, access control, building management
Life safety depends directly on the proper execution of security software. That's why such software needs redundancy and high availability. SafeKit is recognized as the simplest redundancy solution by the partners who have deployed it in these domains.
Testimonial of Sebastien Temoin, Technical and Innovation Director, NOEMIS:
"SafeKit by Evidian is a professional solution making easy the redundancy of Milestone video management software. The solution is easy to deploy, easy to maintain and can be added on existing installation. We have assisted integrators to deploy the solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides great support."
TV broadcasting
Harmonic uses SafeKit as a software OEM high availability solution and deploys it with its TV broadcasting solutions over satellite, terrestrial, cable and IPTV networks.
Over 80 SafeKit clusters are deployed on Windows for replication of Harmonic database and automatic failover of the critical application.
Philippe Vidal, Product Manager, Harmonic says:
“SafeKit is the ideal application clustering solution for a software publisher looking for a simple and economical high availability software. We are deploying SafeKit worldwide and we currently have more than 80 SafeKit clusters on Windows with our critical TV broadcasting application through terrestrial, satellite, cable and IP-TV. SafeKit implements the continuous and real-time replication of our database as well as the automatic failover of our application for software and hardware failures. Without modifying our application, it was possible for us to customize the installation of SafeKit. Since then, the time of preparation and implementation has been significantly reduced.”
Finance
The European Society of Warranties and Guarantees in Natixis uses SafeKit as a high availability solution for its applications.
Over 30 SafeKit clusters are deployed on Unix and Windows in Natixis.
Industry
Fives Syleps implements high availability of its ERP with SafeKit and deploys the solution in the food industry.
Over 20 SafeKit clusters are deployed on Linux and Windows with Oracle.
Testimonial of Fives Syleps:
"The automated factories that we equip rely on our ERP. It is not possible that our ERP is out of service due to a computer failure. Otherwise, the whole activity of the factory stops.
We chose the Evidian SafeKit high availability product because it is an easy to use solution. It is implemented on standard servers and does not require the use of shared disks on a SAN and load balancing network boxes.
It allows servers to be put in remote computer rooms. In addition, the solution is homogeneous for Linux and Windows platforms. And it provides 3 functionalities: load balancing between servers, automatic failover and real-time data replication.”
Air traffic control
Air traffic control systems supplier, Copperchase, deploys SafeKit high availability in airports.
Over 20 SafeKit clusters are deployed on Windows.
Tony Myers, Director of Business Development says:
"By developing applications for air traffic control, Copperchase is in one of the most critical business activities. We absolutely need our applications to be available all the time. We have found with SafeKit a simple and complete clustering solution for our needs. This software combines in a single product load balancing, real time data replication with no data loss and automatic failover. This is why, Copperchase deploys SafeKit for air traffic control in airports in the UK and the 30 countries where we are present."
Bank
Software vendor Wellington IT deploys SafeKit high availability with its banking application for Credit Unions in Ireland and UK.
Over 25 SafeKit clusters are deployed on Linux with Oracle.
Peter Knight, Sales Manager says:
"Business continuity and disaster recovery are a major concern for our Locus banking application deployed in numerous Credit Unions around Ireland and the UK. We have found with SafeKit a simple and robust solution for high availability and synchronous replication between two servers with no data loss. With this software solution, we are not dependent on a specific and costly hardware clustering solution. It is a perfect tool to provide a software high availability option to an application of a software vendor."
Transport
Paris transport company (RATP) chose the SafeKit high availability and load balancing solution for the centralized control room of line 1 of the Paris subway.
20 SafeKit clusters are deployed on Windows and Linux.
Stéphane Guilmin, RATP, Project manager says:
"Automation of line 1 of the Paris subway is a major project for RATP, requiring a centralized command room (CCR) designed to resist IT failures. With SafeKit, we have three distinct advantages to meet this need. Firstly, SafeKit is a purely software solution that does not demand the use of shared disks on a SAN and network boxes for load balancing. It is very simple to separate our servers into separate machine rooms. Moreover, this clustering solution is homogeneous for our Windows and Linuxplatforms. SafeKit provides the three functions that we needed: load balancing between servers, automatic failover after an incident and real time data replication."
And also, Philippe Marsol, Atos BU Transport, Integration Manager says:
“SafeKit is a simple and powerful product for application high availability. We have integrated SafeKit in our critical projects like the supervision of Paris metro Line 4 (the control room) or Marseille Line 1 and Line 2 (the operations center). Thanks to the simplicity of the product, we gained time for the integration and validation of the solution and we had also quick answers to our questions with a responsive Evidian team.”
Healthcare
The software integrator Systel deploys SafeKit high-availability solution in firefighter and emergency medical call centers.
Over 30 SafeKit clusters are deployed on Windows with SQL Server.
Marc Pellas, CEO says:
"SafeKit perfectly meets the needs of a software vendor. Its main advantage is that it brings in high availability through a software option that is added to our own multi-platform software suite. This way, we are not dependent on a specific and costly hardware clustering solution that is not only difficult to install and maintain, but also differs according to client environments. With SafeKit, our firefighter call centers are run with an integrated software clustering solution, which is the same for all our customers, is user friendly and for which we master the installation up to after-sales support."
Government
The ERP high availability and load balancing of the French army (DGA) are implemented with SafeKit.
14 SafeKit clusters are deployed on Windows and Linux.
Alexandre Barth, Systems administrator says:
"Our production team implemented the SafeKit solution without any difficulty on 14 Windows and Linux clusters. Our critical activity is thus secure, with high-availability and load balancing functions. The advantages of this product are easy deployment and administration of clusters, on the one hand, and uniformity of the solution in the face of heterogeneous operating systems, on the other hand."
Evidian SafeKit mirror cluster with real-time file replication and failover:
- 3 products in 1
- Very simple configuration
- Synchronous replication
- Fully automated failback
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Quorum and split brain
- Active/active cluster
- Uniform high availability solution
- RTO / RPO
|
Evidian SafeKit farm cluster with load balancing and failover:
- No load balancer, dedicated proxy servers or special multicast Ethernet address
- All clustering features
- Remote sites and virtual IP address
- Uniform high availability solution
|
Comparison topics:
- Software clustering vs hardware clustering
- Shared nothing vs a shared disk cluster
- Application high availability vs full virtual machine high availability
- High availability vs fault tolerance
- Synchronous replication vs asynchronous replication
- Byte-level file replication vs block-level disk replication
- Heartbeat, failover and quorum to avoid 2 master nodes
- Virtual IP address primary/secondary, network load balancing, failover