SafeKit Training - Web Management Console
Server 1 (PRIM) runs the application. Clients are connected to a virtual IP address. SafeKit replicates file modifications in real time over the network.
The replication is synchronous: unlike asynchronous replication, no data is lost on failure.
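To see why the synchronous mode matters for data loss, the sketch below contrasts the two acknowledgment orders in plain Python. It is a conceptual illustration only, not SafeKit's replication engine; the class and method names are invented for the example.

```python
# Conceptual sketch of synchronous vs asynchronous replication semantics.
# This is NOT SafeKit code; names and structures are invented for illustration.
from collections import deque


class SecondaryStub:
    """Stands in for the secondary server receiving replicated writes."""
    def __init__(self):
        self.files = {}

    def apply(self, path, data):
        self.files[path] = data          # write received and persisted
        return "ack"                     # acknowledgment sent back to the primary


class SyncPrimary:
    """Synchronous mirror: the write completes only after the secondary acks."""
    def __init__(self, secondary):
        self.secondary = secondary
        self.files = {}

    def write(self, path, data):
        self.files[path] = data                          # local write on the primary
        assert self.secondary.apply(path, data) == "ack" # wait for the secondary
        return "write complete"                          # committed data also exists on the secondary


class AsyncPrimary:
    """Asynchronous mirror: the write completes before the secondary has the data."""
    def __init__(self, secondary):
        self.secondary = secondary
        self.files = {}
        self.pending = deque()                   # writes not yet shipped to the secondary

    def write(self, path, data):
        self.files[path] = data
        self.pending.append((path, data))        # shipped later; lost if the primary dies now
        return "write complete"


if __name__ == "__main__":
    sync = SyncPrimary(SecondaryStub())
    sync.write("/data/orders.db", b"order 42")
    # If the primary crashes here, the secondary already holds "order 42".

    asy = AsyncPrimary(SecondaryStub())
    asy.write("/data/orders.db", b"order 42")
    # If the primary crashes here, "order 42" only exists in asy.pending and is lost.
```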
You only have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization; the directories may even be located on the system disk.
When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the application automatically on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
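As a rough worked example (only the 30-second detection time comes from the text; the application start-up time below is an arbitrary assumption):

```python
# Rough failover-time estimate; the start-up value is a hypothetical example.
detection_time_s = 30    # default fault-detection time quoted above
app_startup_s = 60       # assumed start-up time of the protected application

failover_time_s = detection_time_s + app_startup_s
print(f"Clients are served again on Server 2 after about {failover_time_s} s")  # ~90 s
```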
Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
Failback takes place without disturbing the application, which can continue running on Server 2.
After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the application running on Server 2 and SafeKit replicating file updates to Server 1.
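The reintegration is handled automatically by SafeKit; the sketch below only illustrates the general idea of shipping the files changed since the peer went down. It is not the product's actual algorithm, and the directories and timestamp are hypothetical.

```python
# Illustrative only: re-copy the files modified on the live server since the
# other server went down. SafeKit's real reintegration is automatic and works
# differently; the paths below are hypothetical.
import os
import shutil


def resync(live_root: str, restored_root: str, down_since: float) -> list[str]:
    """Copy files under live_root changed after `down_since` into restored_root."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(live_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) > down_since:          # modified while the peer was halted
                rel = os.path.relpath(src, live_root)
                dst = os.path.join(restored_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)                       # ship only the changed file
                copied.append(rel)
    return copied


# Example call (hypothetical directories and timestamp):
# changed = resync("/replicated/server2", "/replicated/server1", down_since=1_700_000_000.0)
```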
If the administrator wishes the application to run on Server 1, he or she can execute a "swap" command, either manually at an appropriate time or automatically through configuration.
In the previous figure, the application runs on all three servers (three is an example; there can be two or more). Users are connected to a virtual IP address.
The virtual IP address is configured locally on each server in the farm cluster.
The input traffic to the virtual IP address is received by all the servers and split among them by a network filter inside each server's kernel.
SafeKit detects hardware and software failures, reconfigures network filters in the event of a failure, and offers configurable application checkers and recovery scripts.
The network load balancing algorithm inside the network filter is based on the identity of the client packets (client IP address, client TCP port). Depending on this identity, the filter of exactly one server accepts the packet, while the filters of the other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of this server are used by the application that responds to the request of the client. The output messages are sent directly from the application server to the client.
If a server fails, the SafeKit membership protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.
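The filter runs inside each server's kernel, but its decision can be mimicked in user space: given the identity of an incoming packet and the current list of live servers, exactly one server accepts it, and removing a failed server from the membership list re-spreads its share over the survivors. The hashing scheme and names below are illustrative assumptions, not SafeKit's implementation.

```python
# User-space sketch of a deterministic load-balancing filter decision.
# Not SafeKit's kernel filter; the hashing scheme is an illustrative assumption.
import hashlib


def accepts(packet_key: str, members: list[str], me: str) -> bool:
    """Return True if server `me` is the single member that must accept this packet."""
    members = sorted(members)                       # every server must use the same order
    digest = hashlib.md5(packet_key.encode()).digest()
    owner = members[int.from_bytes(digest[:4], "big") % len(members)]
    return owner == me


if __name__ == "__main__":
    farm = ["server1", "server2", "server3"]
    key = "198.51.100.7:54321"                      # client IP address + client TCP port

    # Exactly one server accepts the packet.
    print([s for s in farm if accepts(key, farm, s)])

    # After a failure, the membership protocol removes the dead server and the
    # same rule re-balances its clients over the remaining servers.
    survivors = ["server1", "server3"]
    print([s for s in survivors if accepts(key, survivors, s)])
```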
With a stateful application, there is session affinity. The same client must be connected to the same server on multiple TCP sessions to retrieve its context on the server. In this case, the SafeKit load balancing rule is configured on the client IP address. Thus, the same client is always connected to the same server on multiple TCP sessions. And different clients are distributed across different servers in the farm.
With a stateless application, there is no session affinity. The same client can be connected to different servers in the farm on multiple TCP sessions, because no context is stored locally on a server from one session to another. In this case, the SafeKit load balancing rule is configured on the identity of the client TCP session. This configuration gives the best distribution of sessions between servers, but it requires a TCP service without session affinity.
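Continuing the sketch above, the only thing that changes between the stateful and stateless cases is the key fed to the hash: the client IP address alone in the first case, the IP address plus the TCP port in the second (again an illustration, not SafeKit's configuration syntax).

```python
# Choice of the load-balancing key, as described above (illustrative sketch only).
def lb_key(client_ip: str, client_port: int, stateful: bool) -> str:
    """Stateful service: hash on the client IP only, so every TCP session of a
    given client lands on the same server (session affinity).
    Stateless service: hash on IP + port, so even sessions from the same client
    can be spread over different servers."""
    return client_ip if stateful else f"{client_ip}:{client_port}"


# Two sessions from the same client:
print(lb_key("198.51.100.7", 54321, stateful=True))   # '198.51.100.7'       -> same server
print(lb_key("198.51.100.7", 60222, stateful=True))   # '198.51.100.7'       -> same server
print(lb_key("198.51.100.7", 54321, stateful=False))  # '198.51.100.7:54321'
print(lb_key("198.51.100.7", 60222, stateful=False))  # '198.51.100.7:60222' -> may go elsewhere
```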
Network load balancing and failover

| Windows farm | Linux farm |
| Generic farm | Generic farm |
| Microsoft IIS | - |
| NGINX | NGINX |
| Apache | Apache |
| Amazon AWS farm | Amazon AWS farm |
| Microsoft Azure farm | Microsoft Azure farm |
| Google GCP farm | Google GCP farm |
| Other cloud | Other cloud |
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
Evidian SafeKit mirror cluster with real-time file replication and failover
- Fully automated failback procedure
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Uniform high availability solution
Evidian SafeKit farm cluster with load balancing and failover
- No load balancer or dedicated proxy servers or special multicast Ethernet address
- Remote sites and virtual IP address
- Uniform high availability solution
- Application High Availability vs Full Virtual Machine High Availability
- Byte-level file replication vs block-level disk replication
- Virtual IP address