The SafeKit farm cluster implements a network load balancing cluster among several servers. It provides a simple solution to critical application scalability and high availability.
In a network load balancing cluster, the same application runs on each server, and the load is balanced by distributing network activity across the servers of the farm. This type of cluster is suited to front-end applications like web services.
The SafeKit software saves the cost of hardware load balancers: it does not require dedicated servers in front of the farm to implement the network load balancing cluster.
For servers in the same subnet, network load balancing is efficiently implemented by a network filter driver. This driver works on Windows and Linux (even on Windows editions for PCs). For servers in different subnets, SafeKit offers a health check that can be configured at the level of a load balancer: see the article on how a load balanced virtual IP address works.
SafeKit provides a generic farm module on Windows and Linux to build a network load balancing cluster. You can write your own farm module for your application starting from the generic farm module. Apache and Microsoft IIS are examples of farm modules.
Combined with the farm cluster, you can also implement a mirror cluster with real-time file replication and failover.
The virtual IP address is configured locally on each server in the network load balancing cluster.
The input traffic for the virtual IP address is received by all the servers and split among them by a filter inside each server's kernel.
The network load balancing algorithm inside the filter is based on the identity of the client packets (client IP address, client TCP port). Depending on the identity of the client packet input, only one filter in a server accepts the packet; the other filters in other servers reject it.
Once a packet is accepted by the filter on a server, only the CPU and memory of this server are used by the application that responds to the request of the client. The output messages are sent directly from the application server to the client.
If a server fails, the SafeKit membership protocol reconfigures the filters in the network load balancing cluster to re-balance the traffic on the remaining available servers.
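The behavior described above can be sketched as follows. This is an illustrative model only, not SafeKit's actual kernel filter: it assumes a simple hash over the client identity, where every server evaluates the same deterministic function on each incoming packet and only the designated owner accepts it.

```python
import hashlib

def owner(client_id: str, servers: list[str]) -> str:
    """Deterministically map a client identity to one live server.

    Every server in the farm evaluates this same function on each
    incoming packet; only the server whose name is returned accepts
    the packet, the others reject it. (Sketch under assumptions --
    the real SafeKit filter driver is not implemented this way.)
    """
    h = int(hashlib.sha256(client_id.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

servers = ["server1", "server2", "server3"]
target = owner("192.0.2.10:51324", servers)

# If a server fails, the membership protocol reconfigures the filters:
# applying the same function over the surviving servers re-balances
# the traffic on the remaining nodes.
survivors = ["server1", "server3"]
new_target = owner("192.0.2.10:51324", survivors)
```

The key property is determinism: all servers compute the same owner for a given client without exchanging messages per packet, which is why only a membership change (a server joining or failing) requires reconfiguration.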
Note that a comparison between Microsoft NLB and SafeKit network load balancing is available here. Also note that SafeKit network load balancing works not only on Windows (including Windows editions for PCs) but also on Linux.
With a stateful server, there is session affinity: the same client must be connected to the same server across multiple HTTP/TCP sessions to retrieve its context on that server. In this case, the SafeKit load balancing rule is configured on the client IP address, so the same client is always directed to the same server, while different clients are distributed across the servers of the farm.
With a stateless server, there is no session affinity: the same client can be connected to different servers in the farm across multiple HTTP/TCP sessions, because no context is stored locally on a server from one session to the next. In this case, the SafeKit load balancing rule is configured on the client TCP session identity (client IP address and client TCP port). This configuration gives the best distribution of sessions between servers, but it requires a TCP service without session affinity.
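The two rules differ only in which part of the client identity feeds the load balancing decision. The following sketch (assumed names and a simple hash, not SafeKit's actual rule syntax) makes the contrast concrete:

```python
import hashlib

def pick_server(client_ip: str, client_port: int,
                servers: list[str], stateful: bool) -> str:
    """Choose the target server under the two load balancing rules.

    stateful=True  -> hash on the client IP only: one client always
                      lands on the same server (session affinity).
    stateful=False -> hash on the full TCP session identity (IP and
                      port): each new session may land on a different
                      server, giving a finer-grained distribution.
    Illustrative sketch; SafeKit's real rule configuration differs.
    """
    key = client_ip if stateful else f"{client_ip}:{client_port}"
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

servers = ["server1", "server2"]
# Stateful rule: the client port is ignored, so two sessions from the
# same client IP always reach the same server.
a = pick_server("192.0.2.10", 50001, servers, stateful=True)
b = pick_server("192.0.2.10", 50002, servers, stateful=True)
assert a == b
```

With `stateful=False`, the two sessions above may be accepted by different servers, which is what makes the stateless rule the better choice for spreading load when no per-client context exists.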
If you are also interested in real-time replication and failover in a mirror cluster, read this article.
Evidian SafeKit farm cluster with load balancing and failover
- No load balancer, dedicated proxy servers, or special multicast Ethernet address
- All clustering features
- Remote sites and virtual IP address
- Uniform high availability solution
Evidian SafeKit mirror cluster with real-time file replication and failover
- All clustering features
- Synchronous replication
- Fully automated failback procedure
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Quorum
- Active/active cluster
- Uniform high availability solution
High availability architectures comparison

| Feature | SafeKit cluster | Other clusters |
|---|---|---|
| Software clustering vs hardware clustering | | |
| Shared nothing vs a shared disk cluster | | |
| Application high availability vs full virtual machine high availability | Smooth upgrade of application and OS possible server by server (version N and N+1 can coexist) | Smooth upgrade not possible |
| High availability vs fault tolerance | Software failure with restart in another OS environment; smooth upgrade of application and OS possible server by server (version N and N+1 can coexist) | Software exception on both servers at the same time; smooth upgrade not possible |
| Synchronous replication vs asynchronous replication | | |
| Byte-level file replication vs block-level disk replication | | |
| Heartbeat, failover and quorum to avoid 2 master nodes | | |
| Virtual IP address primary/secondary, network load balancing, failover | | |