SafeKit: an ideal solution for a partner application
This platform-agnostic solution is ideal for a partner with a critical application who wants to provide a high availability option that is easy to deploy to many customers.
This clustering solution is also recognized by our partners as the simplest to implement.
SafeKit high availability software on Windows and Linux saves on 1/ costly external shared or replicated storage, 2/ load balancing boxes, and 3/ enterprise editions of operating systems and databases.
SafeKit includes all clustering features: synchronous real-time file replication, monitoring of server / network / software failures, automatic application restart, and a virtual IP address switched in case of failure to reroute clients.
The cluster configuration is very simple and made by means of application modules (.safe files). New services and new replicated directories can be added to an existing application module to complete a high availability solution, as sketched below.
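As an illustration, here is a minimal sketch of a mirror application module configuration, written as a shell snippet that creates the module's userconfig.xml. The element names, replicated directories and virtual IP address are assumptions for illustration and may differ from the exact syntax documented for SafeKit.

```bash
# Hypothetical sketch of a mirror application module configuration.
# Element names, replicated directories and the virtual IP are illustrative assumptions.
cat > userconfig.xml <<'EOF'
<safe>
  <service mode="mirror">
    <!-- Virtual IP address switched to the primary server -->
    <vip>
      <virtual_addr addr="192.168.1.10"/>
    </vip>
    <!-- Directories replicated in real time between the two servers -->
    <rfs>
      <replicated dir="/var/myapp/data"/>
      <replicated dir="/var/myapp/journal"/> <!-- adding a replicated directory is one more line -->
    </rfs>
    <!-- Restart scripts (start_prim / stop_prim) start and stop the application services -->
    <user/>
  </service>
</safe>
EOF
```

Adding a new service to the module then only means extending the restart scripts, without changing the rest of the cluster configuration.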
All cluster configuration is made using a simple, centralized web administration console.
There is no domain controller or Active Directory to configure, as with a Microsoft cluster.
After a failure, when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping the application on the only remaining server.
This is not the case with most replication solutions, particularly with replication at the database level: manual operations are required to resynchronize a failed server, and the application may even have to be stopped on the only remaining server during the resynchronization of the failed server.
All SafeKit clustering features work for 2 servers in remote sites. Replication requires an extended LAN type network (latency determines the performance of synchronous replication, bandwidth the performance of resynchronization after a failure).
If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit works with rerouting at level 2.
If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer with the SafeKit health check, as sketched below.
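For illustration only, the following shell sketch shows the principle of such a health check: the probe answers positively only on the server where the module is running as primary, so the load balancer routes traffic there. The URL and port are assumptions, not the documented SafeKit endpoint.

```bash
# Hypothetical health check probe; the URL and port are illustrative assumptions.
# A load balancer configured with such a check sends traffic only to the server
# where the SafeKit module is in the primary (up) state.
for server in server1 server2; do
  if curl -fsS "http://$server:9010/safekit/health" >/dev/null; then
    echo "$server: healthy, receives traffic"
  else
    echo "$server: not primary or down, removed from the pool"
  fi
done
```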
The secondary server is not dedicated to the restart of the primary server. The cluster can be active-active by running 2 different mirror modules (see the sketch below).
This is not the case with a fault-tolerant system, where the secondary server is dedicated to the execution of the same application synchronized at the instruction level.
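As a minimal sketch of such an active-active layout (the module names are illustrative and the role commands are assumed from the SafeKit command line interface, to be checked against the product documentation), each server runs one mirror module as primary and the other as secondary:

```bash
# Hypothetical active-active layout with two mirror modules "appA" and "appB".
# Module names are illustrative; command names are assumed from the SafeKit CLI.

# On server1: primary for appA, secondary for appB
safekit prim -m appA
safekit second -m appB

# On server2: the opposite roles
safekit prim -m appB
safekit second -m appA
```

If one server fails, the surviving server simply runs both modules, and both applications stay available.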
SafeKit implements a mirror cluster with replication and failover. But it also implements a farm cluster with load balancing and failover. Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market.
This is not the case with an architecture mixing different technologies for load balancing, replication and failover.
SafeKit implements quick application restart in case of failure: around 1 minute or less (see RTO/RPO here).
Quick application restart is not ensured with full virtual machine replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor, with a recovery time depending on the OS reboot, as with VMware HA or a Hyper-V cluster.
Key differentiators of a farm cluster with load balancing and failover
Evidian SafeKit farm cluster with load balancing and failover
The solution does not require load balancers or dedicated proxy servers above the farm for implementing load balancing.
SafeKit is installed directly on the application servers in the farm. The load balancing is based on a standard virtual IP address / Ethernet MAC address and works with physical servers or virtual machines on Windows and Linux without special network configuration (see the configuration sketch below).
This is not the case with network load balancers, nor with dedicated proxies on Linux.
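As a purely illustrative sketch of this approach (the element names, virtual IP address and port are assumptions and may differ from the exact farm module syntax), a farm module configuration could declare the virtual IP and a load balancing rule on the client IP address:

```bash
# Hypothetical sketch of a farm module configuration written to userconfig.xml.
# Element names, the virtual IP and the port are illustrative assumptions.
cat > userconfig.xml <<'EOF'
<safe>
  <service mode="farm">
    <vip>
      <!-- Virtual IP address shared by all servers of the farm -->
      <virtual_addr addr="192.168.1.20"/>
      <!-- Traffic on port 80 is split between the servers according to the client IP address -->
      <loadbalancing>
        <rule port="80" proto="tcp" filter="on_addr"/>
      </loadbalancing>
    </vip>
  </service>
</safe>
EOF
```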
The solution includes all clustering features: virtual IP address, load balancing on client IP address or on sessions, monitoring of server / network / software failures, automatic application restart with a quick recovery time, and a replication option with a mirror module.
This is not the case with other load balancing solutions. They are able to perform load balancing, but they do not include a full clustering solution with restart scripts and automatic application restart in case of failure, and they do not offer a replication option.
The cluster configuration is very simple and made by means of application modules.
There is no domain controller or Active Directory to configure on Windows. The solution works on Windows and Linux.
If servers are connected to the same IP network through an extended LAN between remote sites, the virtual IP address of SafeKit works with load balancing at level 2.
If servers are connected to different IP networks between remote sites, the virtual IP address can be configured at the level of a load balancer with the help of the SafeKit health check.
Thus you can implement not only load balancing but also all the clustering features of SafeKit, in particular monitoring and automatic recovery of the critical application on the application servers.
Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market.
This is not the case with an architecture mixing different technologies for load balancing, replication and failover
Key differentiators of the SafeKit high availability technology
Application HA supports hardware failures and software failures with a quick recovery time (RTO around 1 minute or less). Application HA requires defining restart scripts per application and folders to replicate (SafeKit application modules); a sketch of such restart scripts is given below.
Full virtual machine HA supports only hardware failures, with a VM reboot and a recovery time depending on the OS reboot. No restart scripts need to be defined with full virtual machine HA (SafeKit hyperv.safe or kvm.safe modules): hypervisors are active/active with just multiple virtual machines.
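As a minimal sketch of such restart scripts (the service name "myapp" is an illustrative assumption; SafeKit application modules call scripts of this kind, typically start_prim and stop_prim, when a server takes or gives up the primary role):

```bash
#!/bin/sh
# start_prim: hypothetical restart script run when this server becomes primary.
# "myapp" is an illustrative service name, not a SafeKit identifier.
systemctl start myapp

# The matching stop_prim script simply stops the same service:
#   systemctl stop myapp
```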
No dedicated server with SafeKit: each server can be the failover server of the other one. A software failure is recovered by a restart in another OS environment. A smooth upgrade of the application and of the OS is possible server by server (versions N and N+1 can coexist).
With a fault-tolerant system, the secondary server is dedicated to the execution of the same application synchronized at the instruction level. A software exception occurs on both servers at the same time, and a smooth upgrade is not possible.
No dedicated proxy servers and no special network configuration are required in a SafeKit cluster for virtual IP addresses
Special network configuration is required in other clusters for virtual IP addresses. Note that SafeKit offers a health check adapted to load balancers