Shared nothing architecture vs shared disk architecture for high availability clusters
Overview
This article explores the pros and cons of a shared nothing architecture vs a shared disk architecture for high availability clusters. We examine hardware constraints, the impact on application data organization, recovery time, and simplicity of implementation.
The following comparative tables explain in detail the difference between a shared disk architecture and SafeKit, a software clustering product that implements a shared nothing architecture.
What is a shared disk architecture?
A shared disk architecture (like Microsoft failover cluster) is based on 2 servers sharing a disk, with automatic application failover in case of hardware or software failures.
This architecture imposes hardware constraints: specific external shared storage, specific cards to install inside the servers, and specific switches between the servers and the shared storage.
A shared disk architecture has a strong impact on the organization of application data. All application data must be located on the shared disk so the application can restart after a failover.
Moreover, on failover, the file system recovery procedure must be executed on the shared disk. This increases the recovery time (RTO).
Finally, the solution is not easy to configure: specific skills are required to set up the dedicated hardware, and application skills are required to relocate application data to the shared disk.
What is a shared nothing architecture?
A shared nothing architecture (like SafeKit's) is based on 2 servers replicating data in real time, with automatic application failover in case of hardware or software failures.
There are two types of data replication: byte-level file replication vs block-level disk replication. We consider byte-level file replication here because it has many advantages over block-level disk replication.
The shared nothing architecture has no hardware constraints: the servers can be physical or virtual, with any type of disk organization. Real-time file replication (synchronous, for zero data loss) takes place over the standard network between the servers.
This architecture has no impact on application data organization. For instance, even if an application keeps its data on the system disk, real-time file replication still works.
Recovery time (RTO) in the event of a failover is reduced to the application restart time on the secondary server's replicated files.
Finally, the solution is very simple to configure: only the paths of the directories to replicate need to be defined.
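As a rough illustration of this simplicity, here is a minimal sketch of what the replication section of a SafeKit mirror module's userconfig.xml can look like. The tag names (<heartbeat>, <vip>, <rfs>) appear in the SafeKit documentation, but the attributes, directory paths and IP address below are illustrative placeholders, not a configuration to copy as-is.

```xml
<!-- Minimal sketch of a SafeKit mirror module userconfig.xml.
     Tag names follow the SafeKit documentation; attributes,
     paths and the IP address are illustrative assumptions. -->
<safe>
  <service mode="mirror">
    <!-- Heartbeats between the 2 servers over the standard network -->
    <heart>
      <heartbeat name="default"/>
    </heart>
    <!-- Virtual IP address carried by the primary server (placeholder) -->
    <vip>
      <interface_list>
        <interface>
          <virtual_addr addr="192.168.1.100"/>
        </interface>
      </interface_list>
    </vip>
    <!-- Real-time synchronous file replication: only the directories
         to replicate are declared (hypothetical application paths) -->
    <rfs>
      <replicated dir="/var/myapp/data"/>
      <replicated dir="/var/myapp/config"/>
    </rfs>
  </service>
</safe>
```

With such a configuration, a failover amounts to restarting the application on the secondary server, which already holds an up-to-date copy of the replicated directories.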
Pros and cons of shared nothing architecture vs shared disk architecture
Criterion | Shared nothing architecture (SafeKit) | Shared disk architecture
Product | SafeKit software clustering | Clustering toolkit for shared disk
Extra hardware | No: uses the internal disks of the servers | Yes: extra cost for a shared bay of disks
Application data organization | No impact: just define the directories to replicate in real time; even directories inside the system disk can be replicated | Strong impact: the application must be specially configured to put its data on the shared disk, and data on the system disk cannot be recovered
Complexity of deployment | Low: install a software package on 2 servers | High: specific IT skills are required to configure the OS and the shared disk
Failover | Just restart the application on the secondary server | Switch the shared disk, remount the file system, run the file system recovery procedure, and then restart the application
Disaster recovery | Just place the 2 servers on 2 remote sites connected by an extended LAN | Extra cost for a second bay of disks, plus specific IT skills to configure mirroring of the bays across a SAN
Quorum and split brain | After a network isolation (split brain), the application runs on a single server and data remains coherent; no third machine, quorum disk, or special heartbeat line is needed (see the checker sketch after this table) | Requires a special quorum disk or a third quorum server to avoid data corruption on split brain
Suited for | Software vendors that want to add a simple high availability option to their application | Enterprises with IT skills in clustering and large database applications
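To make the quorum and split brain row concrete: the SafeKit documentation describes a split brain checker that, on network isolation, keeps as primary only the node that can still reach an external witness such as a router. The fragment below is a hedged sketch of such a checker inside userconfig.xml; the exact syntax, identifier and witness address are assumptions to verify against the SafeKit documentation for your version.

```xml
<!-- Sketch of a split brain checker (assumed syntax).
     On network isolation, only the node that can still ping the
     witness address remains primary, so no quorum disk or third
     quorum server is required. -->
<check>
  <splitbrain ident="witness" exec="ping" arg="192.168.1.1"/>
</check>
```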
Evidian SafeKit mirror cluster with real-time file replication and failover
- 3 products in 1
- Very simple configuration
- Synchronous replication
- Fully automated failback
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Quorum and split brain
- Active/active cluster
- Uniform high availability solution
- RTO / RPO
Evidian SafeKit farm cluster with load balancing and failover
- No load balancer, dedicated proxy servers, or special multicast Ethernet address needed
- All clustering features
- Remote sites and virtual IP address
- Uniform high availability solution
Video content
This video first illustrates the work to be done with a shared disk architecture when the two servers of a high availability cluster must be placed on two remote sites.
Next, the video demonstrates the same use case with the SafeKit shared nothing architecture.
SafeKit: an ideal solution for a partner application
This platform-agnostic solution is ideal for a partner with a critical application who wants to provide an easy-to-deploy high availability option to many customers.
This clustering solution is also recognized as the simplest to implement by our partners.
Network load balancing and failover

Windows farm | Linux farm
Generic Windows farm | Generic Linux farm
Microsoft IIS | -
NGINX | NGINX
Apache | Apache
Amazon AWS farm | Amazon AWS farm
Microsoft Azure farm | Microsoft Azure farm
Google GCP farm | Google GCP farm
Other cloud | Other cloud
Advanced clustering architectures
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
- the farm+mirror cluster built by deploying a farm module and a mirror module on the same cluster,
- the active/active cluster with replication, built by deploying several mirror modules on the 2 servers (see the sketch after this list),
- the Hyper-V cluster or KVM cluster with real-time replication and failover of full virtual machines between 2 active hypervisors,
- the N-1 cluster built by deploying N mirror modules on N+1 servers.
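As a sketch of the active/active case above: each mirror module has its own userconfig.xml, with its own replicated directories and its own virtual IP address; server 1 is primary for the first module while server 2 is primary for the second, so both servers do useful work and each is the failover target of the other. The fragments below reuse the assumed syntax and placeholder values of the earlier sketches.

```xml
<!-- Module appA (hypothetical): server 1 primary, server 2 secondary -->
<rfs>
  <replicated dir="/var/appA/data"/>
</rfs>
<vip>
  <interface_list>
    <interface>
      <virtual_addr addr="192.168.1.101"/>
    </interface>
  </interface_list>
</vip>

<!-- Module appB (hypothetical): server 2 primary, server 1 secondary.
     The two modules replicate in opposite directions, giving an
     active/active cluster with crossed replication. -->
<rfs>
  <replicated dir="/var/appB/data"/>
</rfs>
<vip>
  <interface_list>
    <interface>
      <virtual_addr addr="192.168.1.102"/>
    </interface>
  </interface_list>
</vip>
```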
Real-time file replication and failover

Windows mirror | Linux mirror
Generic Windows mirror | Generic Linux mirror
Microsoft SQL Server | -
Oracle | Oracle
MariaDB | MariaDB
MySQL | MySQL
PostgreSQL | PostgreSQL
Firebird | Firebird
Windows Hyper-V | -
- | Linux KVM
- | Docker
- | Kubernetes
- | Elasticsearch
Milestone XProtect | -
Genetec SQL Server | -
Hanwha Wisenet | -
Nedap AEOS | -
Siemens Desigo CC, Siemens SiPass, Siemens SIPORT, Siemens Siveillance | -
Bosch AMS, Bosch BIS, Bosch BVMS | -
Amazon AWS mirror | Amazon AWS mirror
Microsoft Azure mirror | Microsoft Azure mirror
Google GCP mirror | Google GCP mirror
Other cloud | Other cloud