
Podman: the simplest high availability cluster between two redundant servers

With the synchronous replication and automatic failover provided by Evidian SafeKit

How does the Evidian SafeKit software simply implement a Podman high availability cluster?

The solution for Podman

Evidian SafeKit brings high availability to Podman between two redundant servers.

The principle is to configure in SafeKit the real-time replication of the directories holding Podman persistent data, and to put the start and stop of the Podman application in SafeKit restart scripts.

This article explains how to quickly implement a Podman cluster without shared disk and without specific skills.
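As an illustration, here is a minimal sketch of what such restart scripts could look like on Linux. The container name (myapp), the image and the replicated path are hypothetical examples; the real scripts are written into the SafeKit module, which provides start_prim/stop_prim templates.

    #!/bin/sh
    # start_prim: executed by SafeKit when the module starts on the primary server.
    # "myapp", the image and /replicated/data are illustrative names only.
    podman rm -f myapp 2>/dev/null
    # /replicated/data is a directory replicated in real time by SafeKit
    podman run -d --name myapp -v /replicated/data:/var/lib/app \
        registry.example.com/myapp:latest

    #!/bin/sh
    # stop_prim: executed by SafeKit when the module stops on the primary server.
    # Give the application 30 seconds to shut down cleanly before removal.
    podman stop -t 30 myapp
    podman rm -f myapp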

A generic product

Note that SafeKit is a generic product on Windows and Linux.

With the SafeKit product, you can implement real-time replication and failover of any file directory and service: databases, complete Hyper-V or KVM virtual machines, Docker, Podman, K3S, and cloud applications (see all solutions).

A complete solution

SafeKit solves:

  • hardware failures (20% of problems), including the complete failure of a computer room,
  • software failures (40% of problems), including restart of critical processes,
  • and human errors (40% of problems) thanks to its ease of use via its web console.

How does the SafeKit mirror cluster work with Podman?

Step 1. Real-time replication

Server 1 (PRIM) runs the Podman application. Clients are connected to a virtual IP address. SafeKit replicates file modifications in real time through the network.

File replication at byte level in a mirror Podman cluster

The replication is synchronous, with no data loss on failure, unlike asynchronous replication.

You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization; directories may even be located on the system disk.
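For example, assuming a rootful Podman installation with default storage locations, the directories holding persistent data can be listed as follows ("myvolume" is a hypothetical volume name):

    # Where Podman stores images and container layers
    # (typically /var/lib/containers/storage for rootful Podman)
    podman info --format '{{.Store.GraphRoot}}'

    # Mount point of a named volume holding the application's persistent data
    podman volume inspect myvolume --format '{{.Mountpoint}}'

These are the kinds of paths you would declare as replicated directories in SafeKit.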

Step 2. Automatic failover

When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the Podman application automatically on Server 2.

The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2, locally modifying its files, which are no longer replicated to Server 1.

Failover of Podman in a mirror cluster

The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
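Fault detection is built into SafeKit; as a sketch of the kind of liveness test that could additionally be plugged in as a custom checker, the following exits non-zero when a hypothetical container named myapp is not running:

    #!/bin/sh
    # Exits 0 if the "myapp" container is running, non-zero otherwise.
    # "myapp" is an illustrative name; adapt it to your container.
    state=$(podman inspect --format '{{.State.Status}}' myapp 2>/dev/null)
    [ "$state" = "running" ]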

Step 3. Automatic failback

Failback involves restarting Server 1 after fixing the problem that caused it to fail.

SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.

Failback in a mirror Podman cluster

Failback takes place without disturbing the Podman application, which can continue running on Server 2.

Step 4. Back to normal

After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the Podman application running on Server 2 and SafeKit replicating file updates to Server 1.

Return to normal operation in a mirror Podman cluster

If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
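For example, assuming the mirror module is named podman, the swap could be triggered from the command line on one of the nodes (the module name is hypothetical; check the SafeKit documentation for the exact syntax):

    # Swap the primary and secondary roles of the "podman" mirror module,
    # making Server 1 the primary again.
    safekit swap -m podman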

Typical usage with SafeKit

Why a replication of a few terabytes?

Resynchronization time after a failure (step 3)

  • 1 Gb/s network ≈ 3 hours for 1 terabyte.
  • 10 Gb/s network ≈ 1 hour for 1 terabyte, or less depending on disk write performance.
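These orders of magnitude follow from simple arithmetic, assuming the network is the bottleneck:

    # 1 TB = 8 * 10^12 bits; raw transfer time in seconds:
    echo '8 * 10^12 / 10^9' | bc         # at 1 Gb/s  -> 8000 s, about 2.2 h
    echo '8 * 10^12 / (10 * 10^9)' | bc  # at 10 Gb/s -> 800 s, about 13 min

Protocol overhead and disk write performance explain the gap between these raw figures and the practical estimates above.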

Alternative

  • For a large volume of data, use an external shared storage solution.
  • More expensive, more complex.

Why a replication of less than 1,000,000 files?

  • Resynchronization time after a failure (step 3) increases with the number of files.
  • Time to check each file between both nodes.

Alternative

  • Put the many files to replicate in a virtual hard disk / virtual machine.
  • Only the files representing the virtual hard disk / virtual machine will be replicated and resynchronized in this case.
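A minimal sketch of this alternative on Linux, using a loopback-mounted disk image (size and paths illustrative; Hyper-V or KVM virtual machine disks follow the same principle):

    # Create a 100 GB sparse image that will contain the many small files;
    # SafeKit then replicates this single image file instead of each file.
    truncate -s 100G /replicated/data.img
    mkfs.ext4 -F /replicated/data.img
    mkdir -p /mnt/appdata
    mount -o loop /replicated/data.img /mnt/appdata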

Why a failover of at most 32 replicated VMs?

  • Each VM runs in an independent mirror module.
  • Maximum of 32 mirror modules running on the same cluster.

Alternative

  • Use an external shared storage and another VM clustering solution.
  • More expensive, more complex.

Why a LAN/VLAN network between remote sites?

Alternative

  • Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
  • Use backup solutions with asynchronous replication for high-latency networks.

SafeKit High Availability Differentiators