KVM cluster without shared storage on a SAN
[SafeKit] Synchronous real-time replication, high availability and migration of virtual machines between two servers
The solution for KVM
Evidian SafeKit brings high availability to KVM between two servers of any brand.
This article explains how to quickly implement a KVM cluster without shared storage on a SAN and without specific skills.
The principle of the solution is to put a critical application in a virtual machine under KVM. SafeKit implements real-time replication and automatic failover of the virtual machine.
Note that KVM is the free hypervisor included in the Linux kernel and available in all major Linux distributions.
A solution open to several applications
Several applications can be put in several virtual machines that SafeKit replicates and restarts. You can migrate each virtual machine between the two servers with the SafeKit console and thus balance the load in an active-active cluster.
Save costs with this solution
There is no need for a complex VMware-type solution with three servers and shared storage on a SAN or vSAN. With SafeKit, you get instead synchronous real-time replication and failover of several virtual machines between two servers.
And with the standard virt-manager GUI, you can manage your virtual machines very simply.
Note that with the SafeKit product you can implement real-time replication and failover of any file directory and service, databases, complete Hyper-V or KVM virtual machines, Docker, Podman, K3S, and Cloud applications (see the module list).
Partners, the success with SafeKit
This platform-agnostic solution is ideal for a partner reselling a critical application who wants to provide an easy-to-deploy redundancy and high availability option to many customers.
With many references in many countries won by partners, SafeKit has proven to be the easiest solution to implement for redundancy and high availability of building management, video management, access control, SCADA software...
How does the SafeKit mirror cluster work with KVM?
The following steps are described for one virtual machine inside one mirror module. Each replicated virtual machine runs in an independent mirror module (up to 32 virtual machines), with a primary server that can be either KVM server 1 or KVM server 2.
Step 1. Real-time replication
Server 1 (PRIM) runs one VM. SafeKit replicates in real time the VM files (virtual hard disk, VM configuration). Only changes made in the files are replicated across the network.
The replication is synchronous, with no data loss on failure, contrary to asynchronous replication.
You just have to configure the VM directory name in SafeKit. There are no prerequisites on disk organization; the directory may even be located on the system disk.
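For illustration, the corresponding declaration in the kvm.safe module's userconfig.xml (the full file is shown at the end of this page) is a single replicated directory, here with the macros expanded for a VM named vm1:
<!-- fragment of userconfig.xml, macros expanded for vm1 -->
<rfs>
   <replicated dir="/var/lib/libvirt/images/vm1" mode="read_only"> </replicated>
</rfs>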
Step 2. Automatic failover
When Server 1 fails, Server 2 takes over. SafeKit restarts the VM on Server 2. KVM finds the files replicated by SafeKit up to date on Server 2.
The VM continues to run on Server 2 by locally modifying its files that are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (set to 30 seconds by default) plus the VM reboot time. For example, with the default detection time and a VM that boots in about one minute, recovery takes roughly a minute and a half.
Step 3. Failback
When Server 1 restarts, SafeKit resynchronizes the replicated VM files from Server 2 (reintegration) while the VM keeps running on Server 2.
Step 4. Back to normal
After reintegration, the VM files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the VM running on Server 2 and SafeKit replicating updates to Server 1.
If the administrator wishes the VM to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
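For reference only, a hedged sketch of the swap from the command line (the module is assumed to be named kvm as in this guide; check the exact syntax in the SafeKit User's Guide):
# swap the primary and secondary roles of the kvm mirror module
/opt/safekit/safekit swap -m kvm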
Redundancy at the application level
In this type of solution, only the application data are replicated, and only the application is restarted in case of failure.
With this solution, restart scripts must be written to restart the application.
We deliver application modules to implement redundancy at the application level. They are preconfigured for well known applications and databases. You can customize them with your own services, data to replicate, application checkers. And you can combine application modules to build advanced multi-level architectures.
This solution is platform agnostic and works with applications inside physical machines, virtual machines, or in the cloud. Any hypervisor is supported (VMware, Hyper-V...).
Redundancy at the virtual machine level
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure.
The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
This solution works with Windows/Hyper-V and Linux/KVM but not with VMware. This is an active/active solution with several virtual machines replicated and restarted between two nodes.
- Solution for a new application (no restart script to write): Windows/Hyper-V, Linux/KVM
More comparison between VM HA vs Application HA
Why is replication limited to a few terabytes?
Resynchronization time after a failure (step 3):
- 1 Gb/s network ≈ 3 hours for 1 TB.
- 10 Gb/s network ≈ 1 hour for 1 TB, or less depending on disk write performance (a quick estimate is shown below).
Alternative
- For a large volume of data, use external shared storage.
- More expensive, more complex.
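The orders of magnitude above come from a simple back-of-the-envelope estimate, assuming roughly 100 MB/s of effective throughput on a 1 Gb/s link:
# 1 TB = 1,000,000 MB; at about 100 MB/s effective throughput:
# 1,000,000 MB / 100 MB/s = 10,000 s, i.e. close to 3 hours
echo "scale=1; (1000000 / 100) / 3600" | bc    # prints ~2.7 (hours)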
Why is replication limited to fewer than 1,000,000 files?
- Resynchronization performance after a failure (step 3).
- Time to check each file between both nodes.
Alternative
- Put the many files to replicate inside a virtual hard disk / virtual machine.
- Only the files representing the virtual hard disk / virtual machine are then replicated and resynchronized (see the sketch below).
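A minimal sketch of this alternative, assuming the vm1 guest of this guide and an illustrative disk path and size (adapt them to your environment):
# create a qcow2 virtual disk that will hold the many small files
qemu-img create -f qcow2 /var/lib/libvirt/images/vm1/data.qcow2 100G
# attach it to vm1 as a second disk, persistently
virsh attach-disk vm1 /var/lib/libvirt/images/vm1/data.qcow2 vdb --driver qemu --subdriver qcow2 --persistent
# inside the guest, format and mount /dev/vdb; SafeKit then only replicates the single data.qcow2 file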
Why is failover limited to 32 replicated VMs?
- Each VM runs in an independent mirror module.
- Maximum of 32 mirror modules running on the same cluster.
Alternative
- Use an external shared storage and another VM clustering solution.
- More expensive, more complex.
Why is a LAN/VLAN network required between remote sites?
- Automatic failover of the virtual IP address with 2 nodes in the same subnet.
- Good bandwidth for resynchronization (step 3) and good latency for synchronous replication (typically a round trip of less than 2 ms; a quick latency check is shown below).
Alternative
- Use a load balancer for the virtual IP address if the 2 nodes are in 2 subnets (supported by SafeKit, especially in the cloud).
- Use backup solutions with asynchronous replication for high-latency networks.
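A quick way to verify the latency assumption between the two nodes (host name node2 is an example):
# average round-trip time should stay well under 2 ms for comfortable synchronous replication
ping -c 10 node2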
Prerequisites
- You need KVM installed on 2 Linux nodes.
- You need your critical applications installed in one or more virtual machines.
Package installation on Linux
- Install the free version of SafeKit on 2 Linux nodes.
  Note: the free trial includes all SafeKit features. At the end of the trial, you can activate permanent license keys without uninstalling the package.
- After downloading the safekit_xx.bin package, execute it to extract the rpm and the safekitinstall script, then run the safekitinstall script (a condensed recap is shown after this list).
- Answer yes to the automatic firewall configuration.
- Set the password for the web console and the default user admin:
  - Use alphanumeric characters for the password (no special characters).
  - The password must be the same on both nodes.
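A condensed recap of the installation, assuming the downloaded package is named safekit_xx.bin as above (the actual file name depends on the version):
# run on both Linux nodes
chmod +x safekit_xx.bin
./safekit_xx.bin       # extracts the rpm and the safekitinstall script
./safekitinstall       # answer yes to the firewall configuration, set the admin password (same on both nodes)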
Module installation on Linux
- Download the kvm.safe module.
  The module is free. It contains the files userconfig.xml and the restart scripts.
- Put kvm.safe under /opt/safekit/Application_Modules/generic/ (the copy command is shown below).
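For example, from the directory where kvm.safe was downloaded:
# make the module available to the SafeKit console (path used in this guide)
cp kvm.safe /opt/safekit/Application_Modules/generic/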
The KVM configuration is presented with a virtual machine named VM1 containing the application to restart in case of failure.
You will have to repeat this configuration for every VM that you want to replicate and restart. SafeKit supports up to 25 virtual machines.
1. Prerequisites
The VM1 virtual machine image is in the file /var/lib/libvirt/images/vm1.qcow2. Before configuring SafeKit, you must perform the following configuration to place the virtual machine in a vm1-specific directory that will be replicated by SafeKit.
On node 1:
- Stop vm1:
  virsh shutdown vm1
- Create a vm1/ directory:
  mkdir -p /var/lib/libvirt/images/vm1/
- Copy the vm1 image to the new location:
  cp -a /var/lib/libvirt/images/vm1.qcow2 /var/lib/libvirt/images/vm1/
  The original vm1 image can be deleted as soon as tests with the new location are successful.
- Edit the vm1 configuration file:
  EDITOR=vi virsh edit vm1
  and change the line:
  <source file='/var/lib/libvirt/images/vm1.qcow2'>
  to:
  <source file='/var/lib/libvirt/images/vm1/vm1.qcow2'>
- In the same file, set the cache option to 'none' for data integrity in case of a crash:
  <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='none'/>
- Close the vm1 configuration file.
- Disable vm1 automatic start:
  virsh autostart vm1 --disable
- Create a vm1.xml configuration file for vm1:
  virsh dumpxml vm1 > vm1.xml
On node 2:
- Copy the vm1.xml configuration file from node 1.
  Note: whenever the vm1 configuration is changed on node 1, you must reapply the new configuration on node 2.
- Create vm1 but do not start it:
  virsh define vm1.xml
- Disable vm1 automatic start:
  virsh autostart vm1 --disable
- Create the directory for the image location:
  mkdir -p /var/lib/libvirt/images/vm1/
A condensed recap of the commands on both nodes is shown below.
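The recap below assumes vm1 and the image path used in this guide:
# on node 1
virsh shutdown vm1
mkdir -p /var/lib/libvirt/images/vm1/
cp -a /var/lib/libvirt/images/vm1.qcow2 /var/lib/libvirt/images/vm1/
EDITOR=vi virsh edit vm1            # change <source file=...> to the new path and set cache='none'
virsh autostart vm1 --disable
virsh dumpxml vm1 > vm1.xml         # copy this file to node 2
# on node 2, after copying vm1.xml from node 1
virsh define vm1.xml
virsh autostart vm1 --disable
mkdir -p /var/lib/libvirt/images/vm1/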
2. Launch the SafeKit console
- Launch the web console in a browser on one cluster node by connecting to http://localhost:9010.
- Enter admin as user name and the password defined during installation.
You can also run the console in a browser on a workstation external to the cluster.
The configuration of SafeKit is done on both nodes from a single browser.
To secure the web console, see 11. Securing the SafeKit web service in the User's Guide.
3. Configure node addresses
- Enter the node IP addresses, press the Tab key to check connectivity and fill in the node names.
- Then, click on Save and apply to save the configuration.
If either node1 or node2 is displayed in red, check the connectivity of the browser to both nodes and check the firewall on both nodes for troubleshooting.
If you want, you can add a new LAN for a second heartbeat and for a dedicated replication network.
This operation will place the IP addresses in the cluster.xml file on both nodes (more information in the training with the command line).
5. Configure the module
- Choose an Automatic start of the module at boot without delay.
- Normally, you have a single Heartbeat network on which the replication is made. But you can define a private network if necessary (by adding a LAN at step 3).
- Put in VM_PATH the root path of the replicated directory (/var/lib/libvirt/images).
- Enter in VM_NAME the name of the virtual machine (vm1).
We assume that the VM1 files are in /var/lib/libvirt/images/vm1/ (see prerequisites). This directory will be replicated in real time by SafeKit.
You do not need to configure a virtual IP address. VM1 will be rebooted on the secondary KVM server with its physical IP address, and this IP address will be rerouted.
6. Edit scripts (optional)
- Do not edit scripts.
10. Start the node with up-to-date data
- If node 1 has the up-to-date replicated directory for vm1/, select it and start it As primary.
When node 2 is started, all data will be copied from node 1 to node 2.
If you make the wrong choice, you run the risk of synchronizing outdated data on both nodes.
It is also assumed that VM1 is stopped on node 1, so that SafeKit installs the replication mechanisms and then starts VM1 in the start_prim script.
Use Start for subsequent starts: SafeKit retains the most up-to-date server. Starting As primary is a special start-up for the first time or during exceptional operations (the equivalent command lines are sketched below).
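For reference, a hedged sketch of the equivalent SafeKit command lines (the module is assumed to be named kvm; check the exact syntax in the User's Guide):
# first start only: force node 1 to start as primary with its up-to-date data
/opt/safekit/safekit prim -m kvm
# subsequent starts: let SafeKit pick the most up-to-date node
/opt/safekit/safekit start -m kvm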
11. Wait for the transition to ALONE (green)
- Node 1 should reach the ALONE (green) state, which means that the start_prim script has been executed on node 1.
If ALONE (green) is not reached or if VM1 is not started, analyze why with the module log of node 1.
- Click the "log" icon of node1 to open the module log and look for error messages, such as a checker detecting an error and stopping the module.
- Click on start_prim in the log: output messages of the script are displayed on the right and errors, such as VM1 incorrectly started, can be detected.
If the cluster is in the WAIT (red) not uptodate, STOP (red) not uptodate state, stop the WAIT node and force its start as primary.
12. Start node 2
- Start node 2 with its contextual menu.
- Wait for the SECOND (green) state.
Node 2 stays in the SECOND (orange) state while resynchronizing the replicated directories (copy from node 1 to node 2).
This may take a while depending on the size of files to resynchronize in replicated directories and the network bandwidth.
To see the progress of the copy, check the module log and the replication resources of node 2 (command-line monitoring is also sketched below).
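From a shell on either node, a hedged sketch of the command-line monitoring (module name kvm assumed):
# display the module state (PRIM, SECOND, ALONE, WAIT...) outside the web console
/opt/safekit/safekit state -m kvm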
14. Testing
- Stop the PRIM node by scrolling down its contextual menu and clicking Stop.
- Verify that there is a failover on the SECOND node, which should become ALONE (green).
- Check with the KVM tools that VM1 is running on node 2 (see the command below).
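The check on node 2 can use the same command as the module scripts:
# vm1 should appear in the "running" state on node 2 after the failover
virsh list --all | grep vm1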
If ALONE (green) is not reached on node2 or if VM1 is not started, analyze why with the module log of node 2.
- Click the "log" icon of node2 to open the module log and look for error messages, such as a checker detecting an error and stopping the module.
- Click on start_prim in the log: output messages of the script are displayed on the right and errors, such as VM1 incorrectly started, can be detected.
If everything is okay, initiate a start on node1, which will resynchronize the replicated directories from node2.
If things go wrong, stop node2 and force the start as primary of node1, which will restart with the healthy local data it had at the time of the stop.
15. Replicating snapshots
The directory that contains the snapshot xml files is /var/lib/libvirt/qemu/snapshot/%VM_NAME%, where VM_NAME is the name of the virtual machine (vm1).
Note: if no snapshot has been created, create one to generate the directory (otherwise the SafeKit configuration will fail).
To replicate it:
- In the module configuration, click on Advanced Configuration (see image) to edit userconfig.xml.
- Insert the lines below into the <rfs> section of userconfig.xml:
  <replicated dir="/var/lib/libvirt/qemu/snapshot/%VM_NAME%" mode="read_only"> </replicated>
- Save and apply the new configuration to redeploy the modified userconfig.xml file on both nodes (the module must be stopped on both nodes to save and apply).
- Create snapshots on the PRIM node either through virt-manager or with a command line:
  virsh snapshot-create-as vm1 snapshot-name
Note: when creating a snapshot with a command line, you have to refresh the snapshot view in virt-manager.
Snapshots created on the PRIM node are operational on node 2 after failover, but they are not listed on node 2.
- To import a snapshot on node 2, run the command:
  virsh snapshot-create --redefine vm1 /var/lib/libvirt/qemu/snapshot/vm1/snapshot-name
- The command line for listing all snapshots of vm1 is:
  virsh snapshot-list vm1
16. Support
- To get support, take 2 SafeKit Snapshots (2 .zip files), one for each node.
- If you have an account on https://support.evidian.com, upload them in the call desk tool.
17. If necessary, configure a splitbrain checker
- See below "What are the different scenarios in case of network isolation in a cluster?" to know if you need to configure a splitbrain checker.
- Go to the module configuration and click on Checkers / Splitbrain (see image) to edit the splitbrain parameters.
- Save and apply the new configuration to redeploy it on both nodes (the module must be stopped on both nodes to save and apply).
Parameters:
- Resource name identifies the witness with a resource name: splitbrain.witness. You can change this value to identify the witness.
- Witness address is the argument of a ping executed when a node goes from PRIM to ALONE or from SECOND to ALONE. Change this value to the IP of the witness (a robust element, typically a router).
- Note: you can set several IPs separated by white spaces. Pay attention that the IP addresses must be accessible from one node but not from the other in the event of network isolation.
An illustrative userconfig.xml fragment is sketched below.
A single network
When there is a network isolation, the default behavior is:
- as heartbeats are lost for each node, each node goes to ALONE and runs the application with its virtual IP address (double execution of the application, each instance modifying its local data),
- when the isolation is repaired, one ALONE node is forced to stop and to resynchronize its data from the other node,
- in the end, the cluster is PRIM-SECOND (or SECOND-PRIM, according to the duplicate virtual IP address detection made by the operating system).
Two networks with a dedicated replication network
When there is a network isolation, the behavior with a dedicated replication network is:
- a dedicated replication network is implemented on a private network,
- heartbeats on the production network are lost (isolated network),
- heartbeats on the replication network are working (not isolated network),
- the cluster stays in PRIM/SECOND state.
A single network and a splitbrain checker
When there is a network isolation, the behavior with a split-brain checker is:
- a split-brain checker has been configured with the IP address of a witness (typically a router),
- the split-brain checker operates when a server goes from PRIM to ALONE or from SECOND to ALONE,
- in case of network isolation, before going to ALONE, both nodes test the IP address,
- the node which can access the IP address goes to ALONE, the other one goes to WAIT,
- when the isolation is repaired, the WAIT node resynchronizes its data and becomes SECOND.
Note: if the witness is down or disconnected, both nodes go to WAIT and the application is no longer running. That is why you must choose a robust witness such as a router.
Internals of a SafeKit / KVM high availability cluster with synchronous replication and failover
Go to the Advanced Configuration tab in the console to edit these files.
Internal files of the Linux kvm.safe module
userconfig.xml (description in the User's Guide)
<!-- Mirror Architecture with Real Time File Replication and Failover for KVM -->
<!DOCTYPE safe>
<safe>
<!-- Set value to the path of the virtual machines repository -->
<macro name="VM_PATH" value="/var/lib/libvirt/images" />
<!-- Set value to the name of the virtual machine -->
<macro name="VM_NAME" value="vm1" />
<service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
<!-- Heartbeat Configuration -->
<heart>
<heartbeat name="">
</heartbeat>
</heart>
<!-- File Mirroring Configuration -->
<rfs mountover="off" async="second" locktimeout="200" nbrei="3">
<replicated dir="%VM_PATH%/%VM_NAME%" mode="read_only">
</replicated>
<!-- Uncomment for replicating the directory that contains snapshot xml files of the virtual machine
<replicated dir="/var/lib/libvirt/qemu/snapshot/%VM_NAME%" mode="read_only">
</replicated>
-->
</rfs>
<!-- User scripts Configuration -->
<user>
<var name="VM_PATH" value="%VM_PATH%/%VM_NAME%" />
<var name="VM_NAME" value="%VM_NAME%" />
</user>
</service>
</safe>
start_prim
#!/bin/sh
# Script called on the primary server for starting application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
# stdout goes into Application log
echo "Running start_prim $*"
res=0
# Start VM_NAME
virsh start $VM_NAME
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
if ([ "x$state" == "x" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME not found"
else
let i=1
while ( [ $i -le 5 ] && [ "x$state" != "xrunning" ]); do
sleep 5
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
let i=i+1
done
if ([ "x$state" != "xrunning" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME start failed"
fi
fi
if [ $res -ne 0 ] ; then
$SAFE/safekit printe "start_prim failed"
# the module is stopped when the VM start fails (comment out the next line to disable this behavior)
$SAFE/safekit stop -i "start_prim"
fi
stop_prim
#!/bin/sh
# Script called on the primary server for stopping application
# For logging into SafeKit log use:
# $SAFE/safekit printi | printe "message"
#----------------------------------------------------------
#
# 2 stop modes:
#
# - graceful stop
# call standard application stop
#
# - force stop ($1=force)
# kill application's processes
#
#----------------------------------------------------------
# stdout goes into Application log
echo "Running stop_prim $*"
# Stop VM_NAME
virsh shutdown $VM_NAME
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
if ([ "x$state" == "x" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME not found"
else
let i=1
while ( [ $i -le 5 ] && [ "x$state" == "xrunning" ]); do
# Stop VM_NAME
virsh shutdown $VM_NAME
sleep 5
state=$(virsh list --all | grep " $VM_NAME " | awk '{ print $3}')
let i=i+1
done
if ([ "x$state" == "xrunning" ]) ; then
res=1
$SAFE/safekit printe "$VM_NAME stop failed"
fi
fi
# default: no action on forcestop
[ "$1" = "force" ] && exit 0
[ $res -ne 0 ] && $SAFE/safekit printe "stop_prim failed"
Network load balancing and failover modules (Windows farm / Linux farm): Generic Windows farm, Generic Linux farm, Microsoft IIS, NGINX, Apache, Amazon AWS farm, Microsoft Azure farm, Google GCP farm, Other cloud.
Advanced clustering architectures
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
- the farm+mirror cluster built by deploying a farm module and a mirror module on the same cluster,
- the active/active cluster with replication built by deploying several mirror modules on 2 servers,
- the Hyper-V cluster or KVM cluster with real-time replication and failover of full virtual machines between 2 active hypervisors,
- the N-1 cluster built by deploying N mirror modules on N+1 servers.
SafeKit with the Hyper-V module or the KVM module | Microsoft Hyper-V Cluster & VMware HA
No shared disk: synchronous real-time replication instead, with no data loss | Shared disk and a specific external disk bay
Remote sites: no SAN required for replication | Remote sites: disk bays replicated across a SAN
No specific IT skills to configure the system | Specific IT skills to configure the system
Note that the Hyper-V/SafeKit and KVM/SafeKit solutions are limited to replication and failover of 25 VMs.
VM HA with the SafeKit Hyper-V or KVM module | Application HA with SafeKit application modules
SafeKit inside 2 hypervisors; replication and failover of the full VM | SafeKit inside 2 virtual or physical machines; replication and failover at the application level
Replicates more data (App + OS) | Replicates only application data
Reboot of the VM on hypervisor 2 if hypervisor 1 crashes; recovery time depends on the OS reboot | Quick recovery with restart of the application on OS 2 if server 1 crashes; around 1 minute or less (see RTO/RPO here); application checker and software failover
Generic solution for any application / OS | Restart scripts to be written in application modules
More information on clustering concepts:
- Software clustering vs hardware clustering (More info >)
- Shared nothing vs a shared disk cluster (More info >)
- Application High Availability vs Full Virtual Machine High Availability (More info >)
- High availability vs fault tolerance (More info >)
- Synchronous replication vs asynchronous replication (More info >)
- Byte-level file replication vs block-level disk replication (More info >)
- Heartbeat, failover and quorum to avoid 2 master nodes (More info >)
- Virtual IP address primary/secondary, network load balancing, failover (More info >)
Evidian SafeKit 8.2
All the new features compared to SafeKit 7.5 are described in the release notes
Packages
- Windows (with Microsoft Visual C++ Redistributable)
- Windows (without Microsoft Visual C++ Redistributable)
- Linux
- Supported OS and latest fixes
One-month license key
Technical documentation
Training
Product information
New application (empty restart scripts)
- Quick installation guide for a generic Windows mirror HA solution
- Quick installation guide for a generic Linux mirror HA solution
- Quick installation guide for a generic Windows farm HA solution
- Quick installation guide for a generic Linux farm HA solution
Web (network load balancing and failover)
Database (real-time replication and failover)
- Quick installation guide for a Microsoft SQL Server HA solution
- Quick installation guide for an Oracle HA solution
- Quick installation guide for a MariaDB HA solution
- Quick installation guide for a MySQL HA solution
- Quick installation guide for a PostgreSQL HA solution
- Quick installation guide for a Firebird HA solution
Full VM or container real-time replication and failover
- Quick installation guide for a Windows Hyper-V HA solution
- Quick installation guide for a Linux KVM HA solution
- Quick installation guide for a Docker HA solution
- Quick installation guide for a Podman HA solution
- Quick installation guide for a Kubernetes K3S HA solution
- Quick installation guide for an Elasticsearch HA solution
Physical security (real-time replication and failover)
- Quick installation guide for a Milestone XProtect HA solution
- Quick installation guide for a Genetec SQL Server HA solution
- Quick installation guide for a Nedap AEOS HA solution
- Quick installation guide for a Bosch AMS HA solution
- Quick installation guide for a Bosch BIS HA solution
- Quick installation guide for a Bosch BVMS HA solution
- Quick installation guide for a Hanwha Vision HA solution
- Quick installation guide for a Hanwha Wisenet HA solution
Siemens (real-time replication and failover)
- Quick installation guide for a Siemens Siveillance suite HA solution
- Quick installation guide for a Siemens Desigo CC HA solution
- Quick installation guide for a Siemens SiPass HA solution
- Quick installation guide for a Siemens SIPORT HA solution
- Quick installation guide for a Siemens Siveillance VMS HA solution
- Quick installation guide for a Siemens SIMATIC WinCC HA solution
- Quick installation guide for a Siemens SIMATIC PCS 7 HA solution
Cloud (mirror or farm)
- Quick installation guide for a Microsoft Azure mirror HA solution
- Quick installation guide for a Google GCP mirror HA solution
- Quick installation guide for an Amazon AWS mirror HA solution
- Quick installation guide for Other cloud mirror HA solution
- Quick installation guide for a Microsoft Azure farm HA solution
- Quick installation guide for a Google GCP farm HA solution
- Quick installation guide for an Amazon AWS farm HA solution
- Quick installation guide for Other cloud farm HA solution
Introduction
- Demonstration
- Examples of redundancy and high availability solution
- Evidian SafeKit sold in many different countries with Milestone
- 2 solutions: virtual machine or application cluster
- Distinctive advantages
- More information on the web site
- Cluster of virtual machines
- Mirror cluster
- Farm cluster
Installation, Console, CLI
- Install and setup / pptx
- Package installation
- Nodes setup
- Upgrade
- Web console / pptx
- Configuration of the cluster
- Configuration of a new module
- Advanced usage
- Securing the web console
- Command line / pptx
- Configure the SafeKit cluster
- Configure a SafeKit module
- Control and monitor
Advanced configuration
- Mirror module / pptx
- start_prim / stop_prim scripts
- userconfig.xml
- Heartbeat (<heartbeat>)
- Virtual IP address (<vip>)
- Real-time file replication (<rfs>)
- How does real-time file replication work?
- Mirror's states in action
- Farm module / pptx
- start_both / stop_both scripts
- userconfig.xml
- Farm heartbeats (<farm>)
- Virtual IP address (<vip>)
- Farm's states in action
Troubleshooting
- Troubleshooting / pptx
- Analyze yourself the logs
- Take snapshots for support
- Boot / shutdown
- Web console / Command lines
- Mirror / Farm / Checkers
- Running an application without SafeKit
Support
- Evidian support / pptx
- Get permanent license key
- Register on support.evidian.com
- Call desk