Hanwha Wisenet SSM: the simplest high availability cluster between two redundant servers
Evidian SafeKit brings high availability to Hanwha Wisenet SSM, the CCTV video surveillance system, between two redundant servers.
This article explains how to quickly implement a Hanwha cluster with real-time replication and automatic failover of Wisenet SSM services and data.
A free trial and the Hanwha high availability module are offered in this article.
If you need to replicate other applications in addition to Hanwha Wisenet SSM, there is another SafeKit solution, based on Hyper-V, which is very easy to deploy.
The SafeKit/Hyper-V solution replicates full Hyper-V virtual machines, which can contain Hanwha Wisenet SSM and other applications, and SafeKit automatically restarts the VMs in case of failure.
Several virtual machines can be replicated and can run on both Windows hypervisors, with crossed replication and mutual takeover.
With this solution, there is no need to configure restart scripts or define a virtual IP address for each application.
Note that SafeKit is a generic product on Windows and Linux.
With the same product, you can implement real-time replication and failover of any file directory and service, databases, complete Hyper-V or KVM virtual machines, Docker, Kubernetes and Cloud applications.
This platform-agnostic solution is ideal for a partner with a critical application who wants to provide an easy-to-deploy high availability option to many customers.
This clustering solution is also recognized as the simplest to implement by our partners.
Server 1 (PRIM) runs the Hanwha Wisenet SSM application. Clients are connected to a virtual IP address. SafeKit replicates in real time modifications made inside files through the network.
The replication is synchronous, with no data loss on failure, contrary to asynchronous replication.
You just have to configure the names of the directories to replicate in SafeKit. There are no prerequisites on disk organization. Directories may be located on the system disk.
When Server 1 fails, Server 2 takes over. SafeKit switches the virtual IP address and restarts the Hanwha Wisenet SSM application automatically on Server 2.
The application finds the files replicated by SafeKit up to date on Server 2. The application continues to run on Server 2 by locally modifying its files, which are no longer replicated to Server 1.
The failover time is equal to the fault-detection time (30 seconds by default) plus the application start-up time.
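For example, assuming a 60-second Wisenet SSM start-up time (a hypothetical value; measure it on your own installation), the recovery time can be estimated as follows:

```shell
# Estimate the failover time (illustrative values)
detection=30   # SafeKit default fault-detection time, in seconds
app_start=60   # assumed Wisenet SSM start-up time, in seconds
echo "$((detection + app_start)) seconds"   # prints "90 seconds"
```

The detection time is tunable in the module configuration, so the dominant factor is usually the application start-up time.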
Failback involves restarting Server 1 after fixing the problem that caused it to fail.
SafeKit automatically resynchronizes the files, updating only the files modified on Server 2 while Server 1 was halted.
Failback takes place without disturbing the Hanwha Wisenet SSM application, which can continue running on Server 2.
After reintegration, the files are once again in mirror mode, as in step 1. The system is back in high-availability mode, with the Hanwha Wisenet SSM application running on Server 2 and SafeKit replicating file updates to Server 1.
If the administrator wishes the application to run on Server 1, he/she can execute a "swap" command either manually at an appropriate time, or automatically through configuration.
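As a sketch, the swap can also be launched from the SafeKit command line on one of the nodes; hanwha is an assumed module name here, and you should check the exact syntax in the SafeKit User's Guide for your version:

```shell
# Assumed module name "hanwha": swap the primary and secondary roles,
# stopping the application on the current primary and restarting it
# on the other node.
safekit swap -m hanwha
```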
In this type of solution, only the application data are replicated, and only the application is restarted in case of failure. Restart scripts must be written to restart the application. This solution is platform agnostic and works with physical machines, virtual machines, and in the Cloud.
We deliver application modules to implement this type of solution (like the Hanwha Wisenet SSM module provided in the free trial below). They are preconfigured for well known applications and databases. You can customize them with your own services, data to replicate, application checkers. And you can combine application modules to build advanced multi-level architectures.
In this type of solution, the full Virtual Machine (VM) is replicated (Application + OS). And the full VM is restarted in case of failure. The advantage is that there are no restart scripts to write per application and no virtual IP address to define. If you do not know how the application works, this is the best solution.
We deliver two modules for implementing this solution: one for Hyper-V on Windows and one for KVM on Linux. Several VMs can be replicated and can run on both hypervisors with crossed replication and mutual takeover.
Resynchronization time after a failure (step 3)
SafeKit controls the start of the Wisenet SSM services in the restart scripts. During the configuration, edit the restart scripts and check that all the services they start, including any new ones you add, are set to Manual boot.
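For example, on Windows, the startup type of a service can be inspected and set to Manual with the built-in sc tool (the service names below are those used by the module scripts; adapt them to your installation):

```cmd
rem Display the current configuration (including START_TYPE) of a service
sc qc "SSM System Manager"

rem Set the services to Manual start so that SafeKit controls them
rem (note: the space after "start=" is required by sc)
sc config "SSM System Manager" start= demand
sc config "SSM Watch Services Manager" start= demand
sc config "postgresql-9.1 - PostgreSQL Server 9.1" start= demand
```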
Launch the web console in a browser on one node by connecting to http://localhost:9010.
You can also run the console in a browser on an external workstation.
The configuration of SafeKit is done on both nodes from a single browser.
To secure the web console, see 11. Securing the SafeKit web console.
Enter the node IP addresses.
Then, click on the red floppy disk to save the configuration.
If the node1 or node2 background color is red, check the connectivity of the browser to both nodes and check the firewalls on both nodes for troubleshooting.
This operation will place the IP addresses in the cluster.xml file on both nodes (more information in the training with the command line).
start_prim and stop_prim must contain the starting and stopping of the Wisenet SSM application, identically on both nodes. Set Boot Startup Type = Manual for all the services registered in start_prim (SafeKit controls the start of these services in start_prim). Note that if a process name is displayed in Process Checker, it will be monitored with a restart action in case of failure. Configuring a wrong process name will cause the module to stop right after its start.
This operation will report the configuration in the userconfig.xml, start_prim and stop_prim files on both nodes (more information in the training with the command line).
Check the success message (green) on both nodes and click Next.
On Linux you may get an error at this step if the replicated directories are mount points. See this article to solve the problem.
If node 1 has up-to-date replicated directories, select it and start it.
When node 2 is started, all data from node 1 will be copied to node 2.
If you make the wrong choice, you run the risk of synchronizing outdated data on both nodes.
It is also assumed that the services modifying the replicated directories are stopped, in order to be able to install the replication mechanisms and then start the Wisenet SSM application in start_prim.
Node 1 should reach the ALONE (green) state, which means that the start_prim script has been executed on node 1.
If the status is ALONE (green) and the Wisenet SSM application is not started, check the output messages of the start_prim script in the Application Log on node 1.
If node 1 does not reach ALONE (green) state, analyze why with the Module Log of node 1.
If the cluster is in [WAIT (red) not uptodate - STOP (red) not uptodate] state, stop the WAIT node and force its start as primary.
Start node 2 with its contextual menu.
Node 2 should reach the SECOND (green) state after resynchronizing all replicated directories (binary copy from node 1 to node 2).
This may take a while depending on the size of replicated directories and the network bandwidth.
If node 2 does not reach SECOND (green) state, analyze why with the Module Log of node 2.
The cluster is up with Wisenet SSM services running on the PRIM node and not running on the SECOND node. Only changes inside files are replicated in real time in this state.
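At this stage, you can also display the module state from the command line on each node; hanwha is an assumed module name here (check the exact syntax in the SafeKit User's Guide for your version):

```shell
# Assumed module name "hanwha": print the state of the module
# (PRIM, SECOND, ALONE, WAIT...) on the local node.
safekit state -m hanwha
```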
Warning: components that are clients of the Wisenet SSM services must be configured with the virtual IP address. The configuration can be done with a DNS name (if a DNS name has been created and associated with the virtual IP address).
Stop the PRIM node by scrolling down its contextual menu and clicking Stop. Verify that there is a failover on the SECOND node which should become ALONE (green).
And with the Microsoft Management Console (MMC) on Windows, or with command lines on Linux, check the failover of the Wisenet SSM services (stopped on node 1 in the stop_prim script and started on node 2 in the start_prim script).
If the Wisenet SSM application is not started on node 2 while the state is ALONE (green), check the output messages of the start_prim script in the Application Log of node 2.
If ALONE (green) is not reached, analyze why with the Module Log of node 2.
Read the module log to understand the reasons for a failover, a waiting state, etc.
To see the module log of node 1 (image):
Click on node2 to see the module log of the secondary server.
Read the application log to see the output messages of the start_prim and stop_prim restart scripts.
To see the application log of node1 (image):
Click on node 2 to see the application log of the secondary server.
In the Advanced Configuration tab, you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml.
If you make changes in the internal files here, you must apply the new configuration by a right click on the icon/xxx on the left side (see image): the interface will allow you to redeploy the modified files on both servers.
Training and documentation here (with configuration by command lines).
For getting support, take 2 SafeKit snapshots (2 .zip files), one for each server.
If you have an account on https://support.evidian.com, upload them in the call desk tool.
<!DOCTYPE safe>
<safe>
<service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
<!-- Heartbeat Configuration -->
<!-- Names or IP addresses on the default network are set during initialization in the console -->
<heart pulse="700" timeout="30000">
<heartbeat name="default" ident="flow">
</heartbeat>
</heart>
<!-- Virtual IP Configuration -->
<!-- Replace
* VIRTUAL_TO_BE_DEFINED by the name of your virtual server
-->
<vip>
<interface_list>
<interface check="on" arpreroute="on">
<real_interface>
<virtual_addr addr="VIRTUAL_TO_BE_DEFINED" where="one_side_alias" />
</real_interface>
</interface>
</interface_list>
</vip>
<!-- Software Error Detection Configuration -->
<errd polltimer="10">
<!-- Samsung SSM process -->
<proc name="pg_ctl.exe" atleast="1" action="restart" class="prim" />
<proc name="ServiceManager.exe" atleast="1" action="restart" class="prim" />
<!--
<proc name="HAServerService.exe" atleast="1" action="restart" class="prim"/>
-->
</errd>
<!-- File Replication Configuration -->
<!-- Adapt with the directory of your PostgreSQL database and logs
-->
<rfs async="second" acl="off" nbrei="3">
<replicated dir="C:\PostgreSQL\9.1\data" mode="read_only">
<notreplicated path="pg_log"/>
<notreplicated path="postmaster.pid"/>
</replicated>
<replicated dir="C:\Program Files (x86)\Samsung\SSM\SystemManager\MapFile" mode="read_only"/>
</rfs>
<!-- User scripts activation -->
<user nicestoptimeout="300" forcestoptimeout="300" logging="userlog" />
</service>
</safe>
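If your installation stores data outside the directories above (recording folders, exports, etc.), they can be replicated by adding entries in the same <rfs> section of userconfig.xml. The path below is hypothetical; replace it with a real directory of your installation:

```xml
<!-- Hypothetical additional replicated directory: adapt the path -->
<replicated dir="C:\Wisenet\ExtraData" mode="read_only"/>
```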
start_prim.cmd
@echo off
rem Script called on the primary server for starting application services
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem stdout goes into Application log
echo "Running start_prim %*"
set res=0
rem TODO: set to manual the start of services when the system boots
net start "postgresql-9.1 - PostgreSQL Server 9.1"
if not %errorlevel% == 0 (
%SAFE%\safekit printi "PostgreSQL (postgresql-9.1 - PostgreSQL Server 9.1) start failed"
goto stop
) else (
%SAFE%\safekit printi "PostgreSQL (postgresql-9.1 - PostgreSQL Server 9.1) started"
)
net start "SSM System Manager"
if not %errorlevel% == 0 (
%SAFE%\safekit printi "SSM System Manager start failed"
goto stop
) else (
%SAFE%\safekit printi "SSM System Manager started"
)
net start "SSM Watch Services Manager"
if not %errorlevel% == 0 (
%SAFE%\safekit printi "SSM Watch Services Manager start failed"
goto stop
) else (
%SAFE%\safekit printi "SSM Watch Services Manager started"
)
rem net start "HA Server Service"
rem if not %errorlevel% == 0 (
rem %SAFE%\safekit printi "HA Server Service start failed"
rem goto stop
rem ) else (
rem %SAFE%\safekit printi "HA Server Service started"
rem )
if %res% == 0 goto end
:stop
set res=%errorlevel%
"%SAFE%\safekit" printe "start_prim failed"
rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_prim"
:end
stop_prim.cmd
@echo off
rem Script called on the primary server for stopping application services
rem For logging into SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"
rem ----------------------------------------------------------
rem
rem 2 stop modes:
rem
rem - graceful stop
rem call standard application stop with net stop
rem
rem - force stop (%1=force)
rem kill application's processes
rem
rem ----------------------------------------------------------
rem stdout goes into Application log
echo "Running stop_prim %*"
set res=0
rem action on force stop
if "%1" == "force" (
%SAFE%\safekit printi "Force stop: kill processes of Samsung SSM application"
%SAFE%\safekit kill -name="watchservices.exe" -level="terminate"
%SAFE%\safekit kill -name="java.exe" -argregex=".*systemmanager.*" -level="terminate"
%SAFE%\safekit kill -name="pg_ctl.exe" -level="terminate"
%SAFE%\safekit kill -name="postgres.exe" -level="terminate"
%SAFE%\safekit kill -name="HAServerService.exe" -level="terminate"
%SAFE%\safekit kill -name="systemanager.exe" -level="terminate"
goto end
)
rem %SAFE%\safekit printi "Stopping HA Server Service"
rem net stop "HA Server Service"
%SAFE%\safekit printi "Stopping SSM Watch Services Manager"
net stop "SSM Watch Services Manager"
%SAFE%\safekit printi "Stopping SSM System Manager"
net stop "SSM System Manager"
%SAFE%\safekit printi "Stopping PostgreSQL (postgresql-9.1 - PostgreSQL Server 9.1)"
net stop "postgresql-9.1 - PostgreSQL Server 9.1"
del C:\PostgreSQL\9.1\data\postmaster.pid
rem Wait a little for the real stop of services
"%SAFEBIN%\sleep" 10
if %res% == 0 goto end
"%SAFE%\safekit" printe "stop_prim failed"
:end
Network load balancing and failover modules:
Windows farm | Linux farm
Generic farm | Generic farm
Microsoft IIS | -
NGINX | NGINX
Apache | Apache
Amazon AWS farm | Amazon AWS farm
Microsoft Azure farm | Microsoft Azure farm
Google GCP farm | Google GCP farm
Other cloud | Other cloud
Several modules can be deployed on the same cluster. Thus, advanced clustering architectures can be implemented:
Examples:
- A software publisher uses SafeKit as an OEM software for high availability of its application.
- A distributed enterprise deploys SafeKit in many branches without specific IT skills.
- SafeKit is deployed in two remote sites without the need for disk bays replicated through a SAN.

“SafeKit is the ideal application clustering solution for a software publisher. We currently have deployed more than 80 SafeKit clusters worldwide with our critical TV broadcasting application.”

“Noemis, a value added distributor of Milestone Video Surveillance, has assisted integrators to deploy the SafeKit redundancy solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides a great support.”

“Thanks to a simple and powerful product, we gained time in the integration and validation of our critical projects like the supervision of Paris metro lines (the control rooms).”
In video surveillance and access control systems, Evidian SafeKit implements high availability with synchronous replication and failover.
Sebastien Temoin, Technical and Innovation Director, NOEMIS, value added distributor of Milestone solutions:
"SafeKit by Evidian is a professional solution making easy the redundancy of Milestone Management Server, Event Server, Log Server. The solution is easy to deploy, easy to maintain and can be added on existing installation. We have assisted integrators to deploy the solution on many projects such as city surveillance, datacenters, stadiums and other critical infrastructures. SafeKit is a great product, and Evidian provides great support. Happy to help if you have any questions."
Use cases:
Harmonic is using SafeKit as a software OEM high availability solution and deploys it with its TV broadcasting solutions over satellite, terrestrial, cable and IPTV.
Over 80 SafeKit clusters are deployed on Windows for replication of Harmonic database and automatic failover of the critical application.
Philippe Vidal, Product Manager, Harmonic says:
“SafeKit is the ideal application clustering solution for a software publisher looking for a simple and economical high availability software. We are deploying SafeKit worldwide and we currently have more than 80 SafeKit clusters on Windows with our critical TV broadcasting application through terrestrial, satellite, cable and IP-TV. SafeKit implements the continuous and real-time replication of our database as well as the automatic failover of our application for software and hardware failures. Without modifying our application, it was possible for us to customize the installation of SafeKit. Since then, the time of preparation and implementation has been significantly reduced.”
The European Society of Warranties and Guarantees in Natixis uses SafeKit as a high availability solution for its applications.
Over 30 SafeKit clusters are deployed on Unix and Windows in Natixis.
Fives Syleps implements high availability of its ERP with SafeKit and deploys the solution in the food industry.
Over 20 SafeKit clusters are deployed on Linux and Windows with Oracle.
Testimonial of Fives Syleps:
"The automated factories that we equip rely on our ERP. It is not possible that our ERP is out of service due to a computer failure. Otherwise, the whole activity of the factory stops.
We chose the Evidian SafeKit high availability product because it is an easy to use solution. It is implemented on standard servers and does not require the use of shared disks on a SAN and load balancing network boxes.
It allows servers to be put in remote computer rooms. In addition, the solution is homogeneous for Linux and Windows platforms. And it provides 3 functionalities: load balancing between servers, automatic failover and real-time data replication.”
Air traffic control systems supplier, Copperchase, deploys SafeKit high availability in airports.
Over 20 SafeKit clusters are deployed on Windows.
Tony Myers, Director of Business Development says:
"By developing applications for air traffic control, Copperchase is in one of the most critical business activities. We absolutely need our applications to be available all the time. We have found with SafeKit a simple and complete clustering solution for our needs. This software combines in a single product load balancing, real time data replication with no data loss and automatic failover. This is why, Copperchase deploys SafeKit for air traffic control in airports in the UK and the 30 countries where we are present."
Software vendor Wellington IT deploys SafeKit high availability with its banking application for Credit Unions in Ireland and UK.
Over 25 SafeKit clusters are deployed on Linux with Oracle.
Peter Knight, Sales Manager says:
"Business continuity and disaster recovery are a major concern for our Locus banking application deployed in numerous Credit Unions around Ireland and the UK. We have found with SafeKit a simple and robust solution for high availability and synchronous replication between two servers with no data loss. With this software solution, we are not dependent on a specific and costly hardware clustering solution. It is a perfect tool to provide a software high availability option to an application of a software vendor."
Paris transport company (RATP) chose the SafeKit high availability and load balancing solution for the centralized control room of line 1 of the Paris subway.
20 SafeKit clusters are deployed on Windows and Linux.
Stéphane Guilmin, RATP, Project manager says:
"Automation of line 1 of the Paris subway is a major project for RATP, requiring a centralized command room (CCR) designed to resist IT failures. With SafeKit, we have three distinct advantages to meet this need. Firstly, SafeKit is a purely software solution that does not demand the use of shared disks on a SAN and network boxes for load balancing. It is very simple to separate our servers into separate machine rooms. Moreover, this clustering solution is homogeneous for our Windows and Linux platforms. SafeKit provides the three functions that we needed: load balancing between servers, automatic failover after an incident and real time data replication."
And also, Philippe Marsol, Atos BU Transport, Integration Manager says:
“SafeKit is a simple and powerful product for application high availability. We have integrated SafeKit in our critical projects like the supervision of Paris metro Line 4 (the control room) or Marseille Line 1 and Line 2 (the operations center). Thanks to the simplicity of the product, we gained time for the integration and validation of the solution and we had also quick answers to our questions with a responsive Evidian team.”
The software integrator Systel deploys SafeKit high-availability solution in firefighter and emergency medical call centers.
Over 30 SafeKit clusters are deployed on Windows with SQL Server.
Marc Pellas, CEO says:
"SafeKit perfectly meets the needs of a software vendor. Its main advantage is that it brings in high availability through a software option that is added to our own multi-platform software suite. This way, we are not dependent on a specific and costly hardware clustering solution that is not only difficult to install and maintain, but also differs according to client environments. With SafeKit, our firefighter call centers are run with an integrated software clustering solution, which is the same for all our customers, is user friendly and for which we master the installation up to after-sales support."
ERP high availability and load balancing of the French army (DGA) are made with SafeKit.
14 SafeKit clusters are deployed on Windows and Linux.
Alexandre Barth, Systems administrator says:
"Our production team implemented the SafeKit solution without any difficulty on 14 Windows and Linux clusters. Our critical activity is thus secure, with high-availability and load balancing functions. The advantages of this product are easy deployment and administration of clusters, on the one hand, and uniformity of the solution in the face of heterogeneous operating systems, on the other hand."
Evidian SafeKit mirror cluster with real-time file replication and failover:
- Fully automated failback procedure
- Replication of any type of data
- File replication vs disk replication
- File replication vs shared disk
- Remote sites and virtual IP address
- Uniform high availability solution
Evidian SafeKit farm cluster with load balancing and failover:
- No load balancer, dedicated proxy servers or special multicast Ethernet address
- Remote sites and virtual IP address
- Uniform high availability solution

Comparisons:
- Application High Availability vs Full Virtual Machine High Availability
- Byte-level file replication vs block-level disk replication
- Virtual IP address