Hanwha SSM (Samsung Security Manager): The Simplest High Availability Cluster with Synchronous Replication and Failover

Evidian SafeKit brings high availability to Hanwha SSM, the CCTV video surveillance system. This article explains how to quickly implement a Hanwha SSM cluster without shared disk and without specific skills. The high availability module hanwha_ssm.safe and a free trial are offered in the installation section.

Note that you can implement with the same product real-time replication and failover of other applications (database or other): see other examples of mirror modules here.

This clustering solution is recognized as the simplest to implement by our customers and partners. It is also a complete solution: it handles hardware failures (20% of problems), including the complete failure of a computer room; software failures (40% of problems), including smooth server-by-server upgrades; and human errors (40% of problems), thanks to its simplicity of administration.

How the Evidian SafeKit software simply implements Hanwha SSM high availability with real-time synchronous replication and failover without shared disk

How does the Evidian SafeKit mirror cluster implement Hanwha SSM high availability with synchronous replication and failover?

In the previous figure, server 1 (PRIM) runs the Hanwha SSM services (any edition). Users are connected to the virtual IP address of the mirror cluster. SafeKit replicates the files opened by the Hanwha SSM services in real time. Only changes inside the files are replicated across the network, thus limiting traffic (byte-level file replication). The names of the file directories containing the Hanwha SSM data are simply configured in SafeKit. There are no prerequisites on disk organization for the two servers: the directories to replicate may even be located on the system disk. SafeKit implements synchronous replication with no data loss on failure, contrary to asynchronous replication.
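The value of synchronous replication can be shown with a minimal sketch (a toy in-memory model, not SafeKit's actual protocol): the primary acknowledges a write only after the replica has confirmed it, so an acknowledged write can never be lost on failover.

```python
# Toy in-memory model of synchronous replication semantics
# (illustration only, not SafeKit's actual protocol).

class Replica:
    def __init__(self):
        self.files = {}

    def apply(self, name, data):
        self.files[name] = data
        return True  # acknowledgement sent back to the primary

class SyncPrimary:
    """Acknowledges a write only once the replica has applied it."""
    def __init__(self, replica):
        self.files = {}
        self.replica = replica

    def write(self, name, data):
        self.files[name] = data
        # Synchronous: block until the replica confirms the change.
        if not self.replica.apply(name, data):
            raise IOError("replication failed, write not acknowledged")
        return "ack"  # the client sees the ack only after replication

replica = Replica()
primary = SyncPrimary(replica)
primary.write("recording.idx", b"frame-0001")
# If the primary crashes now, every acknowledged write is already
# present on the replica: no data loss on failover.
print(replica.files == primary.files)  # True
```

With asynchronous replication, the acknowledgement would be returned before `replica.apply` runs, so a crash in that window would lose the write.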

In case of server 1 failure, there is an automatic failover to server 2, with virtual IP failover and restart of the Hanwha SSM services. Then, when server 1 is restarted, SafeKit implements an automatic failback with reintegration of the data, without stopping the Hanwha SSM services on server 2. Finally, the system returns to synchronous replication between server 2 and server 1. The administrator can decide to swap the roles of primary and secondary and return to server 1 running the Hanwha SSM services; the swap can also be done automatically by configuration.
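The failover/failback sequence above can be modeled as a tiny role state machine (an illustrative simplification; PRIM, SECOND and ALONE are the role names shown in the SafeKit console):

```python
# Illustrative simplification of the mirror failover/failback sequence.
# PRIM/SECOND/ALONE are the role names displayed in the SafeKit console.

def scenario():
    roles = {"server1": "PRIM", "server2": "SECOND"}
    events = []

    # 1. Server 1 fails: virtual IP and services move to server 2,
    #    which now runs alone, without a replica.
    roles["server1"] = "DOWN"
    roles["server2"] = "ALONE"
    events.append("failover to server2")

    # 2. Server 1 reboots: automatic failback reintegrates its data
    #    while server 2 keeps running the services.
    roles["server1"] = "SECOND"
    roles["server2"] = "PRIM"
    events.append("server1 reintegrated, replication resumed")
    return roles, events

roles, events = scenario()
print(roles)  # {'server1': 'SECOND', 'server2': 'PRIM'}
```

The optional role swap at the end of the sequence would simply exchange the PRIM and SECOND values again.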

Demonstration of a mirror module

This demonstration shows the configuration of a generic mirror.safe module with Microsoft SQL Server but the configuration is the same with other applications. Note that with the hanwha_ssm.safe module instead of the generic one, restart scripts are preconfigured for Hanwha SSM.


Key differentiators of Hanwha SSM replication and failover with the Evidian SafeKit mirror cluster

Evidian SafeKit mirror cluster with real-time file replication and failover

All clustering features

Like  The solution includes all clustering features: server failure monitoring, network failure monitoring, software failure monitoring, automatic Hanwha SSM restart with a quick recovery time, and a virtual IP address switched in case of failure to automatically reroute clients.

Dislike  This is not the case with replication-only solutions like replication at the database level

Dislike  Quick Hanwha SSM restart is not ensured with full virtual machines replication. In case of hypervisor failure, a full VM must be rebooted on a new hypervisor with an unknown recovery time

Like   The cluster configuration is very simple and made by means of a high availability application module. There is no domain controller or active directory to configure on Windows. The solution works on Windows and Linux

Like   Other services and other replicated directories can be added to the module to complement the SafeKit / Hanwha SSM high availability solution

Synchronous replication

Like  The real-time replication is synchronous with no data loss on failure

Dislike  This is not the case with asynchronous replication

Fully automated failback procedure

Like  After a failure when a server reboots, the replication failback procedure is fully automatic and the failed server reintegrates the cluster without stopping Hanwha SSM on the only remaining server

Dislike  This is not the case with most replication solutions, particularly with replication at the database level. Manual operations are required to resynchronize a failed server. Hanwha SSM may even have to be stopped on the only remaining server during the resynchronization of the failed server.

Replication of any type of data

Like  The replication works for Hanwha SSM but also for any other files that must be replicated.

Dislike  This is not the case for replication at the database level.

File replication vs disk replication

Like  The replication is based on file directories that can be located anywhere (even in the system disk)

Dislike  This is not the case with disk replication, where a special Hanwha SSM configuration must be made to put the application data on a dedicated disk.

File replication vs shared disk

Like  The servers can be put in two remote sites

Dislike  This is not the case with shared disk solutions

Remote sites

Like  All SafeKit clustering features work for 2 servers in remote sites. Replication performance depends on the latency of the interconnect for real-time synchronous replication, and on its bandwidth for resynchronizing the data of a failed server.

Like  If both servers are connected to the same IP network through an extended LAN between two remote sites, the virtual IP address of SafeKit is working with rerouting at level 2

Like  If both servers are connected to two different IP networks between two remote sites, the virtual IP address can be configured at the level of a load balancer. SafeKit offers a health check: the load balancer is configured with a URL managed by SafeKit, which returns OK on the primary server and NOT FOUND otherwise. This solution is implemented for SafeKit in the cloud, but it can also be implemented with a load balancer on premise.
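Such a health check can be probed the way a load balancer would with a few lines of Python (the /health path and the toy HTTP responder below are hypothetical stand-ins, not the actual SafeKit check URL):

```python
# Minimal sketch of a load-balancer-style health probe.
# The /health path and this toy responder are hypothetical stand-ins.
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

class Check(BaseHTTPRequestHandler):
    primary = True  # toggled by the cluster software in real life

    def do_GET(self):
        if Check.primary:
            self.send_response(200)   # OK on the primary node
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(404)   # NOT FOUND on any other node
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def is_primary(url, timeout=2.0):
    """True if the health-check URL answers 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # 404, connection refused, timeout...

server = HTTPServer(("127.0.0.1", 0), Check)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/health"

probe_as_primary = is_primary(url)    # the load balancer routes here
Check.primary = False
probe_as_secondary = is_primary(url)  # the load balancer skips this node
server.shutdown()
print(probe_as_primary, probe_as_secondary)  # True False
```

A real load balancer applies exactly this logic periodically: it keeps a node in the pool only while the check answers 200.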

Quorum

Like  With remote sites, the solution works with only 2 servers. For the quorum (network isolation), a simple split brain checker to a router is offered to ensure that only a single primary keeps running.

Dislike  This is not the case for most clustering solutions, where a 3rd server is required for the quorum.
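The split brain checker principle can be sketched as a simple decision rule (an illustrative model, not SafeKit's implementation): a node that loses its peer only takes or keeps the primary role if it can still reach a witness such as the default router.

```python
# Illustrative model of a split brain checker (not SafeKit's implementation).
# When the peer is unreachable, reaching a witness (e.g. the default router)
# distinguishes "the peer is down" from "my own network link is down".

def decide_role(peer_alive: bool, router_reachable: bool) -> str:
    if peer_alive:
        return "keep current role"   # normal operation, heartbeats flow
    if router_reachable:
        return "become primary"      # the peer really failed: take over
    return "stay stopped"            # isolated: avoid a second primary

# Real peer failure: safe to take over.
print(decide_role(peer_alive=False, router_reachable=True))   # become primary
# Network isolation of this node: do not start a second primary.
print(decide_role(peer_alive=False, router_reachable=False))  # stay stopped
```

The witness thus replaces the 3rd quorum server: both nodes ping the same router, and at most one of them can win the primary role during a network split.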

Uniform high availability solution

Like  SafeKit implements a mirror cluster with replication and failover. It also implements a farm cluster with load balancing and failover. Thus an N-tier architecture can be made highly available and load balanced with the same solution on Windows and Linux (same installation, configuration and administration with the SafeKit console or with the command line interface). This is unique on the market.

Dislike  This is not the case with an architecture mixing different technologies for load balancing, replication and failover


Installation of SafeKit for Hanwha SSM high availability with synchronous replication and failover

Package installation on Windows

Configuration instructions

The configuration is presented with the web console connected to 2 Windows servers.

Important: all the configuration is made from a single browser.

Launch the web console in a browser by connecting to http://localhost:9010 (next image)

Start the SafeKit web console for configuring the Hanwha SSM module

Enter the IP address of the first node and click on Confirm (next image)

SafeKit web console - first node in the Hanwha SSM cluster

Click on New node and enter the IP address of the second node (next image)

SafeKit web console - second node in the Hanwha SSM cluster

Click on the red floppy disk to save the configuration (previous image)

In the Configuration tab, click on hanwha_ssm.safe (xxx.safe in the next image), then enter hanwha_ssm as the module name and click on Confirm (the console finds hanwha_ssm.safe in the demo/ directory that you previously filled)

SafeKit web console - start configuration of Hanwha SSM module SafeKit web console - enter Hanwha SSM module name

Click on Validate (next image)

SafeKit web console - enter Hanwha SSM module nodes

Change the path of replicated directories only if necessary (next image) and enter a virtual IP address. A virtual IP address is a new unused IP address in the same IP network as the IP addresses of the two nodes. The virtual IP address automatically switches in case of failover.

SafeKit web console - enter Hanwha SSM parameters

For information:

Click on Validate (previous image)

SafeKit web console - stop the Hanwha SSM module before applying the configuration

Click on Configure (previous image)

SafeKit web console - check the success green message of the Hanwha SSM configuration

Check the success green message on both servers and click on Next (previous image).

SafeKit web console - select the Hanwha SSM node with the up-to-date database

Select the node with the most up-to-date replicated directories and start it, so that the first resynchronization runs in the right direction (previous image). Before this operation, we suggest making a copy of the replicated directories to avoid any errors.

SafeKit web console - the first Hanwha SSM node starts as primary and is alone

Start the second node (previous image); it becomes SECOND (green, next image) after resynchronization of all the replicated directories (binary copy from node 1 to node 2).

SafeKit web console - the second Hanwha SSM node starts as SECOND

The cluster is operational with Hanwha SSM services running on the PRIM node and nothing running on the SECOND node (previous image). Only modifications inside files are replicated in real-time in this state.

Be careful: components that are clients of the Hanwha SSM services must be configured with the virtual IP address. The configuration can also be made with a DNS name, if a DNS name has been created and associated with the virtual IP address.


Check with Windows Microsoft Management Console (MMC) that the Hanwha SSM services have been started on the primary server by the start_prim script and stopped on the secondary server by the stop_prim script.

Stop the PRIM node by scrolling down the menu of the primary node and clicking on Stop. Check that there is a failover to the SECOND node. Then, with the Windows Microsoft Management Console (MMC), check the failover of the Hanwha SSM services (stopped and started by the stop_prim and start_prim scripts).

Event logs of module and application

To see the module log of the primary server, which contains the events of the cluster (next image):

SafeKit web console - Module Log of the PRIM Hanwha SSM server

To see the application log of the primary server, which contains the output messages of the restart scripts (start_prim and stop_prim) (next image):

SafeKit web console - Application Log of the PRIM Hanwha SSM server

To see the logs of the secondary server (previous image), click on W12R2server75/SECOND (it will become blue) on the left side and repeat the same operations. In the secondary module log, you will find the volume and the reintegration time of replicated data.

Advanced configuration

In the Advanced Configuration tab (next image), you can edit the internal files of the module: bin/start_prim, bin/stop_prim and conf/userconfig.xml (next image on the left side). If you make changes in the internal files here, you must apply the new configuration by a right click on the blue icon/xxx on the left side (next image): the interface will let you redeploy the modified files on both servers.

SafeKit web console - Advanced configuration of Hanwha SSM module

Configure boot start (next image on the right side) sets the module to start automatically when the server boots. Do this configuration on both servers once the high availability solution is running correctly.

SafeKit web console - Automatic boot of Hanwha SSM module


To get support on the call desk of https://support.evidian.com, take 2 snapshots (2 .zip files), one for each server, and upload them in the call desk tool (next image).

SafeKit web console - Hanwha SSM snapshots for support

Internal files of the Windows hanwha_ssm.safe module


conf/userconfig.xml:

<!DOCTYPE safe>
<safe>
<service mode="mirror" defaultprim="alone" maxloop="3" loop_interval="24" failover="on">
  <!-- Heartbeat configuration -->
  <!-- Names or IP addresses on the default network are set during initialization in the console -->
  <heart pulse="700" timeout="30000">
    <heartbeat name="default" ident="flow"/>
  </heart>
  <!-- Virtual IP configuration -->
  <!-- Replace VIRTUAL_TO_BE_DEFINED by the name of your virtual server -->
  <vip>
    <interface_list>
      <interface check="on" arpreroute="on">
        <real_interface>
          <virtual_addr addr="VIRTUAL_TO_BE_DEFINED" where="one_side_alias"/>
        </real_interface>
      </interface>
    </interface_list>
  </vip>
  <!-- Software error detection configuration -->
  <errd polltimer="10">
    <!-- Samsung SSM processes -->
    <proc name="pg_ctl.exe" atleast="1" action="restart" class="prim"/>
    <proc name="ServiceManager.exe" atleast="1" action="restart" class="prim"/>
    <proc name="HAServerService.exe" atleast="1" action="restart" class="prim"/>
  </errd>
  <!-- File replication configuration -->
  <!-- Adapt with the directories of your PostgreSQL database and logs -->
  <rfs async="second" acl="off" nbrei="3">
    <replicated dir="C:\PostgreSQL\9.1\data" mode="read_only">
      <notreplicated path="pg_log"/>
      <notreplicated path="postmaster.pid"/>
    </replicated>
    <replicated dir="C:\Program Files (x86)\Samsung\SSM\SystemManager\MapFile" mode="read_only"/>
  </rfs>
  <!-- User scripts activation -->
  <user nicestoptimeout="300" forcestoptimeout="300" logging="userlog"/>
</service>
</safe>


bin/start_prim:

@echo off

rem Script called on the primary server for starting application services

rem For logging into the SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"

rem stdout goes into the Application log
echo "Running start_prim %*"

set res=0

rem TODO: set the start mode of the services to manual when the system boots

net start "postgresql-9.1 - PostgreSQL Server 9.1"
if not %errorlevel% == 0 (
  %SAFE%\safekit printi "PostgreSQL (postgresql-9.1 - PostgreSQL Server 9.1) start failed"
  goto stop
) else (
  %SAFE%\safekit printi "PostgreSQL (postgresql-9.1 - PostgreSQL Server 9.1) started"
)

net start "SSM System Manager"
if not %errorlevel% == 0 (
  %SAFE%\safekit printi "SSM System Manager start failed"
  goto stop
) else (
  %SAFE%\safekit printi "SSM System Manager started"
)

net start "SSM Watch Services Manager"
if not %errorlevel% == 0 (
  %SAFE%\safekit printi "SSM Watch Services Manager start failed"
  goto stop
) else (
  %SAFE%\safekit printi "SSM Watch Services Manager started"
)

rem net start "HA Server Service"
rem if not %errorlevel% == 0 (
rem   %SAFE%\safekit printi "HA Server Service start failed"
rem   goto stop
rem ) else (
rem   %SAFE%\safekit printi "HA Server Service started"
rem )

if %res% == 0 goto end

:stop
set res=%errorlevel%
"%SAFE%\safekit" printe "start_prim failed"

rem uncomment to stop SafeKit when critical
rem "%SAFE%\safekit" stop -i "start_prim"

:end


bin/stop_prim:

@echo off

rem Script called on the primary server for stopping application services

rem For logging into the SafeKit log use:
rem "%SAFE%\safekit" printi | printe "message"

rem ----------------------------------------------------------
rem 2 stop modes:
rem - graceful stop
rem   call standard application stop with net stop
rem - force stop (%1=force)
rem   kill application's processes
rem ----------------------------------------------------------

rem stdout goes into the Application log
echo "Running stop_prim %*"

set res=0

rem action on force stop
if "%1" == "force" (
  %SAFE%\safekit printi "Force stop: kill processes of Samsung SSM application"
  %SAFE%\safekit kill -name="watchservices.exe" -level="terminate"
  %SAFE%\safekit kill -name="java.exe" -argregex=".*systemmanager.*" -level="terminate"
  %SAFE%\safekit kill -name="pg_ctl.exe" -level="terminate"
  %SAFE%\safekit kill -name="postgres.exe" -level="terminate"
  %SAFE%\safekit kill -name="HAServerService.exe" -level="terminate"
  %SAFE%\safekit kill -name="systemanager.exe" -level="terminate"
  goto end
)

rem %SAFE%\safekit printi "Stopping HA Server Service"
rem net stop "HA Server Service"

%SAFE%\safekit printi "Stopping SSM Watch Services Manager"
net stop "SSM Watch Services Manager"

%SAFE%\safekit printi "Stopping SSM System Manager"
net stop "SSM System Manager"

%SAFE%\safekit printi "Stopping PostgreSQL (postgresql-9.1 - PostgreSQL Server 9.1)"
net stop "postgresql-9.1 - PostgreSQL Server 9.1"
del C:\PostgreSQL\9.1\data\postmaster.pid

rem Wait a little for the real stop of the services
"%SAFEBIN%\sleep" 10

if %res% == 0 goto end

"%SAFE%\safekit" printe "stop_prim failed"

:end