Appendix B - Configuring High Availability in Snare Central

This functionality is available only in v8.4.0 or newer.

 

About Snare Central High Availability

A Snare Central 8.4.0 server can be configured as part of a Snare High Availability (HA) cluster, acting as either the Primary (Active) or the Secondary (Passive) node in an active/standby setup. Both nodes share the same cluster virtual IP address and configuration.

When the High Availability function is configured correctly and enabled, the complete Snare Central Server functionality can automatically restart on, and all event processing can be rerouted to, the other node in the cluster in the event of a failure.

The Primary node replicates its configuration to the Secondary node when High Availability is first configured, and thereafter whenever required, keeping both servers up to date so that operation can continue normally at failure time.

HA functionality can optionally be configured to synchronize events from the Secondary back to the Primary after returning from a failover. This means that once the Primary node returns to normal operation, any events stored in SnareArchive on the Secondary node during the downtime are automatically synchronized to the Primary node. Optionally, after events have been copied to the Primary node, they can be deleted from the Secondary. Both options are disabled by default.

In this manner, Snare Central 8.4.0 can monitor services and systems and automatically fail over to the standby node if problems occur.
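To illustrate the active/standby pattern described above, here is a minimal conceptual sketch in Python. It is not Snare Central's actual implementation: the standby node simply probes the active node over TCP and, after several consecutive missed heartbeats, would take over the virtual IP. The address, port, interval and threshold below are all hypothetical.

    import socket
    import time

    PEER_IP = "192.0.2.10"   # hypothetical address of the Primary node
    PROBE_PORT = 22          # hypothetical TCP port probed as a heartbeat
    CHECK_INTERVAL = 2       # seconds between probes
    MAX_MISSES = 3           # consecutive failures before declaring the peer dead

    def peer_alive(host, port, timeout=2):
        # A successful TCP connect counts as one heartbeat from the peer.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    misses = 0
    while True:
        misses = 0 if peer_alive(PEER_IP, PROBE_PORT) else misses + 1
        if misses >= MAX_MISSES:
            print("Peer unreachable: the standby would now take over the virtual IP")
            break
        time.sleep(CHECK_INTERVAL)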

When Snare High Availability is enabled, the Health Checker will show the status of the cluster and its synchronization state.

Standard Configuration

 

Other more complex Collector/Reflector scenarios

 

Prerequisites

For Snare High Availability to be configured, it is necessary to have:

  1. Two fully installed, fully licensed Snare Central Servers v8.4.0 (or above). The Secondary node shall be a fresh install.

  2. Both in the same LAN or VLAN address space, each one with its own IP address.

  3. Both shall have the same default gateway configured.

  4. Both shall have their email settings configured.

  5. The servers need to be NTP synchronized (via a common third NTP server); a verification sketch follows this list.

  6. A third IP address in the same address space; this will become the cluster's virtual (or floating) IP address.

  7. Support for gratuitous ARP in the default gateway, in order for virtual IP sharing to work properly (a takeover sketch appears after the virtual IP note below).
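Since both nodes must agree on time, it can help to verify NTP synchronization before enabling HA. The following is a minimal sketch, not part of Snare Central, that uses the third-party ntplib package to measure the local clock offset against an NTP server; the server name and tolerance are hypothetical. Run it on both servers; both offsets should be close to zero.

    import ntplib  # third-party package: pip install ntplib

    NTP_SERVER = "pool.ntp.org"  # substitute your organisation's NTP server
    MAX_OFFSET = 0.5             # hypothetical tolerance in seconds

    client = ntplib.NTPClient()
    response = client.request(NTP_SERVER, version=3)
    print(f"Clock offset against {NTP_SERVER}: {response.offset:.3f}s")
    if abs(response.offset) > MAX_OFFSET:
        print("Warning: clock drift exceeds tolerance; fix NTP before enabling HA")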

 

Please note that the cluster's virtual IP address is the address that all agents and syslog devices will use to send events and to check licensing.
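Gratuitous ARP is how the node taking over the virtual IP announces the change to the LAN: it broadcasts an unsolicited ARP reply for the virtual IP so that the default gateway updates its ARP cache with the new node's MAC address. The following is a conceptual sketch using the third-party scapy package, not Snare Central's own mechanism; the addresses and interface name are hypothetical, and sending raw frames requires root privileges.

    from scapy.all import ARP, Ether, sendp  # third-party package: pip install scapy

    VIRTUAL_IP = "192.0.2.100"       # hypothetical cluster virtual IP
    NODE_MAC = "00:11:22:33:44:55"   # MAC of the node taking over the VIP
    IFACE = "eth0"                   # interface holding the virtual IP

    # An unsolicited ARP reply (op=2) for our own IP, broadcast to the LAN.
    garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=NODE_MAC) / ARP(
        op=2, hwsrc=NODE_MAC, psrc=VIRTUAL_IP,
        hwdst="ff:ff:ff:ff:ff:ff", pdst=VIRTUAL_IP)
    sendp(garp, iface=IFACE, verbose=False)

If the gateway ignores such announcements, traffic to the virtual IP will keep flowing to the failed node until the gateway's ARP cache entry expires.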

If separate network interfaces are needed for heartbeat monitoring of the cluster, the administrator needs to manually configure the second interface in the snare user Administration Menu on both servers prior to enabling Snare High Availability. It is important to note that only two network interface cards per server are supported in a clustered environment. The procedure to add a second NIC is described in the following page: Adding a second Network Interface Card to Snare Central server

 

Configuration

To configure the High Availability feature in Snare Central, go to Snare Wizard as the Administrator user and open the “High Availability” section. The section is shown in the following image:

Once the Enabled option is selected, the Administrator needs to provide the following information:

  • Role of this server in the cluster (Primary/Secondary).

  • The local IP address that is going to be used for heartbeat synchronization with the other cluster members.

  • The IP address of the remote cluster member.

  • The virtual IP address that is going to be shared between all cluster members.

  • A working email address for sending notifications.

  • The common password used between the members of the cluster. The password must be at least 8 characters long and contain at least one lowercase letter, one uppercase letter, one number and one symbol (a pre-check sketch follows this list).

  • The password of the Snare user in the remote node.
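To pre-check a candidate cluster password against the stated policy, a minimal sketch follows. This check is purely illustrative, not Snare Central's own validator, and the function name is hypothetical.

    import re
    import string

    def meets_cluster_password_policy(pw):
        # Documented policy: minimum length plus at least one lowercase
        # letter, one uppercase letter, one number and one symbol.
        return all([
            len(pw) >= 8,
            re.search(r"[a-z]", pw) is not None,
            re.search(r"[A-Z]", pw) is not None,
            re.search(r"\d", pw) is not None,
            any(c in string.punctuation for c in pw),
        ])

    print(meets_cluster_password_policy("Sn@re-2024"))    # True
    print(meets_cluster_password_policy("snarecentral"))  # False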

The first node configured in the cluster must be the Primary node, and only one node shall be configured as Primary.

Once both nodes are configured, one as Primary and the other as Secondary, an initial file synchronization must be launched manually from the Primary node using the “Sync All” button. After this initial synchronization, the Snare High Availability feature will automatically synchronize whatever configuration is required from the Primary to the Secondary node, and should the Primary node fail, an automatic switch to the Secondary will occur instantly.

To verify that the configured parameters are working correctly, use the Test button on the Primary node. It triggers a transition to the Secondary node until the Terminate Test button is clicked to revert the transition. The current status can be seen in the Health Checker. Please follow the instructions on screen.

The status of the node and some cluster statistics can be reviewed in the Health Checker as shown in the next image:

 

When Snare High Availability is enabled on both nodes, the Snare Synchronization server is started on both servers to keep the Secondary node up to date with any configuration changes made on the Primary node (such as adding a new destination in the Reflector).

SnareSyncd checks whether any of the files specified in SnareSyncd.json have changed; if they have, it automatically triggers a synchronization task to the Secondary node and reloads the configuration on the remote node. SnareSyncd also keeps statistics on which files were updated and when the last update occurred.
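The general pattern SnareSyncd implements can be illustrated with a short sketch. This is a conceptual illustration only, not SnareSyncd's actual code: it polls a list of watched files, detects changes via checksums, and calls a placeholder sync function. The paths and polling interval below are hypothetical.

    import hashlib
    import time

    WATCH_LIST = ["/tmp/example-a.conf", "/tmp/example-b.conf"]  # placeholder paths
    POLL_INTERVAL = 5  # seconds between checks

    def checksum(path):
        # SHA-256 of the file contents; None if the file does not exist yet.
        try:
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            return None

    def trigger_sync(path):
        # Placeholder for pushing the file to the Secondary and reloading it.
        print(f"{path} changed: would sync to Secondary and reload its configuration")

    seen = {p: checksum(p) for p in WATCH_LIST}
    while True:
        for p in WATCH_LIST:
            current = checksum(p)
            if current != seen[p]:
                seen[p] = current
                trigger_sync(p)
        time.sleep(POLL_INTERVAL)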

Storing events in a NAS

It is possible to share network storage between the Primary and Secondary nodes, but care shall be taken to ensure that Data Synchronization (the Sync Events back checkbox) is disabled. If you need information on how to mount a NAS, please refer to the “Disk Manager” section of this user guide: Disk Manager