Managing Large Clusters

With large clusters comes the headache of keeping every server in the cluster up to date and synchronized. This applies not only to the applications installed on the cluster servers, but also to the rules being executed on each of those servers.

Composable Architecture Platform solves this problem by delegating the work to a small group of slave consoles. Each slave console manages its own section of the cluster: it keeps track of which servers have which X Engine installed, and it automatically deploys, stops, starts and manages the servers in its assigned segment of the cluster.

Structuring for clusters

Each slave console is assigned a repository on the master console from which to deploy rules and configurations. One rule must be followed for this repository:

Avoid editing configurations or rule sets directly in the repository; otherwise every change, including any mistake, is propagated and deployed to the entire cluster instantly. Instead, promote rules from a UAT or staging repository into the cluster repository once they are approved for deployment (using Configuration copy with dependent rule sets).
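
In practice, the promotion path described above looks like this:

    UAT / staging repository
      --> (after approval: Configuration copy with dependent rule sets)
    cluster feed repository
      --> (automatic deployment by the slave consoles)
    all servers in the cluster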

Cluster Node Definitions

In the master console, a Cluster Node Definition is used to set up a cluster node.

The node name is an identifier that is specified in the slave console's configuration.properties file alongside a password. Together these are used to authenticate the slave console to the master console. For additional security, you can also lock access from a given slave console to a single IP address.
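
As an illustration, the corresponding entries in the slave console's configuration.properties file might look like the sketch below. The property names and values shown are assumptions for illustration only; consult the installation reference for your version for the exact names.

    # Hypothetical property names - consult your installation reference
    # Identifier matching the Cluster Node Definition on the master console
    cluster.node.name=node-emea-01
    # Shared secret used to authenticate against the master console
    cluster.node.password=changeMe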

The link URL points to the slave console's web application. It allows the master console to "drill down" to the slave console.

The feed repository is the repository from which all servers in the cluster will receive updates, and the feed configuration is the configuration that will be deployed from that repository.
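
Putting these fields together, a completed Cluster Node Definition might look like the following sketch (all values are purely illustrative):

    Node name:          node-emea-01
    Password:           (must match the slave console's configuration.properties)
    Allowed IP address: 10.20.30.40 (optional lock-down)
    Link URL:           https://emea-console.example.com/
    Feed repository:    EMEA Cluster Repository
    Feed configuration: Production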

Once a new Cluster Node Definition has been created, a new element appears at the top of the administration tree.

Clicking on this element shows the status of your cluster node.

Managing a cluster node

Once your cluster node shows up in the administration tree, you can manage all of its defined servers as a single unit. This means that you can start and stop all of the X Engines in the servers attached to a node at once.

If a given server in the node is offline for maintenance when you issue a start or stop command, the slave console will remember the desired state of the cluster and set it appropriately when the server comes back online.

Please be patient after issuing a start or stop command, as it can take some time for the new state to be reflected in the master console (20-30 seconds is normal).

You can also drill down to an individual slave console directly from the master console. To do this, simply click on Maintain. The slave console appears in a pop-up window.

You will notice that you are not required to log on and that you see a reduced set of features. The logon is handled by a single sign-on process, and the reduced feature set reflects the limited tasks that can be performed on a slave console (no rule edits, user edits, etc.). Of course, you can still get trace data, test data and performance data from individual servers in the cluster.

Items to configure in the slave console

There are only four items to configure on the slave console: the database connectors, the log adaptors, the credential vault and the servers themselves. All four need to be manually defined within each slave console.

Defining Cluster Database Connectors

Even though JDBC drivers could in theory be replicated down from the master console, they are kept separate in the slave consoles for segmentation purposes.

This segmentation comes into play when individual slave consoles are deployed into a cluster that is geographically separated. Under these circumstances, the databases themselves may also be separate, simply replicating to each other.

Within these geographical locations, the addressing of the database servers may therefore differ (in some configurations). To address this problem, each slave console holds its own addressing information for each database connector. This allows rules in the master console to be written without specific knowledge of the location into which they will be deployed.
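
For example, two slave consoles in different regions could both define a connector with the same name, each pointing at its local database replica. The connector name, host names and JDBC URLs below are hypothetical:

    Slave console EMEA:  CustomerDB -> jdbc:postgresql://db.emea.example.com:5432/customers
    Slave console APAC:  CustomerDB -> jdbc:postgresql://db.apac.example.com:5432/customers

Rules deployed from the master console simply refer to CustomerDB and resolve to the correct local database wherever they run.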

Setting up connectors in a slave console is identical to setting them up in the master console.

Defining Log Adaptors and Credentials

Log adaptors and credentials are also not replicated (for the same reasons as database connectors). They are set up the same way as in the master console.

Defining Cluster Servers

Defining servers in a slave console is identical to doing it in the master console. Note that any given server should only ever be configured in a single slave console at any one time.
