This document describes how to run Composable Architecture Platform using the Tomorrow-Software-Server-2021-noJRE-10.0.0.zip
distribution for a macOS environment. This example is using macOS Catalina 10.15 and JAVA JDK 14.
macOS capable of running JAVA JDK 11+
JAVA JDK 11+
Tomorrow-Software-Server-2021-noJRE-10.0.0.zip (or other approved) distribution for Composable Architecture Platform
Chrome browser
Root user access permissions
JAVA environment - Composable Architecture Platform requires a Java v11+ JDK runtime environment, so check whether Java is installed and which version is running.
The Composable Architecture Platform installation uses the open source Jetty application server.
Enable macOS root user - To run Composable Architecture Platform on macOS, the "root" user needs to be enabled. If the root user has previously been enabled, the root user password will be required. To enable the root user, please follow the instructions at the following Apple support page:
https://support.apple.com/en-us/HT204012
The command will either return “command not found” when no Java installation is present, or display the current Java installation details.
e.g. Java version details
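The version check can be run from a Terminal window as follows:

```shell
# Display the installed Java version; falls back to a message when
# no Java installation is present.
java -version 2>&1 || echo "command not found"
```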
If you don’t have Java or have an older Java version, then you’ll need to upgrade with the following steps:
Locate and download the Java SE Development Kit 14 (JDK) macOS Installer from https://www.oracle.com/java/technologies/javase-downloads.html
Once downloaded, locate and double-click the file (jdk-14.0.1_osx-x64_bin.dmg) and follow the instructions to complete the installation
To confirm that you have successfully installed the JDK 14, rerun the Check Java installation and version steps above
Download the Composable Architecture Platform distribution zip file
Unzip the downloaded Composable Architecture Platform zip file into the desired destination folder.
You may consider moving the zip to another folder (e.g., Applications) before unzipping.
Unzipping will produce a new folder called “Tomorrow-Software-Server-10.0.0”. You may then proceed to archive or trash the downloaded zip file.
Note: the location of the file named Product Reference.pdf is in the Documentation folder. This is the comprehensive document for the entire Composable Architecture Platform.
Important: Don’t refer to Product Reference.pdf until the latest updates have been applied via console updates after installation, as there may be an update available.
Open a new Finder window (Command + N) and navigate to the folder used to unzip the Composable Architecture Platform (e.g., Applications)
Double click “Tomorrow.command”
If you encounter insufficient privileges to launch the script in Terminal, make the script executable with chmod 755 Tomorrow.command before launching.
Enter the macOS root user password in the Terminal window. After a successful start you should see the following:
Navigate to http://localhost/console
Use the default administrator credentials to access the console:
When launching the console application, there may be other demo applications and a built-in proxy server that will also launch at the same time. Default ports 80 and 443 are used to run the console application, and therefore must be available before launching. To modify the default ports, refer to the Product Reference.pdf section: Port numbers and how to change them.
To stop Composable Architecture Platform use Control-C in the active terminal screen. This will terminate the instance.
Port | Use |
80 | HTTP port for the console, demo applications and the built-in proxy |
443 | HTTPS port for the console, demo applications and the built-in proxy |
This section covers the tasks you need to perform to install the various components of a Composable Architecture Platform topology. We assume some familiarity with the application server and database system you have chosen to use.
The system requirements for an installation of Composable Architecture Platform vary depending on the load and uptime requirements.
The minimum requirements for any console server are:
1GB of memory (2GB recommended)
1 GHz+ processor (3GHz dual-core recommended)
10GB of free disk space
The minimum browser requirement for any client attached to the administration console is:
Microsoft Internet Explorer version 11.0 or later
Microsoft Edge version 20 or later
Mozilla Firefox version 40 or later
Google Chrome (any version)
Other browsers may work but may encounter issues.
The minimum requirements for any application server using the in-line filter installation are:
A Servlet Specification 2.3 (or later) compliant servlet engine
JDK 6.0 or later.
Note: You can install more than one X Engine on a single piece of equipment. You only need one instance of an application server and database for each installation.
The minimum requirements for a feed server are:
1GB of memory (2GB recommended)
1 GHz+ processor (3GHz dual-core recommended)
10GB of free disk space
The minimum requirements for a built in forwarding proxy are:
1GB of memory (2GB or at least 10% of protected server capacity recommended)
1 GHz+ processor (2GHz or at least 20% of protected server capacity recommended)
50GB of free disk space
A prerequisite for working with the Composable Architecture Platform X Engine is an instance of a Composable Architecture Platform console server.
To perform the installation, un-zip the Composable Architecture Platform Server distribution somewhere within the local file system and start it using the Tomorrow.bat file (or the Tomorrow.sh file if using Linux or Solaris, or the Tomorrow.command file if using macOS).
The installation uses the open source Jetty application server. If you need to change ports or use of SSL, you can configure the application server using the http.ini
configuration file found in the subfolder server/start.d/.
Please note that the server installation does not install a production database. We strongly recommend that you use a production-level database such as DB2, Oracle or MySQL.
Now open a browser and point it to http://localhost/console.
You will see the main page and the console is ready to use. However, we suggest that you first review and complete additional configuration by following the steps in the relevant sections below, and also refer to the earlier section “Getting Started”.
The Composable Architecture Platform Server can be installed as a Microsoft Windows service. This provides a convenient way to manage the server instance. To install as a service, simply follow these steps:
The install script, install.bat, located in the root WinService directory, performs the following functions:
Creates a runtime executable folder
Copies the appropriate 32/64-bit executable based on the installed JRE
Sets the correct jvm.dll based on the installed JRE
Installs the service
Double click the file install.bat.
Accept the Security Warning prompt message by clicking Run.
In the Windows services application, a new Windows service named Tomorrow Software will be displayed.
Click Start and the Tomorrow Software service is now running.
Uninstall the Windows service by first stopping the service and then running uninstall.bat, located in the root WinService directory.
Refer to the Apache documentation for details regarding specific tuning/debug parameters
All Multi-Protocol feed servers are subdirectories of the main Composable Architecture Platform folder.
Take note of the subdirectory name, as it is required to run as a Windows service (e.g., the default subdirectory name is Multi-Protocol).
To install the Multi-Protocol feed server as a service:
Start a Windows command line
Navigate to the WinService directory
Enter command: installFeed [subdirectory name]
For example:
Microsoft Windows [Version 6.2.9200]
(c) 2012 Microsoft Corporation. All rights reserved.
C:\Users\Administrator>cd /
C:\>cd "TomorrowServer"
C:\TomorrowServer>cd WinService
C:\TomorrowServer\WinService>installFeed Multi-Protocol
Service Tomorrow Software Multi-Protocol installed
C:\TomorrowServer\WinService>
In the Windows services application, a new Windows service named Multi-Protocol Tomorrow Software Feed will be displayed. Click Start and the feed server Windows service will be running.
Please refer to the full Linux setup instructions found in the section Creating a stand-alone built in forwarding proxy or Installing on Red Hat Enterprise Linux.
The following is a list of the specific Composable Architecture Platform Server components and how to remove them if they are not required:
This document describes how to run Composable Architecture Platform using the Tomorrow-Software-Server-2021-noJRE-10.0.0.zip
distribution for a Red Hat Enterprise Linux environment. This example is using IBM Cloud, with a virtual server instance launched with a Red Hat Enterprise Linux 7.x - Minimal Install (amd64) image.
Red Hat Enterprise Linux 7.2 (HVM) – (RHEL)
JRE 11 or above
Tomorrow-Software-Server-2021-noJRE-10.0.0.zip (or other approved) distribution for Composable Architecture Platform
A suitable Linux terminal client and SSH connection to the server established
Root user access permissions
Composable Architecture Platform requires a Java v11+ JDK runtime environment, so check whether Java is installed and which version is running.
The Composable Architecture Platform installation uses the open source Jetty application server.
The command will either return “command not found” when no Java installation is present, or display the current Java installation details.
e.g., Java version details
If you have an older Java version, then you’ll need to upgrade with the following Java installation commands:
After the yum installation has completed, set the default JDK to be java-11 by using this command:
If there are multiple alternatives, enter the number in front of the java-11 entry, and the correct Java version is now configured.
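As a sketch of the steps above, assuming the OpenJDK 11 packages from the standard yum repositories (the exact package name may differ on your RHEL release):

```shell
# Install OpenJDK 11 (assumed package name; verify for your RHEL release).
sudo yum install -y java-11-openjdk-devel

# Set the default JDK to java-11; when multiple alternatives exist you
# will be prompted to enter the number in front of the java-11 entry.
sudo alternatives --config java
```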
Download the Composable Architecture Platform distribution zip file and upload to the RHEL instance.
There are many ways to upload a file to a Linux environment. Here is an example using the secure copy command “scp”. A temporary directory (e.g., /tmp) could also be used or created to upload the file, and the package then moved to the correct location.
Example:
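A sketch of the upload; the key file, user name and server address shown here are placeholders:

```shell
# Copy the distribution zip to the server's /tmp directory.
# mykey.pem, ec2-user and server.example.com are placeholders.
scp -i mykey.pem Tomorrow-Software-Server-2021-noJRE-10.0.0.zip \
    ec2-user@server.example.com:/tmp/
```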
Unzip the package under the /opt/local directory.
Note: root permissions may be required to create the local folder so switch to root if needed.
Unzip may need to be installed; do so using this command.
Then unzip to /opt/local
Unzipping Tomorrow-Software-Server-10.0.0.zip will extract the contents; then rename the directory to “Tomorrow” using the “mv” command.
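The install, unzip and rename steps might look like this, assuming the zip was uploaded to /tmp:

```shell
# Install unzip if it is not already present.
sudo yum install -y unzip

# Create the destination folder (root permissions may be required).
sudo mkdir -p /opt/local

# Extract the distribution and rename the resulting directory.
cd /opt/local
sudo unzip /tmp/Tomorrow-Software-Server-2021-noJRE-10.0.0.zip
sudo mv Tomorrow-Software-Server-10.0.0 Tomorrow
```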
Note: the location of the file named Product Reference.pdf is in the Documentation folder. This is the comprehensive document for the entire Composable Architecture Platform.
Important: Don’t refer to Product Reference.pdf until the latest updates have been applied via console updates after installation, as there may be an update available.
Copy the tomorrowstart script file to the directory where running services are located by using this command:
Modify the file permissions so the user account can execute the scripts. The tomorrowstart and tomorrow.sh files must have read, write and execute permissions set. Switch to the root user if required to be able to change the file permissions.
Now add the executable script to the startup services. Run the commands below to run Composable Architecture Platform as a service. In a RHEL environment, use the chkconfig command; in an Ubuntu environment, the update-rc.d command can be used instead.
Check that the tomorrowstart script is registered as a service with the correct run levels.
Now start the Composable Architecture Platform service.
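The service registration steps above can be sketched as follows; the location of the tomorrowstart script within the distribution is an assumption:

```shell
# Copy the startup script to the directory where running services live.
sudo cp /opt/local/Tomorrow/tomorrowstart /etc/init.d/

# Give both scripts read, write and execute permissions.
sudo chmod 755 /etc/init.d/tomorrowstart /opt/local/Tomorrow/tomorrow.sh

# Register the script as a service (RHEL); Ubuntu would use update-rc.d.
sudo chkconfig --add tomorrowstart

# Verify the service is registered with the correct run levels.
chkconfig --list tomorrowstart

# Start the service.
sudo service tomorrowstart start
```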
When launching the console application, there may be other demo applications and a built-in proxy server that will also launch at the same time. Default ports 80 and 443 are used to run the console application, and therefore must be available before launching. To modify the default ports, refer to the Product Reference.pdf section: Port numbers and how to change them.
The following output example should be seen:
Note: To stop Composable Architecture Platform use:
It is good practice to now reboot the RHEL server to verify that Composable Architecture Platform restarts as a service at startup.
Composable Architecture Platform is now running as a service in RHEL.
Launch the Composable Architecture Platform console application via a compatible browser at this URL: http://[YOUR SERVER NAME]/console
Default administrator credentials:
Component | How to remove |
Built in proxy | Delete the folder /server/webapp/root |
Console | Delete the folder /server/webapp/console |
Multi-Protocol Feed Server | Delete the folder /Multi-Protocol |
Test Server | Delete the folder /server/webapp/testserver1 |
Sample applications | Delete the folder /server/webapp/qwerty |
Multi-Protocol Server | Delete the folder /server/webapp/mpserver1 |
Port | Use |
80 | HTTP port for the console, demo applications and the built-in proxy |
443 | HTTPS port for the console, demo applications and the built-in proxy |
Since Composable Architecture Platform can dynamically apply software updates without requiring a restart of the application server, and different users may use different versions of the same application, there are some unusual considerations with regards to the console’s deployment structure.
Firstly, the web application installed under the application server path is purely an application loader. The application loader in turn finds all of the required files and executable code under a folder designated as the file location in the web.xml file for the console application.
The file location referenced is typically under the Composable Architecture Platform console home folder in an Applications sub-folder. Below that folder in turn, you can find multiple builds (as they are installed by the update server). Those builds largely conform to a typical web application structure.
A built in forwarding proxy is a networking component and as such can be installed within your network between your load balancer and your existing application servers (such as Microsoft IIS or Apache with PHP).
As a general rule, you should have one built in forwarding proxy for each application server in your network.
The built in forwarding proxy can co-exist on the same operating system as your existing application server, provided you configure non-conflicting ports and proper forwarding instructions in your server definitions.
The most important step in the configuration is to ensure that requests from the load balancer to the built in forwarding proxy are routed correctly. If your load balancer does not perform SSL termination, you will also be required to configure the built in forwarding proxy to correctly serve up your SSL certificate.
The process of creating and configuring a built in forwarding proxy involves a large number of steps and has therefore been given its own section, Creating a stand-alone built in forwarding proxy.
In some configurations, a single application is made up of several independent Web applications. This is commonly seen with portals.
To avoid having to see each individual application as a separate server from within the Composable Architecture Platform console, you have the option of installing the filter into a shared folder under the application server. However, if you do so, you will encounter a problem with class loading, as the filter itself will be unable to see and invoke the individual Web application classes.
To avoid this problem, you can deploy the filter classes by themselves from the \Tryout files\nonshared folder under the Composable Architecture Platform Server installation to the lib folder of each application. This will ensure that Composable Architecture Platform picks up a single configuration for all the Web applications, but at the same time uses an individual filter class for each, thus allowing them to interact gracefully with each other.
This option is the highest performing and involves installing the Composable Architecture Platform server inline with an existing Web application. This is accomplished using a servlet filter and works with all Java application servers that support the servlet specification 2.3 or later.
To install inline, copy the file magic-10.0.jar from the \Tryout Files\jars folder under the Composable Architecture Platform Server installation to the WEB-INF\lib folder of the target Web application.
Once the copy is completed, you need to update your WEB.XML file to ensure that the Composable Architecture Platform server is started with your target application and is correctly sitting inline before your HTTP request processing by servlets or JSPs.
To do this, edit the WEB.XML file and insert the lines highlighted below:
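The exact entries are provided in the Product Reference; as an illustrative sketch only (the filter name and filter class name here are assumptions, not confirmed by this document), the inserted lines take this general shape:

```xml
<!-- Hypothetical example: consult Product Reference.pdf for the actual
     filter class name shipped in magic-10.0.jar. -->
<filter>
    <filter-name>MagicFilter</filter-name>
    <filter-class>com.example.magic.MagicFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>MagicFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

Note that the filter must sit inline before your HTTP request processing, so the mapping covers all request paths.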
If you already have filters installed with your target application, you can place the Composable Architecture Platform filter anywhere in the chain that is appropriate. Please note that some application servers (such as IBM WebSphere) require all filters to be listed before the full list of filter mappings. Failure to follow this rule can cause the Web application to fail to load.
Once the files are in place, you must copy the magic.properties
file from the \Tryout files\configuration
folder under your Composable Architecture Platform Server installation to the WEB-INF\classes
folder on your target system. Edit it to suit the configuration being created. As a minimum, you will need to provide a home folder. This should be an empty folder on the local file system that the server has authority to read, write and update. Please see the section Understanding the magic.properties configuration settings for more detailed explanations of the configuration settings.
Composable Architecture Platform uses a number of TCP/IP ports. In various different configurations and topologies, it may be a requirement to change these ports. The following provides a table of the ports used and how to change them if required.
Installing a stand-alone server is easy. Copy the directory \Multi-Protocol to a local location on your hard drive and modify the .bat or .sh file to point to a valid Java Runtime Environment. Alternatively, you can start the feed server from within the Composable Architecture Platform Server installation.
You will need to set one or more of the following properties in the configuration file on the server. This configuration file (magic.properties) is found in the root folder for each server instance that is installed. The section below explains each setting in detail.
Once the configuration is set, execute the Tomorrow.bat file to start the server (or the Tomorrow.sh file if using Linux or Solaris). You should include a shortcut to this bat file in the startup group for the operating system to ensure it is automatically started whenever the server starts. Alternatively, refer to the sections Installing as a Windows Service or Installing as a Linux Service.
Multiple server instances can co-exist on the same system provided they each have a separate port and home directory name.
In addition to changing the server type, there are some console configuration items that can also be configured from within the console itself.
Changes to these settings are stored in a configuration file with the application itself. The following is a list of the console configuration settings in the file configuration.properties. This file can be found in the Composable Architecture Platform HOME folder under Applications/console/[build number]/WEB-INF/classes. The explanation for each item equally applies to the console setup:
A cluster slave instance has a number of unique properties, the most significant of which is that no repositories are allowed.
To set up a cluster slave console, install a clean Composable Architecture Platform instance and start it up. Log in as admin and click on the Console setup function:
Change the Console type to Cluster Slave Console. You will receive a warning:
Click on OK to accept the change. There is now a new Slave tab showing:
Enter the URL that you would normally use to access the master console (we recommend using HTTPS). You also need to enter the ID by which the slave console is going to be defined in the master console and the password:
Then click on Save:
You will receive a further warning:
Click on OK. An information box appears:
Click on OK. Your console will go gray and the slave console server will terminate. Make sure you restart it to complete the process. Restarting it will result in applications (such as Qwerty, the Test Server and so on) being removed from the installation.
You can verify that the slave console installed correctly by logging into it as normal. You should see a cut down console with the ID of the slave console shown in the header:
At this point you can now turn to the master console and create a new Cluster Node definition as described earlier in this manual. If the cluster shows up as available in the master console and you can maintain it from there, then you should now log out of the slave console and reset the passwords for all of the built in user accounts (admin, super and security) using the forgot password feature. There is no need to have normal user access to a slave console.
The next step is to create a new server definition for each server that is attached to the slave. The slave will automatically detect the status of each of those servers and will ensure that the rules are synchronized with the master and that the X Engine state (started/stopped) follows the desired state as decided in the master console.
Whenever a change is made to the source repository in the master console, those changes are automatically deployed to the slaves and subsequently to the servers defined within each slave.
If it is intended to co-locate master and slave consoles on the same server (by using different ports), then you MUST create an individual server name for each slave. The best way to achieve this is to set an entry in the system's HOSTS file for each slave. Failure to follow this step will result in the transfer from the master console to the slave console causing a log-out on the master console.
Each X Engine install is highly configurable. The configuration file is installed in the X Engine class path along with the magic jar file.
The following table provides an in-depth explanation of each of the settings that are available in the magic.properties configuration file:
This guide is supplementary to the reference documentation of Composable Architecture Platform, specifically to help AWS customers with installation, setup, and production considerations when deploying the software to Amazon Web Services (AWS) with a subscription to any of the available TomorrowX platforms on AWS Marketplace.
An active AWS Marketplace subscription to a TomorrowX Platform license; in this guide we refer to the Delivery Innovation Platform (PREMIUM) subscription.
To check if your organisation has an active subscription, go to AWS Marketplace > Manage subscriptions
in the AWS Console. If not, proceed to subscribe to an Amazon Machine Image (AMI) licensed distribution for AWS via the AWS Marketplace.
Specific AWS account IAM user permissions will be required to interact with AWS services and the software installation, examples of IAM policies are detailed later in this guide.
At the time of writing, this guide has used the installation with the official CentOS 7 (x86_64) - with Updates HVM by Amazon Web Services AMI. Basic Linux commands are required to connect to your instance and perform operational tasks such as server updates, restarts, and SSH key rotation.
Knowledge of the following AWS services:
Amazon Elastic Compute Cloud (Amazon EC2) - required
AWS Marketplace – required
AWS Identity and Access Management (IAM) - required
AWS Service Quotas - optional
Amazon Simple Email Service (Amazon SES) - optional
Delivery Innovation Platform architecture diagram (Marketplace default)
In this guide we are referencing the initial installation components as made available from the launch from AWS marketplace. From there, you will quickly adapt the architectural scenario for scale and most appropriate business use case.
For a better security posture, we provide a sample high availability example deployed within a private subnet behind a load balancer for failover and administration access, whereby the Console instance is physically separated from the Runtime servers, which can be auto-scaled (n servers) relative to anticipated traffic load and availability requirements.
The launch of the TomorrowX Platform from the AWS Marketplace involves a single component: an Amazon Machine Image (AMI) running in EC2, as detailed in the architecture diagram for running in AWS Marketplace. Both console and server components of the installation are accessed via the public internet (http/https), and the console connects to the server via a specific administration management port directly on the host server, as detailed in the architecture diagram.
Default configuration example for AWS Marketplace Launch from Website
First time users can launch the console at http://{Instance IP/DNS}/console
e.g. http://12.34.56.78/console
To support multiple users in your AWS account, you must delegate permission to allow other people to perform only the actions you want to allow. To do this, create an IAM user group with the permissions those people need and then add IAM users to the necessary user groups as you create them. You can use this process to set up the user groups, users, and permissions for your entire AWS account.
An IAM user with the following policies attached has been selected to launch the platform from AWS Marketplace in this guide. Please adjust IAM policies appropriately for your AWS account. Select either the Launch from Website or Launch through EC2 option.
Where AWS Managed Policies provide the IAM User with access across EC2 and Marketplace services, you will need to restrict services where appropriate if your organisation enforces finer grained security policies to services or regions.
To set up a user group, you need to create the group. Then give the group permissions based on the type of work that you expect the users in the group to do. Finally, add users to the group.
Once the user group has been created or during creation, you may attach the following policies.
AmazonEC2ReadOnlyAccess
Policy ARN arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
Provides read only access to Amazon EC2 via the AWS Management Console.
SupportUser
Policy ARN arn:aws:iam::aws:policy/job-function/SupportUser
This policy grants permissions to troubleshoot and resolve issues in an AWS account. This policy also enables the user to contact AWS support to create and manage cases.
EC2InstanceConnect
Policy ARN arn:aws:iam::aws:policy/EC2InstanceConnect
Allows customers to call EC2 Instance Connect to publish ephemeral keys to their EC2 instances and connect via ssh or the EC2 Instance Connect CLI.
An IAM policy must be set for users working with instance metadata to require the use of IMDSv2, as detailed in the AWS documentation. The policy specifies that you can’t call the RunInstances API unless the instance is also opted in to require the use of IMDSv2.
AmazonEC2FullAccess
AWSMarketplaceFullAccess
ComputeOptimizerReadOnlyAccess
To access AWS Marketplace software licences granted with your organisation, you can create and set the following policy.
LicenseManager-List-Read
The recommended method is for the user within the IAM user group to assume the required role using STS. AWS Security Token Service (AWS STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users you authenticate (federated users). The AssumeRole action returns a set of temporary security credentials that you can use to access AWS resources that you might not normally have access to. These temporary credentials consist of an access key ID, a secret access key, and a security token. Typically, you use AssumeRole within your account or for cross-account access.
Where you are using AWS IAM or other federated identity management across the account, you may create specific roles in order to utilise the AssumeRole action when using the recommended method of AWS STS. You can use the roles as detailed below to implement segregation of user roles to perform specific operation tasks such as support user role, advanced user role, marketplace licenses role, and so on.
An IAM role is an identity you can create that has specific permissions with credentials that are valid for short durations. Roles can be assumed by entities that you trust.
Create the following roles, basic and advanced support, along with read-only access to view AWS Marketplace software licences across the organisation.
Here are the steps for an IAM user, or IAM users in a group, to assume a segregated role within your account; a full tutorial is available in the AWS documentation for further explanation.
Create a role (the role to be assumed) and related policy with privileges. Copy ARN of the role.
Example: Create the following User Role called SupportUser and attach the appropriate policies.
Role name: SupportUser
Attached Policies: SupportUser
AmazonEC2ReadOnlyAccess
EC2InstanceConnect
Grant access to the role by creating a new policy to assume a role and assigning to the user or group.
This can be an in-line policy or a policy attached to the user or group. For example, to assume the SupportUser role, the ARN will also require your account ID.
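For illustration (123456789012 is a placeholder account ID), a policy granting permission to assume the SupportUser role could look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/SupportUser"
    }
  ]
}
```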
Add an IAM user to the user group.
Switch role
Login as the IAM user to AWS console and switch roles using the top right menu item.
Select Switch role
Finally, login as the IAM user using your Account ID, role name that you wish to assume, and select a display name. In this example Role is SupportUser, and Account is account ID.
When third parties require access to your organization's AWS resources, you can use roles to delegate access to them. For example, a third party might provide a service for managing your AWS resources. With IAM roles, you can grant these third parties access to your AWS resources without sharing your AWS security credentials. Instead, the third party can access your AWS resources by assuming a role that you create in your AWS account.
When you select either Launch from Website, or Launch through EC2 option from AWS Marketplace, the user has the option to use an existing key pair or to create a new key pair to connect via SSH to the new EC2 instance running the TomorrowX Platform.
For other Linux installations the public keys are typically located in the .ssh/authorized_keys file on the instance.
It is strongly recommended as a minimum requirement to set and enforce an operational policy to rotate SSH keys at least every 90 days to allow IAM users who require access to connect directly with TomorrowX Platform instance(s). Set the rotation schedule according to the key and credentials rotation policy in force for your organisation.
Create a new key pair in the EC2 console, type RSA, format .pem. This is a private key that you must download to your local machine.
If you are launching a NEW TomorrowX platform instance, and have been assigned EC2 access permissions, you can select this key pair to SSH connect to the instance when it has launched successfully.
To add or replace a key pair, you must be able to connect to your instance. If you've lost your existing private key or you launched your instance without a key pair, you won't be able to connect to your instance and therefore won't be able to add or replace a key pair.
Generate Public Key from Private Key (e.g., TomorrowX-Platform.pem)
Execute the command ssh-keygen -y
Note: your key file must not be publicly viewable for SSH to work. If needed, restrict it with: chmod 400 TomorrowX-Platform.pem
Mode 400 protects the key by making it readable only by its owner.
You will be prompted with "Enter file in which the key is"; provide the path to the private key (TomorrowX-Platform.pem).
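The steps above can be sketched as follows. For illustration a key pair is generated locally first; in practice, TomorrowX-Platform.pem would be the private key downloaded from the EC2 console:

```shell
# Illustration only: generate a throwaway RSA key in PEM format to stand in
# for the private key downloaded from the EC2 console
ssh-keygen -t rsa -m PEM -f TomorrowX-Platform.pem -N "" -q

# The key file must not be publicly viewable for SSH to work
chmod 400 TomorrowX-Platform.pem

# Derive the public key from the private key
ssh-keygen -y -f TomorrowX-Platform.pem > TomorrowX-Platform.pub
```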
For CentOS installations, append the generated public key to /home/centos/.ssh/authorized_keys
Remove the old key from ~/.ssh/authorized_keys and /home/centos/.ssh/authorized_keys
Now re-connect to the instance using your SSH client command and the new private key, e.g. ssh -i "TomorrowX-Platform.pem" root@[Public IPv4 address]
No other programmatic system credentials or cryptographic keys are required.
Where customer-sensitive data is stored, you can choose KMS as the encryption solution for the EBS resources associated with your EC2 instances or other data; ensure you enable automatic key rotation for any existing KMS key used.
Customer Sensitive Data
When the instance has launched, the only sensitive data within the installation is the ec2-user password, which is initially set to the instance ID of the new EC2 instance as detailed in the AWS Marketplace launch usage instructions. No customer-sensitive data is stored upon initial deployment.
Where PII or PHI sensitive data could be present you should always encrypt the relevant AWS datastore.
All 3rd party or external services that are utilised to store PII or PHI sensitive data should be encrypted.
A common AWS service is Amazon Relational Database Service (RDS), where you can encrypt your Amazon RDS DB instances. Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas, and snapshots. You need to enable encryption at the database, table, or field level, wherever customer data is stored.
Use Amazon EBS encryption as a straightforward encryption solution for the EBS resources associated with your EC2 instances. With Amazon EBS encryption, you aren't required to build, maintain, and secure your own key management infrastructure. Amazon EBS encryption uses AWS KMS keys when creating encrypted volumes and snapshots.
Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.
You can attach both encrypted and unencrypted volumes to an instance simultaneously.
Where customer sensitive data is being stored then you should encrypt the EBS data volume attached to the EC2 instance. You will need to create a snapshot of the existing EBS volume and then create a new EBS volume using the newly created Snapshot ID ensuring you check the Encrypt this volume option and selecting the appropriate Key Management Service (KMS) key.
With a new encrypted EBS volume available you can detach the existing unencrypted EBS volume and attach the encrypted EBS volume.
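A hedged sketch of the snapshot-and-encrypt procedure using the AWS CLI (the volume and snapshot IDs, KMS key alias, and availability zone below are placeholders):

```shell
# Snapshot the existing unencrypted volume (IDs are placeholders)
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Pre-encryption snapshot"

# Create a new encrypted volume from the snapshot using a KMS key
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --encrypted \
    --kms-key-id alias/aws/ebs \
    --availability-zone eu-west-1a
```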
The instance launches with administrator access only. NOTE: There is no back door access to the console application. Loss of administrator credentials will result in access denied without user recovery.
A fully encrypted application database is used for the application and user credentials. Console configuration is available for user SSO access to the console.
No AWS service data encryption configuration is required as part of the deployment of the software. For the initial installation, any sensitive data such as API keys and passwords utilises a credential vault designed to be substantially better than keeping passwords in a text file or hard-coded, yet these values still reside within the self-contained instance environment of the TomorrowX Platform.
Only administrator users can access the credential vault values through the console, and obfuscated values (such as passwords) are never exposed in the web page. Instead, the HTML (if you were to view the source) simply contains the value "*SAME*" in all password fields that can be edited.
Every time an administrator changes a password value, the action is logged in the TomorrowX Platform audit log.
Instance Metadata Service Version 1 (IMDSv1) and Instance Metadata Service Version 2 (IMDSv2) are enabled by default when launching via the AWS Marketplace Launch from Website option. You can verify and adjust this using the AWS CLI modify-instance-metadata-options command.
Example:
Response:
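For example, the current metadata options for an instance can be inspected as follows (the instance ID is a placeholder; the response is a MetadataOptions JSON document showing the HttpEndpoint and HttpTokens state):

```shell
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[0].Instances[0].MetadataOptions'
```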
Once the initial ec2-user credentials have been created for the software console, you should turn off access to your instance metadata by disabling the HTTP endpoint of the instance metadata service using the AWS CLI command for the instance. You can reverse this change at any time by enabling the HTTP endpoint.
Example:
Response:
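A sketch of disabling the instance metadata HTTP endpoint (the instance ID is a placeholder):

```shell
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-endpoint disabled
```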
You are advised to use IMDSv2 if the instance will require access to the Instance Metadata Service. Use the modify-instance-metadata-options CLI command and set the http-tokens parameter to required. When you specify a value for http-tokens, you must also set http-endpoint to enabled.
Example:
Response:
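A sketch of enforcing IMDSv2 token-backed sessions (the instance ID is a placeholder):

```shell
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-endpoint enabled
```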
Reference documentation for listing the available CloudWatch metrics for your instance is provided here.
MetadataNoToken - The number of times the instance metadata service was successfully accessed using a method that does not use a token.
This metric is used to determine if there are any processes accessing instance metadata that are using Instance Metadata Service Version 1, which does not use a token. If all requests use token-backed sessions, i.e., Instance Metadata Service Version 2, the value is 0.
A CloudWatch alarm on this metric will ensure you are notified if the instance is incorrectly using IMDSv1 without a token; the metric can also be displayed visually by browsing for MetadataNoToken in CloudWatch across all instances.
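Such an alarm might be created as follows (the instance ID and SNS topic ARN are placeholders; the alarm fires whenever any token-less IMDSv1 request is counted in a five-minute period):

```shell
aws cloudwatch put-metric-alarm \
    --alarm-name imdsv1-tokenless-access \
    --namespace AWS/EC2 \
    --metric-name MetadataNoToken \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:eu-west-1:123456789012:ops-alerts
```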
For production environments, other CloudWatch alarms with notifications are available: monitoring CPU utilization when thresholds rise above expected limits, and monitoring http(s) listeners for web applications in combination with EC2 load balancer configuration, when instances are deployed as a target group behind a load balancer for scaling the platform, if and when required.
Access to the console is via the URL https://{ip address}/console, where {ip address} is the public or Elastic IP address allocated to the EC2 Instance. If you can log in, then the Platform’s Console is working correctly, and all other issues can be resolved from within the Console.
If the console login window does not load or does not log you in, you can check the platform's log files by accessing the EC2 instance via SSH and navigating to the following location: /opt/local/Tomorrow/server/logs. The logs will provide information about the issue preventing proper function of the Console.
If you can successfully log in to the Console, use the Servers window to check the health of your server, which is where your solutions are deployed to and run from.
Navigate to Administration -> Server Definitions area to rectify Server definition and reachability issues such as port definition, host name, and Server Encryption Key.
Restarting the TomorrowX Platform service can also restore the service application of both the console and server. You need to connect to the instance via SSH to perform service restarts.
To stop the service use: # service tomorrowstart stop
To start the service use: # service tomorrowstart start
It is good practice, and highly recommended, to routinely update the instance with available packages. For example, run the # sudo yum update command as the root user on CentOS or RHEL every month.
The TomorrowX Platform contains its own internal data store for storing user data, preferences, and the created solutions. There is no specific backup strategy in place as part of the AWS Marketplace deployment. After launch, AWS customers are highly encouraged to create Elastic Block Store (EBS) snapshots of running instance volumes using Lifecycle Manager within the EC2 console.
Tag the instance volume with name Backup and value set to true
Create new lifecycle policy, with policy type EBS snapshot policy
Set Target resources to Volume with Target resource tags Key to Backup with value true
Set the desired backup schedule to create the volume snapshot, for example daily every 24 hours. Select the desired backup and retention policy for your organisation.
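The steps above can equally be scripted with the AWS CLI; a hedged sketch (the account ID in the role ARN is a placeholder, and the schedule shown is daily with seven snapshots retained):

```shell
aws dlm create-lifecycle-policy \
    --description "Daily snapshot of volumes tagged Backup=true" \
    --state ENABLED \
    --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
    --policy-details '{
        "PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [{
            "Name": "Daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7}
        }]
    }'
```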
To restore a volume snapshot as a new instance, select Create image from snapshot from the Actions menu. A new Amazon Machine Image (AMI) will then be created, from which you can Launch instance from AMI from Images > AMIs to restore your instance. Please note that user credentials will be those of the instance volume from which the backup was created; new ec2-user credentials are not created with the new instance ID using this method.
If you wish to take a manual backup of the TomorrowX Platform installation:
SSH connect to the instance
Stop the TomorrowX Platform service using the command:
Zip the entire contents of the TomorrowX Platform installation directory. The default installation path is /opt/local/Tomorrow, where Tomorrow is the installation directory
Copy the zip file to the backup target location of choice
Start the TomorrowX service using the command:
You can restore this folder to your new instance location, ensuring the startup service is reinstated on the new instance and that the hardware configuration of the original installation, from where the backup was taken, is respected
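The manual backup steps above can be sketched as follows (the backup target path is a placeholder; /opt/local/Tomorrow is the default installation path):

```shell
# Stop the TomorrowX Platform service before taking the backup
service tomorrowstart stop

# Zip the entire installation directory
cd /opt/local
zip -r /tmp/tomorrow-backup-$(date +%Y%m%d).zip Tomorrow

# Copy the archive to the backup target of choice (placeholder path)
cp /tmp/tomorrow-backup-*.zip /mnt/backup/

# Restart the service
service tomorrowstart start
```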
The following pages describe the service endpoints and service quotas for AWS services. To connect programmatically to an AWS service, you use an endpoint.
Service quotas, also referred to as limits, are the maximum number of service resources or operations for your AWS account.
With Service Quotas, you can view and manage your anticipated quotas for AWS services from a central location. Quotas, also referred to as limits in AWS services, are the maximum values for the resources, actions, and items in your AWS account. Each AWS service defines its quotas and establishes default values for those quotas. Depending on your business needs, you might need to increase your service quota values. For adjustable quotas, you can request a quota increase. Smaller increases are automatically approved, and larger requests are submitted to AWS Support. You can track your request case in the AWS Support console. Requests to increase service quotas don't receive priority support. If you have an urgent request, contact AWS Support. AWS Support might approve, deny, or partially approve your requests.
To request a service quota increase:
In the navigation pane, choose AWS services.
Choose an AWS service from the list or type the name of the service in the search box.
If the quota is adjustable, you can choose the button or the name, and then choose Request quota increase.
For Change quota value, enter the new value. The new value must be greater than the current value.
Choose Request.
If you are expecting to scale your application or service or create multiple public internet facing instances you should proactively Request a quota increase for the EC2-VPC Elastic IPs as the default quota value is set to “5”.
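A hedged sketch of requesting such an increase via the AWS CLI; the quota code shown is an assumption, so confirm it first with aws service-quotas list-service-quotas --service-code ec2:

```shell
aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-0263D0A3 \
    --desired-value 10
```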
Similarly, if you are using Amazon Simple Email Service (Amazon SES, aws.amazon.com/ses/) for console-setup user credential reminders, server failure notifications, or application email alerts, then you should adjust the default quota to the anticipated sending volume and rate. Typically, AWS will increase the default quota sending limit from 200 to 50,000 upon request approval.
In a high availability deployment scenario, ensure you check and adjust default service quotas for application load balancers and RDS services appropriately.
Elastic Load Balancing endpoints and quotas
Amazon Relational Database Service endpoints and quotas
Ensure you review your architectural diagram to check all AWS services used and quotas applied to your account.
AWS Customers are strongly recommended to subscribe to AWS Business support plan, or greater to receive appropriate support from AWS for all production deployments and related AWS services used.
The default behaviour for the X Engine is to fail open in any case where an internal failure in a rule occurs. For most situations this will be appropriate.
However, for complex online environments where the rule set is well established and known to be solid, there may be a case for altering this to fail closed (automatically recover) from a failure. This behaviour can be altered by changing the configuration for the X Engine.
Should a failure occur, an email could be automatically sent to one or more recipients alerting them to the fact that it has happened. The email will contain a failure notification and a Java stack trace of the error that was detected. This latter approach makes it easy to determine precisely what caused the error and allows the recipients of the notification to quickly forward it to technical support for further investigation, if required.
Using the email feature requires an SMTP server to be defined for the server.
You can also use the Fail Safe Point rule to encapsulate code that is considered at risk of failing, but for which you do not wish the entire X Engine to terminate.
If you have access to trace data you also have access to system failure information in the server status. More details on this are found in the .
This guide walks through the process of embedding Tomorrow Software into a CI/CD pipeline and creating a containerized deployment. It is assumed that the reader is familiar with the basic steps of deploying configurations within Tomorrow Software and understands the basic concepts of Docker and Jenkins.
The Tomorrow Software console provides a proprietary integrated solution development environment including version management of all repositories (rulesets, configurations, content files etc.) created within the console. As a result, Tomorrow Software will retain the ‘source’ mastering responsibility and the process described below does not implement a traditional source code repository solution for driving CI/CD pipelines.
Build pipelines are typically initiated via changes to the source code repository using triggers on check-in or merge requests.
The Tomorrow Software console provides sandpit functionality for local rapid solution development configuration and testing. Locally defined service endpoints are used to interface with the delivery pipeline.
A predefined server definition within the Tomorrow Software console is used to stage a release candidate. Whereas the traditional operational model is to push the configuration directly to a remote (production) endpoint, the staged candidate is stored locally on the console server. A command (CLI) function is provided to package the staged release in a format that can be incorporated into a container image.
The process described in this document details the steps to create a JBoss/WildFly container with the running Tomorrow Software configuration based on a server definition called UATServer1. By having “functional environment” named servers the console can manage the specific configuration such as credentials that may differ as candidates are promoted through environments.
Pass the Tomorrow Software server name into pipeline as a parameter
Create deployment structure in workspace
Call Tomorrow Software console CLI function to get home location of console.
Copy required configuration files into workspace
Retrieve deployment configuration using console CLI function
Edit Tomorrow Software configuration files with target parameters
Build WAR file
Build Docker image
The first step is to create a server definition in the Tomorrow Software Console for the target (UATServer1) server.
Once the server has been created, deploy the BaseWebTrial example configuration. This will publish the default configuration files to the file system and provides a known starting point to verify files are correctly included and propagated into the container.
The first pipeline stage prepares the workspace and stages all files. Two Tomorrow Software console CLI commands are executed, getInstallLocation and getServerHome. The first returns the absolute folder path where the console server is installed; the second returns a zip file with the deployed server contents.
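Stage 1 might be scripted roughly as below; console-cli is a hypothetical wrapper around the console CLI functions, so take the actual invocation syntax from the product documentation:

```shell
# Resolve the console installation folder (console-cli wrapper name is hypothetical)
INSTALL_DIR=$(console-cli getInstallLocation)

# Retrieve the deployed server contents for UATServer1 as a zip file
console-cli getServerHome UATServer1 > "$WORKSPACE/UATServer1.zip"

# Unpack into the deployment structure in the workspace
unzip -o "$WORKSPACE/UATServer1.zip" -d "$WORKSPACE/deploy"
```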
Stage 2 involves updating the magic.properties file to set the target home directory, autostart and the admin port details.
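The stage 2 edits might look like the magic.properties fragment below; the key names for autostart and the admin port are assumptions, so match them to those present in the deployed file:

```properties
# Target home directory for the X Engine inside the container
homeDir=/Tomorrow
# Admin (management) port the console uses to reach the server
Port=9932
# Start the engine automatically on deployment (key name assumed)
autostart=true
```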
The third stage involves wrapping the config and jar files into a WAR file for deployment into the web-application server. Ant is used to build the WAR and it is driven by an XML file generated from within the pipeline script.
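A minimal Ant build sketch for stage 3; the file locations and the presence of a staged web.xml are assumptions:

```xml
<!-- build.xml sketch: wraps the staged config and jar files into a WAR -->
<project name="tomorrow-war" default="war">
  <target name="war">
    <war destfile="dist/tomorrow.war" webxml="staging/WEB-INF/web.xml">
      <fileset dir="staging"/>
      <!-- jar files land in WEB-INF/lib -->
      <lib dir="staging/jars"/>
      <!-- configuration files land in WEB-INF/classes -->
      <classes dir="staging/classes"/>
    </war>
  </target>
</project>
```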
The fourth stage in this pipeline example creates a Docker image from the configured files using the repository tag tomorrow/jboss. The Dockerfile used to construct the image is dynamically generated from the Jenkins pipeline script.
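A Dockerfile of roughly this shape could be generated by the pipeline; the base image tag and deployment path follow the public jboss/wildfly image conventions and are assumptions:

```dockerfile
FROM jboss/wildfly:latest
# Deploy the generated WAR into WildFly's hot-deployment folder
COPY dist/tomorrow.war /opt/jboss/wildfly/standalone/deployments/
```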
Following successful pipeline execution, the image tomorrow/jboss is available.
Verify the image runs correctly with the Tomorrow Software WAR file installed
A final sanity check to ensure that the deployed BasicWebTrial configuration is correctly deployed in the /Tomorrow folder of the image and matches the original software folder investigated.
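These checks might be performed as follows (the container name and paths are illustrative):

```shell
# Run the image and confirm the deployed configuration folder is present
docker run -d --name tomorrow-test -p 8080:8080 tomorrow/jboss
docker exec tomorrow-test ls /Tomorrow
docker rm -f tomorrow-test
```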
Where a configuration needs to connect to an external database, the JDBC driver file will need to be included in Pipeline Stage 3 (WAR file), along with a shell command to move the file from the configuration base directory to the staging folder.
The following example shows the updates required to include a Postgres JDBC driver as part of the build process.
Include the EDB driver as a Data File into the configuration as per below.
As part of the extract from console function the JDBC jar file needs to be included and copied to the target jars directory and then added to the WAR file create task.
If you have a need to monitor the Composable Architecture Platform engine using an external monitoring service, you can include the following line of code in your application status page to determine if the Composable Architecture Platform engine is active:
For example: The following JSP construct will result in a page that outputs true or false, depending on whether the engine is started or not:
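A sketch of such a construct is shown below; the status class and method names are hypothetical stand-ins, so consult the product reference for the actual engine status API:

```jsp
<%-- Outputs true or false depending on whether the X Engine is started.
     RulesEngineStatus.isStarted() is an illustrative name, not confirmed API. --%>
<%= software.tomorrow.engine.server.RulesEngineStatus.isStarted() %>
```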
The Composable Architecture Platform Server installation is a complete product set including sample applications. At times you may wish to remove components that you don’t use in a given installation. The easiest way to do this is by changing the server type in the console setup:
You can only change the server type once as it involves deleting several components in the server structure. There are 5 different Console types to select from as shown below:
The effect of changing the Console type is as follows:
Composable Architecture Platform ships with support for JSR168 portlets. Portlets generally behave the same way as a servlet, although there are a number of significant restrictions that need to be taken into account when writing rules for portlets.
Firstly, portlets do not receive IP specific information such as the address of the remote host – nor can portlets set or access cookies. This means that to perform basic site protection or analysis, a site should use a combination of both portlet filtering and what can be referred to as “landing page” filtering.
Many portals (such as Apache JetSpeed) have a generic servlet that is always invoked and used to delegate the browser's post to the portlet engine. This generic servlet is ideal as the "landing page". Other portals have an initial JSP or other means of receiving the initial access request to the portal, and from there, form the overall content. Whatever the method, you should always attempt to have a Composable Architecture Platform generic filter sitting at the very front of your portal.
Most portals, however, create intricate URLs that do not inform the rule writer about where the data posted from the browser is going to end up in the application. You are therefore required to install the Composable Architecture Platform portlet filter in front of each portlet (notice the distinction between the generic and portlet filter).
Each portlet filter should only be installed for portlets that are actually worth protecting. This is true both from an effort and performance perspective.
To install the portlet filter, you must edit the portlet.xml configuration file to wrap each individual portlet in the application. The illustration below shows how those tags are modified to include the filter.
First the <portlet-class> tag is modified to refer to the class software.tomorrow.engine.server.RulesEnginePortletFilter.
Next an initialization parameter is added to allow the filter to identify which portlet to invoke (the original class name now removed from the <portlet-class> tag).
These steps are repeated for each portlet that requires protection.
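A wrapped portlet definition might look like the fragment below; the init-param name and the original portlet class are illustrative:

```xml
<portlet>
  <portlet-name>examplePortlet</portlet-name>
  <!-- The filter class replaces the original portlet class -->
  <portlet-class>software.tomorrow.engine.server.RulesEnginePortletFilter</portlet-class>
  <init-param>
    <!-- Parameter name is hypothetical; it identifies the portlet to invoke -->
    <name>portletClass</name>
    <value>com.example.OriginalPortlet</value>
  </init-param>
</portlet>
```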
The installation is shipped as a single instance combining the console and server components. This ensures all available architectural deployment options can be considered as and when solutions are created and released through the development lifecycle into production. The instance may need to connect to various on-premise, hybrid, or external integration points (e.g., databases, CSV data files for processing, or 3rd party API services). Refer to the section for more details on architecting these scenarios.
For any advanced or new scenarios not listed here, contact us directly for guidance as detailed on the AWS Marketplace listing.
Please refer to the product reference section - in order to manage the default accounts and change passwords.
If you are new to AWS Identity and Access Management, then reference getting started reference documentation here: .
Refer to documentation about AWS Managed policies here.
More details can be referenced here.
For more details about Amazon EC2 key pairs and Linux instances refer to
AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the cryptographic keys that are used to protect your data.
When you enable automatic key rotation for a KMS key, AWS KMS generates new cryptographic material for the KMS key every year. For more information about KMS key rotation refer to
Please refer to the product reference section - if you require further details.
For general information about the AWS Instance Metadata Service refer to
After the instance successfully launches in EC2, the TomorrowX Platform starts as a running service. When running, it immediately invokes a token-authenticated API GET request to retrieve the metadata instance-id using IMDSv2. This is the only request made to the Instance Metadata Service, and it is initiated from the new instance itself, not externally.
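The request follows the standard IMDSv2 token flow, which can be sketched as:

```shell
# Obtain a session token, then use it to read the instance ID from IMDSv2
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/instance-id
```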
The instance-id value is used as the unique password to then auto-create the ec2-user credentials, which provides software console access only to the AWS customer launching the instance. The AWS Marketplace usage instructions then guide the user to the section, such as changing user password and setting user access roles.
A prerequisite for setting CloudWatch alarms is to create an Amazon Simple Notification Service (SNS) topic - , and then create a suitable subscription to that topic in order to receive alerts via your preferred delivery method, e.g., email, SMS, or an http(s) web server endpoint, among other options.
As a minimum, you are recommended to create a status check alarm using AWS CloudWatch (aws.amazon.com/cloudwatch) to monitor "Status check failed: either" at appropriate intervals, which will depend on the application or service being created and its assigned service level (development, test, or production instance). This alarm monitors both instance and system status. The AWS/EC2 namespace includes the following status check metrics. By default, status check metrics are available at a 1-minute frequency at no charge. Refer to the status check metrics reference documentation here
Sign in to the AWS Management Console and open the Service Quotas console at .
See price plans
– calls the Tomorrow Software console CLI interface
– creates the Tomorrow Software WAR file
Important: It is common for portals to split their portlets into multiple applications. Please see the previous section for instructions on how to link multiple applications together as a single managed entity.
Port Number / Use / How to change

80
HTTP port for the console, demo application and the built-in proxy.
To change: edit the file /server/start.d/http.ini and locate the following text:
## Connector port to listen on
jetty.http.port=80
Modify the value to be the desired port.

443
HTTPS port for the console, demo application and the built-in proxy.
To change: edit the file /server/etc/jetty-ssl.xml, locate the port property (the line containing default="443") and modify the value to be the desired port.

8080
Optional browser proxy.
To change: under Administration, click on the server where the browser proxy is running. Under the "Forwarding" tab, locate the browser proxy port and set it to a different value (0 to stop it completely). Then redeploy the configuration to the server and start it. The port will be changed.

9931
Default Multi-Protocol server standalone instance.
To change: in the Multi-Protocol folder, the file magic.properties contains the port number.

9936
Default Stress Test Server.
To change: in the Stress folder, the file magic.properties contains the port number.

9930
Console management port.
This and the following management ports are all set in their relevant web application properties files, located in /server/webapps/[name]/WEB-INF/classes. For this port, [name] is console.

9932
Test server management port ([name] is testserver1).

9933
Qwerty demo management port ([name] is qwerty).

9935
Default Multi-Protocol server management port ([name] is mpserver1).

9944
Built-in proxy management port ([name] is root).
Setting
Details
Home
The home folder for the console. Configuration and rule sets will be stored within subfolders under this folder. The folder must exist, and the console application server user must have read and write authority to it.
The value can be an absolute or relative file path.
Languages
A CSV list of the languages supported (and made available) in the console.
Locales
A CSV list of the locales that map to the languages installed.
Schema
The database schema used for the console configuration database.
Catalog
The database catalog used for the console configuration database.
DriverClass
The name of the actual JDBC driver class as it is loaded by the connection pool. Even though configuration information could be stored in DB2 (or some other production level database), it is recommended to use the default Derby driver for configuration. In production environments, the Derby driver should not be configured for use within the product. This will ensure that it is impossible to write a rule set that modifies the product’s own configurations.
DriverConnection
The connection string used by the JDBC driver to connect to the database.
DriverUser
DriverPassword
The user ID and password to use for the JDBC connection
MailServer
This is the system wide SMTP server used for all sending of emails from within the Composable Architecture Platform console (mostly password notifications).
MailUser
MailPassword
If the SMTP server requires authentication, specify the user ID and password in these settings.
MailingHost
The name of the host being used to identify the sending host in any email. This value allows the emails to link back to the original web page.
MailingSender
The email address used to send password reset notifications and user ID notifications
MailUseSSL
Specifies if the mail server uses SSL for connectivity. If yes, this value should be set to "true" otherwise it should be "false" or not exist
MailUseTLS
Specifies if the mail server uses StartTLS for connectivity. If yes, this value should be set to "true" otherwise it should be "false" or not exist
AuthenticationPlugins
A list of plug-ins that can alter the logon behaviour of the Composable Architecture Platform console.
UpdateServer
The URL of the Composable Architecture Platform update server. This server will be checked regularly for new available updates. This setting is optional.
ProxyHost
ProxyPort
ProxyUser
ProxyPass
ProxyDomain
These settings are used to define a proxy server that the console must pass through to access the update server or other www based services.
These settings are optional, however, if the ProxyHost is set, all other settings must be defined.
If the web proxy uses NTLM authentication, the proxy host should be the fully qualified name of the proxy (for example: myproxy.mycompany.com) and the ProxyDomain must also be set to the local domain. For other non-Microsoft proxies, the ProxyDomain should be blank.
UserRegEx
PassRegEx
These settings specify how user IDs and passwords are validated in the form of regular expressions. The default is to require a minimum of five characters/numbers, but alternatives can be specified here. Please note that the regular expression format is as used by JavaScript.
MasterConsole
MasterId
MasterPwd
These settings change the behaviour of the console from being a master console to being a slave console (Cluster Node). To make a console a slave console for cluster management, the MasterConsole setting must contain a URL pointing to the root web path of the master console (for example: https://192.168.1.1/console). The use of SSL is recommended for a master console connection.
The MasterId and MasterPwd refers to the corresponding values specified for this slave console in the master console.
Please see the section on managing large clusters for more information.
AccessManager
The access manager field can be used to override the method for user authentication. The currently supported access managers are for either the SAML or LDAP Authentication Plugins.
This is set on the AccessManager property as follows:
For LDAP: AccessManager=software.tomorrow.authenticate.LDAPAuthenticationPlugin
For SAML: AccessManager=software.tomorrow.authenticate.SAMLAuthenticationPlugin
Additional properties are required for either the LDAP or SAML Authentication Plugin. Please see below.
LDAPDomain
LDAPHost
LDAPSearchBase
LDAPTimeZone
These properties are used to define how the console connects to an LDAP server when the LDAP Authentication Plugin is set as the access manager
The LDAP Domain is set as follows:
LDAPDomain=mycompany.com
The LDAP Host is set as follows:
LDAPHost=ldap://myldaphost.mycompany.com
The LDAP Search base is used to locate the correct segment in the LDAP server. A typical configuration would be:
LDAPSearchBase=DC=mycompany,DC=com
The LDAP TimeZone denotes the time zone that will be used for the users authenticated via LDAP. For example:
LDAPTimeZone=GMT
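Put together, an LDAP configuration fragment looks like this (the values are the examples given above):

```properties
AccessManager=software.tomorrow.authenticate.LDAPAuthenticationPlugin
LDAPDomain=mycompany.com
LDAPHost=ldap://myldaphost.mycompany.com
LDAPSearchBase=DC=mycompany,DC=com
LDAPTimeZone=GMT
```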
SAMLMetaData
This property is used to define how the console connects to an identity provider (IdP) using SAML when the SAML Authentication Plugin is set as the access manager. SAML metadata value is the XML document which contains information necessary for interaction with SAML-enabled identity or service providers or the http reference to the meta data URL. SAMLMetaData=[XML or HTTP reference to meta data URL]
Setting
Details
homeDir
The home directory for the X Engine. All files and rule sets deployed to the server will be stored in this folder and backup copies of any rule sets that have been replaced will be stored in a subfolder called “backup”.
The value can be an absolute or relative file path.
Port
The TCP/IP port where the X Engine will listen for instructions from a Composable Architecture Platform console.
If using vertical scaling (multiple application server instances on the same server sharing a common configuration folder), then provide a list of port numbers separated by commas. Make sure to create a server instance in the console for each of the ports specified in the port list. Each server clone can then be managed as an individual X Engine. Please note that for vertical scaling, rules for the "Master" (first port in the list) will be stored in the top level home directory, whereas rules for any one of the "slaves" will be deployed to a sub-directory of the home directory named "CloneX" - where X is a sequential number.
validIPs
This directive is a comma separated list (CSV) of IP addresses of Composable Architecture Platform consoles that are allowed to send commands to this server. This setting is optional.
For stronger security, specify a list of trusted console servers that can communicate with this server.
failOpen
Determines the behaviour in case of a failure. If set to true, the X Engine will terminate itself if an internal problem occurs. If set to false, the X Engine will attempt to recover from any failure.
Note: use the Fail Safe Point rule to manage this in a finer grained manner.
preserveStream
This setting is specifically used for implementations that store and forward requests provided to it over HTTP. Examples of this include the Java/PHP Bridge and the built in forwarding proxy.
Generally speaking, if this setting is used in inline filter installations it should be set to false (unless this causes the application to somehow fail). Setting it needlessly to true will impact performance.
maxRequestSize
This setting is for filter and built in forwarding proxy installations only and determines the maximum request size the filter will accept, in bytes. This can be used to protect an application against excessively large upload attempts and also applies to multi-part POSTs.
If not specified, the default setting is 10MB.
bannedVariableNames
This setting nominates variables that are never allowed to be set within the X Engine when the variables are accessed inline in a proxy or servlet filter. For example, to make sure a user is unable to see an end user’s password, nominate the field name for the password here. Consequently, the rule set will not see the variable.
The list is a CSV list of disallowed variable names and is case sensitive.
bannedVariableMasks
This setting determines what happens when an attempt is made to set a banned variable within the X Engine. It is a CSV list of tokens that must correspond, position by position, to the list in bannedVariableNames. Valid tokens are:
REMOVE meaning the variable will not be set at all
MD5HASH meaning the variable will be set, but encoded with an MD5 Hash
SHA1HASH meaning the variable will be set but encoded with an SHA-1 Hash
PCIMASK meaning the variable will be treated as a credit card number where only the first 6 and last 4 digits will be visible.
Masks and hashes can be combined using the & operator. For example: SHA1HASH&PCIMASK will result in two new variables:
[fieldname]_SHA1HASH and [fieldname]_PCIMASK
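For example, a hypothetical configuration that removes a password field outright and both hashes and masks a card number field might look like this (the field names are illustrative only):

```properties
# "password" is never set; "cardNumber" is replaced by the two variables
# cardNumber_SHA1HASH and cardNumber_PCIMASK.
bannedVariableNames=password,cardNumber
bannedVariableMasks=REMOVE,SHA1HASH&PCIMASK
```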
EncryptionKey
If the connection between the console and the X Engine is to be encrypted, specify a secret password here. The same password must be provided when the host is defined in the console. If an incorrect match between the console and the host is made, the server will be seen as offline in the console. Note that this field is case sensitive.
This setting is optional.
EncryptionAlgorithm
If using an encrypted connection between the console and the X Engine, specify the encryption algorithm to be used here. The available algorithms depend on the cryptographic extension installed with the JDK version.
If left blank, PBEWithMD5AndDES will be used if an EncryptionKey is specified.
Tenancy | Default |
Default VPC | Yes |
IPv4 CIDR | e.g. 172.31.0.0/16 |
DNS hostnames | Enabled |
DNS resolution | Enabled |
Available IPv4 addresses | 4090 |
Default subnet | Yes |
Auto-assign public IPv4 address | Yes |
IPv4 CIDR | e.g. 172.31.64.0/20 |
Auto-assign public IPv4 address | Yes |
Hostname type | IP name |
Auto-assign IPv6 address | No |
Auto-assign customer-owned IPv4 address | No |
Resource name DNS A record | Disabled |
Resource name DNS AAAA record | Disabled |
Destination | Target | Propagated |
0.0.0.0/0 | e.g. igw-ae0737cb (internet gateway ID) | No |
172.31.0.0/16 | local | No |
Inbound rules (allow) | All traffic | All | All | 0.0.0.0/0 |
Outbound rules (allow) | All traffic | All | All | 0.0.0.0/0 |
Inbound rules (IPv4) | HTTP | TCP | 80 |
Inbound rules (IPv4) | HTTPS | TCP | 443 |
Inbound rules (IPv4) | SSH | TCP | 22 |
Outbound rules (IPv4) | All traffic | All | All |
Role Name | Description | Attached policies |
SupportUser | Users who need to SSH connect to instances to perform routine maintenance and server reboots. |
AdvancedSupport | Users who require EC2 and Marketplace full access to launch and configure instances. Deploy and advanced technical support. |
MarketplaceLicences | Users who require access to list software licence subscriptions across the organisation. |
Type | Effect |
Demo Server | None |
Console only | All applications (including the forwarding proxy) are removed from the server. The only items left are the console itself and the test server. |
Forwarding Proxy with console | All demo applications are removed. The only remaining items are the console, the proxy and the test server. |
Forwarding Proxy without console | All demo applications as well as the console and the test server are removed. The only remaining active item is the proxy. |
Cluster Slave Console | All demo applications and the forwarding proxy are removed. All repositories are removed, and the data folder is cleaned. |
This is a Getting Started guide supplementary to the reference documentation of Composable Architecture Platform (CAP), specifically to help Google Cloud customers with installation, setup, and production considerations when deploying CAP to Google Cloud Platform (GCP) from the available TomorrowX solutions listed on Google Marketplace. If you are new to CAP, an introduction to CAP can be found here. You can find the TomorrowX partner profile in the Google Cloud Partner directory. First-time users should click the GET STARTED button on the CAP Product Details page.
At the time of writing, this guide has been created with an installation using a Red Hat Enterprise Linux (8.10) Google Cloud public image. Basic Linux commands are required to connect to your instance and perform operational tasks such as server updates, restarts, and SSH connection. Google Cloud's Red Hat Enterprise Linux FAQ page covers frequently asked questions around support, migration and licenses when running Red Hat Enterprise Linux (RHEL) on Google Compute Engine. Optional suggested reading: Installing on Red Hat Enterprise Linux
To determine the installed JDK version, SSH connect to the VM instance and use the command
java -version
You may need to set JAVA_HOME
Example:
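A minimal sketch of setting JAVA_HOME for the current session on RHEL (the JDK path below is a common example only; confirm the actual path on your system, e.g. with `readlink -f "$(command -v java)"`):

```shell
# Example path only -- substitute the JDK location installed on your VM
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
```

Run `java -version` afterwards to confirm the expected JDK is picked up. To persist the setting across sessions, add the two export lines to your shell profile.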
The CAP installation is shipped as single VM instance combining the console and server components. This ensures all available architectural deployment options can be considered as and when solutions are created and released through the development lifecycle into production. The instance may need to connect to various on-premise, hybrid, or external integration points (e.g., databases, CSV data files for processing, or 3rd party API services). Refer to the section Architectural Scenarios for more details for architecting these scenarios.
In this guide we are referencing the initial installation components as made available from the launch directly from Google Cloud marketplace. Using this solution deployment you will be free to adapt the architectural scenario for scale and most appropriate business use case.
For a better security posture, we provide a sample high-availability deployment within a private subnet behind a load balancer for failover and administration access, whereby the CAP Console instance is physically separated from the runtime (n) number of CAP Agents, which can be auto-scaled relative to anticipated traffic load and availability requirements.
For any advanced, or new scenarios not listed here, contact us directly for guidance as detailed on the Support tab of Google Cloud Marketplace product details listing.
Either select an existing project resource in your GCP organisation, or create a new project for the CAP installation. From the dropdown organisation field in the top banner you are prompted to select an existing resource as follows.
Alternatively you can create a new project by selecting the NEW PROJECT option in the top right where you'll be prompted to define the project name, organisation, and location.
When the new project has been created, it will shortly show as an available resource to select in the banner dropdown select field. You can then proceed to click the get started button.
Once the terms have been agreed, the GET STARTED button is replaced and you are ready to launch and deploy a CAP VM.
When you press launch for a new project, you will be prompted to enable the following APIs, which are required to deploy the CAP VM product from Marketplace. Click ENABLE, and allow a few minutes for these services to be enabled.
After the APIs have been successfully enabled, you will be presented with the deploy page. For a new project you will be required to create a new service account to run the deploy processes for CAP. The new service account will be created with the following roles:
Complete the required fields including selecting the compute zone where the CAP VM will be deployed.
Scroll further down the deploy page; a General Purpose E2-Standard VM (2 vCPU, 8GB memory) is pre-selected as the default. This selection is ideal for a first-time deployment running the CAP Console and Proxy Servers on a single VM. The boot disk size of 20GB is configurable depending on how much data you plan to store on this single VM.
The default networking configuration will create firewall rules to accept the following traffic.
If you are planning to use the built-in proxy (BIP) as a browser proxy, a new firewall rule allowing TCP port 8080 traffic from the test client browser will additionally need to be created once the VM instance is running. This rule is excluded from the default deploy configuration to avoid security exposures.
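One way to add such a rule is with the gcloud CLI; the rule name, network, and source range below are placeholder values, and you should tighten `--source-ranges` to your test client's actual IP range:

```shell
# Allow TCP 8080 from a specific client range only (placeholder values)
gcloud compute firewall-rules create allow-bip-8080 \
  --network=default \
  --allow=tcp:8080 \
  --source-ranges=203.0.113.0/24
```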
Once the configuration has been defined for your selections, go ahead and click DEPLOY at the bottom of the page.
Once deployed, select the DETAILS tab to access the Admin Url which you can access via a browser.
First time users can launch the console from the Admin Url as detailed on the Google Marketplace Solution Deployments Details page at https://{Instance IP/DNS}/console e.g. https://12.34.56.78/console
To retrieve the password, select the Resources tab on the Solutions Deployment page, and click on the Compute Engine resource name of the VM instance that has been successfully deployed.
The Compute Engine VM Instances basic information page will open from this link, where you will be able to copy the Instance ID value which is used as the unique administrator password for first time login to the CAP console for User ID gcp-user.
Please refer to the product reference section - Essential things to do first in order to manage the default accounts and change passwords.
Connect via SSH to the new VM instance via the SSH dropdown options list on the Compute Engine VM Instances basic information page. Read more information about how to connect to Linux virtual machine (VM) instances: Connect to Linux VMs
Example gcloud command:
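A typical invocation looks like the following (the instance name, zone, and project ID are placeholders; substitute your own values):

```shell
gcloud compute ssh cap-vm-instance \
  --zone=us-central1-a \
  --project=my-project-id
```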
Read more: About Google Cloud SSH Connections
When the instance has launched, the only sensitive data within the installation is the gcp-user password, which is initially set to the instance ID of the new VM instance as detailed on the Google Cloud Marketplace solution deployments details page. There is no customer sensitive data stored upon initial deployment.
Where PII or PHI sensitive data could be present, you should always encrypt the relevant Google Cloud datastore.
All 3rd party or external services that are utilised to store PII or PHI sensitive data should be encrypted.
After the VM instance successfully launches in Google Cloud Compute, CAP will auto-start as a running service called tomorrowstart. When running, it will immediately invoke an authenticated API GET request to retrieve the metadata instance-id as follows:
http://metadata.google.internal/computeMetadata/v1/instance/id
This is the only request made to the Instance Metadata Service, initiated from the VM instance itself, not externally.
The returned instance-id value is used as the unique password to then auto-create the gcp-user credentials, which provides admin console access only to the GCP customer launching the instance. The Google Cloud Marketplace usage instructions then guide the user to the Essential things to do first section, such as changing user password and setting user access roles post deployment.
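From inside the VM you can reproduce this lookup yourself; the GCE metadata server requires the `Metadata-Flavor: Google` request header and is only reachable from the instance itself:

```shell
# Returns the numeric instance ID (only works on a GCE VM)
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/id"
```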
The Ops Agent is the primary agent for collecting telemetry data from your Compute Engine instances, combining the collection of logs, metrics, and traces into a single process. The Ops Agent is not installed by default as part of a Marketplace Solution Deployment; if required, you will be prompted to install the Ops Agent on the Observability tab of the Compute Engine VM instance's basic information page to capture and monitor this data for the VM instance.
If you install the Ops Agent, then you might be charged for the metrics, logs, or traces that the agent sends to your Google Cloud project. For pricing information read more here
If the console login window does not load or does not log you in, you can check the log files by accessing the VM instance via SSH and navigating to /opt/local/Tomorrow/server/logs. The logs will provide information about the issue preventing proper function.
If you can successfully log in to the Console, use the Servers window to check server health where your solutions are deployed to and run from.
Navigate to Administration -> Server Definitions area to correct Server definition and connectivity issues such as port definition, host name, and Server Encryption Key.
Restarting the tomorrowstart service can also help restore both the console and server applications. You need to connect to the instance via SSH to perform service restarts.
To stop the service use: service tomorrowstart stop
To start the service use: service tomorrowstart start
It is good practice to routinely update the VM instance with available packages. For example, run the sudo yum update command as the root user to install RHEL patches and updates.
CAP contains its own internal data store for storing user data, preferences, and the created solutions. There is no fixed backup strategy in place as part of the Google Cloud Marketplace deployment.
Read more in the section Backup and Restore
If you wish to take a manual backup of the CAP installation:
SSH connect to the VM instance
Stop the tomorrowstart service using the command: service tomorrowstart stop
Zip the entire contents of the TomorrowX Platform installation directory. The default installation path is /opt/local/Tomorrow, where Tomorrow is the installation directory
Copy the zip file to the backup target location of your choice
Start the tomorrowstart service using the command: service tomorrowstart start
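The backup steps above can be sketched as a shell session (the archive name is an example, and the default installation path is assumed):

```shell
sudo service tomorrowstart stop
# Archive the whole installation directory (default: /opt/local/Tomorrow)
sudo zip -r /tmp/tomorrow-backup.zip /opt/local/Tomorrow
# Copy /tmp/tomorrow-backup.zip to your backup target of choice, then restart:
sudo service tomorrowstart start
```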
You can restore this folder to your new VM instance location. Ensure the tomorrowstart service is reinstalled on the new instance, and that the new instance's hardware configuration respects that of the original installation from which the backup was taken.
Basic Support is included for all Google Cloud customers.
Read more about Google Cloud Basic Support or get more information to Sign up for other Customer Care offerings.
Google Cloud Compute Engine – required
Google Cloud Marketplace – required
Before you deploy, you must review the CAP details and terms, check the agreement box, and click AGREE to deploy the CAP product.
Allow TCP port 22 traffic from the Internet (for SSH connection)
Allow HTTP traffic from the Internet (port 80)
Allow HTTPS traffic from the Internet (port 443 – note an SSL certificate is not installed)