

Hello, World!

Version: 10.0 / Modifications: 0

Introduction

Hello, World!

The "Hello, World!" program is a computer program that outputs or displays the message "Hello, World!". Such a program is very simple in most programming languages and is often used to illustrate a language's basic syntax, which makes it the first program many people write when learning to code.

✨ Now step inside and follow these steps to complete your very first composition with the Composable Agentic Platform.

Requirements

  • A running local or cloud-hosted instance of X.

  • Console version 10.0.0.21050 or later installed.

  • Chrome or Firefox browsers are supported.

  • Ports 80 and 443 must be available to run the console and the Programmable Data Agent.

For the purposes of these instructions, [your server name] = localhost.

For example: http://[your server name]/console/ = http://localhost/console

You need access to a console login screen like this:

Composable Agentic Platform - Console -

Say Hello [content file]

Click the link to open http://localhost/hello.html in a new browser tab:

Hello World - Content file -

You’ll see that a simple HTML content file called hello.html has already been pre-deployed and is served to the browser by the running Programmable Data Agent.

Go ahead and enter your name in the form and press the Say Hello button. The form submission responds with Hello.

SayHello Response

Background Information

The Programmable Data Agent loads hello.html, prompting the user to enter a name and to click a button labelled Say Hello. When the button is clicked, the text entered should be appended to "Hello". For example, if the text entered is "World!" then the result will be "Hello World!"

Objective

The user experience needs improving because any text entered is currently ignored. Can you follow this guide to improve the user experience?

First up, let’s go and see where the hello.html file lives….

Login [console]

Log in to the console. If you are working on the localhost console at http://localhost/console/, use the default credentials:

  • User ID: admin

  • Password: admin

Open Repositories

Once logged in, press Start followed by Repositories.

We typically call these “repos”. A repo is the home, or workspace, in the console where your work lives.

Navigate to Repositories

Now click on the Hello World repository folder (no need to expand the folder tree just now; that’s where you can save and restore your repository backups – we’ll get to that soon enough!).

Now press View and then expand the Content Files folder.

Hello World repo
Hello World Content Files - navigation -

Content files can be HTML, XML, images, or any other binary content that may be required to be served when requested.

Content files can also be dynamically modified by content rule sets; we’re not covering those in this example. Content files live within a content path that must map to the content path of the application. In our simple example, hello.html is served with localhost as the root directory, so it resides in the top-level Content Files folder.

http://localhost/hello.html

The Hello World configuration has already been deployed from the console to the target server’s Programmable Data Agent, which is why the page loads when requested.

Update Content Files

So, let’s inspect the html file. Click on hello.html, and a new portal window will open for the file. Click on the Update button as follows.

Updating the hello.html

A new browser window opens to show an HTML editor for the hello.html content file. Note the input parameter name on the form is set to Name. We don’t need to make any changes to the HTML file, so you can close this window.

hello.html content file

So that’s a small introduction to Content Files. Next, let’s take a look at Rule Sets.

SendResponse [rule set]

With the Hello World repository open, expand the Rule Sets folder, then click the SendResponse rule set and press Update in the portal window that opens.

Update a Rule Set

The rules editor is the graphical design tool for composing and maintaining rule sets. The rules editor is launched as a separate browser window from within the console application when you press Update.

Rules Editor – example for reference only

Go ahead and browse the vast catalogue of what we describe as “digital blocks” on the left-hand side. The catalogue is grouped into collections. To use any block in the catalogue, expand the group folder, then click and drag a block onto the main canvas as shown.

In this example, you can expand the Alert group folder and drag the Send Kapow SMS block onto the canvas.

Send Kapow SMS block

Rules Properties – example for reference only

Now click to select the Send Kapow SMS block on the canvas, and the left-hand side catalogue will switch to the Properties tab.

Properties tab for a rule

Each block has properties you need to set when composing, along with adding a more meaningful description (like adding comments in code).

In this example you can set the properties to two variables called MESSAGE and MOBILE. The block requires these in order to perform its intended function: they need to contain the SMS message text and the phone number to send the SMS message to.

Everything else is taken care of.

Checking the help section for a Rule

Each block has additional online help you can access by right-clicking over the selected block and pressing Help.

Give it a try.

Set Variable

So, let’s get back to our example. Click to select the first block called Set Variable and view its Properties.

A selected block’s banner colour turns grey.

Set Variable block

The block does exactly what it says on the tin: it sets a new variable. In this example we’ve set the variable name to RESPONSE, with the value set to a snippet of HTML code. We enclose this snippet in quotes.

"<html><body><h1>Hello "+NAME+"</h1></body></html>"

Note how this value has been constructed in three parts.

"STRING"+VAR+"STRING"

You’ll remember from earlier that the form submission responds with just “Hello”. That’s because the NAME value hasn’t been defined or “passed into” this rule, so it processes NAME as a blank value, and the value of RESPONSE would look like this on exit:

"<html><body><h1>Hello </h1></body></html>"
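
This value construction, and the blank-NAME behaviour, can be sketched in plain JavaScript. The function name below is purely illustrative; in the platform this is composed with blocks, not code:

```javascript
// Mirrors the Set Variable block's "STRING"+VAR+"STRING" construction.
function buildResponse(name) {
  // When NAME has not been passed into the rule, it behaves as a blank value.
  const NAME = name === undefined ? "" : name;
  const RESPONSE = "<html><body><h1>Hello " + NAME + "</h1></body></html>";
  return RESPONSE;
}
```

Calling it with no argument reproduces the unimproved behaviour: the heading is just "Hello ".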

HTTP Response

Click to select the second block called HTTP Response and inspect the Properties. The selected block’s banner colour turns grey.

You can also COPY / CUT / DELETE / PASTE block(s) with a simple right-click.

How easy is that?!

HTTP Response block Properties

Guess what!?

This block also does exactly what it says on the tin: it responds to an HTTP request with the response data that has been set in its property. In this case the variable RESPONSE holds the HTML snippet value set in the preceding Set Variable block.

You’ll see this block also requires an HTTP Status code and a Content Type to be set.

This rule performs the final response behaviour of the Programmable Data Agent that you’ve already experienced when you clicked the http://localhost/hello.html link and pressed the Say Hello button.
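
Conceptually, the block’s output can be pictured as a simple structure. The field names below are illustrative assumptions, not the platform’s API:

```javascript
// Illustrative shape of what the HTTP Response block sends back.
function httpResponse(body) {
  return {
    status: 200,              // the HTTP Status code property
    contentType: "text/html", // the Content Type property
    body: body,               // the value of the RESPONSE variable
  };
}
```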

Rule Info

Click on the fourth tab called Rule Info for the SendResponse rule set.

Rule Info Tab

The Export to Group and Short Description represent this rule set as a new block that can then be (re-)used in other compositions. We will use the Send Response rule that lives in the Hello World Grouped folder in the next steps.

Description, Export and Short Description inputs

Note it has the Parameter Type set to Input, Parameter Name set to NAME, and has been given a Label of Name.

Parameters section

We’ve finished looking at the SendResponse rule set now, so go ahead and close it by closing the Rules Editor window.

Do NOT save any changes if prompted to do so.

SayHello [rule set]

So, let’s create a new rule set that will pass the html form’s Name value into the response.

Create a new rule set

Click on the Rule Sets folder in the Hello World repository. In the portal window that opens, set the File Name to SayHello (case sensitive) and press the Create button.

Create New Rule Set

Now open the newly created SayHello rule set for editing. Click on the SayHello rule set that has now appeared in the rule set folder of the Hello World repository and press Update just as you did to inspect the SendResponse rule set.

Update the newly created Rule Set - SayHello

Search the catalogue

Go to the search tab and search for “Response” and drag the Send Response block onto the rules editor canvas.

Alternatively, you can find the same block in the catalogue from the first Grouped tab, located in the Hello World group folder. This is because the Rule Info tab of the SendResponse rule set has an export group defined as Hello World.

Either method is fine to search the catalogue and drag blocks onto the canvas.

Hello World Folder

Wire blocks together

Click on the Send Response block (yes, we’ve turned a rule set into a new block in the catalogue for re-use) and once again just set the properties. Set the Name property to Name (case sensitive, no quotes).

Remember, this was the input parameter set in the hello.html content file we looked at earlier.

Click and hold over the orange cog, then release over the green dot to “wire” the first block into the rule set, working right to left. Incidentally, all subsequent blocks are wired from the block exit chain point (right-hand side) to the input of the next block (left-hand side).

Press SAVE and close the rules editor window as shown.

Save the Rule Set

That’s all you need for your new rule set.

HelloWorld [configuration]

The HelloWorld configuration defines the input into the Programmable Data Agent and the rule sets to run.

General tab

Expand the Configurations folder and click the HelloWorld configuration. The General tab is the default view; ensure you now select the SayHello rule set from the dropdown list of available rule sets. This is the “initialising” rule set that is processed by the Programmable Data Agent on the very first transaction it receives.

Embedded (dependent) rule sets that have been wired within the SayHello rule set are deployed along with their parent, so you only need to set the top-level rule set. Any dependent rule sets will be deployed with the configuration without you having to define them.

You’ll note here that there are three other types of rule set that can be set to initialise and run when processing data: for (1) CONTENT, on (2) STARTUP, and on (3) COMPLETION. These are not required in this example.

Specifying the initial Rule Set in our Hello World Configurations

Timers tab – information for reference only

Just to mention in passing, there is a fifth rule set you can set in the Timers tab of the configuration. These are rule sets that are initiated and run (as the name suggests) on a timed basis, for example when a rule set is required to perform a defined process, say, every 24 hours.

Timers Tab

Input source tab

Click on the Input Source tab and inspect the different sources of data options available.

Input Source Tab

For this example, we are configuring the Programmable Data Agent to process web application data, but as you can see this is just one of a multitude of available options to define in the configuration, dependent on the composition and data sources being processed.

Configuring Programmable Data Agent for web application data

Databases tab – information for reference only

Click on the Databases tab. This is where you define the databases made available to the Programmable Data Agent. You are not required to define a database for this example, so there’s no need to configure one.

Example only:

Databases Tab

If you are interested, database connectivity specifying JDBC driver, connection string and schema credentials is an administrator set-up task in the console. You don’t need to complete that right now.

Deploy

With the new SayHello rule set defined in the configuration, you can go ahead and press the Deploy button.

Deploy our repo

Select Programmable Data Agent as the target server and press the Deploy button.

Specifying the Target Server and deploying

Wait a few seconds for the deployment to complete and the server to restart, and you’ll see the Programmable Data Agent server details shown.

Test

Click the link to open http://localhost/hello.html in a new browser tab and refresh the page.

hello world form

Enter World!, then press the Say Hello button; if successful you’ll receive a Hello World! response.

SayHello Response

[the crowd erupts into wild applause 👏🏻🍾]

Want some more?

Then read on….

Performance data and live probes

With the Hello, World! example now working successfully, let’s give you a glimpse under the hood of the Programmable Data Agent.

Go back to the console and click Get performance data in the Programmable Data Agent server portal window you have open.

Get Performance Data

On the next window, click View Rules Performance.

View Rules Performance

The rules editor opens in a new window. Double-click the Send Response block.

SayHello Send Response block

Place a probe on the Set Variable block: right-click over the green exit chain point and click New probe…

New probe for SendResponse block

Click the Create button. Live probes are triggered by variables and values, and occurrences thereof. We can leave these blank to just trigger on the next transaction.

Create New probe

The exit chain point turns yellow to show the probe is set.

Yellow probe

Now go to the browser tab of the demo page showing the SayHello output and click the back button so that the input hello.html page shows.

Input a new name, Probe, into the input field and click Say Hello; the page responds as expected with Hello Probe. Go back to the rules editor window with the probe set and you’ll see the exit chain point has turned red to show the probe has been triggered.

Two probes

Right click on the red exit chain point and click View probe.

View a probe

You can now see the transaction data that has just been processed by the Programmable Data Agent: the contents of the two variables NAME and RESPONSE.

[NAME]=[Probe]
[RESPONSE]=[<html><body><h1>Hello Probe</h1></body></html>]
Transaction data with the content of the two variables NAME and RESPONSE

Aside from helping you view live data to assist with composing or troubleshooting your solution, probing also provides a superior debugging tool that can even be used on production servers without the need for logging.

Frame Busting

Frame busting refers to the ability of an application to avoid being encapsulated within an IFRAME. The latter approach can be used not only to make one site impersonate the capabilities of another but, more sinisterly, to overlay a different user experience on top of an IFRAMEd site and allow events to flow through to the IFRAME.

Using this approach, a user can inadvertently be tricked into performing actions within an application without even knowing that they are interacting with it.

A July 2010 study by Gustav Rydstedt, Elie Bursztein and Dan Boneh of Stanford University and Collin Jackson of Carnegie Mellon University, titled "Busting Frame Busting: A Study of Clickjacking Vulnerabilities on Popular Sites", explores the risks and problems associated with framing. It can be found here:

http://seclab.stanford.edu/websec/framebusting/

The study mentioned above forms the basis of the following case study.

Frame busting defense

The defenses we will introduce in this case study are rather simple; we will add some JavaScript and a few extra HTTP headers to the logon page of the Qwerty app. Depending upon the application, it may also be relevant to add this code to other pages, but for now we will just select the logon page for simplicity.

The JavaScript we will add looks as follows:

<style>
  html { visibility: hidden; }
</style>

<script>
  if (self == top) {
    document.documentElement.style.visibility = 'visible';
  } else {
    top.location = self.location;
  }
</script>

The above script has been placed in the public domain by the authors of the study.

In simple terms, it sets the entire page invisible through use of a CSS directive and only makes it visible if the page itself is the top frame and JavaScript is enabled.

In addition to the above code, we will add a couple of HTTP headers that take advantage of built-in frame-busting defenses in certain browsers. The headers to set are as follows:

X-FRAME-OPTIONS: SAMEORIGIN
X-Content-Security-Policy: allow *; frame-ancestors 'self'
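
As a plain-JavaScript sketch, adding these two headers to a header map looks like the following. The function is illustrative only; in the case study the headers are set by rules, not code:

```javascript
// Adds the two frame-busting headers listed above to a plain header map.
function addFrameBustHeaders(headers) {
  headers["X-FRAME-OPTIONS"] = "SAMEORIGIN";
  headers["X-Content-Security-Policy"] = "allow *; frame-ancestors 'self'";
  return headers;
}
```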

Planning the rules

The rules required for this case study are extremely simple. Our plan is to:

  1. Determine whether we are on the logon page.

  2. If yes, add the frame busting code.

Getting started

The very first step as always is to create a repository. In this case we will name it "Frame Busting Example".

Once done, copy and paste the JavaScript code into a text file named "framebust.js" and upload it to the data folder in the repository.

Then create a new blank rule set named "FrameBust".

Creating the rules

The first rules we need simply determine if we are on the logon page:

FrameBust rule set

These rules are the same as in most of our other examples, so we will just list the properties here for quick reference:

Name Splitter properties
Switch properties

Once the properties are set, simply add a chain point to the Switch rule and name it "logon.jsp".

We next add the rules to inject the JavaScript and headers:

FrameBust rule set

We read the framebust.js file into a variable, then set a couple of variables to the header values we need, and finally add the JavaScript and headers to our response. The properties look as follows:

File Reader properties
Set Variables properties

Values are: SAMEORIGIN,allow *; frame-ancestors 'self'

HTTP Response Addition properties

Header field names are: X-FRAME-OPTIONS,X-Content-Security-Policy

That is it, save the rule set and create a configuration to test it.

Creating the configuration

The configuration for this rule set is very simple; we create one named "FrameBustTest". The following shows the relevant sections that need to be defined:

Create new Configuration, general tab
Input source tab

Testing

Qwerty is a suitable test application for this case study because it uses frames to encapsulate the logon and other internal pages.

When navigating to the Qwerty landing page in the browser, the URL you will see is as follows:

http://localhost/qwerty/

To test the new rule set, deploy the configuration to the Qwerty demo server and start it. Then refresh the Qwerty logon page.

Whilst you will not see any visual differences in the appearance of the Qwerty application, the Qwerty landing page URL in the browser will now look like this:

http://localhost/qwerty/logon.jsp

We can proceed to navigate to other pages in the Qwerty application outside of the main Qwerty frame.

For example, these pages would normally all be loaded from within the Qwerty frame, but are now visible in the main browser address bar:

  • http://localhost/qwerty/main.jsp

  • http://localhost/qwerty/setup.jsp

  • http://localhost/qwerty/pay.jsp

We have successfully "Busted" out of the frame.

Browser Certificate Installation Guide

Version: 10.0 / Modifications: 0

Introduction

This manual describes how to install browser certificates for testing access and modifications to sites that are protected by HTTP Strict Transport Security (HSTS). It is assumed that the reader is familiar with the basic steps of deploying configurations within Composable Agentic Platform and knows how to view the console output associated with the Composable Agentic Platform proxy server.

When using the Composable Agentic Platform browser proxy for accessing secure web sites over HTTPS, you will encounter certificate warnings in the browser, just like the following:

Certificate warning

These warnings are relatively easy to get around by clicking on the Advanced button and adding an exception.

However, with the advent of HTTP Strict Transport Security (HSTS), this has become impossible, as the browser will refuse to add the exception:

Not possible to add an exception for the certificate

The following guide provides instructions on how to overcome this problem by installing a trusted certificate authority into your browser that Composable Agentic Platform in turn will use to generate valid replacement certificates for each SSL site on the fly.

Getting started

Before you begin you should make some updates to your Composable Agentic Platform installation.

Required Updates

The first step is to update/install the following components via the update server:

  • Composable Agentic Platform console (10.0.0.21050 or later)

  • Base Rules (2021-07-16 or later)

  • BIP Runtime (2018-08-07 or later)

  • HTTP Rules (2021-07-15 or later)

Locating the certificate

After the BIP Runtime extension has been installed, locate the folder named ‘Certificates’ under the Composable Agentic Platform Server installation:

Certificates folder

Our certificate is found in that folder with the name: root.pem

Installing the certificate in Firefox

To install the certificate authority in Firefox, start by selecting Options from the main menu:

Firefox Settings

Then select the Privacy & Security section and click View Certificates:

View Certificates in Privacy & Security tab

In the certificate manager, select the Authorities tab:

Authorities tab in Certificate Manager

Click on Import…, then open the root.pem file from the location described earlier (the Certificates folder).

You will be given the option to select the level of trust for the certificate. Select “Trust this CA to identify websites” and click on OK:

Trust new Certificate Authority

Click on OK again to close the certificate manager.

Routing Firefox through the Composable Agentic Platform browser proxy

To be able to see traffic flowing between Firefox and your target site, you must configure Firefox to use the proxy. Under the Options Advanced settings, select the Network tab and click on Settings.

Browser Network Settings

Configure the proxy as shown and click on OK:

Connection Settings

You can now close the Settings tab in Firefox.

The certificate is now installed, and you are ready to see traffic.

Installing the certificate in Chrome/Edge for Windows

Please note that by using the Chrome installation method, other browsers (such as Microsoft Edge) will be affected as well. We will therefore only show the Chrome approach.

Important: To install the certificate, the user MUST have administrative privileges on the system.

In the Chrome browser, select Settings:

Chrome Settings

Scroll down the page that appears and click on Privacy and Security.

Locate the HTTPS/SSL section and click Manage certificates…

Manage Certificates

In the dialog box that appears, navigate to the Trusted Root Certification Authorities tab and click on Import.

Trusted Root Certification Authorities

This takes you to the certificate import wizard:

Certificate import wizard

Click on Next

Specify file for certificate

Important: PEM files are not available as a default filter. To locate the file, select All Files (*.*):

Select root.pem file from certificates

Locate and select the root.pem file, then click on Open

The file name now appears in the Certificate Import Wizard and you can click on Next.

Select the certificate store as shown and click on Next:

Select certificate store

You will be presented with a review page. Click on Finish.

A security warning appears. Make sure you click on Yes:

Security Warning window

The certificate will be imported:

Successful message for certificate import

Close the certificates list:

Certificate list window

Routing Chrome/Edge through the Composable Agentic Platform browser proxy

Please note that by using the Chrome installation method, other browsers (such as Microsoft Edge) will be affected as well. We will therefore only show the Chrome approach.

Within the Chrome advanced settings, locate Network and click on Change proxy settings…

Change proxy settings

In the Internet Properties window that appears, click on LAN settings:

LAN settings

Set the proxy server as shown and click on OK:

Proxy Server section

Then click OK again to close the internet properties and close the Settings tab in Chrome. The certificate is now installed and you are ready to see traffic.

Installing the certificate into the OSX Key Chain for Safari and Chrome

Please note that both Safari and Chrome use the same certificate store so this installation applies to both.

To install the certificate, navigate to the Certificates folder and double-click on the root.pem file. The Keychain Access utility will launch and requires you to enter your Admin User credentials:

Login window for Keychain Access

Enter your password and click on Modify Keychain

This will launch the Keychain Access utility with the certificate imported into the System keychain:

Keychain Access

Double-click on the TomorrowX CA certificate to bring up the details:

TomorrowX CA Certificate details

Expand the Trust option and set the drop-down ‘When using this certificate’ to Always Trust:

Always trust for TomorrowX CA

Close the pop-up details window and enter your administrator password to update. The entry will now have a blue circle with a white cross to indicate a trusted certificate and will have the following text: “This certificate is marked as trusted for all users”:

TomorrowX CA marked as trusted for all users

Testing the certificate installation

Now that your certificate is installed, switch to the Composable Agentic Platform console, select the Product Trial repository and deploy the BasicWebLister configuration to the proxy server.

Wait for the proxy server to start.

You are now ready to test if you can bypass HTTP Strict Transport Security (HSTS) protection. In your browser go to https://www.google.com

Google should load as normal:

Chrome homepage

And you should see traffic in the proxy console:

Traffic in the proxy console

CSRF attack prevention

Before explaining how to combat CSRF (Cross Site Request Forgery), a quick explanation of the technique behind it is in order.

A cross site request forgery relies on a user visiting a malicious site, shortly after they have logged into a genuine site, and whilst they still have a session cookie active with the genuine site.

By making the user's browser send malicious requests directly back to the genuine site, the malicious site can exploit the fact that the user is already logged in, to effectuate such things as placing orders in the user's name, sending emails using the user's credentials or posting comments to other users in what may well be a trusted user's name. The list of exploits is endless and only really subject to the vulnerabilities of the site being attacked.

Ways to make the user visit the malicious site whilst still being logged into the genuine site includes phishing, posting of links in comments on the genuine site, or even just "trial and error" by posting links on sites that may also be frequented by users of the genuine site.

The limitation of the CSRF attack is that it is always "blind". The attacker cannot see what the application responds with, or what the current state of the session is, due to restrictions imposed by browser security models: a page served from one server (domain) cannot read responses from another.

CSRF defense techniques

How best to protect your site against CSRF attacks depends on how it was written. Generally speaking, most applications perform actions as a result of an HTML form being posted to the site. Some sites also perform actions in response to a GET request.

For example: "http://www.mysite.com/delete.jsp?orderToDelete=12345"

This example will focus on protecting applications that use a form POST. This is done by adding a hidden field to every form presented by the application. This hidden field contains a random value that is unique to the specific session of the user. We will require that this field is always present on a form POST, making it virtually impossible for a malicious site to second-guess what a valid POST request might look like.

The technique for protecting a site that uses GET requests is similar, simply requiring the addition of an additional URL parameter to every URL that takes parameters, instead of a hidden form field.
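
A minimal sketch of the GET variant, assuming the token parameter is also named CSRF (an assumption for illustration; the document does not fix a name for the GET case):

```javascript
// Appends a per-session CSRF token as an extra URL parameter,
// instead of a hidden form field.
function addCsrfParam(url, token) {
  const sep = url.includes("?") ? "&" : "?";
  return url + sep + "CSRF=" + encodeURIComponent(token);
}
```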

Planning the rules

The first step in implementing our CSRF defense is to create a simple plan of action, i.e. what we intend to do and how we wish to go about it. It is a good idea to write this down in plain English and then use that text as a guide whilst designing the rule structure. In this case, the plan reads as follows:

  1. If a POST request comes in whilst there is an active session, then make sure it has our hidden field, and that it is the hidden field we have generated for that session. If the field is not present, we should respond to the user with an HTTP Status code of 403 (Forbidden).

  2. Whenever a new page is provided by the application, make sure we add a large random number as the hidden field to every form presented by the application. The large random number we use should be generated once for the session and then be stored in it for easy reference and good performance.

That sounds easy enough; so, let's begin...
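
Step 1 of the plan can be sketched as a plain function before we compose it from rules. The names here are illustrative; the real check is built from the blocks that follow:

```javascript
// Reject a POST whose CSRF field is missing or does not match
// the token stored in the session; otherwise let the request proceed.
function checkCsrfPost(method, sessionToken, postedToken) {
  if (method !== "POST") return { status: 200, proceed: true };
  if (!postedToken || postedToken !== sessionToken) {
    return { status: 403, proceed: false }; // Forbidden
  }
  return { status: 200, proceed: true };
}
```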

Getting started

Create a new repository named "CSRF Example" and add a new rule set named "CSRF".

Filter out static content before it hits the core rules using a Name Splitter and Switch rule as shown:

Static content filtering using Name Splitter and Switch rules
  • The Name Splitter conveniently extracts the extension of the object being requested using the following properties:

Name Splitter properties
  • The Switch rule operates on the EXT variable. By adding new chain points for each type of static content they are eliminated from reaching the rule set.

Switch properties

As we are dealing with Web Applications, and we need to know information such as the method used (POST/GET), the first step is to add an HTTP Request Tracker rule from the HTTP group in the rules catalog to the CSRF rule set:

HTTP Request Tracker added

A good technique for rule writing is to start by determining the "flow" of events or pages that will subsequently have rules applied to them.

In our case we have two flows:

  • The verification of the forms

  • The addition of the form fields.

So, our next action is to add a Sequencer rule from the Flow group in the rules catalog:

Sequencer rule added

Implementing step 1

Now, the first step in our written plan is to check if we are dealing with a POST request in the session, and if the form posted has our hidden field. The first part is very easy:

HTTP Session Check with If Condition

Only the If Condition requires some properties:

If Condition properties

The next step is simple. We need to look up the current hidden field from the session:

HTTP Session Reader with If Condition rules

Once again, there are not many properties:

HTTP Session Reader properties
If Condition properties

The variable names and values we have chosen are arbitrarily selected, although they should be meaningful and memorable.

In this example, we have decided that the hidden field is stored with a session key named "CSRF.key" and that the hidden field on all forms is named "CSRF". We could have chosen any names as long as we use them consistently when we add the field to the form and store the session key.

All that is left for the first step is to make sure that if the key doesn't match, then the user receives a 403 error.

403 error flow

Once again, the properties are very simple:

HTTP Response properties

We use a Set Completed rule after the response because, once we have decided that the user should be rejected, there is no need to proceed with the rest of the rule set; we simply terminate the flow.
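The whole of step 1 can be summarized in a short JavaScript sketch. The session, request and respond objects are hypothetical stand-ins for what the rules above provide:

```javascript
// Step 1 as pseudologic: on a POST within a session, the hidden CSRF
// field must match the "CSRF.key" value stored in the session;
// otherwise respond with 403 and terminate the flow (Set Completed).
function checkCsrf(session, request, respond) {
  if (request.method !== "POST" || !session) {
    return true; // nothing to verify; continue with the rule set
  }
  const expected = session["CSRF.key"];     // stored in step 2
  const submitted = request.fields["CSRF"]; // hidden form field
  if (expected !== submitted) {
    respond(403, "Forbidden"); // reject the request
    return false;              // terminate the flow
  }
  return true;
}
```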

Implementing step 2

We are now ready to implement the second part of the plan. The first step in doing so is getting the actual response from the server so that we can add the hidden field if we need to.

2nd implementation in the Sequencer

The HTTP Server Execute rule takes care of this, even if you are writing rules using a built-in forwarding proxy.

Once again, the properties are very simple as we are just interested in the application response:

HTTP Server Execute properties

Once again, we need to check if a session is present, but after the HTTP Server Execute rule, as that rule may in fact result in a session being created:

HTTP Session Check added

If there is a session, then we need to add our unique CSRF key to it. The first step in doing that is to see if we already have that key:

HTTP Session Reader and If Condition to check if the key is blank

Once again, not many properties:

HTTP Session Reader properties
If Condition properties

If we don’t have it, we need to create it, which is easy:

Random Number and HTTP Session Writer to create and store a random number

The properties for these rules are as follows:

Random Number properties
HTTP Session Writer properties

The session key we use is the same "CSRF.key" that we used in step 1.

All that remains now is to add the field to the form and send the response back to the user.

Thankfully there is a dedicated rule that handles the first problem: the Insert Hidden Field rule.

Insert hidden field rule added

Note that we are also tying up some loose ends: connecting Session not found to the HTTP Response, and connecting the existing session key to the Insert Hidden Field rule.

The final properties that must be set are as follows:

Insert hidden field properties
HTTP Response properties
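What the Insert Hidden Field rule does to the response body can be approximated with a simple rewrite. This is a sketch only; the real rule handles forms far more robustly than this regular expression:

```javascript
// Append a hidden CSRF input immediately after each opening <form> tag,
// so every submission carries the key checked in step 1.
function insertHiddenField(html, key) {
  return html.replace(/<form\b[^>]*>/gi,
    (tag) => tag + `<input type="hidden" name="CSRF" value="${key}">`);
}
```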

Testing

Our rule set is now complete, and we are ready to test it. A good sample application for this test is the Qwerty application. Create a configuration for the test named "CSRFTest" and set it as follows:

General tab for CSRFTest configurations
Input source tab for CSRFTest configurations

(Only relevant sections shown)

Once you have set up your configuration, deploy it to the Qwerty demo server and try testing it.

In the Qwerty application's "Set up 3rd Party Accounts" page, you will see that a CSRF hidden field has now been added to the page:

CSRF hidden field

Use the performance data to further verify that everything is working as you expected.

Adding more protection

If you look further through the page source of the Qwerty application, you may also notice the following link:

A link to a GET request with parameters

This is a classic case of a GET request that can be exploited using CSRF. In this basic case study, we only protect POST requests of forms. However, if your application also uses actions on GET requests, you can fairly easily amend the rule set to cover those too.

This involves manipulating any URL parameters in the pages that are used for actions.

You can do this using the String Replacer rule, especially if your application uses ".jsp" or ".do" or ".aspx" as URL identifiers for active content.

For example, you could replace ".jsp?" in every page with ".jsp?CSRF=0123456789&" and then check for the field on every URL that ends in ".jsp" and has a non-blank PARAMETER_NAMES (from the HTTP Request Tracker rule). This achieves the same result as the Insert Hidden Field rule does in this case study.
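As a sketch, the String Replacer step described above amounts to the following, assuming ".jsp?" URLs and an illustrative key value:

```javascript
// Rewrite every ".jsp?" action link so it carries the CSRF key as the
// first parameter, mirroring what the hidden field does for POSTs.
function protectGetLinks(html, key) {
  return html.replace(/\.jsp\?/g, `.jsp?CSRF=${key}&`);
}
```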

Additional CSRF notes

The above example implements the CSRF protection as a single rule set.

Raspberry Pi with PiFace Reference

Introduction

Welcome to the Tomorrow Software reference for interacting with the PiFace Digital 2 I/O board for Raspberry Pi. In this guide we will provide instructions on how to set up a Raspberry Pi and PiFace combo to accept button input and control a few LEDs and relays.

Licensing

The licensing of the PiFace Extension is the same as most other extensions that we provide. You simply need a valid Tomorrow Software license.

The PiFace Extension uses the open source Pi4J library (LGPL v3 license), which is free for both private and commercial use.

Prerequisite

It is assumed in this document that you have prior experience with Tomorrow Software and that concepts such as server definitions and rule writing are familiar to you.

Getting started

The very first thing you need to get started is some hardware. The following photo shows the most essential components:

What you need is as follows:

  • HDMI cable plus a TV/monitor with HDMI input (not shown)

  • Micro-USB power supply (Preferably 2A)

  • Raspberry Pi 2 board

  • Case designed for the Raspberry Pi and PiFace together (optional)

  • Multi-meter (Optional but really handy)

  • USB Wi-Fi dongle

  • Raspberry Pi Noobs SD Card

  • PiFace Digital 2 board

  • Standard USB mouse

  • Standard USB keyboard

Hardware Assembly

The assembly of the hardware is incredibly simple:

  • Mount the PiFace on top of the Raspberry Pi board

  • Insert the Wi-Fi dongle, keyboard and mouse into the USB slots

  • Remove the micro-SD card from inside the Noobs SD pocket and insert it into the bracket on the underside of the Raspberry Pi

  • Connect the HDMI cable from your Raspberry Pi to your monitor

  • Connect the power supply and wait for it to boot up

Initial configuration

Once the operating system has booted, you will see the following screen:

Raspberry PI setup window

Using your cursor keys (space bar to select, Tab key to navigate options), set your time zone and locale, and select the option to boot to desktop.

Enabling SPI

The PiFace board communicates with the Raspberry Pi over an interface known as SPI. This interface is not enabled by default, so we need to enable it. From within the configuration tool, select Advanced Options, then SPI.

SPI

Enable SPI and load it by default. Once done, return to the main menu, hit the Esc key and type:

sudo reboot

This forces a reboot, after which you end up in LXDE:

LXDE

From here, we need to configure the Wi-Fi connection. Click on Preferences, then Wi-Fi Configuration.

wpa_gui

Next, click on Scan. After a short while, your Wi-Fi network should appear and you can double-click on it to provide a password. Once done, simply click on Add and your internet connection will be established.

Wait for the IP address to show up and note it down for later.

Updating and upgrading

Because our project requires the latest drivers and software, the next step is to update the operating system.

Open a terminal window and type the following commands:

sudo apt-get update
sudo apt-get upgrade

These two commands will take quite a while to complete, depending on your internet speed. Please ensure both complete without errors before continuing.

Installing Pi4J

The Tomorrow Software PiFace extension relies on an open source project known as Pi4J. We need to install this next. At the command line, type:

curl -s get.pi4j.com | sudo bash

Optional USB drive support

Next, we need to get Tomorrow Software installed. There are two options:

  • Download it from the web

  • Install it from a USB thumb drive

If you received the software on a USB thumb drive, you need to perform some additional configuration. If you downloaded the image, please skip to the next section.

In the terminal window, create a folder where the USB drive will be mounted:

mkdir usbdrv

Next, we need to edit the file system table:

sudo nano /etc/fstab

Add the following line to the end of the file:

/dev/sda1 /home/pi/usbdrv   vfat  uid=pi,gid=pi,umask=0022,sync,auto,nosuid,rw,nouser 0   0

IMPORTANT: This must be ONE line in the file.

Press Ctrl-X, then a capital Y, followed by Enter to save.

Then reboot:

sudo reboot

Once the reboot has completed, insert the thumb drive and make sure you can access it.

Allow root access

Tomorrow Software must be installed as the root user because it uses privileged ports such as 80 (http) and 443 (https). To achieve this, you need to be able to switch to root using the su command.

To enable root access, set a root password:

sudo passwd root

Pick a good password and enter it twice.

Starting the file manager

We are now ready to start the file manager in root mode to copy the image into place.

At the command prompt, type:

su

Enter the password you just set up, then type:

gksudo pcmanfm

This starts the file manager as root.

file manager as root

Locate the "Tomorrow-Software-Server-10.0.0.zip" image you either downloaded or have on your thumb drive, then right-click it and select Copy.

Change the folder to /opt and create a new folder named "local". Copy the zip file to this location, right-click it and select "Extract Here".

In the terminal window (as root), create a symbolic link to the distribution as follows:

cd /opt/local
ln -s Tomorrow-Software-Server-10.0.0 Tomorrow

Setting the software to auto-start

Right-click the file tomorrow.sh in /opt/local/Tomorrow/server/bin, select Properties, then the Permissions tab, and make sure Execute is set to "Only owner and group".

Copy the file tomorrowstart from /opt/local/Tomorrow/server/bin to /etc/init.d. Right-click the file, select Properties, then the Permissions tab, and once again make sure Execute is set to "Only owner and group".

Then enter the following commands in a terminal window (logged in as root):

cd /etc/init.d
update-rc.d tomorrowstart defaults

Starting the instance

Everything is now ready for the first run of the Tomorrow Software engine. Reboot your Raspberry Pi, either from the menu or by typing:

sudo reboot

Once rebooted, wait for the CPU to settle down after startup. This can take quite a while (2-3 minutes on a Pi 2). Do NOT attempt to log in during this phase.

Defining the console type

Log in to the instance from another computer. The best way to do this is to modify the hosts file on that computer so the Raspberry Pi has a valid name, for example: homeauto.local

Then simply open a browser and point it to the following URL:

http://homeauto.local/console

Log in using the user admin and the password admin to access the main console. Select Administration, then Console Setup:

Console Setup in Administration

Change the console type to "Forwarding Proxy without console" and click on Save.

This shuts down Tomorrow Software on the Raspberry Pi. Give it a minute or two to complete, then return to the Raspberry Pi and reboot it.

Setting up the server definition

At this point there is no longer a console running on the Raspberry Pi; it must instead be managed from another Tomorrow Software console instance. To enable this, log in to that alternate console instance and set up a new server definition:

Basic tab

As well as the basics above, we also need to set up the protected hosts, remove the client IP restrictions and disable the browser proxy:

Forwarding tab

Make the required changes and click on Save.

If all your settings are correct, your instance will now show green in the Servers section:

Servers

Required Updates

The next step is to update/install the following components via the update server:

PiFace Rules

Testing the setup

It is now time to test all the setup work. We will start by turning on LEDs on demand.

Switching LED rule set

From within the Tomorrow Software console, create a new repository named "LED Test", then create a new rule set named "LEDSwitch" in that repository.

Hit update on the rule set and create the following:

LEDSwitch structure

The properties are:

properties
properties
properties
properties

Click on the Save button to save the new rule set.

Test configuration

Return to the console to create a new configuration in the LED Test repository:

General tab
Input Source tab

Click on Create to create the configuration.

Deployment and Testing

It is now time to deploy the configuration to the PiFace server. Deploy the configuration, selecting the "Restart immediately" option.

Wait for the deployment to complete. This can take several minutes, especially the first time. Once the deployment is complete, return to a browser and enter the following URL:

http://homeauto.local/?onoff=on&LED=4

Provided you have followed every step above, LED 4 on the PiFace board will now turn on. You can turn it off using:

http://homeauto.local/?onoff=off&LED=4

Responding to button presses

When a button is pressed or released, this needs to trigger an event. For this purpose there is a rule named "PiFace Button Listener", one instance per button. You place these rules in a startup rule set.

The following shows a startup rule set that will turn LED 1 on when button 1 is pressed and turn it off when button 2 is pressed:

Buttons structure

properties
properties

We also need to modify the configuration to accept the startup rule:

General tab

Deploy the configuration to the PiFace server and once again enter the following URL in a browser:

http://homeauto.local/?onoff=off&LED=4

This triggers the Programmable Data Agent startup and activates the button listeners. Now press button 1 on the PiFace: LED 1 will turn on. Press button 2 and LED 1 will turn off.

Notice that LED 1 is linked to a relay. You can hear it click when the LED turns on or off.
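The LED test URLs used in this guide follow one simple query format. A small helper captures the pattern (the helper is hypothetical; the Programmable Data Agent simply parses the onoff and LED parameters):

```javascript
// Build an LED control URL of the form used in this guide.
function ledUrl(host, led, on) {
  return `http://${host}/?onoff=${on ? "on" : "off"}&LED=${led}`;
}
```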

Using the Push Notification Framework

Push notifications are rapidly emerging as one of the most efficient ways of sending information to users without going through email, SMS or other channels (such as Messenger or Slack).

Push notifications have a very high click-through rate and are supported by all modern browsers and platforms except Apple's.

The push notification framework provides a simple way to add push notifications to your application with the ability to fall back to alternatives if the user is on an unsupported platform.

A push notification appears as a message in the user’s notification section. For example, in Windows a message could look like this:

Notification on windows

It consists of an icon, a headline and a text body (where supported).

If the user clicks on the message, an event is generated that will open a web page specific to the message and will also send a notification back to the server that the user clicked the message.

There are some restrictions to using push messages:

  1. The web site sending the message MUST use a secure protocol (https), even during development

  2. The user must be on a supported platform

  3. A set of cryptographic keys must be created to sign messages

The push notification framework helps you manage the last 2 of those 3 items above. To install certificates within your application, please refer to the product reference.

Please note: This manual will reference the Push Notification Demo repository, which can be obtained from the update server.

Getting started

The push notification framework consists of 3 rules and a precisely structured HTML page. In the following section we will cover these rules in detail.

Initialize Push Notifications

Initialize Push Notifications rule

This rule does two things:

  1. It either obtains, reads, or creates credential keys to use with the notifications

  2. It initializes a data set used to store notification user information

Server Keys

Keys are created only when none are present in the credentials vault or on the file system. The first time the rule is executed, they are written directly to the target server as two new files:

Server keys

You can choose to simply leave these files on the server (in which case you should also place them in the Data Files section of the repository you are working with and ensure they are deployed using the Register Data Files rule).

The preferred way however is to store the keys in the credential vault. This is a simple exercise of opening the key files with a text editor and copying the text from them to the appropriate keys in the vault:

Maintain credential vault window

Once the keys are in the vault, the files can be removed from the target server.

Subscriber Data Set

The data set created by the rule is named “WebPushSubscribers” and is entirely managed by the framework. You can however query and work with the data set in rules as well if you wish. To do so, you will need to know the field names which are: subscriber, target, endpoint and group.

  • Subscriber refers to the user id within your application for a logged in user

  • Target is the type of communication. For example: Push, Email, SMS etc

  • Endpoint is the key to sending. It can be a Push key, an email address, a phone number, or whatever else validly defines where the message should end up, depending on the target

  • Group is the target group. It can be the same as the subscriber (for direct communication) or it can be a subscribed group (such as offers, recalls, alerts etc).
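To make the four fields concrete, here are two illustrative records. The field names come from the framework; the values are invented examples:

```javascript
// Example rows in the WebPushSubscribers data set.
const subscribers = [
  // Direct push subscription: group equals the subscriber id.
  { subscriber: "user42", target: "Push",
    endpoint: "https://push.example/endpoint/abc", group: "user42" },
  // Fallback email subscription to a topic group.
  { subscriber: "user42", target: "Email",
    endpoint: "user42@example.com", group: "recalls" }
];
```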

The Push Notification Controller

The Push Notification Controller rule manages everything related to interacting with the browser to ensure push notifications can be subscribed to and delivered. It automatically generates correct and tested JavaScript pages and a default icon for the rest of the framework to use.

Push Notification Controller rule

Even though the controller manages all these interactions, you always have the option of doing your own additional processing (For example when a user subscribes or unsubscribes or performs a click through on a notification).

The controller only needs a few properties:

controller properties
  • The Database is the database where subscriber information should be stored.

  • The Subscriber is the user ID of the user involved in the interaction. Note that generally speaking it is best to have a user logged in so that you can target specific users, rather than just a generic group of people.

  • The Default URL to open is a fallback mechanism for browsers that do not yet support a target URL to open for each message. In that case, clicking on the notification should send them to a sensible page (such as a login page).

  • The Is default URL also welcome page setting ensures that if the user already has the welcome URL open (without a page name, such as https://example.com/ rather than https://example.com/index.html), the browser does not open index.html again but instead focuses the existing welcome page.

Wiring it up

The Push Notification Controller is designed to be the last rule in our normal application flows. In the sample repository that looks like this:

Controller structure

It is important to note that the sample repository is cut down to an absolute minimum for maximum clarity. Your production repository should follow the guidelines set out in the Best Practices manual.

The demonstration repository entry point

To help you experience push notifications we have created a simple entry page called index.html. It presents as follows:

index.html in the browser

Returning to this page logs out any user. To log in as one of the two users just click the relevant button.

No passwords required.

Create a subscription page

To go along with the Push Notification Controller, you will need a subscription page served up as content. There is a very minimal sample page in the Push Notification Demo repository named subscribe.html. It presents as follows (after checking that notifications are possible):

subscribe.html page

Should your browser NOT support push notifications, you will receive the following page instead:

Message for not supporting the push notifications on that specific browser

And finally if you are trying with something like Internet Explorer you will receive this message:

Message for not supporting the old browser used to open the page

All of the above sections are simply DIVs in the sample HTML file:

subscribe.html file content

The important thing to understand is that the various IDs of each DIV must remain in place.

You can change the DIVs to <section> or <span> tags (or whatever you like), but the IDs must be present so the Push Notification Controller rule can take charge of the page in the background.

Mandatory IDs

There are several critical IDs that must remain in place. They are as follows:

ID

Function

webpushSupportedNotSubscribed

This ID is used to identify a section that is displayed when push notifications are supported, but the browser is not yet subscribed.

webpushNotSupportedNotSubscribed

This ID is used to identify a section that is displayed when push notifications are not supported, but the browser is not yet subscribed.

webpushSupportedButBlocked

This ID is used to identify a section that is displayed when push notifications are supported, but the user has previously declined permission to send notifications

webpushSupportedButError

This ID is used to identify a section that is displayed when push notifications are supported, but an unexpected error was encountered when trying to register

webpushSubscribed

This ID is used to identify a section that is displayed when the user is already subscribed to notifications

webpushGroups

This ID is used to identify a section that displays a list of groups that the user can choose to subscribe to alongside the individual subscription

webpushChecking

This ID is used to identify a section that is displayed while the browser is checking the availability of push notifications

webpushTooOld

This ID is used to identify a section that is displayed if the browser is too old to support the push notification syntax.

webpushStyleDisplayBlock

This hidden ID is used to identify a value that will be used to make items with display:none visible. The default is "block", but should you need other values (such as "inline-block") you can change this value to achieve that effect.

webpushSubscriber[target]

These hidden IDs are used to identify a value that will be used as the target value for any alternative notification methods. For example, if both Email and SMS are available, the IDs:

webpushSubscriberEmail

webpushSubscriberSMS

must exist with appropriate values (an email address and a phone number).

webpushSubscribeButton

This ID is used to identify the subscribe button. The button can in theory be something other than a button, but it must support the disabled property.

webpushUnsubscribeButton

This ID is used to identify the unsubscribe button. The button can in theory be something other than a button, but it must support the disabled property.

Radio and Checkbox Groups

In addition to the IDs, there are 2 named radio button groups:

radio inputs

and:

notificationsubscribeoption radio

Notice the slight difference in the name that separates the two groups. It’s a common mistake to copy from one group to the other and forget to correct the name.

Alongside each radio button that is NOT a Push notification, you will need to specify a hidden value for each:

notificationsubscribeoption radio

This provides the framework with information on how to define the destination of the non-Push notifications.

The next section is the checkboxes to enable additional groups the user can subscribe to.

snippet from subscribe.html file

You can have an unlimited number of these groups. Each selection by the user will automatically subscribe them to that group and notifications can be sent to all subscribers.

Buttons

The framework requires two buttons on the page:

two buttons

Both these buttons should be disabled in the HTML by default. The framework will enable the right button at the right time.
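One plausible way the generated script could toggle the two mandatory buttons is sketched below. This is an illustration only; the framework's own JavaScript handles this for you:

```javascript
// Enable exactly one of the two mandatory buttons, depending on
// whether the browser is currently subscribed. "doc" is the DOM document.
function updateButtons(doc, subscribed) {
  doc.getElementById("webpushSubscribeButton").disabled = subscribed;
  doc.getElementById("webpushUnsubscribeButton").disabled = !subscribed;
}
```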

Connecting to the framework

The final step is to connect the HTML to the framework. This is done by importing a javascript file that is dynamically generated by the framework. You do not need to have this file anywhere in your repository, it is fully generated along with all dependencies.

importing js file

And with this, you now have a fully functioning push notification page, so it is time to look at how to send them.

Sending Push Notifications

Sending push notifications involves either sending an individual message or sending notifications to an entire group of people. In the demonstration repository there is a sample sending page named send.html:

Send notifications by send.html

This page enables you to send individual push notifications to the two users, or you can send a recall notification to all users that have subscribed to recalls.

The page shown after the message is the page that will be opened when the user clicks the notification.

The Send Push Notification rule

All notifications (regardless of the method of sending) can be managed with the Send Push Notification rule:

Send Push Notifications rule

There are several options to define how the push notification looks and acts:

Rule properties
  • The Database relates to the location where subscriber data is stored.

  • The Audience should be either the internal user ID that the application can use to identify a user or a notification group name.

  • The Sender Email should be the email of someone who can assist with technical queries from the external push notification servers used to send the notifications. Those servers are managed by organizations such as Google and Microsoft and the email is used for relaying complaints or warnings.

  • The Expiry is provided in minutes with a maximum of 24 hours allowed.

  • The Title is the key short description of the notification

  • The Icon is an icon to show to the user when the notification is displayed. If no icon is provided a default will be displayed.

  • The URL to open is the URL that will open when the user clicks on the notification. Not all browsers support this and should they not, the default URL to open from the Push Notification Controller will be used instead.

  • The Message allows for a longer notification message to be displayed. Not all browsers support this.

  • The Tag is a value that can be used to avoid sending the same notification to the user over and over. Messages with the same tag name will only appear once in the user's notification system. The related Re-notify property determines whether the user should get another notification as a result of an unopened tag group.

  • Vibrate can be used to control the vibrations of the user’s device. It is specified as a series of on/off pairs in milliseconds. For example: “100,200,100,200” would mean vibrate for 100ms, pause for 200ms, vibrate for 100ms, pause for 200ms.
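A vibration specification like the example above maps directly onto the array of millisecond durations used by the browser's notification vibrate option; parsing it is a one-liner:

```javascript
// Convert the "on,off,on,off" millisecond string into the numeric
// array form of alternating vibrate/pause durations.
function parseVibrate(spec) {
  return spec.split(",").map(s => Number(s.trim()));
}
```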

Message personalization

This rule notably includes the ability to personalize the message being sent and permits sending through alternative channels.

For every message, the following variables are available: NOTIFICATION_TARGET, NOTIFICATION_USER, NOTIFICATION_ENDPOINT, NOTIFICATION_MESSAGE, NOTIFICATION_URL and NOTIFICATION_TITLE.

The rule writer can use the first 3 to determine where to send a message and identify the relevant user being notified – and can use the last 3 to customize the message.

In our demonstration repository we do this by inserting the user name into the message for group messages:

Personalization for a Send Push Notification title

The Personalize title rule has the following properties:

properties

This is based on the title being sent to the recall group looking like this:

message

So for each message being sent, the Personalization chain point will insert the actual user name into the title.
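As a sketch, the personalization step boils down to a substitution over the title using the variables above. The [USER] placeholder is an assumption for illustration; use whatever marker your title actually contains:

```javascript
// Replace a hypothetical [USER] placeholder in the notification
// title with the subscriber's name before each individual send.
function personalizeTitle(title, vars) {
  return title.replace("[USER]", vars.NOTIFICATION_USER);
}
```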

Alternative notification methods

If the user that signed up for notifications did not have a supported browser, we can offer alternatives (such as email, SMS or other targets).

To support the sender managing those alternative channels for us, the Alternative chain point is called whenever a target is different from "Push". In our demonstration repository we showcase this with a simple output to the console:

Alternative method for Send Push Notification

However, the rule writer has access to all 6 variables listed previously at this point in the flow and can use them to send the notification to the right target using whichever rules are most relevant:

properties

Google Analytics

Google Analytics lets you do more than measure sales and conversions. It also gives insights into how visitors find and use your site, and how to keep them coming back.

This case study demonstrates Tomorrow Software as an easy integration option for adding tracking code to web pages, a task typically done outside the normal software development life cycle (SDLC). Not only does this provide easy, rapid deployment of such third-party services, it also ensures that as new pages are introduced, tracking code is appended to every page the web application returns to the user's browser.

This example shows a common method: read a JavaScript file containing the required tracking code, insert your account ID, and append the result to any web page.

For information regarding the Google Analytics service please refer to:

https://www.google.com/analytics/web/

Google Analytics Reporting dashboard

Planning the rules

The first step of any rule writing is to determine what we want to do and how it can be accomplished.

Before you begin, you will need a valid Google Account email address and password for using the service. If you don't have one, sign up at https://accounts.google.com; it only takes a couple of minutes.

Login with Google Account

We will discuss tracking code throughout this case study, which is only accessible once you have logged in to Google Analytics.

To access your tracking code:

  • Log in to Google Analytics https://www.google.com/analytics/web/.

  • From the Admin page, select the .js Tracking Info property from within the list of accounts. Please note that tracking code is profile-specific.

  • The tracking code can be copied and pasted from the Website Tracking text box from the Tracking Code menu item.

Tracking info

The code will be similar to the following (where the x characters stand in for your specific account code 'UA-xxxxxxx-x'):

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-xxxxxxx-x', 'auto');
ga('send', 'pageview');
</script>
  • Leave the placeholder 'UA-xxxxxxx-x' in the code; we will set the account ID in Tomorrow Software rules later, which makes managing the rules and different Google Analytics accounts much easier.

  • Copy and paste the above JavaScript code into a file named "google.js" and save it somewhere local (e.g. your desktop) for use later in the exercise.

It is this tracking code that performs the task of collecting the browser data of visitors.

Getting started

Start by creating a new repository called “Google Analytics Example”.

It’s recommended that the process involved in adding the Google Tracking code be split into two:

  • setting a variable which holds the unique Google User account 'UA-1234567-1’.

  • and then inserting this value into the tracking code itself.

This means that you can subsequently update the account or the code separately in future deployments, or when Google amends the tracking code.

Keeping this in mind, you should create the following blank rule sets:

  • GoogleAnalytics: this rule set will be responsible for creating the new UA variable plus reading the tracking code and adding it to the page.

  • Qwerty_test: this rule set will allow you to test how a deployment can work in the demonstration Qwerty example application.

The two new blank rule sets will now be visible within the repository.

Google Analytics Example folder

Uploading the Google Tracking Code

In the Tomorrow Software console select the Data Files folder, then upload the ‘google.js’ file you created above and saved to your desktop.

Ensure you upload to the newly created “Google Analytics Example” repository that will now be available in the drop-down list of available folders.

New data file

Press Upload and the file will be visible in the repository's data files for the rules to use.

added google.js file

GoogleAnalytics Rule Set

GoogleAnalytics rule set

Using a Set Variable rule, create a new variable called Google_UA with the value “UA-1234567-1”, where 1234567-1 is replaced with your specific Google Analytics user account.

Set Variable properties

Then, using the File Reader, read the google.js file and assign it to a variable named ‘GOOGLE_ADD’.

File Reader properties

Next use a String Replacer rule to insert the newly created Google_UA variable into the tracking code .js file, followed by the HTTP Response Addition rule, to append the Google Tracking code to the response.

GoogleAnalytics rule set

The String Replacer rule looks through the code (now held in ‘GOOGLE_ADD’) and replaces the matched content with the value of the variable we defined, ‘Google_UA’.
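
Conceptually, the replacement works like this JavaScript sketch; the tracking snippet is abbreviated to the single line that contains the placeholder, and the values are illustrative only:

```javascript
// Sketch of the String Replacer behaviour (illustrative values only).
// GOOGLE_ADD holds the tracking code read by the File Reader; here it is
// abbreviated to the one line that contains the account placeholder.
const Google_UA = 'UA-1234567-1';   // value set by the Set Variable rule
let GOOGLE_ADD = "ga('create', 'UA-xxxxxxx-x', 'auto');";

// Replace the placeholder with the real account ID.
GOOGLE_ADD = GOOGLE_ADD.replace('UA-xxxxxxx-x', Google_UA);
console.log(GOOGLE_ADD); // ga('create', 'UA-1234567-1', 'auto');
```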

String Replacer properties

The HTTP Response Addition rule then appends the amended google.js file to the page response, activating it in the user’s browser.

Http Response Addition properties

The final step for this rule set is to add a couple of Exit rules called “OK” and “Fail”. These report whether the rule set is working, and make it easier to embed this rule set within another rule set.

GoogleAnalytics rule set

Qwerty_test Rule Set

This rule set will allow you to see an example deployment to the Qwerty demo application.

Not every response from the application should have Google tracking code added; static content in particular does not need it, so take a couple of simple steps to filter out transactions which don’t require code appending.

For example, a jpg image may be served up each and every time a user navigates to a page, so adding code to it will not provide any additional customer insight.

Qwerty_test rule set

Using the Name Splitter rule to identify the URI extension is a useful way to filter out unwanted data before invoking the GoogleAnalytics rule set.

Name Splitter properties
  • Variable Name: URI

  • Last Name Variable: we are only interested in the last part of the URI so we name this variable EXT.

  • Split Pattern: “.” is the delimiter that tells the rule where to split the URI value.

Using the Switch rule, set the Switch Variable property to EXT (created above) and proceed to ‘Add Chain Points’ for the static content you wish to ignore, such as gif, css, html, js and jpg.
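
The effect of the Name Splitter and Switch combination can be sketched in a few lines of JavaScript; the extension list mirrors the chain points above, and the function name is ours, purely for illustration:

```javascript
// Sketch: split the URI on "." and keep only the last part (EXT), as the
// Name Splitter does, then skip the static-content extensions that the
// Switch rule's chain points filter off.
const staticExts = ['gif', 'css', 'html', 'js', 'jpg'];

function shouldAddTracking(uri) {
  const ext = uri.split('.').pop().toLowerCase(); // Name Splitter: last part -> EXT
  return !staticExts.includes(ext);               // Switch: chain off static content
}

console.log(shouldAddTracking('/qwerty/logo.jpg')); // false: static image
console.log(shouldAddTracking('/qwerty/home.jsp')); // true: dynamic page
```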

Switch properties

The final step is to connect the newly created GoogleAnalytics.xml rule set now located in the Rule Sets folder.

Rule Sets folder

Setting up the configuration file

Finally, you can set up the configuration file. Click the Configurations menu, select the “Google Analytics Example” repository from the drop-down list, and enter some basic information about the rule to load.

The following screenshots show the information required for the “General” and “Input Source” tabs.

Configuration for Google Analytics Example repo
input source tab

You can now click the “Create” button to create your configuration file. Once created, click the “Deploy” button to deploy it to your Qwerty demo server.

Future considerations

The above case study shows how to implement Google Analytics tracking code in a specific environment, though of course each individual application will be different.

Validate the code is working

You can log into Google Analytics and select real-time traffic reports within the reporting dashboard to validate that the tracking code has been inserted and is working correctly on your website.

real-time traffic report

You can also right-click the page in the browser and view the source code to verify that the Google tracking code has been correctly inserted into the target application page.

Web Development Guide

Version: 10.0 / Modifications: 0

Introduction

Developing Web Applications using the Composable Agentic Platform Framework often requires collaboration between server-side rule writers and client-side web page designers.

The Composable Agentic Platform Framework provides a methodology to make this collaboration as easy and seamless as possible.

This guide provides best practices examples of the various workflows that go into the development of a new application and the maintenance of an existing one.

Starting a new Web Application

As a best practice, web applications should be designed from the perspective of function. This means rule writers must have basic HTML and JavaScript skills, but are not required to understand the intricacies of making a web page responsive and attractive once functionality is achieved.

Modern web applications rely heavily on AJAX, functional JavaScript, JavaScript and jQuery libraries, and other tools that define functionality but not necessarily look and feel (which is typically achieved using CSS).

So, a web application should be started by the rule writers and contain the bare minimum of design elements to be functional. After that the role of the web page designers will be to make those pages shiny and friendly.

Throughout this manual we are going to work through a very simple example to illustrate the steps involved. Each section has [Rules], [Admin] and/or [Web] in its headline to denote which role the section applies to. This allows you to quickly skip sections that may not be relevant to your job.

Setting up the tutorial repository [Admin]

Before you can start on this example, please install the “Web Application Tutorial” repository from the update server and create roles and users for the rule writers and web designers. Then deploy the repository to a proxy server for the users to access.

The web designer role should have:

Web Designer permissions
  • Server Permissions: VIEW, STOP, START and DEPLOY CONTENT for the Proxy Server

  • Repository Permissions: VIEW, VIEW CONTENT, EDIT CONTENT and DEPLOY CONTENT for the Web Application Tutorial repository.

Tutorial Application Functionality [Rules][Web]

The tutorial application is a simple data maintenance app. You can add and delete quotes from your favorite authors. The landing page ([server]/index.html) looks like this:

Quotes form

To create a new quote, enter an author and click on Create:

Inputs filled

The new quote is now stored:

First created Quote

And you can add a few:

List of added quotes

In turn, to remove a quote, just click on the adjacent Delete button.

So, we now have a fully functional (albeit very ugly) web application.

Tutorial Application Artifacts [Rules]

The repository installed contains a configuration, some basic rules and content for an app that can be learned in a few minutes. We hasten to add that the application is not secure, has no validation and the patterns used can be a reference for functionality only.

The Startup rule set defines the Data Set used, the SubmitManager rule set determines what happens when a button is clicked, and the ContentManager rule set grabs the row snippet from the content files, builds the table rows and inserts them into the page. We will explain the page components in the next section for web designers.

It is now imperative that you train the web designer in the application functionality so that modifications made can be tested by the designer directly.

Understanding the Composable Agentic Platform Interface [Web]

When you first log into the Composable Agentic Platform console, you are greeted with the following interface:

Composable Agentic Platform console interface

Before you do anything else, you should change your assigned password to one of your own choice. You do this by clicking on the Password button (top right).

The desktop

The 8 radio buttons each represent a different desktop. For now, leave only the first one active.

To get started, click on Start -> Servers:

Start Menu

Window controls

This will open a new window with the servers you can control:

Servers Window

You can minimize, maximize or close the window using the controls in the top right corner:

Minimize, Maximize and close buttons

To move the window around, click down on the window header and drag it where you want it.

To resize the window, use the resize anchor in the bottom right corner:

anchor to resize the window

Every window you open, and its position, stays in place if you log out and back in, so it pays to spend a little time organizing your desktop.

Server management

In the window you just opened, you can see the Proxy Server. Click on it to see the current state of the server. It should look like this:

Proxy Server window

You will likely be limited to Stopping and Starting the server and accessing the console. The console window is mainly used by server-side developers but can be useful if you are trying to identify a problem.

In most cases it will show something as simple as this:

Proxy Server logs

Servers are instances of application containers, not physical servers. There is normally no risk associated with stopping and starting development servers.

Repository Management

To work with your assigned repositories, click on Start -> Repositories:

Start Menu

You will receive a list of repositories that you can access:

Repositories Window

Click on the Web Application Tutorial:

Web Application Tutorial repo

Content Files

Then click on the View button. This will open the list of content files for you and close the repository window. Here we have expanded all the subfolders to make it easier to see:

Content Files folder expanded

So the entire application at this point contains just 2 files and 1 folder: “index.html”, the folder “snippets” and the file “quotefragment.snippet”.

At this stage, to make your life as easy as possible, we suggest you arrange your desktop similarly to the below:

Arranged open windows

You can (with some difficulty) work with the files directly through the console, uploading and downloading files as you need, but the Composable Agentic Platform console offers a much easier way to manage the lifecycle of your web development using your local file system, which we will discuss next.

Tutorial Application Initial Design Life Cycle [Web] [Rules]

Whenever you are assigned a repository where you need to exclusively maintain or modify content files, the simplest and easiest way is to work in a “scratch” folder.

A scratch folder can either contain ALL the content files in the repository or just a sub-set. The benefit of using a scratch folder is that anything you add or change will be synchronized with the target server immediately, allowing you to test your changes without having to perform any form of deployment task.

It is important to understand that those synchronizations are temporary. They will be removed when you either restart the server or redeploy the repository to the server. More about this later.

For now, start by clicking on “Content Files” in the repository tree:

Content Files of Web Application Tutorial repo

Creating the scratch folder content

Since we are just starting the application, the correct approach now is to download all the artifacts and use them to create our scratch folder. To do this, click on Download. The repository content will download as a zip file:

downloaded zip file of our repo content

Create a new scratch folder somewhere in your file system, copy the zip file into it and unzip it:

Extracting the downloaded zip file

We now have our artifacts:

unzipped folder

IMPORTANT: You MUST unzip the folder even if your file system supports browsing into a zip file directly, as the synchronization in most cases does not work with a zip file on its own.

So, let’s open the folder:

Folder content

These are our artifacts that we need to edit and make attractive. So now we need to connect the Scratch folder to the server.

Connecting the scratch folder to the server

To connect the folder to the server, we go back to the Composable Agentic Platform console and click on Live Web Development:

Live Web Development button

This gives you an option to select a server that you can connect content to:

Selecting target server

Select the Proxy Server and click on Live Web Development again.

(Should the Proxy Server be greyed out in the list then it is most likely stopped. In that case open the Servers window, select the Proxy Server and click on Start and wait 30 seconds – then return to the window above and click on Refresh).

You are now ready to link the server and the local folder:

Proxy server window for dropping the local folder we have

To do this, drag the root folder inside the scratch folder (Web Application Tutorial) into the drop zone in the Window:

Dragged folder

Your local folder and the server’s temporary content space will now synchronize. Unless you have a slow internet connection, this happens very quickly. You can see what remains to be synchronized in the Queued section of the window.

If everything went well, you will see this:

Proxy Server Connect content window after dropping the local folder

IMPORTANT: If you close the window, your session expires, or you log out of the console then you will be disconnected. Do not despair, simply reconnect and the console will only synchronize from where you left off.

Your local folder and the server are now connected. In the next section we will test it.

Testing the folder synchronization

To test the connection, we are going to open the index.html file and make a very simple change. For this demo we will use the Atom editor, but you can use any tool of your choice as long as it saves to your scratch folder. Here is a listing of the index.html page.

index.html content

You will notice that it is just basic HTML with one little anomaly: the $QUOTES$ tag inside the table. This tag is a placeholder for where the server inserts the generated list of quotes.

It is EXTREMELY important for the application functionality that you do NOT change the name of any of these tags, nor can you change the name or id attribute of any elements contained in the page.

In most cases you can change the tags (for example turning <span> into <div> or a table into <section> tags). However you must verify that the application remains functional in case there are JavaScript dependencies.

NOTE TO RULE WRITERS: When you create tags that contain snippets, it is important to include a comment about where the snippet can be found.

In our example application, the snippet to include looks like this:

Snippet content

As you can see, this snippet represents a single line in the table that the application can use to control the formatting of that line and even some actions as shown above.

So let’s modify the index.html file with something subtle first. Try changing:

<h1>Welcome to the house of quotes</h1>

To:

<h1>WELCOME TO THE HOUSE OF QUOTES</h1>

Wait 3 seconds then refresh the application page:

Updated form live on the browser

Adding artifacts

At this stage you can add sub-folders to your scratch folder, plus CSS files, images, JavaScript libraries, fonts and anything else that can help you style the page.

For example, we created a new folder named css:

CSS folder added to the root folder of our application

Then we added a file named style.css to that folder:

style.css content

And then we import that style sheet into our main page:

linking the css file to our index.html file

And save the changes. Within seconds the page now presents as:

Updated font on our live application on the browser

And you can test that none of your changes have broken functionality.

At this point we hope that it is clear that working on the web design is now just a matter of respecting the developer tags in the pages and using your skills.

It is important to understand that changes in your scratch directory are always ADDED to the server content delivery. If you delete files, they will not be deleted from the server’s temporary content system, but they will be removed when you make the changes from your scratch folder permanent.

Making the scratch folder permanent

The last step in the initial design phase is to make the changes you have made permanent. To do this you must return to the console:

Proxy Server console

The first thing you should do is disconnect the synchronization by closing the synchronization window and then open the content files window:

Content files window

Scroll down to the bottom of the page and get ready to upload your changes:

Uploading our changes for web application tutorial repo

You should set the Upload type to Folder and then click on Choose Files.

Selecting our folder

Select the application content folder and click on Upload. You will possibly receive a warning:

Modal confirmation for uploaded files

Confirm by clicking on Upload and then click on the Upload Button:

Choose Files

All your changes will now be uploaded to the master content for the repository:

Our new uploaded files

IMPORTANT: Whatever files you upload will be ADDED to the repository and will only create new files or replace existing files with the same name. Files in the master content with names not existing in your scratch folder will stay in place. This concept is useful for application maintenance which we will discuss in the next chapter.

Once your files finish uploading, the last step is to deploy them to the server’s permanent content delivery system.

To do this, click Content Files and then on the Deploy button:

Deploy button

You will be asked to pick one or more servers to deploy your content to:

Target Server

Select the Proxy Server and click on Deploy. The server window will open and if you have a lot of content to deploy you will see a progress bar. However, in our case it will probably be so quick that you will not even notice:

Local Proxy window

So, close the server window, log out and inform the rule writer that you have completed your work.

Tutorial Application Maintenance Life Cycle [Web] [Rules]

The maintenance life cycle of a web application’s content files differs from the initial creation in several ways:

  1. It may be performed by the rule writer in isolation

  2. It may involve the web designer styling new features

  3. It may only involve a small subset of the application

For these reasons you may not always wish to create a complete scratch folder with all the required synchronization of image files etc.

Building a maintenance scratch folder

If you are working collaboratively with other users, you should ALWAYS download the latest files from the repository before making any changes. Do not rely on an old scratch folder as you will possibly remove someone else’s work.

The good news is that anything you place in the scratch folder will either override or add to the content. Nothing will ever be deleted.

So before you synchronize your scratch folder, you can comfortably delete anything you do not intend to change after you have unzipped the download.

Alternatively you can create a scratch folder by downloading individual files from within the portal and copying them into the right location in the scratch folder (remembering to create the correct matching sub-folders if needed):

index.html file details

Once again, after you have completed your scratch folder you should connect it to the server and make your modifications that way. There are two reasons for this:

  1. You can test your changes instantly

  2. You can verify that you are not accidentally stepping on someone’s toes (see below)

Server collaboration check

If you just change one or two files you may be tempted to simply upload your file changes and deploy them. We strongly discourage that approach, however, because it means you only test after a permanent deploy, and you have not performed a collaboration check.

The collaboration check tells you if someone else is also doing work with a scratch folder at the same time as you are. It will show up before you connect for synchronization as follows:

Warning about someone is working on the same files

If you see this warning you should NOT connect your scratch folder without communicating with the named user.

Template Engine [Web] [Rules]

The platform contains a rule named “Merge HTML Pages”. This rule is used to merge content into a wrapping page with headers and footers.

Merge HTML Pages properties

For example, consider the following master.html template:

master.html content

It contains the headers and footers for the pages, but not the actual content. You can have more than one template in the same application if need be.

The important part is line 18 in the above code. This is where the content for each page will be inserted, although any <head> tags in the content will be appended to the page just before the </head> in the template.
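
The merge behavior just described can be sketched in JavaScript. This is an illustration only: the `$CONTENT$` marker is a hypothetical placeholder name used here for demonstration, not necessarily the marker your template defines, and the real rule also handles details (such as skipping the content's <title> tag) that are omitted here:

```javascript
// Sketch of the "Merge HTML Pages" idea: the content's body replaces a
// marker in the template, and the content's <head> additions are appended
// just before the template's </head>.
// $CONTENT$ is a hypothetical marker name, used purely for illustration.
function mergePages(template, content) {
  const headMatch = content.match(/<head>([\s\S]*?)<\/head>/i);
  const headExtra = headMatch ? headMatch[1] : '';
  const body = content.replace(/<head>[\s\S]*?<\/head>/i, '');
  return template
    .replace('</head>', headExtra + '</head>')
    .replace('$CONTENT$', body);
}

const template =
  '<html><head><title>Site</title></head><body>$CONTENT$</body></html>';
const content =
  '<head><link rel="stylesheet" href="page.css"></head><p>Hello</p>';
console.log(mergePages(template, content));
```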

These rules do of course work with live web development, so the web designer can modify them on the fly while testing content.

For example:

Content page

When merged with the master.html page will produce:

master.html content

Notice line 11 where the <title> tag is not inserted and line 18 where the content is inserted.

It is possible to control precisely where the <head> information is inserted into the template. Sometimes this is necessary to give some JavaScript libraries preference over others. To specify a precise spot in the master template where content <head> information should be inserted, place the following comment at that location:

comment

Internationalization [Rules]

If your application needs to be translated into multiple languages and locales, there are three rules to help you:

Translate HTML page

This rule is usually placed immediately after all static content is loaded for any given page, but before any dynamic content is injected into the page. The Content Variable should contain the content to be translated.

Translate HTML Page rule

The rule can determine whether the content is HTML or simply plain text. If HTML, the rule will isolate constants such as plain HTML text, <select> options, input placeholders etc., being fully HTML aware.

The rule is capable of recording text that needs to be translated and can write that text out to one or more files (merging it with existing known translations).

Building translation files depends on the content of the global variable BUILD_TRANSLATION, rather than a property. This approach is to enable the easy switching on/off of translation recording between development and production without the need to change the rule structure.

If the BUILD_TRANSLATION global variable is set to “Yes” the Programmable Data Agent will write out fresh translation files in the format “translation-"+locale+".utf8” to the home folder on the target server.
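
For example, the file name for a French locale would be built as follows (a trivial sketch; the locale value is illustrative):

```javascript
// Building the translation file name from the documented format:
// "translation-" + locale + ".utf8"
const locale = 'fr_FR'; // illustrative locale identifier
const fileName = 'translation-' + locale + '.utf8';
console.log(fileName); // translation-fr_FR.utf8
```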

So to get a translation file that can be handed to a local language expert, let the rule “learn” all the text that should be translated and then have the language expert translate that.

The rule is capable of translation into complex character languages such as Chinese and Japanese. Please see the rule help file for further information.

From International Values

Use this rule to convert international edited values (currency, numbers, dates) to a usable internal format.

From International Values rule

This rule can convert input from any given locale into a consistent US internal format for processing.

All returned values use US number formatting (“.” as the decimal point) and all dates are returned in YYYYMMDD format.
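
As an illustration of the conversion (not the rule's actual implementation), here is a JavaScript sketch that turns a German-formatted number and a day-first date into the internal formats described above:

```javascript
// Sketch only: normalize a German-style number ("1.234,56") to the US
// internal format ("1234.56"), and a DD/MM/YYYY date to YYYYMMDD.
function toInternalNumber(value) {
  // Drop grouping dots, then turn the decimal comma into a point.
  return value.replace(/\./g, '').replace(',', '.');
}

function toInternalDate(value) {
  // Expects a day-first DD/MM/YYYY input.
  const [day, month, year] = value.split('/');
  return year + month + day;
}

console.log(toInternalNumber('1.234,56')); // 1234.56
console.log(toInternalDate('31/12/2024')); // 20241231
```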

To International Values

This rule is the inverse of the previous rule. It takes US formatted numbers and YYYYMMDD formatted dates and converts them to the appropriate local version (for example, MM/DD/YY for the US).

To International Values rule

Working in a HTML aware way [Rules]

The framework contains a HTML aware rule that can significantly simplify the co-operation between the rule writer and the web designer.

Set HTML Elements Values rule

Using this rule, you can set the checked state of radio buttons and checkboxes, the selected options in lists, and the value of input fields in an HTML document. You can also set the value of text areas, output tags, spans, labels and divs; for these tags, the value refers to the innerHTML of those tags.

For checkboxes, the value can either be “on”, “true” or “checked” if no specific value has been defined for the checkbox. For lists (select tag), the options to select must have a value attribute.

Tags to set can be identified by name, ID or class name. If more than one instance of an identifier exists, all will be set.

This approach can in many cases be used to avoid using $TAGS$ in the HTML: the web designer provides a page with sample values that makes visual sense on its own, and the rule writer simply overrides those sample values before serving the page to the user.

TCL Script Writer Reference

Version: 10.0 / Modifications: 0

Using the scripting interface

The Tomorrow Software console ships with a scripting interface to facilitate automated management by other tools. The scripting interface is based on the TCL (Tool Command Language) version 8.4 syntax and commands, but also includes a number of Tomorrow Software specific commands.

As scripting is a programming interface, the scripting engine is not multi-lingual. It is invoked by a simple HTTP/S POST command to the URL:

http://<server>/console/ScriptRunner

The parameters for the POST are as follows:

All parameters should be UTF-8 encoded.
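
As a sketch, assuming a console at localhost and placeholder credentials (not real values), the POST can be assembled like this in Node 18+; URLSearchParams produces the UTF-8, form-encoded body the endpoint expects:

```javascript
// Sketch: invoking the ScriptRunner endpoint. Host and credentials are
// placeholders; the script runs the documented serverList command and
// prints the result with puts (which writes to the HTTP response).
const params = new URLSearchParams({
  user: 'admin',                  // console user ID the script runs under
  password: 'secret',             // that user's password
  script: 'puts [serverList]',    // the TCL script to execute
});

// URLSearchParams serializes to application/x-www-form-urlencoded UTF-8.
const body = params.toString();
console.log(body);

// To actually run it (requires a live console):
// const res = await fetch('http://localhost/console/ScriptRunner',
//   { method: 'POST', body: params });
// console.log(await res.text());
```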

Learning TCL

It is beyond the scope of this manual to provide complete details of the TCL language. TCL has been in use for many years and plenty of online resources exist for learning the language. An excellent primer can be found here:

Also, several sample scripts can be found in the /Education/script samples folder.

Testing scripts

To assist with testing scripts, a specific page has been made available:

http://<server>/console/scriptRunner.jsp

This page allows you to enter a user ID and password, as well as a script, and submit it to the console. The output from the submission is returned to the browser.

Tomorrow Software specific TCL extensions

Tomorrow Software introduces a number of extensions to the standard TCL language. All of the extensions relate to specific console management tasks.

In addition, output written with the "puts" command is written to the HTTP Response stream rather than STDOUT.

Command: createUser

The createUser command creates a new console user. The command takes a number of parameters to correctly define a user in the console:

The following script snippet shows an example of how to use this command:

This command can only be executed with administrator or security authority.

Command: deployConfiguration

The deployConfiguration command deploys a specific configuration to a nominated server. Only configurations located in a repository can be deployed using this command. The parameters for the command are Server ID, Repository Name and Configuration Name. The following script snippet shows an example of how to use this command:

The command will wait for the deployment task to complete before continuing. The deployment does not result in a server restart. The stopServer and startServer commands should be used after this command to ensure that the deployed configuration takes effect. This command is only valid for production servers.

Command: deleteUser

The deleteUser command deletes a user based on a provided user ID. The following script snippet shows an example of how to use this command:

This command can only be executed with administrator or security authority.

Command: getAudit

The getAudit command retrieves a subset of the internal Tomorrow Software audit log. The following snippet shows an example of how to use this command:

The above commands retrieve all audit log entries after the Java Time Stamp 1425064936463.
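
A Java time stamp is simply milliseconds since the Unix epoch, so you can sanity-check one directly with a JavaScript Date:

```javascript
// Java time stamps are epoch milliseconds; Date accepts them as-is.
const ts = 1425064936463;
console.log(new Date(ts).toISOString()); // an instant in early 2015
```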

This command can only be executed with administrator or security authority.

Command: getConfiguration

The getConfiguration command reads a specific configuration from a specific repository and provides access to all elements of the configuration (including the ability to update it if the user has the appropriate authority).

The following script snippet shows an example of how to use this command:

The above command obtains the BasicWebTrial configuration from the Product Trial repository, outputs the default rule set file name and then sets the maximum number of test records to 20,000 before updating the configuration (writing it to the file system).

The following table provides a list of all of the readable properties on the configuration object:

Each of the above values can also be set using the equivalent setter method (replacing "get"/"is" with "set").

Please note that for any arrays, ALL arrays in a set (attributes, databases, timer rule sets) MUST be set to the same length before invoking update.

The TCL interface only supports updating existing configurations. New configurations cannot be created using TCL and existing configurations cannot be deleted.

Command: getUser

The getUser command reads a specific user and provides access to some elements of that user.

The following script snippet shows an example of how to use this command:

The above command reads the user test123, outputs the name of that user and then sets the role before updating the user.

The following table provides a list of all the readable properties on the user object:

Most of the above values can also be set using the equivalent setter method (replacing "get"/"is" with "set"). The values that cannot be set are: Logon, Created and LastLogon.

This command can only be executed with administrator or security authority.

Command: serverList

The serverList command obtains a list of all configured servers in the console that the user is authorized to view. The response is in the form of an array of server IDs. The following script snippet shows an example of how to use this command:

A sample output from running the above script is as follows:

Command: serverStatus

The serverStatus command is used to interrogate the current status of a server, based on the server's ID. The following script snippet shows an example of how to use this command:

The return value from the command is a server status object. The following methods are available on the object:

Command: setCredentials

The setCredentials command is used to set the value of a given field in the credentials vault. The specific vault and field must exist already.

The following script snippet shows an example of how to use this command:

This command can only be executed with administrator or security authority.

Command: startServer

The startServer command is used to start a nominated server.

The following script snippet shows an example of how to use this command:

The command will wait for up to 30 seconds to ensure that the server is actually started. Provided the server starts, the command will return "1". If the server fails to start, then "0" will be returned.

Command: stopServer

The stopServer command is used to stop a nominated server.

The following script snippet shows an example of how to use this command:

The command will wait for up to 30 seconds to ensure that the server is actually stopped. Provided the server stops, the command will return "1". If the server fails to stop then "0" will be returned.

Command: userExists

The userExists command checks if a given user ID exists. The command returns "0" if the user ID is not found or "1" if the user ID is found. The following script snippet shows an example of how to use this command:
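One common use is to guard a createUser call. A sketch (the user ID and details below are illustrative):

```tcl
# Sketch: only create the user if the ID is not already taken.
if {[userExists test456] == 0} {
    createUser test456 "Test User" secret demo@example.com 1 "" GMT ""
}
```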

This command can only be executed with administrator or security authority.

Command: updateApplication

The updateApplication command updates a console application (such as Qwerty, or the console itself). The following script snippet shows an example of how to use this command. In this case the console itself will be updated:

This command can only be executed with administrator authority.

Command: updateExtension

The updateExtension command updates/installs an extension from the update server (such as the Base Rules or the Http Rules). The following script snippet shows an example of how to use this command:

This command can only be executed with administrator authority.

Command: updateRepository

The updateRepository command updates/installs a repository from the update server (such as the Product Trial repository). The following script snippet shows an example of how to use this command:

This command can only be executed with administrator authority.

Command: userList

The userList command obtains a list of all users in the console. The response is in the form of an array of user IDs. The following script snippet shows an example of how to use this command:

A sample output from running the above script is as follows:

This command can only be executed with administrator or security authority.

| Parameter | Value |
| --- | --- |
| user | The console user ID under which the script will be executed |
| password | The password for the user |
| script | The script to execute |

| Parameter | Value |
| --- | --- |
| Logon | The console user ID for the new user |
| Name | The full name of the user |
| Password | The initial password for the user |
| Email | The email address of the user |
| Type | The user type. Valid values are: 0 = Administrator; 1 = Standard User; 2 = Super User; 3 = Security User |
| Role | The role name for the user. Can be blank if no role is required. |
| Time Zone | The new user's time zone. Must correspond to the time zone list found in the appendixes of this manual. |
| Additional Auth | The class name of any additional authentication settings. Can be blank if no additional authentication is required. Please note that only basic authentication selections are available. Overrides (such as the number of digits for one-time emails) are not supported. Currently the following are valid additional auth classes: software.tomorrow.authenticate.OneTimeEmailPlugin; software.tomorrow.authenticate.LocalHostPlugin |

createUser test123 "Test User" test123 [email protected] 1 "" GMT ""
deployConfiguration Qwerty "Product Trial" BasicWebTrial
deleteUser super
set clause "WHERE ACTIONTIME>1425064936463 ORDER BY ACTIONTIME DESC"
set auditRows [getAudit $clause]
puts "Row count = [$auditRows length]<p>\n"
for { set i 0 } { $i < [$auditRows length] } { incr i } {
puts "[$auditRows get $i]<p> \n"
}
set cnf [getConfiguration "Product Trial" BasicWebTrial]
puts "Configuration rule set [$cnf getRuleSet]"
$cnf setTestDataDepth 20000
$cnf update

| Method | Return value |
| --- | --- |
| getAttributeLabels | An array of strings with the input field labels of the configuration |
| getAttributeNames | An array of strings with the input field names of the configuration |
| getAttributeValues | An array of strings with the input field values of the configuration |
| getContentRuleSet | The file name of the content rule set |
| getDatabaseAliases | An array of strings with the database aliases of the configuration |
| getDatabaseDrivers | An array of strings with the database drivers of the configuration |
| getDatabaseNames | An array of strings with the database names of the configuration |
| getDatabaseSchemas | An array of strings with the database schemas of the configuration |
| getDatabaseSystems | An array of strings with the database system names of the configuration |
| getDescription | The description of the configuration |
| getDirectory | The directory where the configuration is located |
| getDoneRuleSet | The file name of the completion rule set |
| getFileName | The file name of the configuration |
| getInitRuleSet | The file name of the startup rule set |
| getInputClass | The class name of the input adaptor used by the configuration |
| getInputParms | A string with the input parameters passed to the configuration upon startup |
| getLoopPrevent | The maximum number of chain point interactions before a rule set is considered looping |
| getName | The configuration name |
| getPerformanceLevel | The level of performance data collection: 0 = Transaction counts; 1 = Transaction count and inline time; 2 = Transaction count, inline time and URI statistics; 3 = All counters |
| getRuleSet | The base rule set file name |
| getServerType | The server type: 0 = Production; 1 = Test |
| getTestDataDepth | The maximum number of test data collected |
| getTimerDelays | An array of strings with the timer delay in seconds for each timer rule set of the configuration |
| getTimerNames | An array of strings with the timer rule set file names for each timer rule set of the configuration |
| getTimerTypes | An array of strings with the timer rule set types for each timer rule set of the configuration: 0 = Real time; 1 = Pause |
| isAutoStart | Set to 1 if this configuration is auto starting, 0 otherwise |
| isCollectTestData | Set to 1 if this configuration collects test data by default, 0 otherwise |
| isEchoOut | Set to 1 if this configuration provides an echo of console messages to System.out, 0 otherwise |
| isFailOpen | Set to 1 if this configuration fails open, 0 otherwise |

set usr [getUser test123]
puts "User name [$usr getName]"
$usr setRole analyst
$usr update

| Method | Return value |
| --- | --- |
| getAuth | The class name of any additional authentication. Can be blank. |
| getCreated | The time the user was first created in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT. |
| getEmail | The email address of the user |
| getLastLogon | The time of the user's last logon in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT. |
| getLogon | The user ID of the user |
| getName | The name of the user |
| getRole | The role set for the user (if any) |
| getTimeZone | The user's time zone. Will contain a value from the time zone list found in the appendixes of this manual. |
| getType | The user type. Valid values are: 0 = Administrator; 1 = Standard User; 2 = Super User; 3 = Security User |

set srvList [serverList]
puts "Server count = [$srvList length]<p>"
for { set i 0 } { $i < [$srvList length] } { incr i } {
puts "Server [$srvList get $i]<p>"
}
Server count = 5
Server Console
Server LocalProxy
Server MPServer1
Server Qwerty
Server TestServer1
set srvId Qwerty
set srv [serverStatus $srvId]
puts "Server status = [$srv getStatus]<p>"
puts "Server is running = [$srv isRunning]<p>"

| Method | Return value |
| --- | --- |
| getBuild | The base rules build number for the server |
| isCollectTestData | A flag to indicate if the server is collecting test data: 0 = No; 1 = Yes |
| getConfiguration | The name of the configuration currently deployed on the server |
| getConfUser | The name of the user that created the current configuration used on the server |
| getConfVersion | The version of the configuration currently deployed on the server |
| getDeployErrorCode | Any error code issued (if any) when attempting to deploy the last configuration to the server |
| getDeployFrom | The repository name from which the configuration was deployed |
| getDeployTime | The time the current configuration was deployed to the server in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
| getDeployUser | The name of the user that deployed the current configuration to the server |
| getDescription | The description of the configuration currently deployed on the server |
| getErrorCode | Any error codes detected on the server. The corresponding error messages are found in the "translation.properties" file for the console application. |
| getFlightRecorders | A string array with the IDs of any flight recorders in use by the currently deployed configuration |
| getHost | The host name of the server |
| getInputAdapter | The class name (identifier) of the input adaptor used for the current configuration |
| getInputParms | The input parameters provided to the configuration to be used in conjunction with the input adaptor. This is mainly used for file polling servers and in that instance provides the directory that is polled for files. For test servers it provides the input file name to the configuration. |
| getJavaVersion | The current version of Java used by the server |
| getLastStarted | The time the Programmable Data Agent was last started in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
| getLastStopped | The time the Programmable Data Agent was last stopped in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
| getLastTransaction | The time the Programmable Data Agent was last invoked in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
| getMajorVersion | The major version number of the Programmable Data Agent |
| getMinorVersion | The minor version number of the Programmable Data Agent |
| getOperatingSystem | The operating system and version of the server |
| isPolling | If the server is polling for data (feed servers) |
| getPort | The port the server is accepting instructions from |
| getRevisionVersion | The revision version number of the Programmable Data Agent |
| getRuleset | The name of the currently deployed rule set |
| isRunning | Whether the server is currently running (started) |
| getStatus | The status of the server: 0 = Offline; 1 = Online |
| getTestData | The number of available test data lines |
| isTraceData | If the server has trace data |
| isTraceMode | If the server is in trace mode |
| getTransactions | Number of transactions processed since the last server start |
| getVersion | Full version number of the Programmable Data Agent in text format |

setCredentials KapowSMS UserID Fred
startServer Qwerty
stopServer Qwerty
set checkUser admin
puts "User $checkUser exists [userExists $checkUser]"
updateApplication console
updateExtension "MaxMind Rules"
updateRepository "Product Trial"
set usrList [userList]
puts "User count = [$usrList length]<p>"
for { set i 0 } { $i < [$usrList length] } { incr i } {
puts "[$usrList get $i]<p>"
}
User count = 3
admin
security
super
https://www.tcl.tk/

Windows Automation Reference

Version: 10.0 / Modifications: 0

Introduction

Welcome to a new dimension of Microsoft Windows automation. Using the Tomorrow Software Windows Automation Extension you can not only script the flow of a Windows application, but also combine it with data from many other sources and the powerful rule-writing capabilities of the Tomorrow Software Multi-Protocol engine.

The extension is based on the popular AutoIt automation product, and we have included tools from that product to help your automation efforts. AutoIt is a free product; however, if you find the product and the Windows Automation extension useful, we encourage you to make a donation to the creators of AutoIt at:

https://www.autoitscript.com/site/donate/

Licensing

The licensing of the Tomorrow Software Windows Automation Extension is the same as most other extensions that we provide. You simply need a valid Tomorrow Software license.

The license for the AutoIt tools described in this reference guide is found in the “data” folder where you also found this document. In summary, it is a classic free-software license.

Getting started

Before you begin your first automation project, you need to make some updates to your Tomorrow Software installation.

Required Updates

The first step is to update/install the following components via the update server:

  • Tomorrow Software console (B18020 or later)

  • Base Rules (2018-04-26 or later)

  • Parallel Processing Rules (2018-04-23 or later)

  • Windows Automation Rules (2018-04-23 or later)

If you received this document through some means other than the update server, then you will also need to install the Windows Automation repository.

At this point, stop the Tomorrow Software instance.

Updating the Java Runtime Environment

The JRE that ships with Tomorrow Software is a basic 32-bit JRE. The version may depend on when you received your copy of the product.

To successfully run automation projects, you need to update the JRE to at least version 8 for your platform (32-bit or 64-bit).

You can download the correct JRE from here: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html

Once you have installed the JRE, you need to update the jre folder under the Tomorrow Software installation with the JRE that you installed on your Windows PC. To do this, rename the original jre folder, create a new copy of the JRE from C:\Program Files\Java\jre1.8.0_(version), and rename the copy to jre.

Installing the required tools

The final step is to install the Au3Info tool. You need this tool to inspect running Microsoft Windows programs and identify the names of controls that you can manage. The easiest way to install the tool is to download it from the Windows Automation repository’s data folder and save it to your desktop (or some other convenient location).

There are two versions available:

  • Au3Info.exe is for 32-bit Windows systems

  • Au3Info_x64.exe is for 64-bit Windows systems

Make sure that you download the right version.

Your first automation

In this example we will take you through the automation of creating a document in Windows Notepad and saving it.

Start by restarting the Tomorrow Software Server instance, log in and create a new repository called “Notepad Exercise”.

Then create a new rule set called “NotePadDemo”:

New Rule Set

and open it up in the rules editor.

Starting an application

The very first thing we need to do is start Notepad itself. To start an application, simply drag the Run Application rule onto the canvas:

Run Application Rule

And set the properties as shown:

Rule properties

This step alone will cause Notepad to start up. You do not need to provide a directory since Notepad will be in the system path.

Since we are going to do something more than just start the application, we need to make sure that it is fully loaded before we start pressing keys. So we add a Wait Active rule:

Wait Active Rule

With the properties set as follows:

Wait Active Rule properties

Identifying windows

Here it is relevant to pause for a minute and look at those properties.

Firstly, the Windows Label. Many of the rules provided in the Windows Automation framework use the Label and Text combination to identify windows to work with. The logic of this combination is as follows:

The Label match starts from the beginning of the window label, matching as many characters as are provided in the rule.

Notepad tabs

In this case we match the entire label.

The optional Text matching refers to text within the window that was opened. This could be any word visible on the page or within a dialog box. This matching is used for more precise pinpointing of a window.

We will perform such a match later in this section.

Entering text

For now, we will simply send some keystrokes to Notepad to create a document we can save:

Send Text Rule
Send Text Rule properties

Testing

Let’s try and run our three new rules and see what happens. In the Notepad Exercise repository create a new configuration as follows:

New configuration

And set the input source to:

New configuration: input source

We can now deploy our configuration to any convenient active server (you can use a Multi-Protocol server or even Qwerty). As long as you tick the “Restart immediately” option, shortly after the deployment is complete you will see Notepad start up and the text appear:

Notepad editor with the text

Sending formatted text

You may have noticed that the text entered in our example was set as “Raw”:

Properties - Format Raw

Your other option would be to use the formatting text feature:

Properties - Format Formatted

This feature allows you to send specific keystrokes with great ease. For example:

!a would be the same as sending Alt-a
^a would be the same as sending Ctrl-a
+a would be the same as sending Shift-a
#a would be the same as sending Windows key (Win-a)

You can combine these keys: ^!a would be Ctrl-Alt-a.

If you need to send any of those characters without them being treated as special keys, you must enclose them in curly brackets. For example, {!} sends a literal !.

You can also send normal Windows keys by enclosing them in curly brackets. For example:

{SPACE} {ENTER} {DEL} {TAB} {BS} {HOME} {UP} {DOWN} {LEFT} {RIGHT}

The name used in the brackets can be most normal windows keyboard designations.

If you need to repeat a few keystrokes, you can do this by entering the key name followed by a count. For example:

{DEL 5}

will result in the Delete key being pressed 5 times.
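Putting these pieces together, a single formatted text value can drive a small edit sequence. The following is purely illustrative: it selects all text (Ctrl-a), deletes it, types a replacement and presses Enter:

```
^a{DEL}Hello again{ENTER}
```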

So we could in theory expand on our example to make Notepad try to close once the text was entered. The keystroke for closing a window is Alt-F4. We would do this in formatted text as follows:

Properties - Key Strokes with formatted text

Doing this results in the following outcome:

Notepad modal asking for saving changes to the file

You can try this if you wish; just remember to switch it back to “Hello World!!” and “Raw” afterwards to continue this exercise.

Reading window text

It is one thing being able to send keystrokes, but more often than not for automation, you will need to know the content of specific fields or you may need to be able to set the value of specific named fields without just using keyboard navigation.

This is where the tool from AutoIt (that you installed earlier) comes into play. Start up the correct version of Au3Info:

Autolt v3

Click on the Finder tool and drag it onto the main Notepad window:

Finder into Notepad window

You will see that the tool provides you with the basic Windows information (Title and Class). It also provides us with the Basic Control Info, which tells us that the field is of the class “Edit” and is instance “1”.

What we need at this stage is the ability to identify a specific field in a specific window. The best and safest way to do this is to click on the “Control” tab:

Control tab

And then double-click on the “Advanced Mode” entry. This copies the identifier [CLASS:Edit; INSTANCE:1] to the clipboard for us so that we can use it easily.

So all we need now is to add a “Get Control Text” rule (and a List Variables so we can see what’s going on):

Our rules

The properties for the Get Control Text would be as follows:

Get Control Text properties

The control identifier is easily set by entering two double quotes and pasting the content of the clipboard from the AutoIt tool in between them.

A quick run and a peek at our console will confirm that this is working:

Checking our Programmable Data Agent Console

Text outside controls

There are certain circumstances where text is not necessarily linked to a specific control. The Windows Calculator is one such example. It actually stores the result text not in a control, but in the window itself. If you need to get to this text, you can use the Get Window Text rule instead of the Get Control Text rule.

Hint: When you extract text from the Window itself, it is often formatted across multiple lines. An easy way to get visibility of control characters in text is to escape them as if they would be going into a URL. You can do this with the Escape rule.

Closing windows

It is now time to close our window. This is simply done with the Close Window rule:

Close Window Rule

The properties should look familiar now:

Close Window properties

The result of adding this rule will inevitably be:

Results

Now, we wish to wait for this dialog to appear and then hit Enter to save the file we just created. Once again, this should now be familiar territory:

Our rules structure

With the properties being set as follows:

Wait Active properties
Send text properties

Note the use of Window text in the “Wait for save box” rule. It is conceivable that Notepad may put out many dialogs that are simply labelled “Notepad”, so the extra check for the word “Save” somewhere on the dialog box helps us confirm we are in the right place.

Advanced controls

But now things are getting a little tricky. Once we hit enter, we need to wait for the “Save As” dialog to appear:

Saving our file

On the surface, this may look quite simple. We wait for the dialog box to appear, we find the controls for the directory and file name, put in some values and hit Save.

The first step is not too hard:

Our Wait Active rule
Wait Active props

Next, we discover (using the AutoIt tool) that the directory control is named “ToolbarWindow32”:

Directory control name

However, through experimentation it quickly becomes obvious that you can’t just set the control value to “Address: MyDirectory” using the Set Control Text rule. It simply has no effect. So, we need to introduce a workaround. In this case, some experimentation shows that if you click on the far right corner of the control, you can actually enter a directory name:

Entering a directory name

And the text is preselected, so if we can just do the same mouse clicks in rules, we will be able to override the text in the control and continue. This requires a few steps.

Getting a control position

We start by getting the control position so that we can figure out where to click within it:

Get Control Position rule
Get Control Position props

Adding a List Variables rule and running this results in the following output in the console:

Checking the Programmable Data Agent console

So now we know the position and dimensions of the control. The next step is to figure out the correct position to click. A simple calculation rule will take care of that:

Calculation rule
Calculation rule props
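The arithmetic in the calculation rule amounts to something like the following Tcl-style sketch; the variable names are illustrative, standing in for the position and dimension values returned by Get Control Position:

```tcl
# Sketch: click near the far right edge of the control,
# vertically centred; positions are relative to the control.
set clickX [expr {$ctrlWidth - 5}]
set clickY [expr {$ctrlHeight / 2}]
```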

All that remains now is to “click”:

Click Control rule
Click Control rule props

Most of the above should now be clear. We are basically clicking the far right side of the control, using the left mouse button. If you don’t provide an X or Y position, the center of the control on the respective axis will be clicked.

All that remains now is to set the control value by sending the right key strokes:

Send text rule
Send text rule props

Notice that we hit the Enter key as part of this exercise. This is because the Save As dialog box changes to the directory entered, once the Enter key is hit.

If you are following this example, make sure that you pick a directory that actually exists. In our example, we have created C:\DemoData purely for this exercise.

The next job is to set the actual file name. Using AutoIt, we discover that the control name for this is “[CLASS:Edit; INSTANCE:1]”. So this looks straightforward. However, setting the control text by itself:

Set Control Text rule
Set Control Text rule props

Does not work well. The resulting file name actually becomes “mydemo.txt*.txt”.

So formatted text once again comes to the rescue. We preface the new file name with a Ctrl-a (select all) followed by Delete to clear the field:

Update the Set Control Text rule props

Note that there are other ways you could achieve the same goal. This is just an illustrative example.

All that remains is to hit the Save button. Any old Windows keyboard warrior will know that an underlined character in a Windows dialog box can be invoked using Alt+[underlined key]:

Save button

In this case, Alt+S will save the file. So we go ahead and invoke it:

If you run this complete example, you will now have a file in your designated folder called “mydemo.txt”.

Handling exceptions

Of course, if you run our scenario twice, you will encounter another message dialog telling you that the file already exists:

Warning dialog that the file name already exists

It is important to handle these kinds of exceptions, as otherwise your automation project may become unreliable. In our case, we wait for the “Already exists” dialog to appear, with a timeout telling us if we need to handle it or not:

Wait Active rule for already exists notice
props
Send text props

In the above example, the file will simply be replaced if it already exists.

Interference

A significant problem with Windows automation is interference. Essentially, the automation rules are sending keystrokes and mouse clicks to applications. If someone (usually a human being) also tries to enter keys or click the mouse at the same time, the automation is likely to fail. For this reason, automations should always run on a dedicated machine with no other activity.

When running a cluster of Tomorrow Software Server instances as a REST service, you need to prevent interference traffic from impacting automation requests. For example, a load-balanced clustered web-based service may receive heartbeat health-check request pings to confirm service availability, or other unwanted requests; such traffic needs to be filtered (not necessarily blocked) so that it is prevented from reaching the automation rule sets.

Parallel processing

A final issue to be aware of when running automations is that multiple concurrent automations also interfere with each other. For this reason, the best approach is to queue automations if they need to run on the same server. The easiest way to do this is with the “Launch Queued Process” rule:

Launch Queued Process rule

This rule will ensure that, Programmable Data Agent wide, only one automation rule set will run at any one point in time. However, the rules are not held up while these automation requests are queued.

If you need to wait for an automation process to complete before continuing, the best rule to use is “Wait for Queued Process”. This rule will place the automation request on the queue and will not continue until the automation has completed.

Windows automation as a service

Scaling up

Given that you can only run one automation process at any one point in time, you may need a load-balanced setup to share automation requests over multiple servers.

The best way to do this is by wrapping the automation request into a REST service and deploying it to multiple virtual server instances behind a load balancer in round robin mode.

Using this approach, the load balancer will find the next available server and distribute the load evenly.

A core virtual server instance should be created so that it can be cloned whenever more capacity is needed.

Set-up

The default BaseApp Tomorrow Software Server service instance is suitable for running as a REST service; please refer to the instructions file Read me.txt located in [Tomorrow-Software-Server-10.0.0]/BaseApp/ for set-up. Also refer to the Product Reference.pdf section entitled “Removing other unnecessary components” to remove the Tomorrow Software Console and other unwanted demo applications and server instances that are not required.

Please note that Windows Automation instances cannot be run as a service. They must be started using a .bat file in the Windows startup group.

Example to run at start up: Windows Server 2012

Modify Local Group Policy Editor > Administrative Templates > System > Logon > Run these programs at user logon

Run these programs at user logon

Enable this option, press Show, enter the following value, and apply/OK this configuration.

CMD /c "c:\Tomorrow\Tomorrow-Software-Server-10.0.0\Tomorrow.bat"

Where c:\Tomorrow\Tomorrow-Software-Server-10.0.0 is this example directory path.

Applying the updated configurations

When using this option you need to edit the default Tomorrow.bat file to add the following three lines before cd server to accommodate the start-up directory path, once again where c:\Tomorrow\Tomorrow-Software-Server-10.0.0 is the example directory path:

cd \
cd Tomorrow
cd Tomorrow-Software-Server-10.0.0
cd server

Active Desktop using RealVNC

A significant limitation with Windows automation (like most GUI automation tools) is that it requires an active desktop to run. So, when you log out of any remote desktop connection or lock the computer, automation is paused or stuck until you reconnect. It is therefore impractical to retain an open RDP connection for multiple Tomorrow Software Server instances when running as a REST service with high-availability demands. The following is a working example to overcome this limitation.

Example: In Windows Server 2012 set the Turn off the display option to Never in Control Panel Power Options.

Option Never in Control Panel

You still need a way for the remote server to have its head/desktop unlocked and active. The best way to do this is to use the VNC protocol rather than RDP. There are numerous VNC packages (server and client) available that are free and/or open source.

For this example, we have tested with RealVNC - https://www.realvnc.com/download/vnc/latest/ VNC for Windows version 5.2.3.

Please ensure you refer to the licensing terms at https://www.realvnc.com/products/vnc/ as a license key is required to install and use RealVNC for your environment and organisation.

RealVNC VNC Server uses modes to provide remote access to computers in different circumstances, to meet different needs.

VNC Server needs to be installed on the Tomorrow Software Server instance, and VNC Viewer needs to be installed on a ‘controller’ server.

Given the Tomorrow Software Console server will have access to the server instances, this server is a good candidate to run VNC Viewer, although a dedicated server with access to the instances can perform this connectivity too.

Connectivity between VNC Viewer and VNC Server

VNC Server installs and runs on default port 5900, so ensure any security group policies have been amended to permit connections on this port, together with the ports that are running the REST service. The BaseApp used as a REST service runs by default on port 10001, as defined in the rulesengine.properties settings.

RealVNC installation notes

During the standard RealVNC installation process, ensure you select the appropriate components for your REST service instance and Console Server or controller.

Controller = VNC Viewer
REST service instance = VNC Server

There is also an install option to add an exception to the Windows firewall during installation, but if you are still experiencing connection problems you may need to inspect your server firewall settings.

VNC Setup

Before starting the VNC Server service, it’s useful to know that all VNC applications are controlled by VNC parameters, set to suitable default values for most users out of the box.

Please refer to the following link for RealVNC parameter name reference information: https://help.realvnc.com/hc/en-us/articles/360002251297-VNC-Server-Parameter-Reference-

The easiest way to set the authentication scheme and credentials for the VNC Viewer controller in order to connect to VNC Server is to start the VNC Server (User Mode) desktop application.

VNC Logo

For example, set the simple authentication scheme using VNC password in the VNC Server – Options > Users & Permissions option as follows.

Creating a password for VNC

Once the authentication scheme and access credentials have been set, and licensing updated if required, ensure you stop the running VNC Server (User Mode) by pressing the More button, followed by Stop VNC Server, as follows.

Stopping VNC Server

The parameter IdleTimeout specifies the number of seconds to wait before disconnecting users who have not interacted with the host computer during that time. The default value for IdleTimeout is 3600 seconds, so you need to set this parameter to 0 so that idle connections are never disconnected. When running VNC Server as a Windows service, you add the IdleTimeout parameter in the Windows Registry Editor as follows.

  1. Using Registry Editor, navigate to HKEY_LOCAL_MACHINE\Software\RealVNC\vncserver.

  2. Select New > String Value from the shortcut menu and create IdleTimeout.

  3. Select Modify from the shortcut menu, and specify appropriate Value data, 0.

Updating a Value

With VNC Server successfully installed and parameters set, amend the VNC Server service with Startup Type set to Automatic.

Startup type set to Automatic

Also, the Allow service to interact with desktop option must be checked as follows.

Allow service to interact with desktop in VNC Server

With at least the IdleTimeout parameter set to 0, restart the server and start the VNC Server service.

You are now ready to connect to the Tomorrow Software server instance running VNC Server from the controller running VNC Viewer.

Connect to the Tomorrow Software Console server (or controller) using a standard Windows remote desktop connection; install the default VNC Viewer components, and start the VNC Viewer application from the desktop shortcut.

The VNC Viewer application will then prompt you to enter the host name or IP address of the REST Service server instance running VNC Server.

VNC Server Name and IP Address in VNC Viewer

With the ‘Let VNC Server choose’ option selected for encryption, you will be prompted as follows for the password set earlier in the VNC Server – Options > Users & Permissions option.

Authentication in VNC Viewer

If the connection is successful, VNC Viewer will open a connected window to the server, at which point you can log in using your Windows user credentials. If operating in a scaled cluster, repeat the process to make a VNC connection to each VNC Server instance.

VNC Server instances

Even with VNC Viewer closed, the desktop remains available, because VNC simply relays the host’s screen to your desktop (it works differently than RDP): when you disconnect, it just stops relaying. The relay works like a splitter connection; both the local head/monitor and the VNC Viewer have access.

By this design, VNC will continue to retain an active desktop even though you’re not connected over VNC, as long as the host desktop is logged in and not locked.

The environment – the Tomorrow Software Console server and the multiple connected REST service server instances defined in the Tomorrow Software Console server definitions – is now ready for use. The VNC Viewer windows residing on the Tomorrow Software Console server (or controller) can be closed, the remote desktop connection can be closed, and the desktop will remain unlocked and active.

TomorrowX Portal User Guide

This guide assumes the Tomorrow Portal repository has already been installed by following the steps provided within the installation guide.

Initial Steps

Upon first login to the portal application, you may choose to edit the Unassigned Company and role which are created by default. This is not required and if you prefer you may leave it with its default settings. Please note that you cannot delete this entry. This unassigned company acts as a fallback for users who have yet to be assigned a company and role once registered.

Grouped Permissions

Permissions are grouped by Company/Role/User.

When you first create a resource, e.g. a page, you will see that a permission is already assigned for that page. The default permission is always global, meaning that everyone can view, access, edit or download that resource; however, the resource is not yet active or visible to anyone. You will need to activate the resource in question before a user can view, access, edit or download it.

Possible Permission Combinations

  • When no company, role or specific user is selected, a global permission is assigned and no other permission is allowed to exist.

  • When only a specific company is assigned, then all roles and users within that company can view, access, edit or download that resource.

  • When only a specific company and specific role are assigned, then all users of that company and role can view, access, edit or download that resource.

  • When only a specific company, specific role and specific user are assigned, then only that user within the company and role selected can view, access, edit or download that resource.

The above combinations can exist in parallel between companies, roles and users in one resource.
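The resolution logic described above can be sketched in Python. This is an illustration only; the Permission class and field names are hypothetical, not the portal's actual data model:

```python
# Illustrative model of the portal's permission combinations: a permission
# grants access when every field it specifies matches the requesting user.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Permission:
    company: Optional[str] = None   # all three None = global permission
    role: Optional[str] = None
    user: Optional[str] = None

def grants(p: Permission, company: str, role: str, user: str) -> bool:
    if p.company is None:                      # global: everyone matches
        return True
    if p.company != company:                   # company-level restriction
        return False
    if p.role is not None and p.role != role:  # optional role restriction
        return False
    if p.user is not None and p.user != user:  # optional user restriction
        return False
    return True

# Company-wide permission: any role/user within that company qualifies.
p = Permission(company="tomorrowx")
print(grants(p, "tomorrowx", "Analyst", "alice"))   # True
print(grants(p, "othercorp", "Analyst", "bob"))     # False
```

Several such permissions can be attached to one resource in parallel, mirroring how company-, role- and user-level permissions coexist.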

Companies

Add a new company

Actions

  • Navigate to Companies > Add New or alternatively use the Add New button on the Company's page.

  • Complete the required fields and submit the form.

Fields

  • Company Name

    • Enter the company name, e.g. TomorrowX Limited

  • Domain check

    • Enter the company’s domain name, e.g. tomorrowx.com (without www)

  • Unique/Short Name

    • Enter a unique name for the company, e.g. tomorrowx

  • Approver(s) email

    • Enter a high-level e-mail address for the company. Used for approving employee actions, e.g. purchasing a training course or software trial license.

  • Accounting email

    • Enter an e-mail address for the company’s account dept.

  • Currency

    • Enter the 3-letter currency code for the company, e.g. USD

Edit an existing company

Actions

  • Navigate to Companies > View All and select the company you would like to edit by clicking the Edit link to the right of the company name.

  • Make any relevant changes to the fields and submit the form.

Fields

  • See Fields from 2.0 above +

  • Assign Menu

    • Select a menu to assign to this company - see 5.1 Menus - Add a New Menu

Assign Support Agents

Actions

  • Navigate to Companies > View All and select the company you would like to assign a support agent to by clicking the Support Agents link to the right of the company name.

  • Make any relevant changes to the fields and submit the form.

Fields

  • Assign a New Agent

    • Support Agent

      • Select a user from the list to assign as a support agent

    • Make Head of Support

      • Select this option to set the user assigned above as the Head of Support of a company. Only one agent may be assigned to a company as a Head of Support. To assign a new Head of Support agent, delete the existing one first and then add a new agent.

  • Assigned Agents

View/delete assigned agents

Assign an IP Range

Actions

  • Navigate to Companies > View All and select the company you would like to assign an IP Range to by clicking the IP Ranges link to the right of the company name.

  • Make any relevant changes to the fields and submit the form.

Fields

  • Assign a New IP Range

    • Start IP

      • Enter the starting IP address, e.g. 10.10.10.10. If no IP range is to be used, simply enter a single IP address here.

    • End IP

      • Enter the ending IP address, e.g. 10.10.10.255. If no IP range is to be used, leave this field empty.

  • Assigned IP Ranges

View/delete assigned IP Addresses
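The Start IP / End IP fields above define an inclusive range. The underlying comparison can be sketched with Python's standard ipaddress module; this is an illustration, not the portal's own implementation:

```python
# Check whether a client address falls inside an assigned IP range.
import ipaddress

def in_range(client: str, start: str, end: str = "") -> bool:
    ip = ipaddress.ip_address(client)
    lo = ipaddress.ip_address(start)
    # An empty End IP means the range is a single address (Start IP only).
    hi = ipaddress.ip_address(end) if end else lo
    return lo <= ip <= hi

print(in_range("10.10.10.42", "10.10.10.10", "10.10.10.255"))  # True
print(in_range("10.10.11.1", "10.10.10.10", "10.10.10.255"))   # False
```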

Roles

Add a new role

Actions

  • Navigate to Companies > Roles > Add New or alternatively use the Add New button on the Roles page.

  • Complete the required fields and submit the form.

Fields

  • Role Name

    • Enter a role name, e.g. Analyst

  • Assigned to Company

    • Select a company to assign the role to

Edit an existing role

Actions

  • Navigate to Role > View All and select the role you would like to edit by clicking the Edit link to the right of the role name.

  • Make any relevant changes to the fields and submit the form.

Fields

See Fields from 3.1 above +

Users

Add a new user

Actions

  • Navigate to Users > Add New or alternatively use the Add New button on the Users page.

  • Complete the required fields and submit the form.

NOTE: Before creating a user, be sure that the company as well as the role that the user will be assigned has already been created. See 2.1 Companies - Adding a new company.

A newly created user account is always set to Inactive and needs to be updated to an Active state before it can be used. See 4.2 Users - Edit an existing user below.

Fields

  • First Name

    • Enter the user's first name

  • Last Name

    • Enter the user's last name

  • Email Address

    • Enter the user's company e-mail address (must use the same company domain as below)

  • Company Domain

    • Select the user's company domain name

  • Mobile Number

    • Enter the user’s mobile number (used for receiving device verification codes, OTP codes etc.)

  • Company

    • Assign a company to the user - see note above.

  • Role

    • Assign a role to the user - see note above.

Edit an existing user

Actions

  • Navigate to Users > View All and select the user you would like to edit by clicking the Edit link to the right of the user.

  • Make any relevant changes to the fields and submit the form.

Fields

  • User Details

    • See Fields from 4.1 above +

    • User Internal Type

      • Select the appropriate user type, e.g. Client, Admin, Partner, Unassigned

    • User Account Status

      • Enable/disable the user’s account

  • Trusted Devices

    • View/delete user’s trusted devices

User stats

Actions

View user login stats on this page.

Menus

Add a new menu

Actions

  • Navigate to Config > Menus and select the Add New button on the Menus Overview page.

  • Complete the required fields and submit the form.

A newly created menu is always set to Inactive and needs to be updated to an Active state before it can be used. See 5.2 Menus - Edit an existing menu below.

Fields

  • Menu Name

    • Enter a name for the menu, e.g. Internal User’s Menu

Edit an existing menu

Actions

  • Navigate to Config > Menus and select the menu you would like to edit by clicking the Edit link to the right of the menu name.

  • Complete the required fields and submit the form.

Fields

  • See Fields from 5.1 above +

  • Status

    • Select whether this menu is active or inactive from the drop down menu

MENU LINKS

Add a new menu link

Actions

  • Navigate to Config > Menus and select the Menu Links button.

  • On the Menu Links Overview page, select the Add New button.

  • Complete the required fields and submit the form.

Fields

  • Link Name

    • Enter a name for the menu link, e.g. Downloads

  • Link

    • Enter the URL path of the link, e.g. user/downloads

  • Link CSS Class

    • Add a custom CSS class name to be used on the <li> element of the menu.

  • Link CSS Icon

    • Add a custom CSS class name to be used on the <i> element of the menu which displays the icon, e.g. fa fa-cog (see the Font Awesome icon reference for compatible icons).

  • Sub Links

    • Select whether this menu item will have child menu items (Active) or not (Inactive)

Edit an existing menu link

Actions

  • Navigate to Config > Menus and select the Menu Links button.

  • On the Menu Links Overview page select the menu link you would like to edit by clicking the Edit link to the right of the menu link name.

  • Complete the required fields and submit the form.

Fields

  • See Fields from 5.3.1 above +

Add sub menu links

Actions

  • Navigate to Config > Menus and select the Menu Links button.

  • On the Menu Links Overview page select the menu link you would like to add sub links to by clicking the Sub Links link to the right of the menu link name.

  • Complete the required fields and submit the form.

Fields

  • Assign New Sub Links

    • Sub link Name

      • Enter a name for the sub link, e.g. white papers

    • Sub link URL

      • Enter a URL for the sub link, e.g. user/downloads/white-papers

Assigned Sub Links

  • View/delete sub links from a main menu link

Set link permissions

Actions

  • Navigate to Config > Menus and select the Menu Links button.

  • On the Menu Links Overview page select the menu link you would like to set link permissions for by clicking the Permissions link to the right of the menu link name.

  • Complete the required fields and submit the form.

Fields

  • Assign New Permission

    • Company

      • Select a company from the drop down list to allow access to this menu item.

    • Role

      • If you would like to further restrict access to this menu link select a specific role for the company selected above.

    • User

      • If you would like to further restrict access to this menu link select a specific user for the company and role selected above.

  • Assigned Permissions

    • View/delete assigned menu link permissions

Set up an existing menu

Actions

  • Navigate to Config > Menus and select the menu you would like to set up by clicking the Setup link to the right of the menu name.

  • Complete the required fields and submit the form.

Fields

  • Assign New Links

    • Links

      • Select a link from the drop down to assign it to the currently selected menu

    • Sort Order

      • Enter a number in order to set the position of the menu item within the menu

  • Assigned Links

View/delete menu links

Pages

Add a new page

Actions

  • Navigate to Config > Pages and select the Add New button on the Pages Overview page.

  • Complete the required fields and submit the form.

Fields

  • Page Name

    • Enter a name for the page you are creating, e.g. Company Profile

  • URI Collection

    • Enter the URI collection (group of pages), e.g. profile

  • URI Controller (Page)

    • Add a URI controller name (page), e.g. pages

  • URI Method (Action)

    • Add a URI method (action), e.g. view

Edit an existing page

Actions

  • Navigate to Config > Pages and select the page you would like to edit by clicking the Edit link to the right of the page name.

  • Complete the required fields and submit the form.

Fields

  • See Fields from 6.1 above +

  • Status

    • Select whether this page is active or inactive from the drop down menu.

Set page permissions

Actions

  • Navigate to Config > Pages and select the page you would like to set permissions to by clicking the Permissions link to the right of the page name.

  • Complete the required fields and submit the form.

Fields

  • Assign New Permission

    • Company

      • Select a company from the drop down list to allow access to this page.

    • Role

      • If you would like to further restrict access to this page, select a specific role for the company selected above.

    • User

      • If you would like to further restrict access to this page, select a specific user for the company and role selected above.

  • Assigned Permissions

View/delete assigned page permissions

Downloads

Add a new download

Actions

  • Navigate to Downloads > Add New or alternatively use the Add New button on the Downloads Overview page.

  • Complete the required fields and submit the form.

Fields

  • File Name

    • Enter a name for your download, e.g. whitepaper 1

  • No. Of Downloads Allowed

    • Enter the max number of downloads allowed. The Limit Downloads field below must be set to Limited for this value to take effect.

  • Limit Downloads

    • In order to limit a file to a certain number of downloads, set this to Limited and set the total number of downloads allowed above.

  • File Upload

    • Choose a file to upload.

  • Mark as Featured Download

    • Select this in order to mark your file as a “Featured Download”. Featured downloads are displayed throughout the site in certain locations.

Edit an existing download

Actions

  • Navigate to Downloads > View all and select the download you would like to edit by clicking the Edit link to the right of the download item name.

  • Complete the required fields and submit the form.

Fields

  • See Fields from 7.1 above +

  • Visibility

    • Set whether the download is visible or not.

Set download permissions

Actions

  • Navigate to Downloads > View all and select the download you would like to set permissions for by clicking the Permissions link to the right of the download item name.

  • Complete the required fields and submit the form.

Fields

  • Assign New Permission

    • Company

      • Select a company from the drop down list to allow access to this download.

    • Role

      • If you would like to further restrict access to this download, select a specific role for the company selected above.

    • User

      • If you would like to further restrict access to this download, select a specific user for the company and role selected above.

  • Assigned Permissions

    • View/delete assigned file download permissions.

Download Stats

Actions

View file download stats on this page.

Solutions

Add a new solution

Actions

  • Navigate to Solutions > Add New or alternatively use the Add New button on the Solutions Overview page.

  • Complete the required fields and submit the form.

NOTE: Before creating a new solution, be sure that you have created the appropriate solution categories and solution industries. See 8.4.1 Add solution category and 8.5.1 Add solution industry.

Fields

  • Solution Name

    • Enter a name for your solution

  • Solution Image

    • Enter the filename of the image to use as the solution image. Be sure to include the file type extension.

  • Synopsis

    • Enter a short synopsis for the solution. Used on sub-solution pages.

  • Description

    • Enter a longer description for the solution which will be displayed on the individual solution page. Use the WYSIWYG editor to style the information entered.

  • Deployment Requirements

    • Enter any deployment requirements for the solution which will be displayed on the individual solution page. Use the WYSIWYG editor to style the information entered.

  • Category

    • Select a category which the solution belongs in.

  • Industry

    • Select an industry which the solution belongs in.

  • Mark as Featured Solution

    • Select this to mark the solution as a Featured Solution. Featured solutions are displayed throughout the site in certain areas.

Edit an existing solution

Actions

  • Navigate to Solutions > View all and select the solution you would like to edit by clicking the Edit link to the right of the solution item name.

  • Complete the required fields and submit the form.

Fields

  • See Fields from 8.1 above +

  • Status

    • Set whether the solution is active or inactive.

Set solution permissions

Actions

  • Navigate to Solutions > View all and select the solution you would like to set permissions to by clicking the Permissions link to the right of the solution name.

  • Complete the required fields and submit the form.

Fields

  • Assign New Permission

    • Company

      • Select a company from the drop down list to allow access to the solution.

    • Role

      • If you would like to further restrict access to a solution, select a specific role for the company selected above.

    • User

      • If you would like to further restrict access to a solution, select a specific user for the company and role selected above.

  • Assigned Permissions

    • View/delete assigned solution permissions.

Solution Categories

Add a new solution category

Actions

  • Navigate to Solutions > Categories and select the Add New button.

  • Complete the required fields and submit the form.

Fields

  • Category Name

    • Enter a name for the solution category.

  • Category Description

    • Enter a description for the category. Used on solution category pages.

  • Category Image

    • Enter the filename of the image to use as the solution category image.

Edit an existing solution category

Actions

  • Navigate to Solutions > Categories and select the solution category you would like to edit by clicking the Edit link to the right of the solution category name.

  • Complete the required fields and submit the form.

NOTE: Setting a solution category to Inactive will hide all solutions assigned to it.

Fields

  • See Fields from 8.4.1 above +

  • Status

    • Set whether the solution category is active or inactive.

Solution Industries

Add a new solution Industry

Actions

  • Navigate to Solutions > Industries and select the Add New button.

  • Complete the required fields and submit the form.

Fields

  • Industry Name

    • Enter a name for the solution industry.

  • Industry Description

    • Enter a description for the industry. Used on solution category pages.

Edit an existing solution industry

Actions

  • Navigate to Solutions > Industries and select the solution industry you would like to edit by clicking the Edit link to the right of the solution industry name.

  • Complete the required fields and submit the form.

Fields

See Fields from 8.5.1 above +

Web Stats

View web stats

Actions

  • Navigate to Web Stats.

  • View user web stats in the table displayed.

    • Clicking on table rows will expand the data displayed for a user.

  • You may add an alias for a specific entry to keep track of it.

  • View User's Click Path displays individual pages which the user has visited.

    • Clicking on table rows will again expand the row and display more Geo IP information for the specific page which the user has visited.

Portal Settings

Edit portal settings

Actions

  • Navigate to Web Stats > Settings.

  • Complete the required fields and submit the form.

Fields

  • Support Email

    • Enter an email address to use as a support email sender/receiver.

  • Info Email

    • Enter an email address to use as an informative email sender/receiver.

  • Auto Respond Email

    • Enter an email address to use as an auto response email sender/receiver.

  • Maintenance Mode

    • Enable/disable site maintenance mode.

  • Trusted Device Limit

    • Set the max number of trusted devices for users.

DNS Multi Protocol

In the following case study, we will explore adding a new protocol (DNS) to the capabilities of Tomorrow Software.

For simplicity, we will restrict this to just a single DNS A record.

We will show how to proxy the protocol, how to modify the data coming back from the DNS server and how to capture a network packet and use it later as a template for requests from non-Multi-Protocol input adaptors.

Defining the protocol

This case study assumes that you intend to work with a brand new protocol; if you are using a predefined protocol (such as MySQL or Telnet), you can skip this section.

Before you can begin to work with a new protocol, you need to define it. In this case study we will create a basic DNS A Record protocol interpreter. It is not a complete DNS implementation, but it will serve well as an example of how to use the multi-protocol capabilities of Tomorrow Software.

The DNS protocol explained

The DNS protocol was chosen for this case study due to its simplicity and because it is well documented.

A simple internet search for “DNS Packet Format” will provide the complete details, but the following is a simplified primer.

At its core, it has the following structure in both the request and response:

A header block:

Followed by the actual questions or answers block. Questions contain the domain being queried, followed by two 16 bit fields, the first of which is the question type (1 = A record, 2 = NS record and so on) and the second of which is the question class (always 1).

The domain name being queried will have its dots removed and each section of the name is supplied with a leading byte providing the section length, followed by a zero byte to indicate all sections have been provided. For example:

labs.tomorrow.eu will be turned into: [4]labs[8]tomorrow[2]eu[0]

Breaking down the protocol with protocol rules

Before we can start doing anything with the DNS packets, we need to break them down and make them available to our normal rules. We do this in the administration section under “Protocols”.

Just like normal rules, start by creating a rule set named dns_in (as shown) and open it in the rule editor.

You will notice that the rules catalogue for protocols is much smaller than the regular rules catalogue:

You can explore these rules to get a feel for what is available.

Before starting to write the rules, it is important to understand streams, protocol variables, VAO variables, VAO stream variables and stream windows.

Multi-Protocol Streams

Whenever a packet is read using the Multi-Protocol server version of the Programmable Data Agent, it will be read in the form of a stream. For almost all protocols there are two streams: request and response. It is the job of the Multi-Protocol server to break down the binary content of the stream into variables that can be used and manipulated by the regular Programmable Data Agent.

The regular Programmable Data Agent is then capable of modifying the content of the stream before proxying it to the real target server. Upon a reply from the real server, the reply will also be treated as a stream and can equally be broken down and manipulated or simply returned to the original requester.

VAO Variables

Setting a VAO variable directly refers to setting variables in the input for the regular Programmable Data Agent, when the Multi-Protocol server hands over control to the regular Programmable Data Agent.

Protocol Variables

To help the protocol rule writer control the workflow around breaking down a protocol, a set of variables known as protocol variables are used. These are basically String objects, and unlike the regular rules, can be treated as such. This means that assignments to a protocol variable via the Set Variable rule can use all of the regular Java language conventions such as:

Notice the use of “”+ in the last example. This is a convenient way to convert a Java integer into a String object.

VAO Stream Variables

VAO Stream variables on the other hand are directly tied to the request or response stream. If you modify a VAO Stream variable within the regular Programmable Data Agent, then the underlying stream will also be modified. VAO Stream variables use format converters so that the underlying stream can be a binary field, but it will be presented as a regular integer (or some other valid representation) in the regular Programmable Data Agent.

VAO Stream windows

VAO Stream windows are used to handle the very common occurrence where part of a protocol stream may contain information such as the length of another part.

A classic example of this would be the “Content-length” header in a HTTP response stream.

If you designate that a VAO stream variable is also a stream window, any modifications that you make to the content of the stream window will automatically be reflected in the value of the variable.
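The Content-Length example can be sketched in Python to illustrate the idea: when the windowed portion of the stream (the body) changes, the variable tied to it (the header value) is updated to match. This shows the concept only; it is not how the product implements stream windows:

```python
# Replace an HTTP response body and keep the Content-Length header (the
# "variable") consistent with the body (the "stream window").
def replace_body(response: bytes, new_body: bytes) -> bytes:
    head, _, _ = response.partition(b"\r\n\r\n")
    lines = []
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            # Rewrite the header to reflect the new window length.
            line = b"Content-Length: %d" % len(new_body)
        lines.append(line)
    return b"\r\n".join(lines) + b"\r\n\r\n" + new_body

resp = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
print(replace_body(resp, b"goodbye"))
```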

Breaking down the request stream

We are now ready to create our first protocol rules. Return to the dns_in rule set we opened earlier.

According to the protocol definition, the first field we need to read from the stream is the message ID. We do this by adding a “Read Fixed Data Type” rule:

And setting the properties as follows:

Let’s examine what is going on here:

  1. We are using a “Fixed” data type. This refers to data types that have a fixed unchangeable length within the stream. In our case we pick an unsigned integer, MSB first (MSB referring to Most Significant Byte).

  2. We set the length to 2 bytes.

  3. We picked a variable name of messageid. This is the protocol variable name.

  4. We specified that the stream we are going to work with is the request stream. This is optional as the Multi-Protocol server version of the Programmable Data Agent is smart enough to know the main stream being worked with, however for clarity, it is recommended to specify it.

  5. We specified a Stream Variable name of MessageID. This means that when the regular Programmable Data Agent is invoked, it can access and modify the MessageID variable, which will have a direct impact on the stream.

Next, we wire the rule up to the rule set entry point, and also add the “Abort Connection” rule so that we can handle protocol failures gracefully.

The next couple of bytes contain the DNS flags. Since they are bit level, we will read the two bytes as a binary string of 0s and 1s. We once again use the Read Fixed Data Type rule, but this time set the following properties:

This will ensure that the value contained in these 16 bits is represented as a string, looking something like this: “1000010110000000”, where each bit signals a particular meaning as per the DNS protocol specification.
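What the two rules so far extract from the stream can be sketched in Python: a 2-byte unsigned integer, MSB first, for the message ID, followed by 2 bytes of flags exposed as a string of 0s and 1s. This is an illustration of the byte layout, not the product's rule syntax:

```python
# Parse the first 4 bytes of a DNS packet: message ID and flags.
import struct

def parse_header_start(packet: bytes):
    (message_id,) = struct.unpack(">H", packet[0:2])  # unsigned, MSB first
    (flags,) = struct.unpack(">H", packet[2:4])
    return message_id, format(flags, "016b")          # 16-bit binary string

mid, flags = parse_header_start(bytes.fromhex("1a2b85800001000000000000"))
print(flags)  # 1000010110000000
```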

We follow this with 4 simple rules to read the question and answer count:

Each of these new rules reads a 2 byte unsigned integer and is wired to the “Abort Connection” rule on failure.

Next, we need to deal with the actual query payload. For a simple query, this means handling the variable number of elements in the domain name being queried. Theoretically, more than one query could be included in a single DNS request, however for the sake of simplicity, we are ignoring that for now.

The full construct of breaking down the domain name looks like this:

What is happening here is as follows:

  • The Count and DomainElement variables are each being set to the value “1”.

  • The while loop is then created using the following properties:

  • This is followed by a Read Data Type, which is capable of reading a set of bytes with a variable length. In this case, it is the length prefixed String. The properties to perform this read are as follows:

Finally, the Count variable is incremented, using the technique described earlier:
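The whole loop can be sketched in Python as follows. It reads length-prefixed labels until a zero length byte is found, storing each label under a numbered name to mirror the Count/DomainElement protocol variables (the naming is illustrative, not the product's rule syntax):

```python
# Break a length-prefixed DNS name into DomainElement1, DomainElement2, …
def read_domain(stream: bytes, pos: int = 0):
    labels, count = {}, 1
    while True:
        length = stream[pos]
        pos += 1
        if length == 0:  # zero byte: all sections have been provided
            break
        labels[f"DomainElement{count}"] = stream[pos:pos + length].decode("ascii")
        pos += length
        count += 1
    return labels, pos

labels, end = read_domain(b"\x03www\x07testing\x03com\x00")
print(labels)
# {'DomainElement1': 'www', 'DomainElement2': 'testing', 'DomainElement3': 'com'}
```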

All that remains in breaking down the protocol request is to read the query type and class. As both are simple 2 byte unsigned integers, we can do this with ease:

Finally, we tell the Multi-Protocol server to hand over to the Programmable Data Agent:

The complete rule set looks like this:

Making the protocol rule set available in rules

Before we can use the protocol rule set in rules, we need to give it a short description and check the box that allows rules access:

Setting up Tomorrow Software rule sets

We now have everything we need to perform a test of our protocol breakdown.

The next step is to create the regular rule sets that are going to set up a port to listen on and receive the packet.

Start by creating a new repository and create a rule set called DNSStart. It will only have one rule:

Save the rule and then create another rule set called DNSMain. It will also only have one rule:

The configuration for these two rule sets is also very simple:

It is now time to start up the stand-alone Multi-Protocol server instance. It is found in the Multi-Protocol folder. The easiest way to do this is to execute either the tomorrow.bat file or the tomorrow.sh file.

Once the instance is running, you can deploy your new configuration to it and start it.

Note: If you are not seeing any Multi-Protocol server instances when you try to deploy, please check that you have a server defined with the server type Multi-Protocol, and that it is configured to the correct management port of your Multi-Protocol server instance.

If you check the log for the Multi-Protocol server instance, you should see the following message:

First test

We now need a tool that can easily send DNS packets to any given DNS server. There are many options available on the net; DNSDataView was chosen for this example.

The first thing we will do is trigger a simple DNS A Record retrieval packet against our Multi-Protocol server instance:

This will obviously not generate a reply yet, but you will see the packets generated in the console output. There will be several, because DNSDataView retries 5 times:

As you can see, the query is broken down into stream variables that are all on the request stream. Also notice how the protocol rules have sliced the query neatly into its three parts: www, testing and com.

Breaking down the response stream

Now that we know the request stream is working, we can proceed to create the protocol rules for the response stream as well. Fortunately, for DNS this is very simple, at least if only dealing with a single DNS A record as in this case study. The first part of the response stream is essentially an echo of the request.

So, start by copying dns_in to a new protocol ruleset named dns_out.

Then we modify each rule to point to the “response” stream and provide a new name for each stream variable by adding the letter R in front of each name:

Once done, we can proceed to read the actual response data.

Things get a little tricky here, because the designers of the DNS protocol lived in a time when bandwidth was a scarce resource, so they built “compression” into the protocol.

The way they did it was by manipulating the first bit of the length field of the reply. If this bit is set, then the actual site name (www.testing.com) being replied to can be found by using the rest of the bits + the following byte to create an offset to where the name can also be found in the packet. However, given that in this example we only have one query, we will ignore that and just read the bytes:

With that in mind, the rest of the dns_out protocol rule set becomes fairly simple:

All of the above rules simply read unsigned integers MSB first. Not all have the same length though. RPointer, RType, RClass and RLength are all 2 bytes long. RTTL is 4 bytes long and RIP1 – RIP4 are all 1 byte (each part of the IP address).

The final step to complete the rule set is to name it and allow it to be used in rules:

Proxying a multi-protocol packet

We are now ready to proxy our protocol packet and do something useful with it. We need to return to the regular DNSMain rule set and make some changes:

The first rule you see above is actually the “Proxy Input Request” rule. However, once you change the selected protocol, it automatically changes its name to the protocol it is using.

The complete properties used are:

The Host name/IP shown above is Google’s DNS server. You could choose to use your own to complete this case study.

Testing the proxy

Deploy the dns_example configuration to the Multi-Protocol server instance and restart the rule set. Then go back to DNSDataView and get ready to launch another query. Since we are using Google’s DNS server, we are going to query “www.google.com”.

This time, we get a reply:

And the console shows that the proxy worked:

Looking through the various stream variables, the significant ones are RIP1-4, which tell us that “www.google.com” can be found at 216.58.199.36.

Manipulating a stream

We will now use the regular rule set to manipulate the response stream.

You may notice that the “Set Variable” rule is used to change the RIP4 value to 100. As the Multi-Protocol server version of the Programmable Data Agent is a two-way mapping of the variables to the stream, changing one of the variables also changes the stream.

We will demonstrate this by deploying the rule sets and re-launching the DNS request:

As the tool shows, we have just changed the output of a DNS request in real time.

The usefulness of this is probably limited (given the recursive nature of DNS), but one example could be making the DNS server respond with a different IP address based on the requester's physical location, or setting up internal honeypots.

Crafting protocol packets within rule sets

Proxying packets using the Multi-Protocol server instance is one way to use the protocol packets. There may be times when you wish to use a protocol to access an external service. However, crafting network packets by hand is incredibly time consuming and error-prone.

To get around this, Tomorrow Software includes a feature to capture a packet and use it as a template. Capturing a packet is incredibly easy. Simply modify the rule set to write the stream to a file:

Once you have a captured packet, you can easily modify it using simple stream variables. The following shows how the captured packet is read before being sent to the test DNS server using the “Write Stream to Server” rule:

Using this approach, we have added DNS lookup capability to rules using no code whatsoever.


For reference, the DNS header is made up of six 16-bit fields, laid out across bits 0..7, 8..15, 16..23 and 24..31:

| Field | Description |
| --- | --- |
| Message ID | A unique number that the sender can use to tie a response to a request |
| Flags | 16 bit flags. The most important of those is the first bit, which is 0 for a query and 1 for a response |
| Number of questions | A simple 16 bit count of questions |
| Number of answers | A simple 16 bit count of answers |
| Number of authoritative answers | A simple 16 bit count of answers that are authoritative |
| Number of additional answers | A simple 16 bit count of additional answers |


Two Factor Authentication

With online fraud levels ever-increasing, most if not all companies are introducing additional methods of identifying their customers. One popular approach is via a method known as two-factor authentication (or 2FA).

Two-factor authentication consists of requiring online users to identify themselves through an additional method after they’ve logged in with their standard username or password. This could be via the use of a random token generating device or app, or by sending a one-time password to the user’s email address or mobile phone.

Two-factor authentication via an SMS token sent to a user’s mobile phone remains popular, and the cost to the company and its customers is minimal.

One point to be aware of, though, is that the organization must be reasonably confident that the mobile number data it holds does in fact belong to its customers. It would be prudent to create additional rule sets triggered when a customer attempts to change their mobile phone number; however, that is outside the scope of this case study.

In this case study we will outline what is required to deploy a two-factor SMS authentication request seamlessly into an existing application using in-built rules that ship with Tomorrow Software.

Planning the rules

The first step of any rule writing is to determine what to do and how it can be accomplished. Drawing flow charts can be extremely helpful.

Below is a basic example flow chart of how Tomorrow Software may implement a two-factor SMS request.

two-factor SMS implementation

Before beginning, you will need to answer the following:

  1. Where is the login page and where does it go to authenticate the user?

  2. Where is the data that holds the user’s mobile phone number?

  3. What should the rule set do if there is no mobile phone number for a user?

  4. What are the technical details for sending SMS messages?

  5. How long should the Programmable Data Agent wait for a correct response?

  6. How many times should the rules allow someone to enter an incorrect response, and what should happen once that limit is reached?

In this case study we will use the in-built SMS aggregator Kapow to send our messages. Your own environment may use internal SMPP calls or different aggregators, which may require you to write your own extension.

Extension writing is outside the scope of this case study but is relatively straightforward for a Java developer.

Getting started

Start by creating a new repository called “Two Factor Example”.

It’s recommended that the processes involved in sending a two-factor message, checking the existence of a two-factor request and checking the response against the stored value, be separated into different rule sets. This provides ease of maintenance in the future, and also allows you to turn two-factor authentication on and off, or change out functionality quickly and easily.

So, keeping this in mind, you should create the following blank rule sets:

  1. TwoFactorLoad – this rule set will be loaded initially and determine whether a two-factor request should be made based on the user’s login status.

  2. TwoFactorCheck – this rule set will check whether there is an existing two-factor request in place and display the embedded two-factor response page if required.

  3. TwoFactor – this rule set will generate the random token and embed it into the message template.

  4. TwoFactorLookup – this rule set will look up the user’s mobile phone number from the database.

  5. TwoFactorSend – this rule will send the message to the user’s mobile phone via Kapow.

Designing the user interface elements

With our two-factor authentication, we need to provide a page that will allow users to enter the token they receive via SMS. This page only needs to be very simple, with an introduction explaining what the user needs to do and a form field for them to enter their token. We will also need two additional pages:

  • One for an incorrect two-factor response,

  • And one for a two-factor time out, since the user will be given a limited time to complete the task.

Within your own web application environment, you will wish to design your pages to fit in with the site’s look and feel, but for this example we will keep it very simple.

You can use the inbuilt content editor to create your pages. To do so, follow the steps below.

  1. Expand the “Content Files” menu item and select “Two Factor Example”.

  2. Create a new file called “twofactor.html”.

  3. Copy the below HTML to your clipboard:

<html>
<head><title>Two Factor</title></head>
<body>
<h3>Two Factor Authentication Request</h3>
<p>A two factor token has been sent to your nominated mobile device. You have five minutes to enter the token in the field below.</p>
<p>This process is a part of our ongoing efforts to prevent online fraud. We apologise for any inconvenience caused.</p>
<form action="twofactor.html" method="POST"> <strong>Two Factor Token: </strong> <input name="tokenresponse" type="text" /> <input type="submit" value="Send Token" /> </form>
</body>
</html>
  4. Update the "twofactor.html" file from the console. The embedded HTML editor will open.

  5. Click on the HTML button to go to the HTML text.

  6. Paste the HTML shown above into the editor and click "Save".

  7. The page should now look something like this:

Our two factor authentication form
  8. Continue the above process for the following two files. Create new content files called:

    1. twofactorerror.html

    2. twofactortimeout.html

  9. As per above, update each file, click the HTML button and paste the following HTML for each file:

twofactorerror.html

<html>
<head><title>Two Factor</title></head>
<body>
<h3>Two Factor Authentication Error</h3>
<p>The response provided was not correct. Your session has been invalidated. Please log on again.</p>
</body>
</html>

twofactortimeout.html

<html>
<head><title>Two Factor</title></head>
<body>
<h3>Two Factor Authentication Timeout</h3>
<p>Sorry. It took too long to respond to our request. Please try again.</p>
</body>
</html>
  10. Save your files. Your file structure within Content Files should now look as follows:

Saved files
  11. In our example, File Reader rules will be used to read these HTML files. Download each file from Content Files and upload it to the Data Files repository; all files used by File Reader rules must be accessible to the Programmable Data Agent from the Data Files location.

SMS Token Message

Before we begin writing our rule sets, there is one more data file we will create. This file will be a plain text file that will contain the token and SMS message that will be sent to our users.

Begin by creating a new text document in Notepad. Copy and paste the following text into your blank document.

Your two factor token for XYZ Company is [token]. Please enter this token into our website to continue. If you are not currently logging into our website, please contact our customer service team on 01234 5678.

Save the text document as “twofactor.txt”.

Next, go to the “Data Files” section of your Tomorrow Software console. Select the “Two Factor Example” repository from the drop-down list and click the “Browse” button to select the file just created.

Next, click the “Upload” button to upload your file to the console. All files should now be saved within Data Files as follows:

saved files

Two-factor Authentication Rule Sets

As mentioned above, we have five rule sets to deal with a two-factor authentication request. Although all functionality could be contained within a single rule set, we have split it into discrete chunks that each handle a different aspect of the process.

TwoFactorLookup Rule Set

This rule set will handle looking up the user’s mobile phone number from our local database.

To begin with, use the SQL Lookup rule to look up the user’s mobile number in our USERS database. In your web applications, of course, the database, table and field names will differ, but in this example, we are using a database called USERS with a table called “Users” looking for a field called “mobile” where the field “userid” is equal to the variable “userId”.

TwoFactorLookup rule set

Examine the above image to see how we have stored the result from the field “Mobile” into a variable called “MOBILE”. If the record is found, we use the If Condition rule to check that there is actually a value in the MOBILE variable – if there is, we exit the rule set with the value “Continue”. Otherwise we exit with the value “Not Found”.

You can find the Exit Rule in the “Flow” group of rules.

TwoFactorSend Rule Set

This rule set handles sending the token to the user’s mobile handset. This token will be set in the TwoFactor rule set in the variable we will name TOKEN.

The user’s mobile number, as you have seen, has been set in the TwoFactorLookup rule set.

We will use the File Reader rule to read the twofactor.txt file we created earlier into a variable.

TwoFactorSend rule set

Next, we will replace the token with the actual token created by our “TwoFactor” rule set by using the String Replacer rule.

token replaced in the rule set properties

Then we will use Kapow to send the message to the mobile number we found in the “TwoFactorLookup” rule set.

Send Kapow SMS rule set added

IMPORTANT: You will need your own Kapow username and password in the credentials vault to use the service.

Next, we exit the rule set with either “Continue” for a successful send, or “Failed” for a failed send.

TwoFactor Rule Set

This rule set will initialize a two-factor request and save the following variables to the system: a flag that a two-factor request is in progress, what the token actually is, and what the time limit is for the request.

To begin this rule set, we need to set a time stamp as an expiry and create a random token. Next, we need to pass through the TwoFactorLookup and TwoFactorSend rule sets we created earlier.

Use the Timestamp rule found in the “Variable Marking” group followed by the Calculation rule found in the “Math” group to create a time limit.

TwoFactor rule set

Note that timestamps are in milliseconds, so we need to add 300,000 to the current TIMESTAMP variable to get a time five minutes into the future.

Next, we will create a random numeric token by using the Random Number rule, also found in the “Variable Marking” group. Create a random number with 8 digits and save it to a variable named TOKEN.

Random Number block added

Now we can look up the user’s mobile number and send the SMS message to their phone. To do this, use the TwoFactorLookup and TwoFactorSend rule sets from the “Rule Set” group.

We must remember to set the session variables that tell us a two-factor request has been sent, what the time limit is, and what the token is.

First though, we need to set a variable TWOFACTOR to “Y” to tell us that we are in the middle of a two-factor request. Use the Set Variable rule to do this.

Set Variable properties

Next, we can use the HTTP Session Writer rule set to assign the three variables to the session.

HTTP Session Writer block added

Finally, we need to display the two-factor response page to the user.

To do this, we must first save the HTTP request so that later on, if the user enters the correct token in a timely manner, we can restore the application to its normal flow. Use the HTTP Request Saver rule set to do this.

Next, we use the File Reader rule to read our “twofactor.html” file into a variable for display. We will call this variable RESPONSE.

HTTP Request Saver and File Reader blocks added

Finally, we just need to display this content back to the user, followed by a Set Completed rule to tell the system not to go any further.

Set Completed rule added

TwoFactorCheck Rule Set

This rule set will check whether or not a two-factor request is in progress, and deal with any responses or timeouts the system may encounter. This rule set will use a combination of rules we have previously encountered.

The first thing to check is whether or not the time limit has passed.

To do this, we create a new timestamp called TIMESTAMP_NOW and subtract it from the stored expiry TIMESTAMP; if the result, TIME_REMAINING, is greater than zero, we know the two-factor session is still valid. If not, we will read the “twofactortimeout.html” file and respond back to the user.

TwoFactorCheck rule set
If Condition rule added

If there’s still time left on the authentication process, we then need to check whether or not a response has been entered, and if it has, whether or not it is the correct one.

In our HTML form we set the field name to “tokenresponse” so this is the name of the variable we must check.

If Condition rule for checking whether a response has been entered

If there is a value, we check it against the variable we set earlier called “TOKEN”. If there is no value, or the value is incorrect, we use the File Reader rule to read the “twofactorerror.html” file and display it back to the user.

Additionally, we will reset the TWOFACTOR variable so that the system knows not to check again.

Optionally, we may redirect the user to a specific logout page, but in this example, we will not do this.

more rules for the TwoFactorCheck rule set

If the user has entered the correct response, we will reset the TWOFACTOR variable to “X” so that the rule sets know that the user has already been authenticated.

Finally, we will use the HTTP Request Restorer to place the user back into the original application flow.

Final structure for TwoFactorCheck rule set

TwoFactorLoad Rule Set

Lastly, we will create the TwoFactorLoad rule set, which brings together all of the previous rule sets. This rule set determines whether or not we need to check for a two-factor request, which only needs to happen if a user has been authenticated by the system, and only on non-media content (for example, not images, stylesheets, JavaScript, et cetera).

TwoFactorLoad rule set

Using the Name Splitter rule we can split the URI variable to determine the extension.

In our example we are running JSP pages, so we only want the rule set to continue if the content has the extension “jsp” and the user is currently logged in.

There are several ways to determine if a user is logged in, and which method you use will be dependent upon your specific web application. There may be a cookie or session variable that we can read, or perhaps your web application has a specific URI or query string for pages that are available to logged in users only.

In this case study we will assume that a cookie with the user’s id has been set on login.

We will use the Http Request Tracker rule to expose all cookies. The rule actually exposes all request information into separate variables, but in this case, we are only interested in the “userId” cookie.

HTTP Request Tracker rule added
  • If the userId cookie is not set, we simply exit the rule set.

  • If it is set, we must find out whether we need to initiate a two-factor request, check an in-progress request, or ignore it because the two-factor request has already been successfully processed.

First, we will use the Http Session Reader rule to place the relevant session variables into variables our rule sets can query. We will store the TWOFACTOR, TIMESTAMP and TOKEN session variables into local variables.

properties

Next, we use the Switch rule to check the contents of the TWOFACTOR variable.

This is the variable that tells us exactly what we should do.

If the variable is not set, then we need to initiate a two-factor request. If the variable is set to “Y” then a request is already in progress, so we need to look for a token response or time out. If the variable is set to “X” then we know the user has already successfully performed the two-factor authentication, and we can pass them back to the application.

TwoFactorLoad rule set

Use the “Add Chain Point” button to add the “Y” and “X” points to the Switch rule.

Then, connect each chain point to the relevant rule set (found in the “Rule Sets” group) or set completed for already authenticated users.

Setting up the external database

Before you can deploy your rule set, you need to ensure that your database server is set up correctly, assuming that you need to retrieve the user’s mobile number from an external database.

In the following example, we will connect to a MySQL database – however, the process is similar for all JDBC drivers.

The Tomorrow Software Server ships with the Derby database driver, but you can easily add new database drivers to the application. The first thing you need to ensure is that the driver to the database is available in the class path of the program or application that is running Tomorrow Software.

For the Tomorrow Software Server itself, the location is /server/lib/ext/jdbc (we recommend that you create a folder in that location named mysql and that the driver jar file is placed in there).

The MySQL JDBC driver is available from http://dev.mysql.com/downloads/connector/j/

Next, you need to create the Database Connector in Tomorrow Software by clicking the Database Connectors link on the menu.

Simply enter in the class name, URL prefix (e.g., the location of the primary server to access), username and password required to access the database.

Click “Create” and your database is ready to access.

Create a MySQL database

Setting up the configuration file

Finally, you can set up your configuration file. Click the Configurations menu and select the “Two Factor” repository from the drop-down list. Enter some basic information about the rule to load and the databases required.

The following screen shots show the information required for the “General”, “Input Source” and “Databases” tabs.

Creating new Configuration
Input source tab

For the “Databases” tab, click the “+” icon to add a database, type the name of your database and select our newly created MySQL driver from the list.

Databases tab

You can now click the “Create” button to create the configuration file. Once created, click the “Deploy” button to deploy it to the server.

Future considerations

The above case study shows how to implement two-factor authentication in a specific environment, though of course each individual application will differ.

You will also need to consider how you wish to handle users for whom you do not have a mobile number – alternatives could include email, or perhaps you have some kind of external token generator.


Customer Satisfaction Survey

This case study will show you how to inject a random customer satisfaction survey into the user experience on a site.

We will use a flight recorder to graph the responses and collate comments from the users.

Planning the rules

The first step in implementing our customer satisfaction survey is to create a plan of what we intend to do, and how we wish to go about it. It is often a good idea to write this down in plain English and then use that text as a guide whilst designing the rule structure. In this case, the plan reads like this:

  1. We want to ask random customers about their experience with our site.

  2. We want to have the survey appear on our main page after log-in.

  3. We want to make the survey experience as quick and painless as possible to get the maximum potential responses.

  4. We are going to use the Tomorrow Software flight recorder feature to graph and view the responses.

Getting started

In this case study we are going to split the decision points into three discrete components, following the recommendations mentioned elsewhere in this manual. So, start by adding a new repository called "Customer survey" and create three blank rule sets:

  1. "SurveyLoad", which is the rule set that will pre-check our survey and make sure all of the data we need is collected before we start the survey process.

  2. "SurveySelection", which is the rule set that will determine if a user is selected for a survey.

  3. "Survey", which is the rule set that will contain the survey logic itself.

Designing the user interface elements

The plan involves injecting a customer survey on top of the user experience. We can do this as a pop-up window or we can simply overlay it on top of the site using JavaScript. Given that most users block pop-ups by default these days, the latter seems like the better option. We want to keep the survey itself as pure HTML, so a little bit of basic JavaScript will take care of it:

<script>
var tbody = document.getElementsByTagName("body")[0];
var tnode = document.createElement('div');
tnode.style.position='absolute';
tnode.style.top='0px';
tnode.style.left='0px';
tnode.style.overflow='hidden';
tnode.style.display='none';
if( document.body && ( document.body.scrollWidth || document.body.scrollHeight ) ) {
var pageWidth = document.body.scrollWidth+'px';
var pageHeight = document.body.scrollHeight+'px';
} else if( document.body.offsetWidth ) {
var pageWidth = document.body.offsetWidth+'px';
var pageHeight = document.body.offsetHeight+'px';
} else {
var pageWidth='100%';
var pageHeight='100%';
}
tnode.style.opacity=0.4;
tnode.style.MozOpacity=0.4;
tnode.style.filter='alpha(opacity=40)';
tnode.style.zIndex=1000;
tnode.style.backgroundColor='black';
tnode.style.width= pageWidth;
tnode.style.height= pageHeight;
if (parseInt(tnode.style.height)<700) tnode.style.height='700px';
var ctrNode = document.createElement('div');
ctrNode.style.position='absolute';
ctrNode.style.top='10%';
ctrNode.style.left=parseInt((parseInt(pageWidth)-500)/2)+'px';
ctrNode.style.backgroundColor='white';
ctrNode.style.zIndex = 1001;
ctrNode.style.width='500px';
ctrNode.style.height='500px';
ctrNode.innerHTML = '<iframe width="100%" height="100%" name="survey" src="survey.html"><\/iframe>';
tbody.appendChild(ctrNode);
tbody.appendChild(tnode);
tnode.style.display='block';
</script>

You can copy and paste the above JavaScript code into a file named "showsurvey.js" and upload it to the "Data Files" section of the "Customer survey" repository.

The above JavaScript will essentially grey out the application itself and overlay an HTML file named "survey.html" on top.

Now we need to create the survey HTML itself. Once again this involves basic web design skills. The end goal is a page that looks something like this:

generated interface

The easiest way to create the HTML is to follow these steps:

  1. Create a subfolder called "Qwerty" under the "Content Files" section of the "Customer survey" repository.

  2. Add a new file under the "Qwerty" folder named "survey.html".

  3. Copy the following HTML code to your clipboard:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title>Untitled document</title>
</head>
<body>
<p>Dear customer,</p>
<p>You have been randomly selected to take part in a very short customer satisfaction survey. We value your time, so if you participate we will place you in a draw to</p>
<h1>WIN a brand new uPod Feel</h1>
<p>Simply answer the 4 questions below and you will automatically be placed in the draw. All questions must be answered to be able to enter.</p>
<div style="text-align: justify;"><form action="/qwerty/survey.jsp" method="post" enctype="application/x-www-form-urlencoded" accept-charset="UNKNOWN">
<p><input name="Cancel" type="submit" value="No Thanks" /></p>
<hr />
<p>Please answer the following questions with a rating from 1 to 5, where 1 equals "Strongly disagree" and 5 equals "Strongly Agree":</p>
<p>I use your site because of the great product range</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="ProductRange" type="radio" value="Strongly disagree" /></td>
<td><input name="ProductRange" type="radio" value="Somewhat disagree" /></td>
<td><input name="ProductRange" type="radio" value="Neutral" /></td>
<td><input name="ProductRange" type="radio" value="Somewhat Agree" /></td>
<td><input name="ProductRange" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>I use your site because of the excellent pricing</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="Pricing" type="radio" value="Strongly disagree" /></td>
<td><input name="Pricing" type="radio" value="Somewhat disagree" /></td>
<td><input name="Pricing" type="radio" value="Neutral" /></td>
<td><input name="Pricing" type="radio" value="Somewhat Agree" /></td>
<td><input name="Pricing" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>I find the site easy to use</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="EaseOfUse" type="radio" value="Strongly disagree" /></td>
<td><input name="EaseOfUse" type="radio" value="Somewhat disagree" /></td>
<td><input name="EaseOfUse" type="radio" value="Neutral" /></td>
<td><input name="EaseOfUse" type="radio" value="Somewhat Agree" /></td>
<td><input name="EaseOfUse" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>I use your site because of the fast delivery</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="Delivery" type="radio" value="Strongly disagree" /></td>
<td><input name="Delivery" type="radio" value="Somewhat disagree" /></td>
<td><input name="Delivery" type="radio" value="Neutral" /></td>
<td><input name="Delivery" type="radio" value="Somewhat Agree" /></td>
<td><input name="Delivery" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>We also value any comments or suggestions. So optionally you can type them here:</p>
<p><textarea name="Comments" rows="6" cols="60"></textarea></p>
<p><input name="Submit" type="submit" value="Submit Survey" /></p>
</form></div>
</body>
</html>
  4. Update the "survey.html" file from the console. The embedded HTML editor will open.

  5. Click on the HTML button to go to the HTML text.

  6. Paste the HTML shown above into the editor and click Save.

  7. The page should now look something like this:

HTML page

Now we have all of the components we need and are ready to begin writing rules to present our survey.

Creating the survey selection rules

We will begin by creating the survey selection rules. In this case, the rules are very simple. We use a random number generator to determine if a user should be asked to complete the survey or not.

In this example we want the opportunity to complete a survey to arise fairly frequently.

So, we start by updating the "SurveySelection" rule set to look as follows:

SurveySelection rule set

The properties for these rules are:

Random Number rule properties
If Condition properties
Exit Rule properties

Effectively, we generate a random single-digit number between 0 and 9 and, provided the number is below 4, we proceed with the survey.
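The selection logic can be sketched in plain JavaScript. This is a hypothetical stand-in for the Random Number and If Condition rules, not the actual rule implementation; the function name is invented for illustration:

```javascript
// Hypothetical sketch of the SurveySelection logic:
// generate one random digit (0-9) and proceed if it is below 4.
function shouldOfferSurvey(randomDigit) {
  // Digits 0, 1, 2, 3 trigger the survey: 4 out of 10, i.e. a 40% chance.
  return randomDigit < 4;
}

// In the rule set the digit comes from the Random Number rule;
// here we simply simulate one.
const digit = Math.floor(Math.random() * 10);
const offer = shouldOfferSurvey(digit);
```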

Creating the survey load rules

The purpose of the "SurveyLoad" rules is to prepare any data that may be needed by the other rule sets in the repository. Structuring things this way isolates the preparation of shared data while keeping the other rule sets as generic as possible.

In our case, there are a couple of generic things we need to do and check:

  1. We need to start the usual HTTP Request tracking.

  2. We need to ensure a session has been started (meaning a user is logged on).

  3. We need to obtain the customer account number so we can log it (no anonymous data here!).

  4. Once everything is done, we need to proceed with the survey itself.

All of these tasks are very simple, so we show them here as a single step:

SurveyLoad rule set

The only rule that has any non-default properties is the HTTP Session Object reader. This rule allows us to read the customer account number from the Qwerty session. The properties are as follows:

HTTP Session Object properties

Creating the survey rules

We are now ready for the core process itself, with all data and user interface components prepared. So, let's update the "Survey" rule set.

The first task is to place the survey at the right point in the navigation process, which, in our plan, is to inject the survey on top of the main page.

We start by determining the name of the page being requested:

Survey rule set

The name splitter rule is extremely useful for this as it allows us to split a text string based on a separation character. The separation character in a URL is always "/", so we can find the requested page by using the following properties:

Name Splitter properties
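The effect of this split can be illustrated with a short JavaScript sketch. This is purely illustrative; the real rule is configured through its properties, and the function name here is invented:

```javascript
// Hypothetical illustration of splitting a URL on "/" to find
// the requested page name (the last path segment).
function requestedPage(url) {
  const parts = url.split("/");
  return parts[parts.length - 1];
}
```

For example, `requestedPage("http://localhost/qwerty/main.jsp")` returns `"main.jsp"`.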

Then we can use a Switch rule to determine how to direct flow:

Survey rule set

In this case the Switch variable is the URL, and adding new chain points to the Switch rule determines which path the logic flows down.

Note the use of survey.jsp. That page does not exist in the Qwerty application. It is the name of the page that the HTML form in "survey.html" posts its data to. The Programmable Data Agent simply intercepts this request and deals with it before it ever reaches the application itself.
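Conceptually, this interception amounts to a simple page-name check made before the request would be forwarded to the application. The sketch below is hypothetical (the agent's actual dispatch is driven by the Switch rule, not by code like this):

```javascript
// Hypothetical: decide whether a request is handled by the agent's
// rules instead of being forwarded to the Qwerty application.
function isIntercepted(pageName) {
  // survey.jsp does not exist in Qwerty; the agent handles it itself.
  return pageName === "survey.jsp";
}
```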

We are now ready to determine what happens when the user reaches the main page. The first step is to make sure we haven't already presented a survey to the user in the current session:

Survey rule set

The properties for this look as follows:

HTTP Session Reader properties
If Condition properties

Basically, we check the session to see if a flag named "DoneSurvey" has already been set. If not, we proceed to see whether we need to present the survey by using the already created "SurveySelection" rule set:
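This gating check can be sketched as follows. A plain object stands in for the HTTP session, and the function name is invented; the "X" flag value is the one written later when the survey is answered or declined:

```javascript
// Hypothetical sketch of the "have we already surveyed?" check.
// A plain object stands in for the HTTP session.
function surveyAlreadyHandled(session) {
  // The HTTP Session Writer sets DoneSurvey to "X" once the user
  // has completed or declined the survey.
  return session.DoneSurvey === "X";
}
```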

SurveySelection.xml rule added

If the response comes back that we need to perform the survey, the next action is very easy. We read the already prepared JavaScript "showsurvey.js" file and add it to the response being sent back to the user:

Survey rule set

Once again, the properties are shown here:

File Reader props
HTTP Response Addition props

This takes care of presenting the survey to the user. Now we just need to handle the user's response to the survey, the first step of which is to record whether the user has in fact responded to (or declined to take part in) the survey:

Survey rule set

We record this in the session using the following properties:

Set Variable properties
HTTP Session Writer props

Next, we check whether the user hit the "Submit" button. If so, we record the answers in the flight recorder; if not, we simply return the user to the main page, using a little bit of JavaScript to remove the survey.

Survey rule set

The properties are as follows:

If Condition props
Flight Recorder Trigger props

Optional index fields: ProductRange,Pricing,EaseOfUse,Delivery,Comments

HTTP Response props

Response data: "<script>parent.document.location='main.jsp';</script>"

The little piece of JavaScript used here reloads the "main.jsp" page. As the survey flag is now set to "X", the survey will not reappear, and the user can continue as normal.
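Putting the pieces together, the handling of the survey post can be sketched in hypothetical JavaScript, with a stub `recorder` standing in for the flight recorder and a plain object for the session. The field names and the "Submit Survey" button value come from the survey.html form above; the function itself is invented for illustration:

```javascript
// Hypothetical sketch of handling the survey.jsp post.
function handleSurveyPost(form, session, recorder) {
  // Mark the session so the survey is not shown again.
  session.DoneSurvey = "X";
  if (form.Submit === "Submit Survey") {
    // Record the answers in the flight recorder.
    recorder.record({
      ProductRange: form.ProductRange,
      Pricing: form.Pricing,
      EaseOfUse: form.EaseOfUse,
      Delivery: form.Delivery,
      Comments: form.Comments,
    });
  }
  // Either way, send back the script that reloads the main page.
  return "<script>parent.document.location='main.jsp';</script>";
}
```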

Creating the survey configuration

The configuration for this example is very easy. Simply create a new configuration in the Customer Survey repository and name it "SurveyTest". The following shows all of the relevant parts that must be completed for the configuration:

Configuration, general tab
Input source tab
Databases tab

Testing

You are now ready to test the survey rule set. Deploy your new configuration to the Qwerty demo server and start it. Then log into Qwerty. Given the selection rule, there is roughly a 40% chance (digits 0 to 3 out of 0 to 9) of receiving a survey request. To quickly invoke a survey, click on the "Set up 3rd party" button and then "I'm finished" until a survey request appears. Once you have completed or rejected a survey request, log out and log back in to be presented with another one.

Make sure you answer 4 or 5 surveys at this point.

Setting up the flight recorder definitions

We now have some data in the flight recorder, so we need to set up a definition for it in order to view the data from within the console.

The following shows the definition used in this example:

Flight Recorder

Seeing the survey results

Once you have done this, select Flight Recorders from the console menu and click on SURVEY:ANSWERS. Leave all of the fields as default and click on Search (tip: if you only wish to see survey answers with comments, put an uppercase "A" into the "Comments:" from field and a lowercase "z" into the to field).

The survey results submitted will be shown:

Flight Recorder: Survey: Answers Page 1

You can now click on the graph of one of the questions. The result is a pie chart showing you the answer distribution:

Flight Recorder: Survey: Answers visualization

Using the flight recorder search filters, you can now use the responses to better understand your customer satisfaction ratings. For example, you can see if Firefox users generally rate the ease of use of your site higher than Internet Explorer users or vice versa.

Potential improvements

The sample created here is fully functional, but for production purposes, you may wish to add a few things. Some possible improvements are:

  1. JavaScript validation to ensure the customer has completed the form before submitting it.

  2. Logic in the SurveySelection rule set to ensure that the same customer does not get the survey more than once every 6 months (the History Summary rule and the History Recorder rule are both useful for this purpose).
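The first improvement can be sketched as a minimal client-side validation check: every rating question must have an answer before the survey is submitted. The field names come from the survey form above; the function itself is hypothetical:

```javascript
// Hypothetical client-side validation: every rating question
// must have an answer before the survey is submitted.
// (Comments remain optional, as in the form above.)
const RATING_QUESTIONS = ["ProductRange", "Pricing", "EaseOfUse", "Delivery"];

function isComplete(answers) {
  return RATING_QUESTIONS.every(
    (q) => typeof answers[q] === "string" && answers[q].length > 0
  );
}
```

In the page itself, this check would run in the form's submit handler, cancelling the submission and prompting the user when it returns `false`.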