Browser Certificate Installation Guide

Version: 10.0 / Modifications: 0

Introduction

This manual describes how to install browser certificates for testing access and modifications to sites that are protected by HTTP Strict Transport Security (HSTS). It is assumed that the reader is familiar with the basic steps of deploying configurations within Composable Agentic Platform and knows how to view the console output associated with the Composable Agentic Platform proxy server.

When using the Composable Agentic Platform browser proxy to access secure web sites over HTTPS, you will encounter certificate warnings in the browser, like the following:

Certificate warning

These warnings are relatively easy to get around by clicking on the Advanced button and adding an exception.

However, with the advent of HTTP Strict Transport Security (HSTS), this is no longer possible, as the browser will refuse to add the exception:

Not possible to add an exception for the certificate
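For reference, a site opts into HSTS by sending a response header along these lines (the values are illustrative):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once a browser has seen this header for a site, it refuses insecure connections and certificate exceptions for that site until the max-age expires.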

The following guide provides instructions on how to overcome this problem by installing a trusted certificate authority into your browser that Composable Agentic Platform in turn will use to generate valid replacement certificates for each SSL site on the fly.

Getting started

Before you begin you should make some updates to your Composable Agentic Platform installation.

Required Updates

The first step is to update/install the following components via the update server:

  • Composable Agentic Platform console (10.0.0.21050 or later)

  • Base Rules (2021-07-16 or later)

  • BIP Runtime (2018-08-07 or later)

  • HTTP Rules (2021-07-15 or later)

Locating the certificate

After the BIP Runtime extension has been installed, locate the folder named ‘Certificates’ under the Composable Agentic Platform Server installation:

Certificates folder

Our certificate is found in that folder with the name: root.pem
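As an optional sanity check before trusting it (not a required step in this guide), you can inspect a PEM certificate with OpenSSL, assuming OpenSSL is installed. In practice you would run the `openssl x509` command below against root.pem in the Certificates folder; here a throwaway self-signed CA is generated first so the sketch is runnable end-to-end:

```shell
# Generate a throwaway self-signed CA just for demonstration purposes.
# In practice, skip this step and point the x509 command at root.pem
# in the Certificates folder instead of demo-root.pem.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
  -out demo-root.pem -days 30 -subj "/CN=Demo CA" 2>/dev/null

# Print the subject, issuer and validity period of the certificate.
openssl x509 -in demo-root.pem -noout -subject -issuer -dates
```

This shows who issued the certificate and how long it remains valid, which is worth confirming before importing it as a trusted authority.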

Installing the certificate in Firefox

To install the certificate authority in Firefox, start by selecting Options from the main menu:

Firefox Settings

Then select the Privacy & Security section and click View Certificates:

View Certificates in Privacy & Security tab

In the certificate manager, select the Authorities tab:

Authorities tab in Certificate Manager

Click on Import…, then open the root.pem file from the location described earlier (the Certificates folder).

You will be given the option to select the level of trust for the certificate. Select “Trust this CA to identify websites” and click on OK:

Trust new Certificate Authority

Click on OK again to close the certificate manager.

Routing Firefox through the Composable Agentic Platform browser proxy

To be able to see traffic flowing between Firefox and your target site, you must configure Firefox to use the proxy. Under the Options Advanced settings, select the Network tab and click on Settings.

Browser Network Settings

Configure the proxy as shown and click on OK:

Connection Settings

You can now close the Settings tab in Firefox.

The certificate is now installed, and you are ready to see traffic.

Installing the certificate in Chrome/Edge for Windows

Please note that by using the Chrome installation method, other browsers (such as Microsoft Edge) will be affected as well.

We will therefore only show the Chrome approach.

Important: To install the certificate, the user MUST have administrative privileges on the system.

In the Chrome browser, select Settings:

Chrome Settings

Scroll down the page that appears and click on Privacy and Security

Locate the HTTPS/SSL section and click Manage certificates…

Manage Certificates

In the dialog box that appears, navigate to the Trusted Root Certification Authorities tab and click on Import.

Trusted Root Certification Authorities

This takes you to the certificate import wizard:

Certificate import wizard

Click on Next

Specify file for certificate

Important: PEM files are not available as a default filter. To locate the file, select All Files (*.*):

Select root.pem file from certificates

Locate and select the root.pem file, then click on Open

The file name now appears in the Certificate Import Wizard and you can click on Next.

Select the certificate store as shown and click on Next:

Select certificate store

You will be presented with a review page. Click on Finish.

A security warning appears. Make sure you click on Yes:

Security Warning window

The certificate will be imported:

Successful message for certificate import

Close the certificates list:

Certificate list window

Routing Chrome/Edge through the Composable Agentic Platform browser proxy

Please note that by using the Chrome installation method, other browsers (such as Microsoft Edge) will be affected as well. We will therefore only show the Chrome approach.

Within the Chrome advanced settings, locate Network and click on Change proxy settings…

Change proxy settings

In the internet properties that appears, click on LAN settings:

LAN settings

Set the proxy server as shown and click on OK:

Proxy Server section

Then click OK again to close the internet properties and close the Settings tab in Chrome. The certificate is now installed and you are ready to see traffic.

Installing the certificate into the OSX Key Chain for Safari and Chrome

Please note that both Safari and Chrome use the same certificate store so this installation applies to both.

To install the certificate, navigate to the Certificates folder and double-click on the root.pem file. The Keychain Access utility will launch and requires you to enter your Admin User credentials:

Login window for Keychain Access

Enter your password and click on Modify Keychain

This will launch the Keychain Access utility with the certificate imported into the System keychain:

Keychain Access

Double-Click on the TomorrowX CA certificate to bring up the details:

TomorrowX CA Certificate details

Expand the Trust option and set the drop-down ‘When using this certificate’ to Always Trust:

Always trust for TomorrowX CA

Close the pop-up details window and enter your administrator password to update. The entry will now have a blue circle with a white cross to indicate a trusted certificate and will have the following text: “This certificate is marked as trusted for all users”:

TomorrowX CA marked as trusted for all users

Testing the certificate installation

Now that your certificate is installed, switch to the Composable Agentic Platform console, select the Product Trial repository and deploy the BasicWebLister configuration to the proxy server.

Wait for the proxy server to start.

You are now ready to test if you can bypass HTTP Strict Transport Security (HSTS) protection. In your browser go to https://www.google.com

Google should load as normal:

Chrome homepage

And you should see traffic in the proxy console:

Traffic in the proxy console

Examples

Frame Busting

Frame busting refers to the ability of an application to avoid being encapsulated within an IFRAME. The latter approach can be used not only to make one site impersonate the capabilities of another but, more sinisterly, to overlay a different user experience on top of an IFRAMEd site and allow events to flow through to the IFRAME.

Using this approach, a user can inadvertently be tricked into performing actions within an application without even knowing that they are interacting with it.
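As an illustration, an attacker's page might look something like the following hypothetical sketch (victim.example, pay.jsp and the decoy text are invented for this example): the victim site is framed invisibly on top of decoy content, so the user's clicks land inside the IFRAME.

```html
<!-- Hypothetical clickjacking sketch: the framed victim page sits
     fully transparent above the decoy, so clicks "fall through" to it. -->
<html>
  <body>
    <div style="position:absolute; top:120px; left:40px;">
      Click here to win a prize!
    </div>
    <iframe src="https://victim.example/pay.jsp"
            style="position:absolute; top:0; left:0; width:100%; height:100%;
                   opacity:0; z-index:2;"></iframe>
  </body>
</html>
```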

A July 2010 study by Gustav Rydstedt, Elie Bursztein and Dan Boneh of Stanford University and Collin Jackson of Carnegie Mellon University named: "Busting Frame Busting: A Study of Clickjacking Vulnerabilities on Popular Sites", explores the risks and problems associated with framing. It can be found here:

http://seclab.stanford.edu/websec/framebusting/

The study mentioned above forms the basis of the following case study.

Frame busting defense

The defenses we will introduce in this case study are rather simple; we will add some JavaScript and a few extra HTTP headers to the logon page of the Qwerty app. Depending upon the application, it may also be relevant to add this code to other pages, but for now we will just select the logon page for simplicity.

The JavaScript we will add looks as follows:

<style>
  html { visibility: hidden; }
</style>

<script>
  if (self == top) {
    document.documentElement.style.visibility = 'visible';
  } else {
    top.location = self.location;
  }
</script>

The above script has been placed in the public domain by the authors of the study.

In simple terms, it hides the entire page through a CSS directive and only makes it visible if the page itself is the top frame (and JavaScript is enabled); otherwise it replaces the top frame's location with its own, busting out of the frame.

In addition to the above code, we will add a couple of HTTP Headers that take advantage of built in frame busting defenses in certain browsers. The headers to set are as follows:

X-FRAME-OPTIONS: SAMEORIGIN
X-Content-Security-Policy: allow *; frame-ancestors 'self'

Planning the rules

The rules required for this case study are extremely simple. Our plan is to:

  1. Determine whether we are on the logon page.

  2. If yes, add the frame busting code.

Getting started

The very first step as always is to create a repository. In this case we will name it "Frame Busting Example".

Once done, copy and paste the JavaScript code into a text file named "framebust.js" and upload it to the data folder in the repository.

Then create a new blank rule set named "FrameBust".

Creating the rules

The first rules we need simply determine if we are on the logon page:

FrameBust rule set

These rules are the same as in most of our other examples, so we will just list the properties here for quick reference:

Name Splitter properties
Switch properties

Once the properties are set, simply add a chain point to the Switch rule and name it "logon.jsp".

We next add the rules to inject the JavaScript and headers:

FrameBust rule set

We read the framebust.js file into a variable, then set a couple of variables to the header values we need, and finally add the JavaScript and headers to our response. The properties look as follows:

File Reader properties
Set Variables properties

Values are: SAMEORIGIN,allow *; frame-ancestors 'self'

HTTP Response Addition properties

Header field names are: X-FRAME-OPTIONS,X-Content-Security-Policy

That is it, save the rule set and create a configuration to test it.

Creating the configuration

The configuration for this rule set is very simple, we create one named "FrameBustTest". The following shows the relevant sections that need to be defined:

Create new Configuration, general tab
Input source tab

Testing

Qwerty is a suitable test application for this case study because it uses frames to encapsulate the logon and other internal pages.

When you navigate to the Qwerty landing page, the URL in the browser will appear as follows:

http://localhost/qwerty/

To test the new rule set, deploy the configuration to the Qwerty demo server and start it. Then refresh the Qwerty logon page.

Whilst you will not see any visual differences in the appearance of the Qwerty application, the Qwerty landing page URL in the browser will now look like this:

http://localhost/qwerty/logon.jsp

We can proceed to navigate to other pages in the Qwerty application outside of the main Qwerty frame.

For example, these pages would normally all be loaded from within the Qwerty frame, but are now visible in the main browser address bar:

  • http://localhost/qwerty/main.jsp

  • http://localhost/qwerty/setup.jsp

  • http://localhost/qwerty/pay.jsp

We have successfully "Busted" out of the frame.

Hello, World!

Introduction

Hello, World!

As with all new programming languages, a "Hello, World!" program is a computer program that outputs or displays the message "Hello, World!". Such a program is very simple in most programming languages and is often used to illustrate the basic syntax of a language. It is often the first program written by people learning to code.
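For comparison, before composing one with Composable Agentic Platform, here is the classic in plain JavaScript:

```javascript
// The canonical "Hello, World!" in JavaScript.
const greeting = "Hello, World!";
console.log(greeting);
```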

✨ Now step inside and follow these steps to complete your very first composition with Composable Agentic Platform.

Requirements

  • A running local or cloud-hosted instance of X.

  • Installed console version 10.0.0.21050 or later.

  • Chrome or Firefox browsers are supported.

  • Ports 80 and 443 are required to be available to run the console and Programmable Data Agent.

For the purposes of these instructions [your server name] = localhost

For example: http://[your server name]/console/ = http://localhost/console/

You need access to a console login screen like this:

Say Hello [content file]

Click the link to open in a new browser tab:

You’ll see a simple html content file called hello.html has already been pre-deployed and is served up to the browser by the running Programmable Data Agent.

Go ahead and enter your name in the form and press the Say Hello button. The form submission responds with Hello.

Background Information

The Programmable Data Agent loads hello.html, prompting the user to enter a name and to click a button labelled Say Hello. When the button is clicked, the text entered should be appended to "Hello". For example, if the text entered is "World!" then the result will be "Hello World!"
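A minimal hello.html along these lines would provide that behaviour (an illustrative sketch only; the actual pre-deployed file may differ, but the input parameter name on the form is Name, as we will see later):

```html
<!-- Illustrative sketch of hello.html: a form that submits a Name field. -->
<html>
  <body>
    <form method="get" action="hello.html">
      <input type="text" name="Name">
      <input type="submit" value="Say Hello">
    </form>
  </body>
</html>
```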

Objective

The user experience needs improving because any text entered is currently ignored. Can you follow this guide to improve the user experience?

First up, let’s go and see where the hello.html file lives….

Login [console]

Login to the console using the default credentials. In your case if you are working on the localhost console, use the default credentials:

  • User ID: admin

  • Password: admin

Open Repositories

Once logged in, press Start followed by Repositories.

We typically call these “repos”. It’s the home, or workspace, in the console where your work lives.

Now Click on the Hello World repository folder (no need to expand the folder tree just now, as that’s where you can save and restore your repository backups – we’ll get to that soon enough!).

Now press View and then expand the Content Files folder.

Content files can be HTML, XML, images, or any other binary content that may be required to be served when requested.

Content files can also be dynamically modified by content rule sets; we’re not covering those in this example.

Content files live within a content path that must map to the content path of the application. In our simple example, hello.html is served with localhost as the root directory, so it resides in the top-level Content Files folder.

As the Hello World configuration has already been deployed from the console to the target Programmable Data Agent server, the page loads when requested.

Update Content Files

So, let’s inspect the html file. Click on hello.html, and a new portal window will open for the file. Click on the Update button as follows.

A new browser window opens showing an HTML editor for the hello.html content file. Note the input parameter name on the form is set to Name. We don’t need to make any changes to the HTML file, so you can close this window.

So that’s a small introduction to Content Files. Next, let’s take a look at Rule Sets.

SendResponse [rule set]

With the Hello World repository open, expand the Rule Sets folder, then click the SendResponse rule set and press Update in the portal window that opens.

The rules editor is the graphical design tool for composing and maintaining rule sets. The rules editor is launched as a separate browser window from within the console application when you press Update.

Rules Editor – example for reference only

Go ahead and browse the vast catalogue of what we describe as “digital blocks” on the left-hand side. The catalogue is grouped into collections. To use any block in the catalogue, expand the group folder, then click and drag a block onto the main canvas as shown.

In this example, you can expand the Alert group folder and drag the Send Kapow SMS block onto the canvas.

Rules Properties – example for reference only

Now click to select the Send Kapow SMS block on the canvas, and the left-hand side catalogue will switch to the Properties tab.

Each block has properties you need to set when composing, along with adding a more meaningful description (like adding comments in code).

In this example you can set the properties to two variables called MESSAGE and MOBILE. The block requires these in order to perform its intended function: they need to contain the value of the SMS message, and the phone number to send the SMS message to.

Everything else is taken care of.

Each block has additional online help you can access by right-clicking over the selected block and pressing Help.

Give it a try.

Set Variable

So, let’s get back to our example. Click to select the first block called Set Variable and view its Properties.

Selected blocks banner colour turns grey.

The block does exactly what it says on the tin: it sets a new variable. In this example we’ve set the variable name to RESPONSE, with the value set to a snippet of HTML code. We enclose this snippet in quotes.

Note how this value has been constructed in three parts: “STRING” + VAR + “STRING”.

You’ll remember from earlier that the form submission responds with just “Hello”. That’s because the NAME value hasn’t been defined or “passed into” this rule, so it processes NAME as a blank value, and on exit the value of RESPONSE is “<html><body><h1>Hello </h1></body></html>”.
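The three-part construction can be sketched in plain JavaScript (this is not the platform's rule syntax, just an illustration of the same "STRING" + VAR + "STRING" idea):

```javascript
// Sketch of how the Set Variable block builds RESPONSE in three parts.
// When NAME has not been passed in, it behaves as a blank value,
// so the greeting comes out as just "Hello ".
function buildResponse(NAME) {
  NAME = NAME || "";
  return "<html><body><h1>Hello " + NAME + "</h1></body></html>";
}

console.log(buildResponse());         // blank NAME -> "Hello "
console.log(buildResponse("World!")); // -> "Hello World!"
```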

HTTP Response

Click to select the second block called HTTP Response and inspect the Properties. Selected blocks banner colour turns grey.

You can also COPY / CUT / DELETE / PASTE block(s) with a simple right click.

How easy is that?!

Guess what!?

This block also does exactly what it says on the tin. It responds to an HTTP request with the response data that has been set in the property. In this case the variable RESPONSE holds the HTML snippet value set in the preceding Set Variable block.

You’ll see this block also requires an HTTP Status code and Content Type set.

This rule performs the final response behavior by the Programmable Data Agent you’ve already experienced when you clicked the link and pressed the Say Hello button.

Rule Info

Click on the fourth tab called Rule Info for the SendResponse rule set.

The Export to Group and Short Description represent this rule set as a new block that can then be (re-)used in other compositions. We will use the Send Response rule that lives in the Hello World Grouped folder in the next steps.

Note it has the Parameter Type set to Input, Parameter Name set to NAME, and has been given a Label of Name.

We’ve finished looking at the SendResponse rule set now, so go ahead and close it by closing the Rules Editor window.

Do NOT save any changes if prompted to do so.

SayHello [rule set]

So, let’s create a new rule set that will pass the html form’s Name value into the response.

Create a new rule set

Click on the Rule Sets folder in the Hello World repository. In the portal window that opens, set the File Name to SayHello (case sensitive) and press the Create button.

Now open the newly created SayHello rule set for editing. Click on the SayHello rule set that has now appeared in the rule set folder of the Hello World repository and press Update just as you did to inspect the SendResponse rule set.

Search the catalogue

Go to the search tab and search for “Response” and drag the Send Response block onto the rules editor canvas.

Alternatively, you can find the same block in the catalogue from the first Grouped tab, located in the Hello World group folder. This is because the Rule Info tab of the SendResponse rule set has an export group defined as Hello World.

Either method is fine to search the catalogue and drag blocks onto the canvas.

Wire blocks together

Click on the Send Response block (yes, we’ve turned a rule set into a new block in the catalogue for re-use) and once again just set the properties. Now set the Name property to Name (case sensitive, no quotes).

Remembering this was the input parameter set in the hello.html content file we looked at earlier.

Click and hold over the orange cog, then click-release over the green dot to “wire” the first block into the rule set in a right to left direction. Incidentally, all subsequent blocks are wired from the block exit chain point (right hand side) to the input of the next block (left hand side).

Press SAVE and close the rules editor window as shown.

That’s all you need for your new rule set.

HelloWorld [configuration]

The HelloWorld configuration defines the input into the Programmable Data Agent and the rule sets to run.

General tab

Expand the configurations folder and click the HelloWorld configuration. The General tab is the default view, and ensure you now select the SayHello rule set from the dropdown list of available rule sets. This is the “initialising” rule set that is processed by the Programmable Data Agent on the very first transaction it receives.

Embedded (dependent) rule sets that have been wired within the SayHello rule set are also deployed along with its parent, so you only need to set the top-level rule set.

Therefore, any dependent rule sets will get deployed along with the configuration without having to define them.

You’ll note here that there are three other types of rule set that can be set to initialize and run when processing data. These are for (1) CONTENT, on (2) STARTUP, and on (3) COMPLETION. These are not required in this example.

Timers tab – information for reference only

Just to mention in passing, there is a fifth rule set you can set in the Timers tab of the configuration. These are rule sets that are initiated and run (as the name suggests) on a timed basis. For example, when a rule set is required to perform a defined process say, every 24 hours.

Input source tab

Click on the Input Source tab and inspect the different sources of data options available.

For this example, we are configuring the Programmable Data Agent to process web application data, but as you can see this is just one of a multitude of available options to define in the configuration, dependent on the composition and data sources being processed.

Databases tab – information for reference only

Click on the Databases tab. It’s here where you define the databases being made available to the Programmable Data Agent. You are not required to define a database for this example so there’s no need to configure a database.

Example only:

If you are interested, database connectivity specifying JDBC driver, connection string and schema credentials is an administrator set-up task in the console. You don’t need to complete that right now.

Deploy

With the new SayHello rule set defined as the rule set in the configuration, you can go ahead and press the Deploy button.

Select Programmable Data Agent as the target server and press the Deploy button.

Wait a few seconds for the deployment to complete and the server to restart; the Programmable Data Agent server details will then be shown.

Test

Click the link to open in a new browser tab and refresh the page.

Enter World! then press the Say Hello button, and if successful you’ll receive a Hello World! response.

[the crowd erupts into wild applause 👏🏻🍾]

Want some more?

Then read on….

Performance data and live probes

With the Hello, World! example now working successfully, let’s give you a glimpse under the hood of the Programmable Data Agent.

Go back to the console and click Get performance data in the Programmable Data Agent server portal window you have open.

On the next window click View Rules Performance

The rules editor window opens in a new window. Double click the Send Response block.

Place a probe on the Set Variable block. Right click over the green exit chain point and click New probe…

Click the Create button. Live probes are triggered by variables and values, and occurrences thereof. We can leave these blank to just trigger on the next transaction.

The exit chain point turns yellow to show the probe is set.

Now go to the browser tab of the demo page showing the SayHello output and click the back button so that the input hello.html page shows.

Enter a new name, Probe, into the input field and click Say Hello; the page responds as expected with Hello Probe. Go back to the rules editor window with the probe set and you’ll see the exit chain point has turned red to show the probe has been triggered.

Right click on the red exit chain point and click View probe.

You can now see the transaction data that has just been processed by the Programmable Data Agent: the contents of the two variables, [NAME]=[Probe] and [RESPONSE]=[<html><body><h1>Hello Probe</h1></body></html>].

Aside from helping you view live data to assist with composing or troubleshooting your solution, it also provides a superior debugging tool that can even be used on production servers without the need for logging.


Google Analytics

Google Analytics lets you do more than measure sales and conversions. It also gives insights into how visitors find and use your site, and how to keep them coming back.

This case study demonstrates Tomorrow Software as an easy integration option for adding tracking code to web pages, something typically done outside of the normal software development life cycle (SDLC). Not only does this provide easy and rapid deployment of such third-party services, it also ensures that, as and when new pages are introduced, tracking code will be appended to each and every page the web application returns to the user’s browser.

This example is a common method whereby you can simply read a JavaScript file containing the required tracking code, insert your account ID and append it to any web page.

For information regarding the Google Analytics service please refer to:

https://www.google.com/analytics/web/

Google Analytics Reporting dashboard

Planning the rules

The first step of any rule writing is to determine what we want to do and how it can be accomplished.

Before you begin you will need to ensure that you have a valid Google Account email address and password for using the service, or alternatively sign up, it only takes a couple of minutes. https://accounts.google.com

Login with Google Account

We will discuss tracking code throughout this case study, which is only accessible once you have logged in to Google Analytics.

To access your tracking code:

  • Log in to Google Analytics https://www.google.com/analytics/web/.

  • From the Admin page, select the .js Tracking Info property from within the list of accounts. Please note that tracking code is profile-specific.

  • The tracking code can be copied and pasted from the Website Tracking text box from the Tracking Code menu item.

Tracking info

The code will be similar to the below (where x replaces each digit of your specific account code 'UA-xxxxxxx-x'):

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-xxxxxxx-x', 'auto');
ga('send', 'pageview');
</script>

  • Leave the placeholder 'UA-xxxxxxx-x' in the code, as we can set the account ID in Tomorrow Software rules later, which makes managing the rules and different Google Analytics accounts much easier.

  • Copy and paste the above JavaScript code into a file named "google.js" and save it somewhere local, e.g. your desktop, for use later in the exercise.

It is this tracking code that performs the task of collecting the browser data of visitors.

Getting started

Start by creating a new repository called “Google Analytics Example”.

It’s recommended that the process of adding the Google tracking code be split into two steps:

  • setting a variable which holds the unique Google User account 'UA-1234567-1’.

  • and then inserting this value into the tracking code itself.

This means that you can subsequently update the account or the code separately in future deployments, or when Google amend their tracking code.

Keeping this in mind, you should create the following blank rule sets:

  • GoogleAnalytics: this rule set will be responsible for creating the new UA variable plus reading the tracking code and adding it to the page.

  • Qwerty_test: this rule set will allow you to test how a deployment can work in the demonstration Qwerty example application.

The two new blank rule sets will now be visible within the repository.

Google Analytics Example folder

Uploading the Google Tracking Code

In the Tomorrow Software console select the Data Files folder, then upload the ‘google.js’ file you created above and saved to your desktop.

Ensure you upload to the newly created “Google Analytics Example” repository that will now be available in the drop-down list of available folders.

New data file

Press upload and the file will now be visible in the repository in data files for the rules to use.

added google.js file

GoogleAnalytics Rule Set

GoogleAnalytics rule set

Using a Set Variable rule, set a new variable called Google_UA with the value “UA-1234567-1”, where 1234567-1 is replaced with your specific Google Analytics user account.

Set Variable properties

Then, using the File Reader rule, read the google.js file into a variable named ‘GOOGLE_ADD’.

File Reader properties

Next use a String Replacer rule to insert the newly created Google_UA variable into the tracking code .js file, followed by the HTTP Response Addition rule, to append the Google Tracking code to the response.

GoogleAnalytics rule set

The String Replacer rule will look through the code (now held in ‘GOOGLE_ADD’) and replace the placeholder account code with the value of the ‘Google_UA’ variable we have defined.

String Replacer properties
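Conceptually, the String Replacer performs something like the following (a plain JavaScript sketch, not platform rule syntax; the snippet shown is an excerpt of google.js):

```javascript
// Sketch of the String Replacer step: substitute the placeholder
// account code in the tracking snippet with the Google_UA value.
const Google_UA = "UA-1234567-1"; // your real account ID
let GOOGLE_ADD = "ga('create', 'UA-xxxxxxx-x', 'auto');"; // excerpt of google.js
GOOGLE_ADD = GOOGLE_ADD.replace("UA-xxxxxxx-x", Google_UA);
console.log(GOOGLE_ADD); // ga('create', 'UA-1234567-1', 'auto');
```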

The HTTP Response Addition rule then appends the amended google.js content to the page response, activating the tracking code in the user’s browser.

Http Response Addition properties

The final step for this rule set is to add a couple of Exit rules called “OK” and “Fail” which will assist in rules performance to tell you if the rule is working, and to help with embedding this as a rule set within another rule set.

GoogleAnalytics rule set

Qwerty_test Rule Set

This rule set will allow you to see an example deployment to the Qwerty demo application.

Of course, every response from the application includes static content to which you do not want to add Google tracking code, so take a couple of simple steps to filter out transactions that do not require the code.

For example, a .jpg image may be served each and every time a user navigates to a page; adding tracking code to that response provides no additional customer insight.

Qwerty_test rule set

Using the Name Splitter rule to identify the URI extension is a useful way to filter out unwanted requests before they reach the GoogleAnalytics rule set.

Name Splitter properties
  • Variable Name: URI

  • Last Name Variable: we are only interested in the last part of the URI so we name this variable EXT.

  • Split Pattern: “.” is the character on which the URI value is split, which tells the rule how to isolate the part of the value we want to use.

Using the Switch rule, set the Switch Variable property to EXT (created above) and proceed to ‘Add Chain Points’ for the static content you wish to ignore, such as gif, css, html, js, and jpg.

Switch properties
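The combined effect of the Name Splitter and Switch rules can be sketched in Python. This is a rough analogy of the filtering logic, not the product's implementation; the function name is invented.

```python
STATIC_EXTENSIONS = {"gif", "css", "html", "js", "jpg"}

def should_tag(uri: str) -> bool:
    """Mimic Name Splitter + Switch: split the URI on '.', keep the
    last part as EXT, and skip requests for known static content."""
    ext = uri.rsplit(".", 1)[-1].lower() if "." in uri else ""
    return ext not in STATIC_EXTENSIONS

print(should_tag("/shop/basket.jsp"))  # True  -> append tracking code
print(should_tag("/images/logo.jpg"))  # False -> static, leave untouched
```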

The final step is to connect the newly created GoogleAnalytics.xml rule set now located in the Rule Sets folder.

Rules Sets folder

Setting up the configuration file

Finally, you can set up the configuration file. Click the Configurations menu, select the “Google Analytics Example” repository from the drop-down list, and enter some basic information about the rule to load.

The following screen shots show the information required for the “General”, and “Input Source” tabs.

Configuration for Google Analytics Example repo
input source tab

You can now click the “Create” button to create your configuration file. Once created, click the “Deploy” button to deploy it to your Qwerty demo server.

Future considerations

The above case study shows how to implement Google Analytics tracking code in a specific environment, though of course each individual application will be different.

Validate the code is working

You can log into your Google Analytics account and select real-time traffic reports within the reporting dashboard to validate that the tracking code has been inserted and is working correctly on your website.

real-time traffic report

You can also right-click the page in the browser and view the source code to verify that the Google tracking code has been correctly inserted into the target application page.

CSRF attack prevention

Before explaining how to combat CSRF (Cross Site Request Forgery), a quick explanation of the technique behind it is in order.

A cross site request forgery relies on a user visiting a malicious site shortly after they have logged into a genuine site, whilst they still have an active session cookie with the genuine site.

By making the user's browser send malicious requests directly back to the genuine site, the malicious site can exploit the fact that the user is already logged in to carry out such things as placing orders in the user's name, sending emails using the user's credentials or posting comments to other users in what may well be a trusted user's name. The list of exploits is endless and only really subject to the vulnerabilities of the site being attacked.

Ways to make the user visit the malicious site whilst still being logged into the genuine site include phishing, posting of links in comments on the genuine site, or even just "trial and error" by posting links on sites that may also be frequented by users of the genuine site.

The limitation of the CSRF attack is that it is always "blind". The attacker cannot see what the application responds with, or what the current state of the session is, due to restrictions imposed by browser security models: a page from one server (domain) can send requests to another, but cannot read the responses.

CSRF defense techniques

How to best protect your site against CSRF attacks depends on how it was written. Generally speaking, most applications perform actions as a result of an HTML form being posted to the site. Some sites also respond with actions to a GET request.

For example: "http://www.mysite.com/delete.jsp?orderToDelete=12345"

This example will focus on protecting applications that use a form POST. This is done by adding a hidden field to every form presented by the application. This hidden field contains a random value that is unique to the specific session of the user. We will require that this field is always present on a form POST, making it virtually impossible for a malicious site to guess what a valid POST request might look like.

The technique for protecting a site that uses GET requests is similar: instead of a hidden form field, an additional URL parameter is added to every URL that takes parameters.

Planning the rules

The first step in implementing our CSRF defense is to create a simple plan of action, i.e. what we intend to do and how we wish to go about doing it. It is a good idea to write this down in plain English and then use that text as a guide whilst designing the rule structure. In this case, the plan reads as follows:

  1. If a POST request comes in whilst there is an active session, then make sure it has our hidden field, and that it is the hidden field we have generated for that session. If the field is not present, we should respond to the user with an HTTP Status code of 403 (Forbidden).

  2. Whenever a new page is provided by the application, make sure we add a large random number as the hidden field to every form presented by the application. The large random number we use should be generated once for the session and then be stored in it for easy reference and good performance.
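The two steps of the plan can be sketched in Python. This is an illustration only: the field name "CSRF" and session key "CSRF.key" follow the choices made later in this example, and the helper functions are invented.

```python
import secrets

SESSION_KEY = "CSRF.key"  # session key name used in this example
FIELD_NAME = "CSRF"       # hidden form field name used in this example

def check_post(session: dict, form: dict) -> int:
    """Step 1: on a POST within an active session, the hidden field
    must match the token stored in the session; otherwise 403."""
    token = session.get(SESSION_KEY)
    if token is not None and form.get(FIELD_NAME) == token:
        return 200
    return 403

def add_token(session: dict, html: str) -> str:
    """Step 2: generate the per-session random value once, store it
    in the session, and add it as a hidden field to every form."""
    token = session.setdefault(SESSION_KEY, secrets.token_urlsafe(16))
    hidden = f'<input type="hidden" name="{FIELD_NAME}" value="{token}">'
    return html.replace("</form>", hidden + "</form>")

session = {}
page = add_token(session, "<form action='/order'></form>")
print(check_post(session, {FIELD_NAME: session[SESSION_KEY]}))  # 200
print(check_post(session, {}))                                  # 403
```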

That sounds easy enough; so, let's begin...

Getting started

Create a new repository named "CSRF Example" and add a new rule set named "CSRF".

Filter out static content before it hits the core rules using a Name Splitter and Switch rule as shown:

  • The Name Splitter conveniently extracts the extension of the object being requested using the following properties:

  • The Switch rule operates on the EXT variable. By adding new chain points for each type of static content, those requests are prevented from reaching the rest of the rule set.

As we are dealing with Web Applications, and we need to know information such as the method used (POST/GET), the first step is to add an HTTP Request Tracker rule from the HTTP group in the rules catalog to the CSRF rule set:

A good technique for rule writing is to start by determining the "flow" of events or pages that will subsequently have rules applied to them.

In our case we have two flows:

  • The verification of the forms

  • The addition of the form fields.

So, our next action is to add a Sequencer rule from the Flow group in the rules catalog:

Implementing step 1

Now, the first step in our written plan is to check if we are dealing with a POST request in the session, and if the form posted has our hidden field. The first part is very easy:

Only the If Condition requires some properties:

The next step is simple. We need to look up the current hidden field from the session:

Once again, there are not many properties:

The variable names and values we have chosen are arbitrarily selected, although they should be meaningful and memorable.

In this example, we have decided that the hidden field is stored with a session key named "CSRF.key" and that the hidden field on all forms is named "CSRF". We could have chosen any names as long as we use them consistently when we add the field to the form and store the session key.

All that is left for the first step is to make sure that if the key doesn't match, then the user receives a 403 error.

Once again, the properties are very simple:

We use a Set Completed rule after the response, as once we have decided that the user should be rejected, there is no need to proceed with the rest of the rule set. Instead we simply terminate the flow.

Implementing step 2

We are now ready to implement the second part of the plan. The first step in doing so is getting the actual response from the server so that we can add the hidden field if we need to.

The HTTP Server Execute rule takes care of this, even if you are writing rules using a built-in forwarding proxy.

Once again, the properties are very simple as we are just interested in the application response:

Once again, we need to check if a session is present, but after the HTTP Server Execute rule, as that rule may in fact result in a session being created:

If there is a session, then we need to add our unique CSRF key to it. The first step in doing that is to see if we already have that key:

Once again, not many properties:

If we don’t have it, we need to create it, which is easy:

The properties for these rules are as follows:

The session key we use is the same "CSRF.key" that we used in step 1.

All that remains now is to add the field to the form and send the response back to the user.

Thankfully there is a dedicated rule that handles the first problem, the "Insert Hidden Field" rule.

Note that we are handling various loose ends too: connecting a Session not found to the HTTP Response, and connecting the existing session key to the Insert Hidden Field rule.

The final properties that must be set are as follows:

Testing

Our rule set is now complete, and we are ready to test it. A good sample application for this test is the Qwerty application. Create a configuration for the test named "CSRFTest" and set it as follows:

(Only relevant sections shown)

Once you have set up your configuration, deploy it to the Qwerty demo server and try testing it.

You will see in the Qwerty application, in the "Set up 3rd Party Accounts" page, that there is now a CSRF hidden field added to the page:

Use the performance data to further verify that everything is working as you expected.

Adding more protection

If you look further through the page source of the Qwerty application, you may also notice the following link:

This is a classic case of a GET request that can be exploited using CSRF. In this basic case study, we only protect POST requests of forms. However, if your application also performs actions on GET requests, you can fairly easily amend the rule set to cover those as well.

This involves manipulating any URL parameters in the pages that are used for actions.

You can do this using the String Replacer rule, especially if your application uses ".jsp" or ".do" or ".aspx" as URL identifiers for active content.

For example, you could replace ".jsp?" in every page with ".jsp?CSRF=0123456789&" and then check for the field on every URL that ends in ".jsp" and has PARAMETER_NAMES (from HTTP Request Tracker Rule) not equal to blank. If you do that you will achieve the same result as the Insert Hidden Field rule does in this case study.
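As a rough sketch of that idea in Python (the function is hypothetical; a real rule set would also verify the token on each incoming parameterised GET request):

```python
def protect_get_links(html: str, token: str) -> str:
    """Rewrite '.jsp?' links so every parameterised URL carries the
    per-session CSRF token, mirroring the String Replacer approach."""
    return html.replace(".jsp?", f".jsp?CSRF={token}&")

html = '<a href="delete.jsp?orderToDelete=12345">delete</a>'
print(protect_get_links(html, "0123456789"))
# <a href="delete.jsp?CSRF=0123456789&orderToDelete=12345">delete</a>
```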

Additional CSRF notes

The above example is based on implementing the CSRF defense as a single rule set.

step 1
Static content filtering using Name Splitter and Switch rules
Name Splitter properties
Switch properties
HTTP Request Tracker added
Sequencer rule added
HTTP Session Check with If Condition
If Condition properties
HTTP Section Reader with If Condition rules
HTTP Session Reader properties
If Condition properties
403 error flow
HTTP Response properties
2nd implementation in the Sequencer
HTTP Server Execute properties
HTTP Session Check added
HTTP Session Reader and If Condition to check if the key is blank
HTTP Session Reader properties
If Condition properties
Random Number and HTTP Session Writer to create and store a random number
Random Number properties
HTTP Session Writer properties
Insert hidden field rule added
Insert hidden field properties
HTTP Response properties
General tab for CSRFTest configurations
Input source tab for CSRFTest configurations
CSRF hidden field
A link to a GET request with params

TCL Script Writer Reference

Version: 10.0 / Modifications: 0

Using the scripting interface

The Tomorrow Software console ships with a scripting interface to facilitate automated management by other tools. The scripting interface is based on the TCL (Tools Command Language) version 8.4 syntax and commands, but also includes a number of Tomorrow Software specific commands.

As scripting is a programming interface, the scripting engine is not multi-lingual. It is invoked by a simple HTTP/S POST command to the URL:

http://<server>/console/ScriptRunner

The parameters for the POST are as follows:

  • user: The console user ID under which the script will be executed

  • password: The password for the user

  • script: The script to execute

All parameters should be UTF-8 encoded.
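For example, a small Python client could drive the scripting interface. This is a sketch under assumptions: the host name is a placeholder, and your console may require HTTPS.

```python
from urllib import request, parse

# Hypothetical host; substitute your own console address.
URL = "http://consolehost/console/ScriptRunner"

def encode_params(user: str, password: str, script: str) -> bytes:
    """Build the POST body; all three parameters are UTF-8 encoded."""
    return parse.urlencode(
        {"user": user, "password": password, "script": script}
    ).encode("utf-8")

def run_script(user: str, password: str, script: str) -> str:
    """POST the script to ScriptRunner and return its output, which
    the console writes to the HTTP response stream."""
    req = request.Request(URL, data=encode_params(user, password, script))
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Example call (requires a reachable console):
# print(run_script("admin", "secret", 'puts "hello"'))
print(encode_params("admin", "secret", 'puts "hi"'))
```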

Learning TCL

It is beyond the scope of this manual to provide complete details of the TCL language. TCL has been in use for many years and plenty of online resources exist for learning the language. An excellent primer can be found here:

https://www.tcl.tk/

Also, several sample scripts can be found in the /Education/script samples folder.

Testing scripts

To assist with testing scripts, a specific page has been made available:

http://<server>/console/scriptRunner.jsp

This page allows you to enter a user ID and password, as well as a script, and submit it to the console. The output from the submission is returned to the browser.

Tomorrow Software specific TCL extensions

Tomorrow Software introduces a number of extensions to the standard TCL language. All of the extensions relate to specific console management tasks.

In addition, output written with the "puts" command is written to the HTTP Response stream rather than STDOUT.

Command: createUser

The createUser command creates a new console user. The command takes a number of parameters to correctly define a user in the console:

  • Logon: The console user ID for the new user

  • Name: The full name of the user

  • Password: The initial password for the user

  • Email: The email address of the user

  • Type: The user type. Valid values are: 0 = Administrator, 1 = Standard User, 2 = Super User, 3 = Security User

  • Role: The role name for the user. Can be blank if no role is required.

  • Time Zone: The new user's time zone. Must correspond to the time zone list found in the appendixes of this manual.

  • Additional Auth: The class name of any additional authentication settings. Can be blank if no additional authentication is required. Please note that only basic authentication selections are available. Overrides (such as the number of digits for one-time emails) are not supported. Currently the valid additional auth classes are software.tomorrow.authenticate.OneTimeEmailPlugin and software.tomorrow.authenticate.LocalHostPlugin.

The following script snippet shows an example of how to use this command:

createUser test123 "Test User" test123 [email protected] 1 "" GMT ""

This command can only be executed with administrator or security authority.

Command: deployConfiguration

The deployConfiguration command deploys a specific configuration to a nominated server. Only configurations located in a repository can be deployed using this command. The parameters for the command are Server ID, Repository Name and Configuration Name. The following script snippet shows an example of how to use this command:

deployConfiguration Qwerty "Product Trial" BasicWebTrial

The command will wait for the deployment task to complete before continuing. The deployment does not result in a server restart. The stopServer and startServer commands should be used after this command to ensure that the deployed configuration takes effect. This command is only valid for production servers.

Command: deleteUser

The deleteUser command deletes a user based on a provided user ID. The following script snippet shows an example of how to use this command:

deleteUser super

This command can only be executed with administrator or security authority.

Command: getAudit

The getAudit command retrieves a subset of the internal Tomorrow Software audit log. The following snippet shows an example of how to use this command:

set clause "WHERE ACTIONTIME>1425064936463 ORDER BY ACTIONTIME DESC"
set auditRows [getAudit $clause]
puts "Row count = [$auditRows length]<p>\n"
for { set i 0 } { $i < [$auditRows length] } { incr i } {
    puts "[$auditRows get $i]<p>\n"
}

The above commands retrieve all audit log entries after the Java Time Stamp 1425064936463.

This command can only be executed with administrator or security authority.

Command: getConfiguration

The getConfiguration command reads a specific configuration from a specific repository and provides access to all elements of the configuration (including the ability to update it if the user has the appropriate authority).

The following script snippet shows an example of how to use this command:

set cnf [getConfiguration "Product Trial" BasicWebTrial]
puts "Configuration rule set [$cnf getRuleSet]"
$cnf setTestDataDepth 20000
$cnf update

The above command obtains the BasicWebTrial configuration from the Product Trial repository, outputs the default rule set file name and then sets the maximum number of test records to 20,000 before updating the configuration (writing it to the file system).

The following table provides a list of all of the readable properties on the configuration object:

Method

Return value

getAttributeLabels

An array of strings with the input field labels of the configuration

getAttributeNames

An array of strings with the input field names of the configuration

getAttributeValues

An array of strings with the input field values of the configuration

getContentRuleSet

The file name of the content rule set

getDatabaseAliases

An array of strings with the database aliases of the configuration

getDatabaseDrivers

An array of strings with the database drivers of the configuration

getDatabaseNames

An array of strings with the database names of the configuration

getDatabaseSchemas

An array of strings with the database schemas of the configuration

getDatabaseSystems

An array of strings with the database system names of the configuration

getDescription

The description of the configuration

getDirectory

The directory where the configuration is located

getDoneRuleSet

The file name of the completion rule set

getFileName

The file name of the configuration

getInitRuleSet

The file name of the startup rule set

getInputClass

The class name of the input adaptor used by the configuration

getInputParms

A string with the input parameters passed to the configuration upon startup

getLoopPrevent

The maximum number of chain point interactions before a rule set is considered looping

getName

The configuration name

getPerformanceLevel

The level of performance data collection

0 = Transaction counts

1 = Transaction count and inline time

2 = Transaction count, inline time and URI statistics

3 = All counters

getRuleSet

The base rule set file name

getServerType

The server type

0 = Production

1 = Test

getTestDataDepth

The maximum number of test data collected

getTimerDelays

An array of strings with the timer delay in seconds for each timer rule set of the configuration

getTimerNames

An array of strings with the timer rule set file names for each timer rule set of the configuration

getTimerTypes

An array of strings with the timer rule set types for each timer rule set of the configuration

0 = Real time

1 = Pause

isAutoStart

Set to 1 if this configuration is auto starting, 0 otherwise

isCollectTestData

Set to 1 if this configuration collects test data by default, 0 otherwise

isEchoOut

Set to 1 if this configuration provides an echo of console messages to System.out, 0 otherwise

isFailOpen

Set to 1 if this configuration fails open, 0 otherwise

Each of the above values can also be set using the equivalent setter method (replacing "get"/"is" with "set").

Please note that for any arrays, ALL arrays in a set (attributes, databases, timer rule sets) MUST be set to the same length before invoking update.

The TCL interface only supports updating existing configurations. New configurations cannot be created using TCL and existing configurations cannot be deleted.

Command: getUser

The getUser command reads a specific user and provides access to some elements of that user.

The following script snippet shows an example of how to use this command:

set usr [getUser test123]
puts "User name [$usr getName]"
$usr setRole analyst
$usr update

The above command reads the user test123, outputs the name of that user and then sets the role before updating the user.

The following table provides a list of all the readable properties on the user object:

Method

Return value

getAuth

The class name of any additional authentication. Can be blank.

getCreated

The time the user was first created in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT.

getEmail

The email address of the user

getLastLogon

The time of the user's last logon in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT.

getLogon

The user ID of the user

getName

The name of the user

getRole

The role set for the user (if any)

getTimeZone

The user's time zone. Will contain a value from the time zone list found in the appendixes of this manual.

getType

The user type. Valid values are:

0 = Administrator

1 = Standard User

2 = Super User

3 = Security User

Most of the above values can also be set using the equivalent setter method (replacing "get"/"is" with "set"). The values that cannot be set are: Logon, Created and LastLogon.

This command can only be executed with administrator or security authority.

Command: serverList

The serverList command obtains a list of all configured servers in the console that the user is authorized to view. The response is in the form of an array of server IDs. The following script snippet shows an example of how to use this command:

set srvList [serverList]
puts "Server count = [$srvList length]<p>"
for { set i 0 } { $i < [$srvList length] } { incr i } {
    puts "Server [$srvList get $i]<p>"
}

A sample output from running the above script is as follows:

Server count = 5
Server Console
Server LocalProxy
Server MPServer1
Server Qwerty
Server TestServer1

Command: serverStatus

The serverStatus command is used to interrogate the current status of a server, based on the server's ID. The following script snippet shows an example of how to use this command:

set srvId Qwerty
set srv [serverStatus $srvId]
puts "Server status = [$srv getStatus]<p>"
puts "Server is running = [$srv isRunning]<p>"

The return value from the command is a server status object. The following methods are available on the object:

Method

Return value

getBuild

The base rules build number for the server

isCollectTestData

A flag to indicate if the server is collecting test data.

0 = No

1 = Yes

getConfiguration

The name of the configuration currently deployed on the server

getConfUser

The name of the user that created the current configuration used on the server

getConfVersion

The version of the configuration currently deployed on the server

getDeployErrorCode

Any error code issued (if any) when attempting to deploy the last configuration to the server

getDeployFrom

The repository name from which the configuration was deployed

getDeployTime

The time the current configuration was deployed to the server in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT

getDeployUser

The name of the user that deployed the current configuration to the server

getDescription

The description of the configuration currently deployed on the server

getErrorCode

Any error codes detected on the server. The corresponding error messages are found in the "translation.properties" file for the console application.

getFlightRecorders

A string array with the IDs of any flight recorders in use by the currently deployed configuration

getHost

The host name of the server

getInputAdapter

The class name (identifier) of the input adaptor used for the current configuration

getInputParms

The input parameters provided to the configuration to be used in conjunction with the input adaptor. This is mainly used for file polling servers and in that instance provides the directory that is polled for files. For test servers it provides the input file name to the configuration.

getJavaVersion

The current version of Java used by the server

getLastStarted

The time the Programmable Data Agent was last started in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT

getLastStopped

The time the Programmable Data Agent was last stopped in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT

getLastTransaction

The time the Programmable Data Agent was last invoked in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT

getMajorVersion

The major version number of the Programmable Data Agent

getMinorVersion

The minor version number of the Programmable Data Agent

getOperatingSystem

The operating system and version of the server

isPolling

If the server is polling for data (feed servers)

getPort

The port the server is accepting instructions from

getRevisionVersion

The revision version number of the Programmable Data Agent

getRuleset

The name of the currently deployed rule set

isRunning

Whether the server is currently running (started).

getStatus

The status of the server. 0=Offline, 1=Online

getTestData

The number of available test data lines

isTraceData

If the server has trace data

isTraceMode

If the server is in trace mode

getTransactions

Number of transactions processed since last server start

getVersion

Full version number of the Programmable Data Agent in text format

Command: setCredentials

The setCredentials command is used to set the value of a given field in the credentials vault. The specific vault and field must exist already.

The following script snippet shows an example of how to use this command:

setCredentials KapowSMS UserID Fred

This command can only be executed with administrator or security authority.

Command: startServer

The startServer command is used to start a nominated server.

The following script snippet shows an example of how to use this command:

startServer Qwerty

The command will wait for up to 30 seconds to ensure that the server is actually started. Provided the server starts, the command will return "1". If the server fails to start, then "0" will be returned.

Command: stopServer

The stopServer command is used to stop a nominated server.

The following script snippet shows an example of how to use this command:

stopServer Qwerty

The command will wait for up to 30 seconds to ensure that the server is actually stopped. Provided the server stops, the command will return "1". If the server fails to stop then "0" will be returned.

Command: userExists

The userExists command checks if a given user ID exists. The command returns "0" if the user ID is not found or "1" if the user ID is found. The following script snippet shows an example of how to use this command:

set checkUser admin
puts "User $checkUser exists [userExists $checkUser]"

This command can only be executed with administrator or security authority.

Command: updateApplication

The updateApplication command updates a console application (such as Qwerty, or the console itself). The following script snippet shows an example of how to use this command. In this case the console itself will be updated:

updateApplication console

This command can only be executed with administrator authority.

Command: updateExtension

The updateExtension command updates/installs an extension from the update server (such as the Base Rules or the Http Rules). The following script snippet shows an example of how to use this command:

updateExtension "MaxMind Rules"

This command can only be executed with administrator authority.

Command: updateRepository

The updateRepository command updates/installs a repository from the update server (such as the Product Trial repository). The following script snippet shows an example of how to use this command:

updateRepository "Product Trial"

This command can only be executed with administrator authority.

Command: userList

The userList command obtains a list of all users in the console. The response is in the form of an array of user IDs. The following script snippet shows an example of how to use this command:

set usrList [userList]
puts "User count = [$usrList length]<p>"
for { set i 0 } { $i < [$usrList length] } { incr i } {
    puts "[$usrList get $i]<p>"
}

A sample output from running the above script is as follows:

User count = 3
admin
security
super

This command can only be executed with administrator or security authority.

Using the Push Notification Framework

Push notifications are rapidly emerging as one of the most efficient ways of sending information to users without going through email, SMS or other channels (such as Messenger or Slack).

Push notifications have a very high click-through rate and are supported by all modern browsers and platforms except Apple’s.

The push notification framework provides a simple way to add push notifications to your application with the ability to fall back to alternatives if the user is on an unsupported platform.

A push notification appears as a message in the user’s notification section. For example, in Windows a message could look like this:

Notification on windows

It consists of an icon, a headline and a text body (where supported).

If the user clicks on the message, an event is generated that will open a web page specific to the message and will also send a notification back to the server that the user clicked the message.

There are some restrictions to using push messages:

  1. The web site sending the message MUST be using a secure protocol (https), even during development

  2. The user must be on a supported platform

  3. A set of cryptographic keys must be created to sign messages

The push notification framework helps you manage the last two of those three items. To install certificates within your application, please refer to the product reference.

Please note: This manual will reference the Push Notification Demo repository, which can be obtained from the update server.

Getting started

The push notification framework consists of three rules and a precisely structured HTML page. In the following section we will cover these rules in detail.

Initialize Push Notifications

Initialize Push Notifications rule

This rule does two things:

  1. It either obtains, reads, or creates credential keys to use with the notifications

  2. It initializes a data set used to store notification user information

Server Keys

Keys must be created when none are present in the credentials vault or in the file system. The first time the rule is executed, they are created directly on the target server as two new files:

Server keys

You can choose to simply leave these files on the server (in which case you should also place them in the Data Files section of the repository you are working with and ensure they are deployed using the Register Data Files rule).

The preferred way however is to store the keys in the credential vault. This is a simple exercise of opening the key files with a text editor and copying the text from them to the appropriate keys in the vault:

Maintain credential vault window

Once the keys are in the vault, the files can be removed from the target server.

Subscriber Data Set

The data set created by the rule is named “WebPushSubscribers” and is entirely managed by the framework. You can however query and work with the data set in rules as well if you wish. To do so, you will need to know the field names which are: subscriber, target, endpoint and group.

  • Subscriber refers to the user id within your application for a logged in user

  • Target is the type of communication. For example: Push, Email, SMS, etc.

  • Endpoint is the key to sending: a push key, email address, phone number, or whatever else validly defines where the message should end up, depending on the target

  • Group is the target group. It can be the same as the subscriber (for direct communication) or it can be a subscribed group (such as offers, recalls, alerts etc).
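As a sketch only, a record in this data set could be pictured as a plain object with those four fields, and a rule querying the set might collect endpoints per group. The values and the helper name below are invented for illustration; only the field names come from the framework:

```javascript
// Hypothetical record from the "WebPushSubscribers" data set. The field
// names (subscriber, target, endpoint, group) are the framework's; the
// values are invented for illustration.
const record = {
  subscriber: "user-42",            // application user ID of the logged-in user
  target: "Email",                  // delivery channel: Push, Email, SMS, ...
  endpoint: "user42@example.com",   // push key, email address, phone number, ...
  group: "recalls"                  // the subscriber ID, or a subscribed group
};

// A rule querying the set might, for instance, collect every endpoint
// for a given group and target (the helper name is ours):
function endpointsFor(records, group, target) {
  return records
    .filter(r => r.group === group && r.target === target)
    .map(r => r.endpoint);
}
```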

The Push Notification Controller

The Push Notification Controller rule manages everything related to interacting with the browser to ensure push notifications can be subscribed to and delivered. It automatically generates correct and tested JavaScript pages and a default icon for the rest of the framework to use.

Push Notification Controller rule

Even though the controller manages all these interactions, you always have the option of doing your own additional processing (for example, when a user subscribes or unsubscribes, or performs a click-through on a notification).

The controller only needs a few properties:

controller properties
  • The Database is the database where subscriber information should be stored.

  • The Subscriber is the user ID of the user involved in the interaction. Note that generally speaking it is best to have a user logged in so that you can target specific users, rather than just a generic group of people.

  • The Default URL to open is a fallback mechanism for browsers that do not yet support a target URL to open for each message. In that case, clicking on the notification should send them to a sensible page (such as a login page).

  • The Is default URL also welcome page setting matters when the user already has the welcome URL open without an explicit page (for example https://example.com/ rather than https://example.com/index.html). In that case, clicking the notification should focus the existing welcome page rather than open index.html.

Wiring it up

The Push Notification Controller is designed to be the last rule in our normal application flows. In the sample repository that looks like this:

Controller structure

It is important to note that the sample repository is cut down to an absolute minimum for maximum clarity. Your production repository should follow the guidelines set out in the Best Practices manual.

The demonstration repository entry point

To help you experience push notifications we have created a simple entry page called index.html. It presents as follows:

index.html in the browser

Returning to this page logs out any user. To log in as one of the two users just click the relevant button.

No passwords required.

Create a subscription page

To go along with the Push Notification Controller, you will need a subscription page served up as content. There is a very minimal sample page in the Push Notification Demo repository named subscribe.html. It presents as follows (after checking that notifications are possible):

subscribe.html page

Should your browser NOT support push notifications, you will receive the following page instead:

Message for not supporting the push notifications on that specific browser

And finally if you are trying with something like Internet Explorer you will receive this message:

Message for not supporting the old browser used to open the page
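The three states above come down to standard browser capability checks. As a minimal sketch (the decision logic is our assumption, not the framework's actual generated code; the Web APIs tested are standard):

```javascript
// Sketch of the capability check behind the three subscribe.html states.
// "serviceWorker" and "PushManager" are the standard Web APIs involved;
// the exact logic of the generated script is an assumption on our part.
function pushCapability(win) {
  if (!("Promise" in win)) return "tooOld";       // pre-ES6 browsers such as Internet Explorer
  const supported = "serviceWorker" in win.navigator && "PushManager" in win;
  return supported ? "supported" : "notSupported";
}
```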

All of the above sections are simply DIVs in the sample HTML file:

subscribe.html file content

The important thing to understand is that the various IDs of each DIV must remain in place.

You can change the DIVs to <section> or <span> tags (or whatever you like), but the IDs must be present so the Push Notification Controller rule can take charge of the page in the background.

Mandatory IDs

There are several critical IDs that must remain in place. They are as follows:

ID

Function

webpushSupportedNotSubscribed

This ID is used to identify a section that is displayed when push notifications are supported, but the browser is not yet subscribed.

webpushNotSupportedNotSubscribed

This ID is used to identify a section that is displayed when push notifications are not supported, but the browser is not yet subscribed.

webpushSupportedButBlocked

This ID is used to identify a section that is displayed when push notifications are supported, but the user has previously declined permission to send notifications.

webpushSupportedButError

This ID is used to identify a section that is displayed when push notifications are supported, but an unexpected error was encountered when trying to register.

webpushSubscribed

This ID is used to identify a section that is displayed when the user is already subscribed to notifications

webpushGroups

This ID is used to identify a section that displays a list of groups that the user can choose to subscribe to alongside the individual subscription

webpushChecking

This ID is used to identify a section that is displayed while the browser is checking the availability of push notifications

webpushTooOld

This ID is used to identify a section that is displayed if the browser is too old to support the push notification syntax.

webpushStyleDisplayBlock

This hidden ID is used to identify a value that will be used to turn items with display="none" visible. The default is "block", but should you need other values (such as "inline-block") you can change this value to achieve that effect.

webpushSubscriber[target]

These hidden IDs are used to identify a value that will be used as the target value for any alternative notification methods. For example, if both Email and SMS are available, the IDs:

webpushSubscriberEmail

webpushSubscriberSMS

Must exist with appropriate values (email address and phone number).

webpushSubscribeButton

This ID is used to identify the subscribe button. The button can in theory be something other than a button, but it must support the disabled property.

webpushUnsubscribeButton

This ID is used to identify the unsubscribe button. The button can in theory be something other than a button, but it must support the disabled property.
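Putting the mandatory IDs together, a minimal subscribe page might be structured as follows. This is a sketch only: the element types, texts and layout are placeholders, and only the id attributes matter:

```html
<!-- Sketch only: every element type and visible text is a placeholder. -->
<div id="webpushChecking">Checking notification support...</div>

<div id="webpushSupportedNotSubscribed">
  <div id="webpushGroups"><!-- group checkboxes go here --></div>
  <button id="webpushSubscribeButton" disabled>Subscribe</button>
</div>

<div id="webpushSubscribed">
  <button id="webpushUnsubscribeButton" disabled>Unsubscribe</button>
</div>

<div id="webpushNotSupportedNotSubscribed">Push is not supported in this browser.</div>
<div id="webpushSupportedButBlocked">You have blocked notifications.</div>
<div id="webpushSupportedButError">An unexpected error occurred.</div>
<div id="webpushTooOld">This browser is too old for push notifications.</div>

<!-- Hidden values such as webpushStyleDisplayBlock and the
     webpushSubscriber[target] IDs would also go here when needed. -->
```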

Radio and Checkbox Groups

In addition to the IDs, there are two named radio button groups:

radio inputs

and:

notificationsubscribeoption radio

Notice the slight difference in the name that separates the two groups. It’s a common mistake to copy from one group to the other and forget to correct the name.

Alongside each radio button that is NOT a Push notification, you will need to specify a hidden value for each:

notificationsubscribeoption radio

This provides the framework with information on how to define the destination of the non-Push notifications.

The next section is the checkboxes to enable additional groups the user can subscribe to.

snippet from subscribe.html file

You can have an unlimited number of these groups. Each selection by the user will automatically subscribe them to that group and notifications can be sent to all subscribers.

Buttons

The framework requires two buttons on the page:

two buttons

Both these buttons should be disabled in the HTML by default. The framework will enable the right button at the right time.

Connecting to the framework

The final step is to connect the HTML to the framework. This is done by importing a JavaScript file that is dynamically generated by the framework. You do not need to have this file anywhere in your repository; it is fully generated along with all dependencies.

importing js file

And with this, you now have a fully functioning push notification page, so it is time to look at how to send them.

Sending Push Notifications

Sending push notifications involves either sending an individual message or sending notifications to an entire group of people. In the demonstration repository there is a sample sending page named send.html:

Send notifications by send.html

This page enables you to send individual push notifications to the two users, or you can send a recall notification to all users that have subscribed to recalls.

The page shown after the message is the page that will be opened when the user clicks the notification.

The Send Push Notification rule

All notifications (regardless of the method of sending) can be managed with the Send Push Notification rule:

Send Push Notifications rule

There are several options to define how the push notification looks and acts:

Rule properties
  • The Database relates to the location where subscriber data is stored.

  • The Audience should be either the internal user ID that the application can use to identify a user or a notification group name.

  • The Sender Email should be the email of someone who can assist with technical queries from the external push notification servers used to send the notifications. Those servers are managed by organizations such as Google and Microsoft and the email is used for relaying complaints or warnings.

  • The Expiry is provided in minutes with a maximum of 24 hours allowed.

  • The Title is the key short description of the notification

  • The Icon is an icon to show to the user when the notification is displayed. If no icon is provided a default will be displayed.

  • The URL to open is the URL that will open when the user clicks on the notification. Not all browsers support this and should they not, the default URL to open from the Push Notification Controller will be used instead.

  • The Message allows for a longer notification message to be displayed. Not all browsers support this.

  • The Tag is a value that can be used to avoid sending the same notification to the user over and over. Messages with the same tag name will only appear once in the user’s notification system. The related Re-notify option determines whether the user should get another notification as a result of an unopened tag group or not.

  • Vibrate can be used to control the vibrations of the user’s device. It is specified as a series of on/off pairs in milliseconds. For example: “100,200,100,200” would mean vibrate for 100ms, pause for 200ms, vibrate for 100ms, pause for 200ms.
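These options map closely onto the standard Web Notifications API. The sketch below shows how the comma-separated vibrate pattern could be turned into the array the browser expects; parseVibrate is our own helper name, while the showNotification() option names in the comment (body, icon, tag, renotify, vibrate) are the standard ones:

```javascript
// Turn the rule's comma-separated vibrate value into the on/off
// millisecond array the Web Notifications API expects.
function parseVibrate(pattern) {
  return pattern.split(",").map(ms => parseInt(ms.trim(), 10));
}

// Inside a service worker, the notification would be shown roughly like
// this (not runnable outside a service worker context):
//
//   registration.showNotification(title, {
//     body: message,                            // the longer Message text
//     icon: iconUrl,                            // the Icon, or the default
//     tag: tag,                                 // collapses repeated notifications
//     renotify: true,                           // the Re-notify behaviour
//     vibrate: parseVibrate("100,200,100,200")  // vibrate/pause pattern
//   });
```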

Message personalization

This rule notably includes the ability to personalize the message being sent and permits the sending through alternative channels.

For every message, the following variables are available: NOTIFICATION_TARGET, NOTIFICATION_USER, NOTIFICATION_ENDPOINT, NOTIFICATION_MESSAGE, NOTIFICATION_URL and NOTIFICATION_TITLE

The rule writer can use the first 3 to determine where to send a message and identify the relevant user being notified – and can use the last 3 to customize the message.

In our demonstration repository we do this by inserting the user name into the message for group messages:

Personalization for a Send Push Notification title

The Personalize title rule has the following properties:

properties

This is based on the title being sent to the recall group looking like this:

message

So for each message being sent, the Personalization chain point will insert the actual user name into the title.
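In effect, the Personalization chain point performs a per-recipient substitution on the title. A sketch of that idea, assuming an illustrative [user] placeholder (not the rule's actual syntax) and our own helper name:

```javascript
// Substitute the per-recipient NOTIFICATION_USER value into the outgoing
// title. The "[user]" placeholder syntax is illustrative only.
function personalizeTitle(template, vars) {
  return template.replace("[user]", vars.NOTIFICATION_USER);
}
```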

Alternative notification methods

If the user that signed up for notifications did not have a supported browser, we can offer alternatives (such as email, SMS or other targets).

To support the sender managing those alternative channels for us, the Alternative chain point is called whenever a target is different from “Push”. In our demonstration repository we showcase this with a simple output to the console:

Alternative method for Send Push Notification

However, the rule writer has access to all six variables listed previously at this point in the flow and can use them to send the notification to the right target using the rules most relevant for that:

properties

Raspberry Pi with PiFace Reference

Introduction

Welcome to the Tomorrow Software reference for interacting with the PiFace Digital 2 I/O board for Raspberry Pi. In this guide we will provide instructions on how to set up a Raspberry Pi and PiFace combo to accept button input and control a few LEDs and relays.

Licensing

The licensing of the PiFace Extension is the same as most other extensions that we provide. You simply need a valid Tomorrow Software license.

The PiFace Extension uses the Pi4J open source (LGPL V3 license) library. This is a free unencumbered license for private and commercial use.

Prerequisite

It is assumed in this document that you have prior experience with Tomorrow Software and that concepts such as server definitions and rule writing are familiar to you.

Getting started

The very first thing you need to get started is some hardware. The following photo shows the most essential components:

most essential components

What you need is as follows:

  • HDMI cable plus a TV/monitor with HDMI input (not shown)

  • Micro-USB power supply (Preferably 2A)

  • Raspberry Pi 2 board

  • Case designed for the Raspberry Pi and PiFace together (optional)

  • Multi-meter (Optional but really handy)

  • USB Wi-Fi dongle

  • Raspberry Pi Noobs SD Card

  • PiFace Digital 2 board

  • Standard USB mouse

  • Standard USB keyboard

Hardware Assembly

The assembly of the hardware is incredibly simple:

  • Mount the PiFace on top of the Raspberry Pi board

  • Insert the Wi-Fi dongle, keyboard and mouse into the USB slots

  • Remove the micro-SD card from inside the Noobs SD pocket and insert it into the bracket on the underside of the Raspberry Pi

  • Connect the HDMI cable from your Raspberry Pi to your monitor

  • Connect the power supply and wait for it to boot up

Initial configuration

Once the operating system has booted, you will see the following image:

Raspberry PI setup window

Using your cursor keys, space bar to select and Tab key to navigate options, set up your time zone, locale and select the option to boot to desktop.

Enabling SPI

The PiFace board communicates with the Raspberry Pi over an interface known as SPI. This interface is not enabled by default, so we need to do so. From within the configuration tool, select Advanced Options and SPI.

SPI

Enable SPI and load by default. Once done, return to the main menu, hit the Esc key and type:

sudo reboot

This will force a reboot, and after startup you will end up in LXDE:

LXDE

From here, we need to configure our Wi-Fi connection. Click on Preferences then Wi-Fi Configuration.

wpa_gui

Next click on Scan. After a short while, your Wi-Fi network should appear and you can double-click on it to provide a password. Once done, simply click on Add and your internet connection will be established.

Wait for the IP address to show up and note it down for later.

Updating and upgrading

Because our project requires the latest drivers and software, the next step is to update the operating system.

Open a terminal window and type the following commands:

sudo apt-get update
sudo apt-get upgrade

These two commands will take quite a while to complete, depending on your internet speed. Please ensure both tasks complete without errors before continuing.

Installing Pi4J

The Tomorrow Software PiFace extension relies on an open source project known as Pi4J. We need to install this next. At the command line, type:

curl -s get.pi4j.com | sudo bash

Optional USB drive support

Next, we need to get Tomorrow Software installed. There are two options:

  • Download it from the web

  • Install it from a USB thumb drive

If you have received the software on a USB thumb drive, you need to perform some additional configuration. If you downloaded the image, please skip to the next section.

In the terminal window, create a folder where the USB drive will be mounted:

mkdir usbdrv

Next, we need to edit the file system table:

sudo nano /etc/fstab

Add the following line to the end of the file:

/dev/sda1 /home/pi/usbdrv   vfat  uid=pi,gid=pi,umask=0022,sync,auto,nosuid,rw,nouser 0   0

IMPORTANT: This has to be ONE line in the file

Press Ctrl-X and a capital Y, followed by Enter to save.

Then reboot:

sudo reboot

Once the reboot has completed, insert the thumb drive and make sure you can access it.

Allow root access

Tomorrow Software must be installed as the root user because it uses privileged ports such as 80 (HTTP) and 443 (HTTPS).

To achieve this, you need to be able to switch to root using the su command.

To enable root access, type the following command:

sudo passwd root

Pick a good password and enter it twice.

Starting the file manager

We are now ready to start the file manager in root mode to copy the image into place.

At the command prompt, type:

su

Enter the password you just set up, then type:

gksudo pcmanfm

This will start the file manager as root.

file manager as root

Locate the “Tomorrow-Software-Server-10.0.0.zip” image that you downloaded or that is on your thumb drive, then right click and select Copy.

Change the folder to /opt and create a new folder named “local”.

Copy the zip file to this location, right click it and select “Extract Here”.

In the terminal window (as root), create a symbolic link to the distribution as follows:

cd /opt/local
ln -s Tomorrow-Software-Server-10.0.0 Tomorrow

Setting the software to auto-start

Right click the file tomorrow.sh in /opt/local/Tomorrow/server/bin, select Properties, then the Permissions tab and make sure Execute is set to “Only owner and group”.

Copy the file tomorrowstart from /opt/local/Tomorrow/server/bin to /etc/init.d.

Right click the file, select Properties, then the Permissions tab and once again make sure Execute is set to “Only owner and group”.

Then enter the following commands in a terminal window (logged in as root).

cd /etc/init.d
update-rc.d tomorrowstart defaults

Starting the instance

Everything is now ready for the first run of the Tomorrow Software engine. Reboot your Raspberry Pi. You can either do this from the menu or by typing:

sudo reboot

Once rebooted, wait for the CPU to settle down after startup – it can take quite a while (2-3 minutes on a Pi 2). Do NOT attempt to log in during this phase.

Defining the console type

Logging in to the instance should happen from another computer. The best way to do this is to modify that computer's hosts file to give the Raspberry Pi a valid name. For example: homeauto.local
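For instance, if the Raspberry Pi reported the address 192.168.1.50 when you noted its IP earlier (substitute your own address), the hosts entry would look like this:

```
192.168.1.50    homeauto.local
```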

Then, simply open a browser and point it to the following URL:

http://homeauto.local/console

Log in using the user admin and the password admin. You will access the main console. Select Administration then Console Setup:

Console Setup in Administration

Change the console type to “Forwarding Proxy without console” and click on Save.

This will shut down Tomorrow Software on the Raspberry Pi. Give it a minute or two to complete, then return to the Raspberry Pi and reboot it.

Setting up the server definition

At this point there will no longer be a console running on the Raspberry Pi. It is instead required to be managed from another Tomorrow Software console instance. To enable this, we need to log in to that alternate console instance and set up a new server definition:

Basic tab

As well as the basics above, we also need to set up the protected hosts, remove the client IP restrictions and disable the browser proxy:

Forwarding tab

Make the required changes and click on Save.

If all your settings are correct, your instance will now show green in the Servers section:

Servers

Required Updates

The next step is to update/install the following components via the update server:

PiFace Rules

Testing the setup

It is now time to test all the setup work. We will start by turning on LEDs on demand.

Switching LED rule set

From within the Tomorrow Software console, create a new repository named “LED Test”, then create a new rule set named “LEDSwitch” in that repository.

Hit update on the rule set and create the following:

LEDSwitch structure

Properties are:

properties
properties
properties
properties

Click on the Save button to save the new rule set.

Test configuration

Return to the console to create a new configuration in the LED Test repository:

General tab
Input Source tab

Click on Create to create the configuration.

Deployment and Testing

It is now time to deploy the configuration to the PiFace Server. Deploy the configuration selecting the “Restart immediately” option.

Wait for the deployment to complete. This can take several minutes, especially the first time. Once the deployment is complete, return to a browser and enter the following URL:

http://homeauto.local/?onoff=on&LED=4

Provided you have followed every step above, LED 4 on the PiFace board will now turn on. You can turn it off using:

http://homeauto.local/?onoff=off&LED=4

Responding to button presses

When a button is pressed or released, this needs to trigger an event. For this purpose, there is a rule named “PiFace Button Listener”, which applies to each button.

You place these rules in a startup rule set.

The following shows a startup rule set that will turn LED 1 on when button 1 is pressed and turn it off when button 2 is pressed:

Buttons structure
properties
properties

We also need to modify the configuration to accept the startup rule:

General tab

Deploy the configuration to the PiFace server and once again enter the following URL in a browser:

http://homeauto.local/?onoff=off&LED=4

This will trigger the Programmable Data Agent startup and activate the button listeners. Now try to press button 1 on the PiFace. LED 1 will turn on. If you press button 2, LED 1 will turn off.

Notice that LED 1 is linked to a relay. You can hear it click when the LED turns on or off.

Two Factor Authentication

With online fraud levels ever-increasing, most if not all companies are introducing additional methods of identifying their customers. One popular approach is via a method known as two-factor authentication (or 2FA).

Two-factor authentication consists of requiring online users to identify themselves through an additional method after they’ve logged in with their standard username or password. This could be via the use of a random token generating device or app, or by sending a one-time password to the user’s email address or mobile phone.

Two-factor via an SMS token sent to a user’s mobile phone remains popular, and the cost to company and customers is minimal.

One point to be aware of, though, is that the organization must be reasonably confident that the mobile number data they hold does in fact belong to their customers. It would be prudent to create additional rule sets triggered when a customer attempts to change their mobile phone number; however, this is outside the scope of this case study.

In this case study we will outline what is required to deploy a two-factor SMS authentication request seamlessly into an existing application using in-built rules that ship with Tomorrow Software.

Planning the rules

The first step of any rule writing is to determine what to do and how it can be accomplished. Drawing flow charts can be extremely helpful.

Below is a basic example flow chart of how Tomorrow Software may implement a two-factor SMS request.

two-factor SMS implementation

Before beginning, you will need to answer the following:

  1. Where is the login page and where does it go to authenticate the user?

  2. Where is the data that holds the user’s mobile phone number?

  3. What should the rule set do if there is no mobile phone number for a user?

  4. What are the technical details for sending SMS messages?

  5. How long should the Programmable Data Agent wait for a correct response?

  6. How many times should the rules allow someone to enter an incorrect response and what should happen after this given amount?

In this case study we will use the in-built SMS aggregator Kapow to send our messages. Your own environment may use internal SMPP calls or different aggregators, which may require you to write your own extension.

Extension writing is outside the scope of this case study but is relatively straightforward for a Java developer.

Getting started

Start by creating a new repository called “Two Factor Example”.

It’s recommended that the processes involved in sending a two-factor message, checking the existence of a two-factor request and checking the response against the stored value, be separated into different rule sets. This provides ease of maintenance in the future, and also allows you to turn two-factor authentication on and off, or change out functionality quickly and easily.

So, keeping this in mind, you should create the following blank rule sets:

  1. TwoFactorLoad – this rule set will be loaded initially and determine whether a two-factor request should be made based on the user’s login status.

  2. TwoFactorCheck – this rule set will check whether there is an existing two-factor request in place and display the embedded two-factor response page if required.

  3. TwoFactor – this rule set will generate the random token and embed it into the message template.

  4. TwoFactorLookup – this rule set will look up the user’s mobile phone number from the database.

  5. TwoFactorSend – this rule will send the message to the user’s mobile phone via Kapow.

Designing the user interface elements

With our two-factor authentication, we need to provide a page that will allow users to enter the token they receive via SMS. This page only needs to be very simple, with an introduction explaining what the user needs to do and a form field for them to enter their token. We will also need two additional pages:

  • One for an incorrect two-factor response,

  • And one for a two-factor time out, since the user will be given a limited time to complete the task.

Within your own web application environment, you will wish to design your pages to fit in with the site’s look and feel, but for this example we will keep it very simple.

You can use the inbuilt content editor to create your pages. To do so, follow the steps below.

  1. Expand the “Content Files” menu item and select “Two Factor Example”.

  2. Create a new file called “twofactor.html”.

  3. Copy the below HTML to your clipboard:

<html>
<head><title>Two Factor</title></head>
<body>
<h3>Two Factor Authentication Request</h3>
<p>A two factor token has been sent to your nominated mobile device. You have five minutes to enter the token in the field below.</p>
<p>This process is a part of our ongoing efforts to prevent online fraud. We apologise for any inconvenience caused.</p>
<form action="twofactor.html" method="POST"> <strong>Two Factor Token: </strong> <input name="tokenresponse" type="text" /> <input type="submit" value="Send Token" /> </form>
</body>
</html>
  4. Update the "twofactor.html" file from the console. The embedded HTML editor will open.

  5. Click on the HTML button to go to the HTML text.

  6. Paste the HTML shown above into the editor and click "Save".

  7. The page should now look something like this:

Our two factor authentication form
  8. Continue the above process for the following two files. Create new content files called:

    1. twofactorerror.html

    2. twofactortimeout.html

  9. As per above, update each file, click the HTML button and paste the following HTML for each file:

twofactorerror.html

<html>
<head><title>Two Factor</title></head>
<body>
<h3>Two Factor Authentication Error</h3>
<p>The response provided was not correct. Your session has been invalidated. Please log on again.</p>
</body>
</html>

twofactortimeout.html

<html>
<head><title>Two Factor</title></head>
<body>
<h3>Two Factor Authentication Timeout</h3>
<p>Sorry. It took too long to respond to our request. Please try again.</p>
</body>
</html>
  10. Save your files. Your file structure within Content Files should now look as follows:

Saved files
  11. In our example, File Reader rules will be used to read these HTML files. Therefore, download and then upload each file separately from Content Files to the Data Files repository. All files used by File Reader rules must be accessible to the Programmable Data Agent from the Data Files location.

SMS Token Message

Before we begin writing our rule sets, there is one more data file we will create. This file will be a plain text file that will contain the token and SMS message that will be sent to our users.

Begin by creating a new text document in Notepad. Copy and paste the following text into your blank document.

Your two factor token for XYZ Company is [token]. Please enter this token into our website to continue. If you are not currently logging into our website, please contact our customer service team on 01234 5678.

Save the text document as “twofactor.txt”.

Next, go to the “Data Files” section of your Tomorrow Software console. Select the “Two Factor Example” repository from the drop-down list and click the “Browse” button to select the file just created.

Next, click the “Upload” button to upload your file to the console. All files should now be saved within Data Files as follows:

saved files

Two-factor Authentication Rule Sets

As mentioned above, we have five rule sets to deal with a two-factor authentication request. Although all functionality could be contained within a single rule set, we have split it into discrete chunks that each handle a different aspect of the process.

TwoFactorLookup Rule Set

This rule set will handle looking up the user’s mobile phone number from our local database.

To begin with, use the SQL Lookup rule to look up the user’s mobile number in our USERS database. In your web applications, of course, the database, table and field names will differ, but in this example, we are using a database called USERS with a table called “Users” looking for a field called “mobile” where the field “userid” is equal to the variable “userId”.

TwoFactorLookup rule set

Examine the above image to see how we have stored the result from the field “Mobile” into a variable called “MOBILE”. If the record is found, we use the If Condition rule to check that there is actually a value in the MOBILE variable – if there is, we exit the rule set with the value “Continue”. Otherwise we exit with the value “Not Found”.

You can find the Exit Rule in the “Flow” group of rules.

TwoFactorSend Rule Set

This rule set handles sending the token to the user’s mobile handset. This token will be set in the TwoFactor rule set in the variable we will name TOKEN.

The user’s mobile number, as you have seen, has been set in the TwoFactorLookup rule set.

We will use the File Reader rule to read the twofactor.txt file we created earlier into a variable.

TwoFactorSend rule set

Next, we will replace the token with the actual token created by our “TwoFactor” rule set by using the String Replacer rule.

token replaced in the rule set properties
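What the String Replacer step amounts to is substituting the literal [token] placeholder in twofactor.txt with the generated TOKEN value. A sketch with our own helper name:

```javascript
// Replace the literal "[token]" placeholder from twofactor.txt with the
// generated token value.
function insertToken(template, token) {
  return template.replace("[token]", token);
}
```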

Then we will use Kapow to send the message to the mobile number we found in the “TwoFactorLookup” rule set.

Send Kapow SMS rule set added

IMPORTANT: You will need your own Kapow username and password in the credentials vault to use the service.

Next, we exit the rule set with either “Continue” for a successful send, or “Failed” for a failed send.

TwoFactor Rule Set

This rule set will initialize a two-factor request and save the following variables to the system: a flag that a two-factor request is in progress, what the token actually is, and what the time limit is for the request.

To begin this rule set, we need to set a time stamp as an expiry and create a random token. Next, we need to pass through the TwoFactorLookup and TwoFactorSend rule sets we created earlier.

Use the Timestamp rule found in the “Variable Marking” group followed by the Calculation rule found in the “Math” group to create a time limit.

TwoFactor rule set

Note that timestamps are in milliseconds, so we need to add 300,000 to the current TIMESTAMP variable to get a time five minutes into the future.

Next, we will create a random numeric token by using the Random Number rule, also found in the “Variable Marking” group. Create a random number with 8 digits and save it to a variable named TOKEN.

Random Number block added
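The arithmetic behind the Timestamp, Calculation and Random Number steps can be sketched as follows. Timestamps are in milliseconds, so five minutes is 300,000 ms, and an 8-digit token lies between 10000000 and 99999999. The helper names are ours:

```javascript
// Five minutes expressed in milliseconds, as added by the Calculation rule.
const FIVE_MINUTES_MS = 5 * 60 * 1000; // 300000

// Expiry = current timestamp plus five minutes.
function makeExpiry(nowMs) {
  return nowMs + FIVE_MINUTES_MS;
}

// An 8-digit random token: 10000000..99999999, so no leading zero is lost.
function makeToken() {
  return Math.floor(10000000 + Math.random() * 90000000);
}
```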

Now we can look up the user’s mobile number and send the SMS message to their phone. To do this, use the TwoFactorLookup and TwoFactorSend rule sets from the “Rule Set” group.

We must remember to set the session variables that tell us a two-factor request has been sent, what the time limit is, and what the token is.

First though, we need to set a variable TWOFACTOR to “Y” to tell us that we are in the middle of a two-factor request. Use the Set Variable rule to do this.

Set Variable properties

Next, we can use the HTTP Session Writer rule set to assign the three variables to the session.

HTTP Session Writer block added

Finally, we need to display the two-factor response page to the user.

To do this, we must first save the HTTP request so that later on, if the user enters the correct token in a timely manner, we can restore the application to its normal flow. Use the HTTP Request Saver rule set to do this.

Next, we use the File Reader rule to read our “twofactor.html” file into a variable for display. We will call this variable RESPONSE.

HTTP Request Saver and File Reader blocks added

Finally, we just need to display this content back to the user, followed by a Set Completed rule to tell the system not to go any further.

Set Completed rule added

TwoFactorCheck Rule Set

This rule set will check whether or not a two-factor request is in progress, and deal with any responses or timeouts the system may encounter. This rule set will use a combination of rules we have previously encountered.

The first thing to check is whether or not the time limit has passed.

To do this, we create a new timestamp called TIMESTAMP_NOW and subtract it from the existing expiry TIMESTAMP. If the remaining time TIME_REMAINING is greater than zero, we know the two-factor session is still valid. If not, we will read the “twofactortimeout.html” file and respond back to the user.

TwoFactorCheck rule set
If Condition rule added
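The timeout check can be sketched in plain code (illustrative only; TIMESTAMP here is the expiry value that the TwoFactor rule set placed five minutes into the future):

```python
import time

def two_factor_still_valid(timestamp_ms):
    """Return True while the two-factor window has not expired.

    TIMESTAMP holds the expiry (creation time + 300,000 ms), so the
    request is valid while the remaining time is still positive.
    """
    timestamp_now = int(time.time() * 1000)
    time_remaining = timestamp_ms - timestamp_now
    return time_remaining > 0
```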

If there’s still time left on the authentication process, we then need to check whether or not a response has been entered, and if it has, whether or not it is the correct one.

In our HTML form we set the field name to “tokenresponse” so this is the name of the variable we must check.

If Condition rule for checking whether a response has been entered

If there is a value, then we check it against the variable we set earlier called “TOKEN”. If there is no value, or the value is incorrect, we will use the File Reader rule to read the “twofactorerror.html” file and display back to the user.

Additionally, we will reset the TWOFACTOR variable so that the system knows not to check again.

Optionally, we may redirect the user to a specific logout page, but in this example, we will not do this.

more rules for the TwoFactorCheck rule set

If the user has entered the correct response, we will reset the TWOFACTOR variable to “X” so that the rule sets know that the user has already been authenticated.

Finally, we will use the HTTP Request Restorer to place the user back into the original application flow.
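The response-handling decisions above can be summarised in a small sketch (Python for illustration; the variable names follow the text, and the return value stands in for the session update the rules perform):

```python
def check_token_response(tokenresponse, token):
    """Mirror the TwoFactorCheck decisions described above.

    Returns the new value for the TWOFACTOR session variable:
    "X" for a correctly authenticated user, "" (reset) when the
    response is missing or wrong.
    """
    if tokenresponse and tokenresponse == token:
        return "X"  # correct: restore the saved HTTP request
    return ""       # wrong or missing: show twofactorerror.html
```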

Final structure for TwoFactorCheck rule set

TwoFactorLoad Rule Set

Lastly, we will create the TwoFactorLoad rule set, which brings together all of the previous rule sets. This rule set determines whether we need to check for a two-factor request, which is only necessary if a user has been authenticated by the system, and only on non-media content (for example, not images, stylesheets, JavaScript, et cetera).

TwoFactorLoad rule set

Using the Name Splitter rule we can split the URI variable to determine the extension.

In our example we are running JSP pages, so we only want the rule set to continue if the content has the extension “jsp” and the user is currently logged in.
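As a rough sketch of what the Name Splitter gives us (Python, purely illustrative; the real rule exposes the URI parts as variables):

```python
def uri_extension(uri):
    """Roughly what the Name Splitter rule extracts: the extension of
    the final path segment, ignoring any query string."""
    path = uri.split("?", 1)[0]
    name = path.rsplit("/", 1)[-1]
    return name.rsplit(".", 1)[-1] if "." in name else ""
```

With this, only requests ending in "jsp" would continue into the two-factor checks.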

There are several ways to determine if a user is logged in, and which method you use will be dependent upon your specific web application. There may be a cookie or session variable that we can read, or perhaps your web application has a specific URI or query string for pages that are available to logged in users only.

In this case study we will assume that a cookie with the user’s id has been set on login.

We will use the Http Request Tracker rule to expose all cookies. The rule actually exposes all request information into separate variables, but in this case, we are only interested in the “userId” cookie.

HTTP Request Tracker rule added
  • If the userId cookie is not set, we will simply exit the rule set.

  • If the userId cookie is set, we must find out whether we need to initiate a two-factor request, check a two-factor request in progress, or do nothing because the two-factor request has already been successfully processed.

First, we will use the Http Session Reader rule to place the relevant session variables into variables our rule sets can query. We will store the TWOFACTOR, TIMESTAMP and TOKEN session variables into local variables.

properties

Next, we use the Switch rule to check the contents of the TWOFACTOR variable.

This is the variable that tells us exactly what we should do.

If the variable is not set, then we need to initiate a two-factor request. If the variable is set to “Y” then a request is already in progress, so we need to look for a token response or time out. If the variable is set to “X” then we know the user has already successfully performed the two-factor authentication, and we can pass them back to the application.
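The Switch logic can be sketched as follows (Python for illustration; the returned labels are hypothetical names standing in for the chain points and rule sets involved):

```python
def route_two_factor(twofactor):
    """The Switch rule's three-way decision on the TWOFACTOR variable."""
    if twofactor == "Y":
        return "TwoFactorCheck"  # a request is already in progress
    if twofactor == "X":
        return "SetCompleted"    # already authenticated: pass through
    return "TwoFactor"           # not set: initiate a new request
```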

TwoFactorLoad rule set

Use the “Add Chain Point” button to add the “Y” and “X” points to the Switch rule.

Then, connect each chain point to the relevant rule set (found in the “Rule Sets” group) or set completed for already authenticated users.

Setting up the external database

Before you can deploy your rule set, you need to ensure that your database server is set up correctly, assuming that you need to retrieve the user’s mobile number from an external database.

In the following example, we will connect to a MySQL database – however, the process is similar for all JDBC drivers.

The Tomorrow Software Server ships with the Derby database driver, but you can easily add new database drivers to the application. The first thing to ensure is that the driver for your database is available on the class path of the program or application that is running Tomorrow Software.

For the Tomorrow Software Server itself, the location is /server/lib/ext/jdbc (we recommend that you create a folder in that location named mysql and that the driver jar file is placed in there).

The MySQL JDBC driver is available from http://dev.mysql.com/downloads/connector/j/

Next, you need to create the Database Connector in Tomorrow Software by clicking the Database Connectors link on the menu.

Simply enter the class name, URL prefix (i.e., the location of the primary server to access), username and password required to access the database.
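For a MySQL Connector/J setup, typical values would look something like the following (the host, port, database name and credentials are placeholders for your environment; older Connector/J 5.x drivers use the class name com.mysql.jdbc.Driver instead):

```
Class name : com.mysql.cj.jdbc.Driver
URL prefix : jdbc:mysql://dbhost:3306/userdb
Username   : appuser
Password   : ********
```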

Click “Create” and your database is ready to access.

Create a MySQL database

Setting up the configuration file

Finally, you can set up your configuration file. Click the Configurations menu and select the “Two Factor” repository from the drop-down list. Enter some basic information about the rule to load and the databases required.

The following screen shots show the information required for the “General”, “Input Source” and “Databases” tabs.

Creating new Configuration
Input source tab

For the “Databases” tab, click the “+” icon to add a database, type the name of your database and select our newly created MySQL driver from the list.

Databases tab

You can now click the “Create” button to create the configuration file. Once created, click the “Deploy” button to deploy it to the server.

Future considerations

The above case study shows how to implement two-factor authentication in a specific environment, though of course each individual application will be different.

You will also need to consider how you wish to handle users for whom you do not have a mobile number – alternatives could include email, or perhaps you have some kind of external token generator.

Windows Automation Reference

Version: 10.0 / Modifications: 0

Introduction

Welcome to a new dimension of Microsoft Windows automation. Using the Tomorrow Software Windows Automation Extension you can now not just script up the flow of a Windows Application – but you can also combine it with data from many other sources and the powerful rule writing capabilities of the Tomorrow Software Multi-Protocol engine.

The extension is based on the popular AutoIt automation product, and we have included tools from that product to help your automation efforts. AutoIt is a free product; however, if you find the product and the Windows Automation extension useful, we would encourage you to make a donation to the creators of AutoIt at:

Licensing

The licensing of the Tomorrow Software Windows Automation Extension is the same as most other extensions that we provide. You simply need a valid Tomorrow Software license.

The license for the AutoIt tools described in this reference guide is found in the “data” folder where you also found this document. In short, it is a classic free software license.

Getting started

Before you begin your first automation project, you need to make some updates to your Tomorrow Software installation.

Required Updates

The first step is to update/install the following components via the update server:

  • Tomorrow Software console (B18020 or later)

  • Base Rules (2018-04-26 or later)

  • Parallel Processing Rules (2018-04-23 or later)

  • Windows Automation Rules (2018-04-23 or later)

If you received this document through some means other than the update server, then you will also need to install the Windows Automation repository.

At this point, stop the Tomorrow Software instance.

Updating the Java Runtime Environment

The JRE that ships with Tomorrow Software is a basic 32-bit JRE. The version may depend on when you received your copy of the product.

To successfully run automation projects, you need to update the JRE to at least version 8 for your platform (32-bit or 64-bit).

You can download the correct JRE from here:

Once you have installed the JRE, you need to update the JRE folder under the Tomorrow Software installation with the JRE that you installed on your Windows PC. You do this by renaming the original JRE folder and creating a new one by copying the JRE from C:\Program Files\Java\jre1.8.0_(version) and renaming it to jre.

Installing the required tools

The final step is to install the Au3Info tool. You need this tool to inspect running Microsoft Windows programs and identify the names of controls that you can manage. The easiest way to install the tool is to download it from the Windows Automation repository’s data folder and save it to your desktop (or some other convenient location).

There are two versions available:

  • Au3Info.exe is for 32-bit Windows systems

  • Au3Info_x64 is for 64-bit Windows systems

Make sure that you download the right version.

Your first automation

In this example we will take you through the automation of creating a document in Windows Notepad and saving it.

Start by restarting the Tomorrow Software Server instance, log in and create a new repository called “Notepad Exercise”.

Then create a new rule set called “NotePadDemo”:

and open it up in the rules editor.

Starting an application

The very first thing we need to do is start Notepad itself. To start an application, simply drag the Run Application rule onto the canvas:

And set the properties as shown:

This step alone will cause Notepad to start up. You do not need to provide a directory, since Notepad will be in the system path.

Since we are going to do something more than just start the application, we need to make sure that it is fully loaded before we start pressing keys. So we add a Wait Active rule:

With the properties set as follows:

Identifying windows

Here it is relevant to pause for a minute and look at those properties.

Firstly, the Windows Label. Many of the rules provided in the Windows Automation framework use the Label and Text combination to identify windows to work with. The logic of this combination is as follows:

The Label match is basically starting from the beginning of the label matching as many characters as provided in the rule.

In this case we match the entire label.

The optional Text matching refers to a text within the window that was opened. This could be any word visible on the page or within a dialog box. This matching is used for more precise pinpointing of a window.

We will perform such a match later in this section.

Entering text

For now, we will simply send some keystrokes to Notepad to create a document we can save:

Testing

Let’s try and run our three new rules and see what happens. In the Notepad Exercise repository create a new configuration as follows:

And set the input source to:

We can now deploy our configuration to any convenient active server (you can use a Multi-Protocol server or even Qwerty). As long as you tick “Restart immediately”, you will see Notepad start up and the text appear shortly after the deployment completes:

Sending formatted text

You may have noticed that the text entered in our example was set as “Raw”:

Your other option would be to use the formatting text feature:

This feature allows you to send specific keystrokes with great ease. For example:

You can combine these keys: ^!a would be Ctrl-Alt-a.

If you need to send any of those characters without sending them as special keys, you must enclose them in curly brackets. For example {!} to send a !

You can also send normal Windows keys by enclosing them in curly brackets. For example:

The name used in the brackets can be most normal windows keyboard designations.

If you need to repeat a few keystrokes, you can do this by entering the key name followed by a count. For example:

Will result in the delete key being hit 5 times.
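In summary, the formatted-text notation (inherited from AutoIt’s Send syntax) uses the modifier prefixes ^ (Ctrl), ! (Alt), + (Shift) and # (Win), with named keys and literal characters in curly brackets:

```
^a         Ctrl-a
!{F4}      Alt-F4
+{TAB}     Shift-Tab
{ENTER}    the Enter key
{!}        a literal ! character
{DEL 5}    press Delete five times
```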

So we could in theory expand on our example to make Notepad try to close once the text was entered. The keystroke for closing a window is Alt-F4. We would do this in formatted text as follows:

Doing this results in the following outcome:

You can try this if you wish; just remember to switch it back to “Hello World!!” and “Raw” afterwards to continue this exercise.

Reading window text

It is one thing being able to send keystrokes, but more often than not for automation, you will need to know the content of specific fields or you may need to be able to set the value of specific named fields without just using keyboard navigation.

This is where the tool from AutoIt (that you installed earlier) comes into play. Start up the correct version of Au3Info:

Click on the Finder tool and drag it onto the main Notepad window:

You will see that the tool provides you with the basic Windows information (Title and Class). It also provides us with the Basic Control Info: the field is of the class “Edit”, and it is instance “1”.

What we need at this stage is the ability to identify a specific field in a specific window. The best and safest way to do this is to click on the “Control” tab:

And then double-click on the “Advanced Mode” entry. This copies the identifier [CLASS:Edit; INSTANCE:1] to the clipboard for us so that we can use it easily.

So all we need now is to add a “Get Control Text” rule (and a List Variables so we can see what’s going on):

The properties for the Get Control Text would be as follows:

The control identifier is easily set by entering two double-quotes and pasting the content of the clipboard from the AutoIt tool in between them.

A quick run and a peek at our console will confirm that this is working:

Text outside controls

There are certain circumstances where text is not necessarily linked to a specific control. The Windows Calculator is one such example. It actually stores the result not in a control, but in the window itself. If you need to get to this text, you can use the Get Window Text rule instead of the Get Control Text rule.

Hint: When you extract text from the Window itself, it is often formatted across multiple lines. An easy way to get visibility of control characters in text is to escape them as if they would be going into a URL. You can do this with the Escape rule.

Closing windows

It is now time to close our window. This is simply done with the Close Window rule:

The properties should look familiar now:

The result of adding this rule will inevitably be:

Now, we wish to wait for this dialog to appear and then hit Enter to save the file we just created. Once again, this should now be familiar territory:

With the properties being set as follows:

Note the use of Window text in the “Wait for save box” rule. It is conceivable that Notepad may put out many dialogs that are simply labeled “Notepad”, so the extra check for the word “Save” somewhere on the dialog box helps us confirm we are in the right place.

Advanced controls

But now things are getting a little tricky. Once we hit enter, we need to wait for the “Save As” dialog to appear:

On the surface, this may look quite simple. We wait for the dialog box to appear, we find the controls for the directory and file name, put in some values and hit Save.

The first step is not too hard:

Next, we discover (using the AutoIt tool) that the directory control is named “ToolbarWindow32”:

However, through experimentation it quickly becomes obvious that you can’t just set the control value to “Address: MyDirectory” using the Set Control Text rule. It simply has no effect. So, we need to introduce a workaround. In this case, some experimentation shows that if you click on the far right corner of the control, you can actually enter a directory name:

And the text is preselected, so if we can just do the same mouse clicks in rules, we will be able to override the text in the control and continue. This requires a few steps.

Getting a control position

We start by getting the control position so that we can figure out where to click within it:

Adding a List Variables rule and running this results in the following output in the console:

So now we know the position and dimensions of the control. The next step is to figure out the correct position to click. A simple calculation rule will take care of that:

All that remains now is to “click”:

Most of the above should now be clear. We are basically clicking near the far right side of the control, using the left mouse button. If you don’t provide an X or Y position, the center of the control on that axis will be clicked.
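The Calculation rule’s arithmetic amounts to something like this sketch (the 5-pixel inset is an assumption for illustration, and we assume the click coordinates are relative to the control; tune both for the control you are targeting):

```python
def click_point(width, height, inset=5):
    """Aim a few pixels in from the right-hand edge of the control,
    vertically centred. The inset value is an illustrative assumption."""
    return width - inset, height // 2
```

For a 300x24 directory bar this yields a click at (295, 12), near the right-hand corner where the text entry becomes available.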

All that remains now is to set the control value by sending the right key strokes:

Notice that we hit the Enter key as part of this exercise. This is because the Save As dialog box changes to the directory entered, once the Enter key is hit.

If you are following this example, make sure that you pick a directory that actually exists. In our example, we have created C:\DemoData purely for this exercise.

The next job is to set the actual file name. Using AutoIt, we discover that the control name for this is “[CLASS:Edit; INSTANCE:1]”. So this looks pretty straightforward. However, setting the control text by itself:

Does not work well. The resulting file name actually becomes “mydemo.txt*.txt”.

So formatted text once again comes to the rescue. We preface the new file name with a Ctrl-a (select all) followed by Delete to clear the field:

Note that there are other ways you could achieve the same goal. This is just an illustrative example.

All that remains is to hit the Save button. Any old Windows Keyboard warrior will know that underlined text character in a Windows dialog box can be invoked using Alt+[underlined key]:

In this case, Alt+S will save the file. So we go ahead and invoke it:

If you run this complete example, you will now have a file in your designated folder called “mydemo.txt”

Handling exceptions

Of course, if you run our scenario twice, you will encounter another message dialog telling you that the file already exists:

It is important to handle these kinds of exceptions, as otherwise your automation project may become unreliable. In our case, we wait for the “Already exists” dialog to appear, with a timeout telling us whether we need to handle it or not:

In the above example, the file will simply be replaced if it already exists.

Interference

A significant problem with Windows automation is interference. Essentially the automation rules are sending keystrokes and mouse clicks to applications. If someone (a human being mostly) tries to also enter keys or click the mouse at the same time, the automation is likely to fail. For this reason, automations should always run on a dedicated machine with no other activity.

When running a cluster of Tomorrow Software Server instances as a REST service, you need to prevent interference traffic from impacting automation requests. For example, a load-balanced clustered web-based service may receive heartbeat health-check pings to confirm service availability, or other unwanted requests; such traffic needs to be filtered (not necessarily blocked) so that it never reaches the automation rule sets.

Parallel processing

A final issue to be aware of when running automations is that multiple concurrent automations also interfere with each other. For this reason, the best approach is to queue automations if they need to run on the same server. The easiest way to do this is with the “Launch Queued Process” rule:

This rule will ensure that, across the whole Programmable Data Agent, only one automation rule set will run at any one point in time. However, other rules are not held up whilst these automation requests are queued.

If you need to wait for an automation process to complete before continuing, the best rule to use is “Wait for Queued Process”. This rule will place the automation request on the queue and will not continue until the automation has completed.

Windows automation as a service

Scaling up

Given you can only run one automation process at any one point in time, you may need a load balanced setup to share automation requests over multiple servers.

The best way to do this is by wrapping the automation request into a REST service and deploying it to multiple virtual server instances behind a load balancer in round robin mode.

Using this approach, the load balancer will find the next available server and distribute the load evenly.

A core virtual server instance should be created so that it can be cloned whenever more capacity is needed.

Set-up

The default BaseApp Tomorrow Software Server service instance is suitable for running as a REST service; please refer to the instructions file Read me.txt located in Tomorrow-Software-Server-10.0.0/BaseApp/ for set-up. Also refer to the Product Reference.pdf section entitled “Removing other unnecessary components” to remove the Tomorrow Software Console and other unwanted demo applications and server instances that are not required.

Please note that Windows Automation instances cannot be run as a service. They must be started using a bat file in the Windows startup group.

Example to run at start up: Windows Server 2012

Modify Local Group Policy Editor > Administrative Templates > System > Logon > Run these programs at user logon

Enable this option, press Show, enter the following value, and Apply/OK this configuration.

Where c:\Tomorrow\Tomorrow-Software-Server-10.0.0 is this example directory path.

When using this option you need to edit the default Tomorrow.bat file to add the following three lines before cd server to accommodate the start up directory path as follows, once again where c:\Tomorrow\Tomorrow-Software-Server-10.0.0 is this example directory path.

Active Desktop using RealVNC

A significant limitation with Windows automation (like most GUI automation tools) is that it requires an active desktop to run. So, when you log out of a remote desktop connection or lock the computer, automation is paused until you reconnect. It is therefore impractical to retain open RDP connections for multiple Tomorrow Software Server instances when running as a REST service with high availability demands. The following is a working example of how to overcome this limitation.

Example: In Windows Server 2012 set the Turn off the display option to Never in Control Panel Power Options.

You still need a way for the remote server’s head/desktop to be unlocked and active. The best way to do this is to use the VNC protocol rather than RDP. There are numerous VNC server and client packages available, many of them free and/or open source.

For this example, we have tested with RealVNC - VNC for Windows version 5.2.3.

Please ensure you refer to Licensing terms as a License key is required to install and use Real VNC for your environment and organisation.

RealVNC VNC Server uses modes to provide remote access to computers in different circumstances, to meet different needs.

VNC Server needs to be installed on the Tomorrow Software Server instance, and VNC Viewer needs to be installed on a ‘controller’ server.

Given the Tomorrow Software Console server will have access to the server instances, this server is a good candidate to run VNC Viewer, although a dedicated server with access to the instances can perform this connectivity too.

VNC Server installs and runs on default port 5900, so ensure any security group policies have been amended to permit connection using this port, together with ports that are running the REST service. The BaseApp to use as a REST service runs as default on port 10001 as defined in the rulesengine.properties settings.

RealVNC installation notes

During the standard RealVNC installation process, ensure you select the appropriate components for your REST service instance and Console Server or controller.

There is also an install option to add an exception to the Windows firewall during installation, but if you are still experiencing connection problems you may still be required to inspect your server firewall settings.

Before starting the VNC Server service, it is useful to know that all VNC applications are controlled by VNC parameters, which are set to suitable default values for most users out of the box.

Please refer to this link for RealVNC parameter names reference information.

The easiest way to set the authentication scheme and credentials for the VNC Viewer controller in order to connect to VNC Server is to start the VNC Server (User Mode) desktop application.

For example, set the simple authentication scheme using VNC password in the VNC Server – Options > Users & Permissions option as follows.

Once the authentication scheme and access credentials have been set, and Licensing updated if required, ensure you stop the running VNC Server (User Mode) by pressing the More button, followed by Stop VNC Server as follows.

The parameter IdleTimeout specifies the number of seconds to wait before disconnecting users who have not interacted with the host computer during that time. The default value for IdleTimeout is 3600 seconds, so you need to set this parameter to 0 in order to never disconnect idle connections. You need to add the IdleTimeout parameter in Windows Registry Editor when running VNC Server as a Windows service as follows.

  1. Using Registry Editor, navigate to HKEY_LOCAL_MACHINE\Software\RealVNC\vncserver.

  2. Select New > String Value from the shortcut menu and create IdleTimeout.

  3. Select Modify from the shortcut menu, and specify appropriate Value data, 0.
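If you prefer the command line, the three steps above can also be performed with a single reg command from an elevated prompt (verify that the key path matches your RealVNC installation before running it):

```
reg add "HKEY_LOCAL_MACHINE\Software\RealVNC\vncserver" /v IdleTimeout /t REG_SZ /d 0 /f
```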

With VNC Server successfully installed and parameters set, amend the VNC Server service with Startup Type set to Automatic.

Also, the Allow service to interact with desktop option must be checked as follows.

With the IdleTimeout parameter set to 0 as a minimum, restart the server and START the VNC Server service.

You are now ready to connect to the Tomorrow Software server instance running VNC Server from the controller running VNC Viewer.

Connect to the Tomorrow Software Console server (or controller) using a standard Windows remote desktop connection; install the default VNC Viewer components, and start the VNC Viewer application from the desktop shortcut.

The VNC Viewer application will then prompt to enter the host name or IP address of the REST Service server instance running VNC Server.

With the ‘Let VNC Server choose’ option for encryption selected, you will be prompted as follows for the password set on VNC Server earlier in the VNC Server – Options > Users & Permissions option.

If connection is successful the VNC Viewer will launch a connected window to the server, at which point you can login using your Windows user credentials, and you can proceed to repeat the process to make a VNC connection to all VNC Server instances if operating in a scaled cluster.

Because VNC Viewer is simply a relay of the host’s screen to your desktop (it works differently than RDP), disconnecting just stops the relay, and that is all. The relay works like a splitter connection: both the local head/monitor and the VNC Viewer have access.

By this design, VNC will continue to retain an active desktop even though you’re not connected over VNC, as long as the host desktop is logged in and not locked.

The environment (the Tomorrow Software Console server and the multiple connected REST service server instances defined in the Tomorrow Software Console server definitions) is now ready for use. The VNC Viewer windows residing on the Tomorrow Software Console server (or controller) can be closed, the remote desktop connection can be closed, and the desktop will remain unlocked and active.

Customer Satisfaction Survey

This case study will show you how to inject a random customer satisfaction survey into the user experience on a site.

We will use a flight recorder to graph the responses and collate comments from the users.

Planning the rules

The first step in implementing our customer satisfaction survey is to create a plan of what we intend to do, and how we wish to go about it. It is often a good idea to write this down in plain English and then use that text as a guide whilst designing the rule structure. In this case, the plan reads like this:

  1. We want to ask random customers about their experience with our site.

  2. We want to have the survey appear on our main page after log-in.

  3. We want to make the survey experience as quick and painless as possible to get the maximum potential responses.

  4. We are going to use the Tomorrow Software flight recorder feature to graph and view the responses.

Getting started

In this case study we are going to split the decision points into three discrete components, following the recommendations mentioned elsewhere in this manual. So, start by adding a new repository called "Customer survey" and create three blank rule sets:

  1. "SurveyLoad", which is the rule set that will pre-check our survey and make sure all of the data we need is collected before we start the survey process.

  2. "SurveySelection", which is the rule set that will determine if a user is selected for a survey.

  3. "Survey", which is the rule set that will contain the survey logic itself.

Designing the user interface elements

The plan involves injecting a customer survey on top of the user experience. We can do this as a pop-up window, or we can simply overlay it on top of the site using JavaScript. Given that most users block pop-ups by default these days, the latter seems like the better option. We want to keep the survey itself as pure HTML, so a little bit of basic JavaScript will take care of it:

You can copy and paste the above JavaScript code into a file named "showsurvey.js" and upload it to the "Data Files" section of the "Customer survey" repository.

The above JavaScript will essentially grey out the application itself and overlay an HTML file named “survey.html” on top.

Now we need to create the survey HTML itself. Once again this involves basic web design skills. The end goal is a page that looks something like this:

The easiest way to create the HTML is to follow these steps:

  1. Create a subfolder named "Qwerty" under the "Content Files" section of the "Customer Survey" repository.

  2. Add a new file under the "Qwerty" folder named "survey.html".

  3. Copy the following HTML code to your clipboard:

  4. Update the "survey.html" file from the console. The embedded HTML editor will open.

  5. Click on the Update button to go to the HTML text.

  6. Paste the copied HTML into the editor and click Save.

  7. The page should now look something like this:

Now we have all of the components we need and are ready to begin writing rules to present our survey.

Creating the survey selection rules

We will begin by creating the survey selection rules. In this case, the rules are very simple. We use a random number generator to determine if a user should be asked to complete the survey or not.

In this example we want the opportunity to complete a survey to be fairly frequent.

So, we start by updating the "SurveySelection" rule set to look as follows:

The properties for these rules are:

Effectively, we generate a random single-digit number between 0 and 9 and, provided the number is below 4, proceed to perform the survey, giving a 40% selection rate.
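Outside the rule editor, the selection logic amounts to the following (a minimal Python sketch; the actual rules are configured graphically, and the function name here is ours):

```python
import random

def select_for_survey(threshold: int = 4) -> bool:
    """Mimics the SurveySelection rule set: draw one random digit (0-9)
    and select the visitor when the digit is below the threshold."""
    digit = random.randint(0, 9)   # the Random Number rule: 1 digit
    return digit < threshold       # the If Condition rule: below 4 -> survey

# With the default threshold of 4, roughly 40% of visitors are selected:
rate = sum(select_for_survey() for _ in range(100_000)) / 100_000
```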

Creating the survey load rules

The purpose of the "SurveyLoad" rules is to prepare any data that may be needed by the other rule sets in the repository. It is often beneficial to do it this way: it isolates and prepares the data needed by the other rule sets while keeping those rule sets as generic as possible.

In our case, there are a couple of generic things we need to do and check:

  1. We need to start the usual HTTP Request tracking.

  2. We need to ensure a session has been started (meaning a user is logged on).

  3. We need to obtain the customer account number so we can log it (no anonymous data here!).

  4. Once everything is done, we need to proceed with the survey itself.

All of these tasks are very simple, so we show them here as a single step:

The only rule that has any non-default properties is the HTTP Session Object reader. This rule allows us to read the customer account number from the Qwerty session. The properties are as follows:

Creating the survey rules

With all data and user interface components prepared, we are now ready for the core process itself. So, let's update the "Survey" rule set.

The first issue is to place the survey in the right place in the navigation process, which, in our plan, is to inject the survey on top of the main page.

We start by finding out the name of the page being requested:

The name splitter rule is extremely useful for this as it allows us to split a text string based on a separation character. The separation character in a URL is always "/", so we can find the requested page by using the following properties:

Then we can use a Switch rule to determine how to direct flow:

In this case the Switch variable is URL and adding new chain points to the switch rule determines when logic flows down a certain path.

Note the use of survey.jsp. That page does not exist in the Qwerty application. It is the name of the page that the HTML form in "survey.html" posts its data to. The Programmable Data Agent simply intercepts this request and deals with it before it ever reaches the application itself.

We are now ready to determine what happens when the user reaches the main page. The first step is to make sure we haven't already presented a survey to the user in the current session:

The properties for this look as follows:

Basically, we check the session to see if a flag named "DoneSurvey" has already been set. If not, we proceed to see whether we need to present the survey by using the already created "SurveySelection" rule set:

If the response comes back that we need to perform the survey, the next action is very easy. We read the already prepared JavaScript "showsurvey.js" file and add it to the response being sent back to the user:

Once again, the properties are shown here:

This takes care of presenting the survey to the user. Now we just need to handle the user's response to the survey. The first step is to record whether the user has in fact responded to (or declined taking part in) the survey:

We record this in the session using the following properties:

Next, we check if the user hit the "Submit" button. If yes, we record the answers in the flight recorder; if no, we simply return the user to the main page, using a little JavaScript to remove the survey.

The properties are as follows:

Optional index fields: ProductRange,Pricing,EaseOfUse,Delivery,Comments

Response data: "<script>parent.document.location='main.jsp';</script>"

The little piece of JavaScript used here reloads the "main.jsp" page. As the survey flag is now set to "X", the survey will not re-appear, and the user can continue as normal.

Creating the survey configuration

The configuration for this example is very easy. Simply create a new configuration in the Customer Survey repository and name it "SurveyTest". The following shows all of the relevant parts that must be completed for the configuration:

Testing

You are now ready to test the survey rule set. Deploy your new configuration to the Qwerty demo server and start it. Then log into Qwerty. There is roughly a 40% chance of you getting a survey request. To quickly invoke a survey, click on the "Set up 3rd party" button and then "I'm finished", until a survey request appears. Once you have completed or rejected a survey request, log out and log back in to be presented with another one.

Make sure you answer 4 or 5 surveys at this point.

Setting up the flight recorder definitions

We now have some data in the flight recorder, so we need to set up a definition for it in order to view the data from within the console.

The following shows the definition used in this example:

Seeing the survey results

Once you have done this, select Flight Recorders from the console menu and click on SURVEY:ANSWERS. Leave all of the fields as default and click on Search (tip: if you only wish to see survey answers with comments, put an uppercase "A" into the "Comments:" from field and a lowercase "z" into the to field).

The survey results submitted will be shown:

You can now click on the graph of one of the questions. The result is a pie chart showing you the answer distribution:

Using the flight recorder search filters, you can now use the responses to better understand your customer satisfaction ratings. For example, you can see if Firefox users generally rate the ease of use of your site higher than Internet Explorer users or vice versa.

Potential improvements

The sample created here is fully functional, but for production purposes, you may wish to add a few things. Some possible improvements are:

  1. JavaScript validation to ensure the customer has completed the form before submitting it.

  2. Logic in the SurveySelection rule set to ensure that the same customer does not get the survey more than once every 6 months (The History Summary rule or the History Recorder rule are both useful for this purpose).

DNS Multi Protocol

In the following case study, we will explore adding a new protocol (DNS) to the capabilities of Tomorrow Software.

For simplicity, we will restrict this to just a single DNS A record.

We will show how to proxy the protocol, how to modify the data coming back from the DNS server and how to capture a network packet and use it later as a template for requests from non-Multi-Protocol input adaptors.

Defining the protocol

This case study assumes that you intend to work with a brand new protocol. If you are using a predefined protocol (such as MySQL or Telnet), you can skip this section.

Before you can begin to work with a new protocol, you need to define it. In this case study we will create a basic DNS A Record protocol interpreter. It is not a complete DNS example, but it will serve well as an example of how to use the multi-protocol capabilities of Tomorrow Software.

The DNS protocol explained

The DNS protocol was chosen for this case study due to its simplicity and because it is well documented.

A simple internet search for “DNS Packet Format” will provide the complete details, but the following is a simplified primer.

At its core, it has the following structure in both the request and response:

A header block:

Followed by the actual questions or answers block. Questions contain the domain being queried, followed by two 16 bit fields, the first of which is the question type (1 = A record, 2 = NS record and so on) and the second of which is the question class (always 1).

The domain name being queried will have its dots removed and each section of the name is supplied with a leading byte providing the section length, followed by a zero byte to indicate all sections have been provided. For example:

labs.tomorrow.eu will be turned into: [4]labs[8]tomorrow[2]eu[0]
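The encoding above is easy to reproduce. The following sketch (Python, assuming plain ASCII labels; the function name is ours) turns a domain name into its length-prefixed wire form:

```python
def encode_dns_name(domain: str) -> bytes:
    """Encode a domain as length-prefixed sections, terminated by a zero byte."""
    out = bytearray()
    for section in domain.split("."):
        out.append(len(section))          # leading byte: section length
        out += section.encode("ascii")    # the section itself, dots removed
    out.append(0)                         # zero byte: all sections provided
    return bytes(out)

# [4]labs[8]tomorrow[2]eu[0]
encoded = encode_dns_name("labs.tomorrow.eu")
```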

Breaking down the protocol with protocol rules

Before we can start doing anything with the DNS packets, we need to break them down and make them available to our normal rules. We do this in the administration section under “Protocols”.

Just like normal rules, start by creating a rule set named dns_in (as shown) and open it in the rule editor.

You will notice that the rules catalogue for protocols is much smaller than the regular rules catalogue:

You can explore these rules to get a feel for what is available.

Before starting to write the rules, it is important to understand streams, protocol variables, VAO variables, VAO stream variables and stream windows.

Multi-Protocol Streams

Whenever a packet is read using the Multi-Protocol server version of the Programmable Data Agent, it will be read in the form of a stream. For almost all protocols there are two streams: request and response. It is the job of the Multi-Protocol server to break down the binary content of the stream into variables that can be used and manipulated by the regular Programmable Data Agent.

The regular Programmable Data Agent is then capable of modifying the content of the stream before proxying it to the real target server. Upon a reply from the real server, the reply will also be treated as a stream and can equally be broken down and manipulated or simply returned to the original requester.

VAO Variables

Setting a VAO variable directly refers to setting variables in the input for the regular Programmable Data Agent, when the Multi-Protocol server hands over control to the regular Programmable Data Agent.

Protocol Variables

To help the protocol rule writer control the workflow around breaking down a protocol, a set of variables known as protocol variables are used. These are basically String objects, and unlike the regular rules, can be treated as such. This means that assignments to a protocol variable via the Set Variable rule can use all of the regular Java language conventions such as:

Notice the use of ""+ in the last example. This is a convenient way to convert a Java integer into a String object.

VAO Stream Variables

VAO Stream variables on the other hand are directly tied to the request or response stream. If you modify a VAO Stream variable within the regular Programmable Data Agent, then the underlying stream will also be modified. VAO Stream variables use format converters so that the underlying stream can be a binary field, but it will be presented as a regular integer (or some other valid representation) in the regular Programmable Data Agent.

VAO Stream windows

VAO Stream windows are used to handle the very common occurrence where part of a protocol stream may contain information such as the length of another part.

A classic example of this would be the “Content-length” header in a HTTP response stream.

If you designate that a VAO stream variable is also a stream window, any modifications you make to the content of the stream window will automatically be reflected in the value of the variable.

Breaking down the request stream

We are now ready to create our first protocol rules. Return to the dns_in rule set we opened earlier.

According to the protocol definition, the first field we need to read from the stream is the message ID. We do this by adding a “Read Fixed Data Type” rule:

And setting the properties as follows:

Let’s examine what is going on here:

  1. We are using a “Fixed” data type. This refers to data types that have a fixed unchangeable length within the stream. In our case we pick an unsigned integer, MSB first (MSB referring to Most Significant Byte).

  2. We set the length to 2 bytes.

  3. We picked a variable name of messageid. This is the protocol variable name.

  4. We specified that the stream we are going to work with is the request stream. This is optional, as the Multi-Protocol server version of the Programmable Data Agent is smart enough to know the main stream being worked with; however, for clarity, it is recommended to specify it.

  5. We specified a Stream Variable name of MessageID. This means that when the regular Programmable Data Agent is invoked, it can access and modify the MessageID variable, which will have a direct impact on the stream.
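What the Read Fixed Data Type rule does for the message ID can be pictured as follows (illustrative Python; the platform performs this internally, and the function name is ours):

```python
def read_u16_msb(stream: bytes, offset: int) -> tuple:
    """Read a 2-byte unsigned integer, most significant byte first,
    returning the value and the new stream offset."""
    value = (stream[offset] << 8) | stream[offset + 1]
    return value, offset + 2

# A message ID of 0x1234 on the wire:
messageid, pos = read_u16_msb(bytes([0x12, 0x34]), 0)
```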

Next, we wire the rule up to the rule set entry point, and also add the “Abort Connection” rule so that we can handle protocol failures gracefully.

The next couple of bytes contain the DNS flags. Since they are bit level, we will read the two bytes as a binary string of 0s and 1s. We once again use the Read Fixed Data Type rule, but this time set the following properties:

This will ensure that the value contained in these 16 bits is represented as a string, looking something like this: “1000010110000000”, where each bit signals a particular meaning as per the DNS protocol specification.
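Reading the same two bytes as a binary string can be sketched like this (Python; the sample value is chosen to match the string shown above):

```python
def read_flag_bits(stream: bytes, offset: int) -> str:
    """Read 16 bits and represent them as a string of 0s and 1s."""
    value = (stream[offset] << 8) | stream[offset + 1]
    return format(value, "016b")   # zero-padded to all 16 bit positions

bits = read_flag_bits(bytes([0x85, 0x80]), 0)
# "1000010110000000": the first bit is 1, so this would be a response packet
```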

We follow this with 4 simple rules to read the question and answer count:

Each of these new rules reads a 2 byte unsigned integer and is wired to the "Abort Connection" rule on failure.

Next, we need to deal with the actual query payload. For a simple query, this means handling the variable number of elements in the domain name being queried. Theoretically, more than one query could be included in a single DNS request, however for the sake of simplicity, we are ignoring that for now.
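Conceptually, the loop built in the next few rules does the following (a Python sketch with illustrative names; the real implementation is the While Condition, Read Data Type and Set Variable rules described below):

```python
def read_domain(stream: bytes, offset: int) -> tuple:
    """Read length-prefixed sections until the terminating zero byte."""
    elements = []
    count = 1                                  # the Count protocol variable
    while stream[offset] != 0:                 # the While Condition rule
        length = stream[offset]                # leading length byte
        start = offset + 1
        elements.append(stream[start:start + length].decode("ascii"))
        offset = start + length
        count += 1                             # Count = ""+(Integer.parseInt(Count)+1)
    return elements, offset + 1                # step past the zero byte

elements, pos = read_domain(b"\x03www\x07testing\x03com\x00", 0)
```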

The full construct of breaking down the domain name looks like this:

What is happening here is as follows:

  • The Count and DomainElement variables are each being set to the value “1”.

  • The while loop is then created using the following properties:

  • This is followed by a Read Data Type, which is capable of reading a set of bytes with a variable length. In this case, it is the length prefixed String. The properties to perform this read are as follows:

Finally, the Count variable is incremented, using the technique described earlier:

All that remains in breaking down the protocol request is to read the query type and class. As both are simple 2 byte unsigned integers, we can do this with ease:

Finally, we tell the Multi-Protocol server to hand over to the Programmable Data Agent:

The complete rule set looks like this:

Making the protocol rule set available in rules

Before we can use the protocol rule set in rules, we need to give it a short description and check the box that allows rules access:

Setting up Tomorrow Software rule sets

We now have everything we need to perform a test of our protocol breakdown.

The next step is to create the regular rule sets that are going to set up a port to listen on and receive the packet.

Start by creating a new repository and create a rule set called DNSStart. It will only have one rule:

Save the rule and then create another rule set called DNSMain. It will also only have one rule:

The configuration for these two rule sets is also very simple:

It is now time to start up the stand-alone Multi-Protocol server instance. It is found in the Multi-Protocol folder. The easiest way to do this is to execute either the tomorrow.bat file or the tomorrow.sh file.

Once the instance is running, you can deploy your new configuration to it and start it.

Note: If you are not seeing any Multi-Protocol server instances when you try to deploy, please check that you have a server defined with the server type Multi-Protocol, and that it is configured to the correct management port of your Multi-Protocol server instance.

If you check the log for the Multi-Protocol server instance, you should see the following message:

First test

We now need a tool that can easily send DNS packets to any given DNS server. There are many options available on the net; we selected DNSDataView from NirSoft (www.nirsoft.net) for this example.

The first thing we will do is trigger a simple DNS A Record retrieval packet against our Multi-Protocol server instance:

This will obviously not generate a reply yet, but you will see the packets generated in the console output. There will be several, because DNSDataView retries 5 times:

As you can see, the query is broken down into stream variables that are all on the request stream. Also notice how the protocol rules have sliced the query neatly into its three parts: www, testing and com.

Breaking down the response stream

Now that we know the request stream is working, we can proceed to create the protocol rules for the response stream as well. Fortunately, for DNS this is very simple, at least if only dealing with a single DNS A record as in this case study. The first part of the response stream is essentially an echo of the request.

So, start by copying dns_in to a new protocol rule set named dns_out.

Then we modify each rule to point to the “response” stream and provide a new name for each stream variable by adding the letter R in front of each name:

Once done, we can proceed to read the actual response data.

Things get a little tricky here, because the designers of the DNS protocol lived in a time when bandwidth was a scarce resource, so they built "compression" into the protocol.

They did this by manipulating the first bit of the length field of the reply. If this bit is set, the actual site name being replied to (www.testing.com) can be found by combining the remaining bits with the following byte to form an offset to where the name already appears in the packet. However, given that in this example we only have one query, we will ignore that and just read the bytes:

With that in mind, the rest of the dns_out protocol rule set becomes fairly simple:

All of the above rules simply read unsigned integers MSB first. Not all have the same length, though: RPointer, RType, RClass and RLength are all 2 bytes long, RTTL is 4 bytes long, and RIP1 through RIP4 are each 1 byte (one part of the IP address each).
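For reference, the compression pointer described earlier decodes as follows (illustrative Python; 0xC0 0x0C is the most common value on the wire, pointing at offset 12, the start of the question section):

```python
def is_pointer(first_byte: int) -> bool:
    """The two high bits set (binary 11) mark a compression pointer
    rather than an ordinary section length."""
    return (first_byte & 0xC0) == 0xC0

def pointer_offset(b1: int, b2: int) -> int:
    """Combine the remaining 6 bits with the following byte to form
    the offset where the name already appears in the packet."""
    return ((b1 & 0x3F) << 8) | b2

offset = pointer_offset(0xC0, 0x0C)   # 12
```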

The final step to complete the rule set is to name it and allow it to be used in rules:

Proxying a multi-protocol packet

We are now ready to proxy our protocol packet and do something useful with it. We need to return to the regular DNSMain rule set and make some changes:

The first rule you see above is actually the “Proxy Input Request” rule. However, once you change the selected protocol, it automatically changes its name to the protocol it is using.

The complete properties used are:

The Host name/IP shown above is Google’s DNS server. You could choose to use your own to complete this case study.

Testing the proxy

Deploy the dns_example configuration to the Multi-Protocol server instance and restart the rule set. Then go back to DNSDataView and get ready to launch another query. Since we are using Google’s DNS server, we are going to query “www.google.com”.

This time, we get a reply:

And the console shows that the proxy worked:

Looking through the various stream variables, the significant ones are RIP1-4, which tells us that “www.google.com” can be found at 216.58.199.36.

Manipulating a stream

We will now use the regular rule set to manipulate the response stream.

You may notice that the “Set Variable” rule is used to change the RIP4 value to 100. As the Multi-Protocol server version of the Programmable Data Agent is a two-way mapping of the variables to the stream, changing one of the variables also changes the stream.

We will demonstrate this by deploying the rule sets and re-launching the DNS request:

As the tool shows, we have just changed the output of a DNS request in real time.

The usefulness of this is probably limited (given the recursive nature of DNS), but one example could be making the DNS server respond with a different IP address based on the requester's physical location, or setting up internal honeypots.

Crafting protocol packets within rule sets

Proxying packets using the Multi-Protocol server instance is one way to use the protocol packets. There may be times when you wish to use a protocol to access an external service directly. However, crafting network packets by hand is incredibly time consuming and error-prone.

To get around this, Tomorrow Software includes a feature to capture a packet and use it as a template. Capturing a packet is incredibly easy. Simply modify the rule set to write the stream to a file:

Once you have a captured packet, you can easily modify it using simple stream variables. The following shows how the captured packet is read before being sent to the test DNS server using the “Write Stream to Server” rule:

Using this approach, we have added DNS lookup capability to rules using no code whatsoever.

<script>
// showsurvey.js: grey out the application and overlay survey.html on top.
var tbody = document.getElementsByTagName("body")[0];
// tnode is the semi-transparent layer that greys out the page.
var tnode = document.createElement('div');
tnode.style.position='absolute';
tnode.style.top='0px';
tnode.style.left='0px';
tnode.style.overflow='hidden';
tnode.style.display='none';
// Work out the full page size so the overlay covers everything.
if( document.body && ( document.body.scrollWidth || document.body.scrollHeight ) ) {
var pageWidth = document.body.scrollWidth+'px';
var pageHeight = document.body.scrollHeight+'px';
} else if( document.body.offsetWidth ) {
var pageWidth = document.body.offsetWidth+'px';
var pageHeight = document.body.offsetHeight+'px';
} else {
var pageWidth='100%';
var pageHeight='100%';
}
// 40% opacity black, layered above the page content.
tnode.style.opacity=0.4;
tnode.style.MozOpacity=0.4;
tnode.style.filter='alpha(opacity=40)';
tnode.style.zIndex=1000;
tnode.style.backgroundColor='black';
tnode.style.width= pageWidth;
tnode.style.height= pageHeight;
if (parseInt(tnode.style.height)<700) tnode.style.height='700px';
// ctrNode is the centered 500x500 white box holding the survey itself.
var ctrNode = document.createElement('div');
ctrNode.style.position='absolute';
ctrNode.style.top='10%';
ctrNode.style.left=parseInt((parseInt(pageWidth)-500)/2)+'px';
ctrNode.style.backgroundColor='white';
ctrNode.style.zIndex = 1001;
ctrNode.style.width='500px';
ctrNode.style.height='500px';
// Load survey.html into an iframe so the survey stays pure HTML.
ctrNode.innerHTML = '<iframe width="100%" height="100%" name="survey" src="survey.html"><\/iframe>';
tbody.appendChild(ctrNode);
tbody.appendChild(tnode);
tnode.style.display='block';
</script>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<title>Untitled document</title>
</head>
<body>
<p>Dear customer,</p>
<p>You have been randomly selected to take part in a very short customer satisfaction survey. We value your time, so if you participate we will place you in a draw to</p>
<h1>WIN a brand new uPod Feel</h1>
<p>Simply answer the 4 questions below and you will automatically be placed in the draw. All questions must be answered to be able to enter.</p>
<div style="text-align: justify;"><form action="/qwerty/survey.jsp" method="post" enctype="application/x-www-form-urlencoded" accept-charset="UNKNOWN">
<p><input name="Cancel" type="submit" value="No Thanks" /></p>
<hr />
<p>Please answer the following questions with a rating from 1 to 5, where 1 equals "Strongly disagree" and 5 equals "Strongly Agree":</p>
<p>I use your site because of the great product range</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="ProductRange" type="radio" value="Strongly disagree" /></td>
<td><input name="ProductRange" type="radio" value="Somewhat disagree" /></td>
<td><input name="ProductRange" type="radio" value="Neutral" /></td>
<td><input name="ProductRange" type="radio" value="Somewhat Agree" /></td>
<td><input name="ProductRange" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>I use your site because of the excellent pricing</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="Pricing" type="radio" value="Strongly disagree" /></td>
<td><input name="Pricing" type="radio" value="Somewhat disagree" /></td>
<td><input name="Pricing" type="radio" value="Neutral" /></td>
<td><input name="Pricing" type="radio" value="Somewhat Agree" /></td>
<td><input name="Pricing" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>I find the site easy to use</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="EaseOfUse" type="radio" value="Strongly disagree" /></td>
<td><input name="EaseOfUse" type="radio" value="Somewhat disagree" /></td>
<td><input name="EaseOfUse" type="radio" value="Neutral" /></td>
<td><input name="EaseOfUse" type="radio" value="Somewhat Agree" /></td>
<td><input name="EaseOfUse" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>I use your site because of the fast delivery</p>
<table border="0">
<tbody>
<tr>
<td width="20%">Strongly disagree</td>
<td width="20%">&nbsp;</td>
<td width="20%">Neutral</td>
<td width="20%">&nbsp;</td>
<td width="20%">Strongly Agree</td>
</tr>
<tr>
<td><input name="Delivery" type="radio" value="Strongly disagree" /></td>
<td><input name="Delivery" type="radio" value="Somewhat disagree" /></td>
<td><input name="Delivery" type="radio" value="Neutral" /></td>
<td><input name="Delivery" type="radio" value="Somewhat Agree" /></td>
<td><input name="Delivery" type="radio" value="Strongly Agree" /></td>
</tr>
</tbody>
</table>
<p>We also value any comments or suggestions. So optionally you can type them here:</p>
<p><textarea name="Comments" rows="6" cols="60"></textarea></p>
<p><input name="Submit" type="submit" value="Submit Survey" /></p>
</form></div>
</body>
</html>

The header consists of six 16 bit fields, laid out two per 32 bit row (bits 0..15 and 16..31):

Message ID: A unique number that the sender can use to tie a response to a request.

Flags: 16 bit flags. The most important of these is the first bit, which is 0 for a query and 1 for a response.

Number of questions: A simple 16 bit count of questions.

Number of answers: A simple 16 bit count of answers.

Number of authoritative answers: A simple 16 bit count of answers that are authoritative.

Number of additional answers: A simple 16 bit count of additional answers.

"abcd".substring(1,3)
somevariable+"somenewtext"
""+(Integer.parseInt(Count)+1)
