Version: 10.0 / Modifications: 0
Developing Web Applications using the Composable Architecture Platform Framework often requires collaboration between server-side rule writers and client-side web page designers.
The Composable Architecture Platform Framework provides a methodology to make this collaboration as easy and seamless as possible.
This guide provides best-practice examples of the various workflows that go into the development of a new application and the maintenance of an existing one.
As a best practice, web applications should be designed from the perspective of function. This means rule writers must have basic HTML and JavaScript skills, but are not required to understand the intricacies of making a web page responsive and attractive once functionality is achieved.
Modern web applications rely heavily on AJAX, functional JavaScript, JavaScript and jQuery libraries and other tools that define functionality, but not necessarily look and feel (which is typically achieved using CSS).
So, a web application should be started by the rule writers and contain the bare minimum of design elements to be functional. After that the role of the web page designers will be to make those pages shiny and friendly.
Throughout this manual we are going to work through a very simple example to illustrate the steps involved. Each section has [Rules], [Admin] and/or [Web] in its headline to denote which role the section applies to. This allows you to quickly skip sections that may not be relevant to your job.
Before you can start on this example, please install the “Web Application Tutorial” repository from the update server and create roles and users for the rule writers and web designers. Then deploy the repository to a proxy server for the users to access.
The web designer role should have:
Server Permissions: VIEW, STOP, START and DEPLOY CONTENT
for the Proxy Server
Repository Permissions: VIEW, VIEW CONTENT, EDIT CONTENT and DEPLOY CONTENT
for the Web Application Tutorial repository.
The tutorial application is a simple data maintenance app. You can add and delete quotes from your favorite authors. The landing page ([server]/index.html) looks like this:
To create a new quote, enter an author and click on Create:
The new quote is now stored:
And you can add a few:
In turn, to remove a quote, just click on the adjacent Delete button.
So, we now have a fully functional (albeit very ugly) web application.
The repository installed contains a configuration, some basic rules and content for an app that can be learned in a few minutes. We hasten to add that the application is not secure, has no validation, and its patterns should be used as a reference for functionality only.
The Startup rule set defines the Data Set used, the SubmitManager rule set determines what happens when a button is clicked and the ContentManager rule set grabs the row snippet from the content files, builds the table rows and inserts them into the page. We will explain the page components in the next section for web designers.
It is now imperative that you train the web designer in the application functionality so that modifications made can be tested by the designer directly.
When you first log into the Composable Architecture Platform console, you are greeted with the following interface:
Before you do anything else, you should change your assigned password to one of your own choice. You do this by clicking on the Password button (top right).
The 8 radio buttons each represent a different desktop. For now, leave only the first one active.
To get started, click on Start -> Servers:
This will open a new window with the servers you can control:
You can minimize, maximize or close the window using the controls in the top right corner:
To move the window around, click down on the window header and drag it where you want it.
To resize the window, use the resize anchor in the bottom right corner:
Every window you open, and its position, stays in place if you log out and back in, so it pays to spend a little time organizing your desktop.
In the window you just opened, you can see the Proxy Server. Click on it to see the current state of the server. It should look like this:
You will likely be limited to Stopping and Starting the server and accessing the console. The console window is mainly used by server-side developers but can be useful if you are trying to identify a problem.
In most cases it will show something as simple as this:
Servers are instances of application containers, not physical servers. There is normally no risk associated with stopping and starting development servers.
To work with your assigned repositories, click on Start -> Repositories:
You will receive a list of repositories that you can access:
Click on the Web Application Tutorial:
Then click on the View button. This will open the list of content files for you and close the repository window. Here we have expanded all the subfolders to make it easier to see:
So the entire application at this point contains just 2 files and 1 folder: “index.html”, the folder “snippets” and the file “quotefragment.snippet”.
At this stage, to make your life as easy as possible, we suggest you arrange your desktop similarly to the below:
You can (with some difficulty) work with the files directly through the console, uploading and downloading files as you need, but the Composable Architecture Platform console contains a much easier way to manage the lifecycle of your web development using your local file system, which we will discuss next.
Whenever you are assigned a repository where you need to exclusively maintain or modify content files, the simplest and easiest way is to work in a “scratch” folder.
A scratch folder can either contain ALL the content files in the repository or just a sub-set. The benefit of using a scratch folder is that anything you add or change will be synchronized with the target server immediately, allowing you to test your changes without having to perform any form of deployment task.
It is important to understand that those synchronizations are temporary. They will be removed when you either restart the server or redeploy the repository to the server. More about this later.
For now, start by clicking on “Content Files” in the repository tree:
Since we are just starting the application, the correct approach now is to download all the artifacts and use them to create our scratch folder. To do this, click on Download. The repository content will download as a zip file:
Create a new scratch folder somewhere in your file system and copy the zip file to it and unzip it:
We now have our artifacts:
IMPORTANT: You MUST unzip the folder even if your file system supports moving into a zip file directly, as the synchronization in most cases does not work with a zip file on its own.
So, let’s open the folder:
These are our artifacts that we need to edit and make attractive. So now we need to connect the Scratch folder to the server.
To connect the folder to the server, we go back to the Composable Architecture Platform console and click on Live Web Development:
This gives you an option to select a server that you can connect content to:
Select the Proxy Server and click on Live Web Development again.
(Should the Proxy Server be greyed out in the list then it is most likely stopped. In that case open the Servers window, select the Proxy Server and click on Start and wait 30 seconds – then return to the window above and click on Refresh).
You are now ready to link the server and the local folder:
To do this, drag the root folder inside the scratch folder (Web Application Tutorial) into the drop zone in the Window:
Your local folder and the server’s temporary content space will now synchronize. Unless you have a slow internet connection, this happens very quickly. You can see the progress of what remains to be synchronized in the Queued section in the window.
If everything went well, you will see this:
IMPORTANT: If you close the window, your session expires, or you log out of the console then you will be disconnected. Do not despair, simply reconnect and the console will only synchronize from where you left off.
Your local folder and the server are now connected. In the next section we will test it.
To test the connection, we are going to open the index.html file and make a very simple change. For this demo we will use the Atom editor, but you can use any tool of your choice as long as it saves to your scratch folder. Here is a listing of the index.html page.
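(As a rough sketch, with every element name and id illustrative rather than the actual markup, the page is essentially:)

    <html>
    <head>
      <title>Quotes</title>
    </head>
    <body>
      <!-- illustrative form; the real page defines its own names and ids -->
      <input type="text" name="author" placeholder="Author">
      <input type="text" name="quote" placeholder="Quote">
      <button name="create">Create</button>
      <table>
        $QUOTES$
      </table>
    </body>
    </html>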
You will notice that it is just basic HTML with one little anomaly: the $QUOTES$ tag inside the table. This tag is a placeholder for where the server generates the list of quotes.
It is EXTREMELY important for the application functionality that you do NOT change the name of any of these tags, nor the name or id attribute of any elements contained in the page.
In most cases you can change the tags (for example turning <span> into <div> or a table into <section> tags). However, you must verify that the application remains functional in case there are JavaScript dependencies.
NOTE TO RULE WRITERS: When you create tags that contain snippets, it is important to include a comment about where the snippet can be found.
In our example application, the snippet to include looks like this:
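(A hypothetical reconstruction; the placeholder names $AUTHOR$, $QUOTE$ and $ID$ are invented for illustration:)

    <!-- quotefragment.snippet: one table row per quote -->
    <tr>
      <td>$AUTHOR$</td>
      <td>$QUOTE$</td>
      <td><button name="delete" value="$ID$">Delete</button></td>
    </tr>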
As you can see, this snippet represents a single line in the table that the application can use to control the formatting of that line and even some actions as shown above.
So let’s modify the index.html file with something subtle first. Try changing:
To:
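(A hypothetical example of such a subtle change; the actual page text may differ:)

    <!-- from -->
    <h1>Quotes</h1>
    <!-- to -->
    <h1>My Favorite Quotes</h1>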
Wait 3 seconds then refresh the application page:
At this stage you can now add sub-folders to your scratch folder, add CSS files, images, JavaScript libraries, fonts and anything else that can help you style the page.
For example, we created a new folder named css:
Then we added a file named style.css to that folder:
And then we import that style sheet into our main page:
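(The import is a standard stylesheet link in the page's <head>, matching the folder and file we just created:)

    <link rel="stylesheet" type="text/css" href="css/style.css">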
And save the changes. Within seconds the page now presents as:
And you can test that none of your changes have broken functionality.
At this point we hope that it is clear that working on the web design is now just a matter of respecting the developer tags in the pages and using your skills.
It is important to understand that changes in your scratch directory are always ADDED to the server content delivery. If you delete files, they will not be deleted from the server’s temporary content system, but they will be removed when you make the changes from your scratch folder permanent.
The last step in the initial design phase is to make the changes you have made permanent. To do this you must return to the console:
The first thing you should do is disconnect the synchronization by closing the synchronization window and then open the content files window:
Scroll down to the bottom of the page and get ready to upload your changes:
You should set the Upload type to Folder and then click on Choose Files.
Select the application content folder and click on Upload. You will possibly receive a warning:
Confirm by clicking on Upload and then click on the Upload Button:
All your changes will now be uploaded to the master content for the repository:
IMPORTANT: Whatever files you upload will be ADDED to the repository and will only create new files or replace existing files with the same name. Files in the master content with names not existing in your scratch folder will stay in place. This concept is useful for application maintenance which we will discuss in the next chapter.
Once your files finish uploading, the last step is to deploy them to the server’s permanent content delivery system.
To do this, click Content Files and then on the Deploy button:
You will be asked to pick one or more servers to deploy your content to:
Select the Proxy Server and click on Deploy. The server window will open and if you have a lot of content to deploy you will see a progress bar. However, in our case it will probably be so quick that you will not even notice:
So, close the server window, log out and inform the rule writer that you have completed your work.
The maintenance life cycle of a web application’s content files differs from the initial creation in several ways:
It may be performed by the rule writer in isolation
It may involve the web designer styling new features
It may only involve a small subset of the application
For these reasons you may not always wish to create a complete scratch folder with all the required synchronization of image files etc.
If you are working collaboratively with other users, you should ALWAYS download the latest files from the repository before making any changes. Do not rely on an old scratch folder as you will possibly remove someone else’s work.
The good news is that anything you place in the scratch folder will either override or add to the content. Nothing will ever be deleted.
So before you synchronize your scratch folder, you can comfortably delete anything you do not intend to change after you have unzipped the download.
Alternatively you can create a scratch folder by downloading individual files from within the portal and copying them into the right location in the scratch folder (remembering to create the correct matching sub-folders if needed):
Once again, after you have completed your scratch folder you should connect it to the server and make your modifications that way. There are two reasons for this:
You can test your changes instantly
You can verify that you are not accidentally stepping on someone’s toes (see below)
If you just change one or two files you may be tempted to simply upload your file changes and deploy them. We do however strongly discourage that approach, because it means you test after deployment (which is permanent) and you have not performed a collaboration check.
The collaboration check tells you if someone else is also doing work with a scratch folder at the same time as you are. It will show up before you connect for synchronization as follows:
If you see this warning you should NOT connect your scratch folder without communicating with the named user.
The platform contains a rule named “Merge HTML Pages”. This rule is used to merge content into a wrapping page with headers and footers.
For example, consider the following master.html template:
It contains the headers and footers for the pages, but not the actual content. You can have more than one template in the same application if need be.
The important part is line 18 in the above code. This is where the content for each page will be inserted, although any <head> tags in the content will be appended to the page just before the </head> in the template.
These rules do of course work with live web development, so the web designer can modify them on the fly while testing content.
For example:
When merged with the master.html page, it will produce:
Notice line 11, where the <title> tag is not inserted, and line 18, where the content is inserted.
It is possible to control precisely where the <head> information is inserted into the template. Sometimes this is necessary to give some JavaScript libraries preference over others. To specify a precise spot in the master template where content <head> information should be inserted, place the following comment at that location:
If your application needs to be translated into multiple languages and locales, there are three rules to help you:
This rule is usually placed immediately after all static content is loaded for any given page, but before any dynamic content is injected into the page. The Content Variable should contain the content to be translated.
The rule can determine whether the content is HTML or simply plain text. If HTML, the rule is fully HTML aware and will isolate constants such as plain HTML text, <select> options, input placeholders etc.
The rule is capable of recording text that needs to be translated and can write that text out to one or more files (merging it with existing known translations).
Building translation files depends on the content of the global variable BUILD_TRANSLATION, rather than a property. This approach is to enable the easy switching on/off of translation recording between development and production without the need to change the rule structure.
If the BUILD_TRANSLATION global variable is set to “Yes”, the X Engine will write out fresh translation files in the format “translation-"+locale+".utf8” to the home folder on the target server.
So to get a translation file that can be handed to a local language expert, let the rule “learn” all the text that should be translated and then have the language expert translate that.
The rule is capable of translation into complex character languages such as Chinese and Japanese. Please see the rule help file for further information.
Use this rule to convert international edited values (currency, numbers, dates) to a usable internal format.
This rule can convert input from any given locale into a consistent US internal format for processing.
All values returned use US number formatting (“.” as the decimal point) and all dates are returned in YYYYMMDD format.
This rule is the inverse of the previous rule. It takes US formatted numbers and YYYYMMDD formatted dates and converts them to the appropriate local version (for example, MM/DD/YY for the US).
The framework contains an HTML-aware rule that can significantly simplify the cooperation between the rule writer and the web designer.
Using this rule, you can set the value of radio buttons, checkboxes, select options in lists and input fields in an HTML document. You can also set the value of text areas, output tags, spans, labels and divs. For these tags, the value refers to the innerHTML of those tags.
For checkboxes, the value can either be “on”, “true” or “checked” if no specific value has been defined for the checkbox. For lists (select tag), the options to select must have a value attribute.
Tags to set can be identified by name, ID or class name. If more than one instance of an identifier exists, all will be set.
This approach can in many cases be used to avoid using $TAGS$ in the HTML; the web designer can instead provide a page with sample values that makes visual sense on its own, and the rule writer simply overrides those sample values before serving the page to the user.
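For example, the designer might ship a fragment like this (hypothetical; the names and values are illustrative):

    <span id="customerName">Jane Citizen</span>
    <input name="quantity" value="2">

The rule writer then uses the rule to replace “Jane Citizen” and “2” with real values before the page is served.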
Version: 10.0 / Modifications: 0
This manual describes how to install browser certificates for testing access and modifications to sites that are protected by HTTP Strict Transport Security (HSTS). It is assumed that the reader is familiar with the basic steps of deploying configurations within Composable Architecture Platform and knows how to view the console output associated with the Composable Architecture Platform proxy server.
When using the Composable Architecture Platform browser proxy for accessing secure web sites over HTTPS, you will encounter certificate warnings in the browser, like the following:
These warnings are relatively easy to get around by clicking on the Advanced button and adding an exception.
However, with the advent of HTTP Strict Transport Security (HSTS) this has now become impossible to do as the browser will refuse to add the exception:
The following guide provides instructions on how to overcome this problem by installing a trusted certificate authority into your browser that Composable Architecture Platform in turn will use to generate valid replacement certificates for each SSL site on the fly.
Before you begin you should make some updates to your Composable Architecture Platform installation.
The first step is to update/install the following components via the update server:
Composable Architecture Platform console (10.0.0:21050 or later)
Base Rules (2021-07-16 or later)
BIP Runtime (2018-08-07 or later)
HTTP Rules (2021-07-15 or later)
After the BIP Runtime extension has been installed, locate the folder named ‘Certificates’ under the Composable Architecture Platform Server installation:
Our certificate is found in that folder with the name: root.pem
To install the certificate authority in Firefox, start by selecting Options from the main menu:
Then select the Privacy & Security section and click View Certificates:
In the certificate manager, select the Authorities tab:
Click on Import… then open the root.pem file from the location described earlier (the Certificates folder).
You will be given the option to select the level of trust for the certificate. Select “Trust this CA to identify websites” and click on OK:
Click on OK again to close the certificate manager.
To be able to see traffic flowing between Firefox and your target site, you must configure Firefox to use the proxy. Under the Options Advanced settings, select the Network tab and click on Settings.
Configure the proxy as shown and click on OK:
You can now close the Settings tab in Firefox.
The certificate is now installed, and you are ready to see traffic.
Please note that by using the Chrome installation method, other browsers (such as the Microsoft Edge browser) will be affected as well.
We will therefore only show the Chrome approach.
Important: To install the certificate, the user MUST have administrative privileges on the system.
In the Chrome browser, select Settings:
Scroll down the page that appears and click on Privacy and Security
Locate the HTTPS/SSL section and click Manage certificates…
In the dialog box that appears, navigate to the Trusted Root Certification Authorities tab and click on Import.
This takes you to the certificate import wizard:
Click on Next
Important: PEM files are not available as a default filter. To locate the file, select All Files (*.*):
Locate and select the root.pem file, then click on Open.
The file name now appears in the Certificate Import Wizard and you can click on Next.
Select the certificate store as shown and click on Next:
You will be presented with a review page. Click on Finish.
A security warning appears. Make sure you click on Yes:
The certificate will be imported:
Close the certificates list:
Please note that by using the Chrome installation method, other browsers (such as the Microsoft Edge browser) will be affected as well. We will therefore only show the Chrome approach.
Within the Chrome advanced settings, locate Network and click on Change proxy settings…
In the internet properties that appears, click on LAN settings:
Set the proxy server as shown and click on OK:
Then click OK again to close the internet properties and close the Settings tab in Chrome. The certificate is now installed and you are ready to see traffic.
Please note that both Safari and Chrome use the same certificate store so this installation applies to both.
To install the certificate, navigate to the Certificates folder and double-click on the root.pem file. The Keychain Access utility will launch and requires you to enter your Admin User credentials:
Enter your password and click on Modify Keychain
This will launch the Keychain Access utility with the certificate imported into the System keychain:
Double-Click on the TomorrowX CA certificate to bring up the details:
Expand the Trust option and set the drop-down ‘When using this certificate’ to Always Trust:
Close the pop-up details window and enter your administrator password to update. The entry will now have a blue circle with a white cross to indicate a trusted certificate and will have the following text: “This certificate is marked as trusted for all users”:
Now that your certificate is installed, switch to the Composable Architecture Platform console, select the Product Trial repository and deploy the BasicWebLister configuration to the proxy server.
Wait for the proxy server to start.
Google should load as normal:
And you should see traffic in the proxy console:
Version: 10.0 / Modifications: 0
The Tomorrow Software console ships with a scripting interface to facilitate automated management by other tools. The scripting interface is based on the TCL (Tools Command Language) version 8.4 syntax and commands, but also includes a number of Tomorrow Software specific commands.
As scripting is a programming interface, the scripting engine is not multi-lingual. It is invoked by a simple HTTP/S POST command to the URL:
http://<server>/console/ScriptRunner
The parameters for the POST are as follows:
All parameters should be UTF-8 encoded.
It is beyond the scope of this manual to provide complete details of the TCL language. TCL has been in use for many years and plenty of online resources exist for learning the language. An excellent primer can be found here:
Also, several sample scripts can be found in the /Education/script samples folder.
To assist with testing scripts, a specific page has been made available:
http://<server>/console/scriptRunner.jsp
This page allows you to enter a user ID and password, as well as a script, and submit it to the console. The output from the submission is returned to the browser.
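(As a sketch, using curl; the parameter names userid, password and script are assumptions based on the fields this test page exposes:)

    curl -X POST "http://myserver/console/ScriptRunner" \
         --data-urlencode "userid=admin" \
         --data-urlencode "password=secret" \
         --data-urlencode "script=puts [serverList]"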
Tomorrow Software introduces a number of extensions to the standard TCL language. All of the extensions relate to specific console management tasks.
In addition, output written with the "puts" command is written to the HTTP Response stream rather than STDOUT.
The createUser command creates a new console user. The command takes a number of parameters to correctly define a user in the console:
The following script snippet shows an example of how to use this command:
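(A sketch only; the exact parameters and their order are assumptions, with illustrative values:)

    # user ID, password, display name and role are illustrative guesses
    createUser test123 "secret" "Test User" "Designer"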
This command can only be executed with administrator or security authority.
The deployConfiguration command deploys a specific configuration to a nominated server. Only configurations located in a repository can be deployed using this command. The parameters for the command are Server ID, Repository Name and Configuration Name. The following script snippet shows an example of how to use this command:
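(A sketch using the parameter order just described, with illustrative values, followed by the recommended restart:)

    deployConfiguration "ProxyServer" "Product Trial" "BasicWebTrial"
    stopServer "ProxyServer"
    startServer "ProxyServer"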
The command will wait for the deployment task to complete before continuing. The deployment does not result in a server restart. The stopServer and startServer commands should be used after this command to ensure that the deployed configuration takes effect. This command is only valid for production servers.
The deleteUser command deletes a user based on a provided user ID. The following script snippet shows an example of how to use this command:
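(A sketch with an illustrative user ID:)

    deleteUser test123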
This command can only be executed with administrator or security authority.
The getAudit command retrieves a subset of the internal Tomorrow Software audit log. The following snippet shows an example of how to use this command:
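(A sketch; that the entries come back as a TCL list is an assumption:)

    set entries [getAudit 1425064936463]
    foreach entry $entries {
        puts $entry
    }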
The above commands retrieve all audit log entries after the Java Time Stamp 1425064936463.
This command can only be executed with administrator or security authority.
The getConfiguration command reads a specific configuration from a specific repository and provides access to all elements of the configuration (including the ability to update it if the user has the appropriate authority).
The following script snippet shows an example of how to use this command:
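(A sketch matching the description below; the accessor names getDefaultRuleSet and setMaxTestRecords are assumptions, not confirmed API names:)

    set cfg [getConfiguration "Product Trial" "BasicWebTrial"]
    puts [$cfg getDefaultRuleSet]   ;# default rule set file name
    $cfg setMaxTestRecords 20000
    $cfg update                     ;# writes the configuration to the file system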
The above command obtains the BasicWebTrial configuration from the Product Trial repository, outputs the default rule set file name and then sets the maximum number of test records to 20,000 before updating the configuration (writing it to the file system).
The following table provides a list of all of the readable properties on the configuration object:
Each of the above values can also be set using the equivalent setter method (replacing "get"/"is" with "set").
Please note that for any arrays, ALL arrays in a set (attributes, databases, timer rule sets) MUST be set to the same length before invoking update.
The TCL interface only supports updating existing configurations. New configurations cannot be created using TCL and existing configurations cannot be deleted.
The getUser command reads a specific user and provides access to some elements of that user.
The following script snippet shows an example of how to use this command:
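(A sketch matching the description below; the method names are assumptions:)

    set user [getUser test123]
    puts [$user getName]
    $user setRole "Designer"
    $user update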
The above command reads the user test123, outputs the name of that user and then sets the role before updating the user.
The following table provides a list of all the readable properties on the user object:
Most of the above values can also be set using the equivalent setter method (replacing "get"/"is" with "set"). The values that cannot be set are: Logon, Created and LastLogon.
This command can only be executed with administrator or security authority.
The serverList command obtains a list of all configured servers in the console that the user is authorized to view. The response is in the form of an array of server IDs. The following script snippet shows an example of how to use this command:
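(A sketch, assuming the returned IDs iterate as a TCL list:)

    foreach id [serverList] {
        puts $id
    }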
A sample output from running the above script is as follows:
The serverStatus command is used to interrogate the current status of a server, based on the server's ID. The following script snippet shows an example of how to use this command:
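(A sketch; the isRunning method name is an assumption, see the method table below for the real ones:)

    set status [serverStatus "ProxyServer"]
    puts [$status isRunning]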
The return value from the command is a server status object. The following methods are available on the object:
The setCredentials command is used to set the value of a given field in the credentials vault. The specific vault and field must exist already.
The following script snippet shows an example of how to use this command:
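(A sketch with illustrative vault, field and value names:)

    setCredentials "MyVault" "apiKey" "s3cret-value"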
This command can only be executed with administrator or security authority.
The startServer command is used to start a nominated server.
The following script snippet shows an example of how to use this command:
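(A sketch using the documented return value:)

    if {[startServer "ProxyServer"] == 1} {
        puts "Server started"
    } else {
        puts "Server failed to start"
    }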
The command will wait for up to 30 seconds to ensure that the server is actually started. Provided the server starts, the command will return "1". If the server fails to start, then "0" will be returned.
The stopServer command is used to stop a nominated server.
The following script snippet shows an example of how to use this command:
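(A sketch using the documented return value:)

    if {[stopServer "ProxyServer"] == 1} {
        puts "Server stopped"
    }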
The command will wait for up to 30 seconds to ensure that the server is actually stopped. Provided the server stops, the command will return "1". If the server fails to stop then "0" will be returned.
The userExists command checks if a given user ID exists. The command returns "0" if the user ID is not found or "1" if the user ID is found. The following script snippet shows an example of how to use this command:
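(A sketch with an illustrative user ID:)

    if {[userExists test123] == 1} {
        puts "User exists"
    }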
This command can only be executed with administrator or security authority.
The updateApplication command updates a console application (such as Qwerty or the console itself). The following script snippet shows an example of how to use this command. In this case the console itself will be updated:
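(A sketch; the way the console application is named in the argument is an assumption:)

    updateApplication "console"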
This command can only be executed with administrator authority.
The updateExtension command updates/installs an extension from the update server (such as the Base Rules or the Http Rules). The following script snippet shows an example of how to use this command:
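(A sketch, using one of the extensions mentioned above:)

    updateExtension "Base Rules"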
This command can only be executed with administrator authority.
The updateRepository command updates/installs a repository from the update server (such as the Product Trial repository). The following script snippet shows an example of how to use this command:
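(A sketch, using the repository mentioned above:)

    updateRepository "Product Trial"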
This command can only be executed with administrator authority.
The userList command obtains a list of all users in the console. The response is in the form of an array of user IDs. The following script snippet shows an example of how to use this command:
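(A sketch, assuming the returned IDs iterate as a TCL list:)

    foreach id [userList] {
        puts $id
    }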
A sample output from running the above script is as follows:
This command can only be executed with administrator or security authority.
Welcome to the Tomorrow Software reference for interacting with the PiFace Digital 2 I/O board for Raspberry Pi. In this guide we will provide instructions on how to set up a Raspberry Pi and PiFace combo to accept button input and control a few LEDs and relays.
The licensing of the PiFace Extension is the same as most other extensions that we provide. You simply need a valid Tomorrow Software license.
The PiFace Extension uses the Pi4J open source (LGPL V3 license) library. This is a free unencumbered license for private and commercial use.
It is assumed in this document that you have prior experience with Tomorrow Software and that concepts such as server definitions and rule writing are familiar to you.
The very first thing you need to get started is some hardware. The following photo shows the most essential components:
What you need is as follows:
HDMI cable plus a TV/monitor with HDMI input (not shown)
Micro-USB power supply (Preferably 2A)
Raspberry Pi 2 board
Case designed for the Raspberry Pi and PiFace together (optional)
Multi-meter (Optional but really handy)
USB Wi-Fi dongle
Raspberry Pi Noobs SD Card
PiFace Digital 2 board
Standard USB mouse
Standard USB keyboard
The assembly of the hardware is incredibly simple:
Mount the PiFace on top of the Raspberry Pi board
Insert the Wi-Fi dongle, keyboard and mouse into the USB slots
Remove the micro-SD card from inside the Noobs SD pocket and insert it into the bracket on the underside of the Raspberry Pi
Connect the HDMI cable from your Raspberry Pi to your monitor
Connect the power supply and wait for it to boot up
Once the operating system has booted, you will see the following image:
Using your cursor keys, space bar to select and Tab key to navigate options, set up your time zone, locale and select the option to boot to desktop.
The PiFace board communicates with the Raspberry Pi over an interface known as SPI. This interface is not enabled by default, so we need to enable it. From within the configuration tool, select Advanced Options and SPI.
Enable SPI and load by default. Once done, return to the main menu, hit the Esc key and type:
This will force a reboot, and after startup you will end up in LXDE:
From here, we need to configure our Wi-Fi connection. Click on Preferences then Wi-Fi Configuration.
Next click on Scan. After a short while, your Wi-Fi network should appear and you can double-click on it to provide a password. Once done, simply click on Add and your internet connection will be established.
Wait for the IP address to show up and note it down for later.
Because our project requires the latest drivers and software, the next step is to update the operating system.
Open a terminal window and type the following commands:
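(On Raspbian these are the two standard update commands:)

    sudo apt-get update
    sudo apt-get upgrade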
These two commands will take quite a while to complete, depending on your internet speed. Please ensure both tasks complete without errors before continuing.
The Tomorrow Software PiFace extension relies on an open source project known as Pi4J. We need to install this next. At the command line, type:
Next, we need to get the Tomorrow Software installed. There are two options:
Downloaded from the web
Install using a USB thumb drive
If you have received the software on a USB thumb drive, you need to perform some additional configuration. If you downloaded the image, please skip to the next section.
In the terminal window, create a folder where the USB drive will be mounted:
Next, we need to edit the file system table:
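(Assuming the nano editor, which the Ctrl-X/Y save sequence below implies:)

    sudo nano /etc/fstab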
Add the following line to the end of the file:
IMPORTANT: This has to be ONE line in the file
Press Ctrl-X and a capital Y, followed by Enter to save.
Then reboot:
Once the reboot has completed, insert the thumb drive and make sure you can access it.
Tomorrow Software must be installed as the user root, as it uses ports such as 80 (http) and 443 (https).
To achieve this, you need to be able to switch to root using the su command.
To enable root access, type the following command:
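(On Raspbian this is done with:)

    sudo passwd root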
Pick a good password and enter it twice.
We are now ready to start the file manager in root mode to copy the image into place.
At the command prompt, type:
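(That is, the su command introduced above:)

    su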
Enter the password you just set up, then type:
This will start the file manager as root.
Locate the “Tomorrow-Software-Server-10.0.0.zip” image you either downloaded or have on your thumb drive, then right-click and select Copy.
Change the folder to /opt and create a new folder named “local”.
Copy the zip file to this location, right click it and select “Extract Here”.
In the terminal window (as root), create a symbolic link to the distribution as follows:
Right-click the file tomorrow.sh in /opt/local/Tomorrow/server/bin, select Properties, then the Permissions tab and make sure Execute is set to “Only owner and group”.
Copy the file tomorrowstart from /opt/local/Tomorrow/server/bin to /etc/init.d.
Right click the file, select Properties, then the Permissions tab and once again make sure Execute is set to “Only owner and group”.
Then enter the following commands in a terminal window (logged in as root).
Everything is now ready for the first run of the Tomorrow Software engine. Reboot your Raspberry Pi. You can either do this from the menu or by typing:
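(From a terminal, the standard command is:)

    sudo reboot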
Once rebooted, wait for the CPU to settle down after startup – it can take quite a while (2-3 minutes on a Pi 2). Do NOT attempt to log in during this phase.
Logging in to the instance should happen from some other computer. The best way to do this is to modify the hosts file on the computer in question to give it a valid name. For example: homeauto.local
Then, simply open a browser and point it to the following URL:
Log in using the user admin and the password admin. You will access the main console. Select Administration then Console Setup:
Change the console type to “Forwarding Proxy without console” and click on Save.
This will shut down Tomorrow Software on the Raspberry Pi. Give it a minute or two to complete, then return to the Raspberry Pi and reboot it.
At this point there will no longer be a console running on the Raspberry Pi. It is instead required to be managed from another Tomorrow Software console instance. To enable this, we need to log in to that alternate console instance and set up a new server definition:
As well as the basics above, we also need to set up the protected hosts, remove the client IP restrictions and disable the browser proxy:
Make the required changes and click on Save.
If all your settings are correct, your instance will now show green in the Servers section:
The next step is to update/install the following components via the update server:
PiFace Rules
It is now time to test all the setup work. We will start by turning on LEDs on demand.
From within the Tomorrow Software console, create a new repository named “LED Test”, then create a new rule set named “LEDSwitch” in that repository.
Hit update on the rule set and create the following:
Properties are:
Click on the Save button to save the new rule set.
Return to the console to create a new configuration in the LED Test repository:
Click on Create to create the configuration.
It is now time to deploy the configuration to the PiFace Server. Deploy the configuration selecting the “Restart immediately” option.
Wait for the deployment to complete. This can take several minutes, especially the first time. Once the deployment is complete, return to a browser and enter the following URL:
Provided you have followed every step above, LED 4 on the PiFace board will now turn on. You can turn it off using:
When a button is pressed or released, this needs to trigger an event. For this purpose, there is a rule named “PiFace Button Listener”, which applies to each button.
You place these rules in a startup rule set.
The following shows a startup rule set that will turn LED 1 on when button 1 is pressed and turn it off when button 2 is pressed:
We also need to modify the configuration to accept the startup rule:
Deploy the configuration to the PiFace server and once again enter the following URL in a browser:
This will trigger the X Engine startup and activate the button listeners. Now try to press button 1 on the PiFace. LED 1 will turn on. If you press button 2, LED 1 will turn off.
Notice that LED 1 is linked to a relay. You can hear it click when the LED turns on or off.
Push notifications are rapidly emerging as one of the most efficient ways of sending information to users without going through email, SMS or other channels (such as Messenger or Slack).
Push notifications have a very high click-through rate and are supported by all modern browsers and platforms except Apple’s.
The push notification framework provides a simple way to add push notifications to your application with the ability to fall back to alternatives if the user is on an unsupported platform.
A push notification appears as a message in the user’s notification section. For example, in Windows a message could look like this:
It consists of an icon, a headline and a text body (where supported).
If the user clicks on the message, an event is generated that will open a web page specific to the message and will also send a notification back to the server that the user clicked the message.
There are some restrictions to using push messages:
The web site sending the message MUST be using a secure protocol (https), even during development
The user must be on a supported platform
A set of cryptographic keys must be created to sign messages
The push notification framework helps you manage the last 2 of those 3 items above. To install certificates within your application, please refer to the product reference.
Please note: This manual will reference the Push Notification Demo repository, which can be obtained from the update server.
The push notification framework consists of 3 rules and a precisely structured HTML page. In the following section we will cover these rules in detail.
This rule does two things:
It either obtains, reads, or creates credential keys to use with the notifications
It initializes a data set used to store notification user information
Keys are created when none are present in the credentials vault or in the file system. The first time the rule is executed, they are written directly on the target server as two new files:
You can choose to simply leave these files on the server (in which case you should also place them in the Data Files section of the repository you are working with and ensure they are deployed using the Register Data Files rule).
The preferred way however is to store the keys in the credential vault. This is a simple exercise of opening the key files with a text editor and copying the text from them to the appropriate keys in the vault:
Once the keys are in the vault, the files can be removed from the target server.
The data set created by the rule is named “WebPushSubscribers” and is entirely managed by the framework. You can however also query and work with the data set in rules if you wish. To do so, you will need to know the field names, which are: subscriber, target, endpoint and group.
Subscriber refers to the user id within your application for a logged in user
Target is the type of communication. For example: Push, Email, SMS etc
Endpoint is the key to sending; it can be a Push key, email address, phone number (or whatever else is a valid definition of where the message should end up, depending on the target)
Group is the target group. It can be the same as the subscriber (for direct communication) or it can be a subscribed group (such as offers, recalls, alerts etc).
The Push Notification Controller rule manages everything related to interacting with the browser to ensure push notifications can be subscribed to and delivered. It automatically generates correct and tested JavaScript pages and a default icon for the rest of the framework to use.
Even though the controller manages all these interactions, you always have the option of doing your own additional processing (for example, when a user subscribes or unsubscribes, or performs a click-through on a notification).
The controller only needs a few properties:
The Database is the database where subscriber information should be stored.
The Subscriber is the user ID of the user involved in the interaction. Note that generally speaking it is best to have a user logged in so that you can target specific users, rather than just a generic group of people.
The Default URL to open is a fallback mechanism for browsers that do not yet support a target URL to open for each message. In that case, clicking on the notification should send them to a sensible page (such as a login page).
The Push Notification Controller is designed to be the last rule in our normal application flows. In the sample repository that looks like this:
It is important to note that the sample repository is cut down to an absolute minimum for maximum clarity. Your production repository should follow the guidelines set out in the Best Practices manual.
To help you experience push notifications we have created a simple entry page called index.html. It presents as follows:
Returning to this page logs out any user. To log in as one of the two users just click the relevant button.
No passwords required.
To go along with the Push Notification Controller, you will need a subscription page served up as content. There is a very minimal sample page in the Push Notification Demo repository named subscribe.html. It presents as follows (after checking that notifications are possible):
Should your browser NOT support push notifications, you will receive the following page instead:
And finally, if you are trying with something like Internet Explorer, you will receive this message:
All of the above sections are simply DIVs in the sample HTML file:
The important thing to understand is that the various IDs of each DIV must remain in place.
You can change the DIVs to <section> or <span> tags (or whatever you like), but the IDs must be present so the Push Notification Controller rule can take charge of the page in the background.
Mandatory IDs
There are several critical IDs that must remain in place. They are as follows:
Radio and Checkbox Groups
In addition to the IDs, there are 2 named radio button groups:
and:
Notice the slight difference in the name that separates the two groups. It’s a common mistake to copy from one group to the other and forget to correct the name.
Alongside each radio button that is NOT a Push notification, you will need to specify a hidden value for each:
This provides the framework with information on how to define the destination of the non-Push notifications.
The next section is the checkboxes to enable additional groups the user can subscribe to.
You can have an unlimited number of these groups. Each selection by the user will automatically subscribe them to that group and notifications can be sent to all subscribers.
Buttons
The framework requires two buttons on the page:
Both these buttons should be disabled in the HTML by default. The framework will enable the right button at the right time.
Connecting to the framework
The final step is to connect the HTML to the framework. This is done by importing a JavaScript file that is dynamically generated by the framework. You do not need to have this file anywhere in your repository; it is fully generated along with all dependencies.
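(A sketch of the import; the src path here is purely illustrative, use the path the framework generates:)

    <!-- illustrative path only -->
    <script src="/push-notifications.js"></script>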
And with this, you now have a fully functioning push notification page, so it is time to look at how to send them.
Sending push notifications involves either sending an individual message or sending notifications to an entire group of people. In the demonstration repository there is a sample sending page named send.html:
This page enables you to send individual push notifications to the two users, or you can send a recall notification to all users that have subscribed to recalls.
The page shown after the message is the page that will be opened when the user clicks the notification.
All notifications (regardless of the method of sending) can be managed with the Send Push Notification rule:
There are several options to define how the push notification looks and acts:
The Database relates to the location where subscriber data is stored.
The Audience should be either the internal user ID that the application can use to identify a user or a notification group name.
The Sender Email should be the email of someone who can assist with technical queries from the external push notification servers used to send the notifications. Those servers are managed by organizations such as Google and Microsoft and the email is used for relaying complaints or warnings.
The Expiry is provided in minutes with a maximum of 24 hours allowed.
The Title is the key short description of the notification
The Icon is an icon to show to the user when the notification is displayed. If no icon is provided a default will be displayed.
The URL to open is the URL that will open when the user clicks on the notification. Not all browsers support this and should they not, the default URL to open from the Push Notification Controller will be used instead.
The Message allows for a longer notification message to be displayed. Not all browsers support this.
The Tag is a value that can be used to avoid sending the same notification to the user over and over. Messages with the same tag name will only be available once in the user’s notification system. The following Re-notify is related to this. It determines if the user should get another notification as a result of an unopened tag group or not.
Vibrate can be used to control the vibrations of the user’s device. It is specified as a series of on/off pairs in milliseconds. For example: “100,200,100,200” would mean vibrate for 100ms, pause for 200ms, vibrate for 100ms, pause for 200ms.
This rule notably includes the ability to personalize the message being sent and permits the sending through alternative channels.
For every message, the following variables are available: NOTIFICATION_TARGET, NOTIFICATION_USER, NOTIFICATION_ENDPOINT, NOTIFICATION_MESSAGE, NOTIFICATION_URL and NOTIFICATION_TITLE
The rule writer can use the first 3 to determine where to send a message and identify the relevant user being notified – and can use the last 3 to customize the message.
In our demonstration repository we do this by inserting the user name into the message for group messages:
The Personalize title rule has the following properties:
This is based on the title being sent to the recall group looking like this:
So for each message being sent, the Personalization chain point will insert the actual user name into the title.
If the user that signed up for notifications did not have a supported browser, we can offer alternatives (such as email, SMS or other targets).
To support the sender managing those alternative channels for us, the Alternative chain point is called whenever a target is different from “Push”. In our demonstration repository we showcase this with a simple output to the console:
However, the rule writer has access to all the 6 variables listed previously at this point in the flow and can use it to send the notification to the right target using the rules most relevant for that:
Version: 10.0 / Modifications: 0
Welcome to a new dimension of Microsoft Windows automation. Using the Tomorrow Software Windows Automation Extension you can now not just script up the flow of a Windows Application – but you can also combine it with data from many other sources and the powerful rule writing capabilities of the Tomorrow Software Multi-Protocol engine.
The extension is based on the popular AutoIt automation product and we have included tools from that product to help your automation efforts. AutoIt is a free product; however, if you find the product and the Windows Automation extension useful, we would encourage you to make a donation to the creators of AutoIt at:
The licensing of the Tomorrow Software Windows Automation Extension is the same as most other extensions that we provide. You simply need a valid Tomorrow Software license.
The license for the AutoIt tools described in this reference guide is found in the “data” folder where you also found this document. In a quick summary it is a classic free software license.
Before you begin your first automation project, you need to make some updates to your Tomorrow Software installation.
The first step is to update/install the following components via the update server:
Tomorrow Software console (B18020 or later)
Base Rules (2018-04-26 or later)
Parallel Processing Rules (2018-04-23 or later)
Windows Automation Rules (2018-04-23 or later)
If you received this document through some means other than the update server, then you will also need to install the Windows Automation repository.
At this point, stop the Tomorrow Software instance.
The JRE that ships with Tomorrow Software is a basic 32 bit JRE. The version may depend on when you received your copy of the product.
To successfully run automation projects, you need to update the JRE to at least version 8 for your platform (32 bit or 64 bit).
Once you have installed the JRE, you need to update the JRE folder under the Tomorrow Software installation with the JRE that you installed on your Windows PC. You do this by renaming the original JRE folder and creating a new one by copying the JRE from C:\Program Files\Java\jre1.8.0_(version) and renaming it to jre.
The final step is to install the Au3Info tool. You need this tool to inspect running Microsoft Windows programs and identify the names of controls that you can manage. The easiest way to install the tool is to download it from the Windows Automation repository’s data folder and save it to your desktop (or some other convenient location).
There are two versions available:
Au3Info.exe is for 32-bit Windows systems
Au3Info_x64 is for 64-bit Windows systems
Make sure that you download the right version.
In this example we will take you through the automation of creating a document in Windows Notepad and saving it.
Start by restarting the Tomorrow Software Server instance, log in and create a new repository called “Notepad Exercise”.
Then create a new rule set called “NotePadDemo”:
and open it up in the rules editor.
The very first thing we need to do is start notepad itself. To start an application, simply drag the Run Application rule onto the canvas:
And set the properties as shown:
This step alone will cause Notepad to start up. You do not need to provide a directory since Notepad will be in the system path.
Since we are going to do something more than just start the application, we need to make sure that it is fully loaded before we start pressing keys. So we add a Wait Active rule:
With the properties set as follows:
Here it is relevant to pause for a minute and look at those properties.
Firstly, the Windows Label. Many of the rules provided in the Windows Automation framework use the Label and Text combination to identify windows to work with. The logic of this combination is as follows:
The Label match starts from the beginning of the label and matches as many characters as are provided in the rule.
In this case we match the entire label.
The optional Text matching refers to a text within the window that was opened. This could be any word visible on the page or within a dialog box. This matching is used for more precise pinpointing of a window.
We will perform such a match later in this section.
For now, we will simply send some keystrokes to Notepad to create a document we can save:
Let’s try and run our three new rules and see what happens. In the Notepad Exercise repository create a new configuration as follows:
And set the input source to:
We can now deploy our configuration to any convenient active server (you can use a Multi-Protocol server or even Qwerty). As long as you tick “Restart immediately”, shortly after the deployment is complete you will see Notepad start up and the text appear:
You may have noticed that the text entered in our example was set as “Raw”:
Your other option would be to use the formatting text feature:
This feature allows you to send specific keystrokes with great ease. For example, in standard AutoIt notation, ^ represents the Ctrl key, ! represents Alt and + represents Shift. You can combine these keys: ^!a would be Ctrl-Alt-a.
If you need to send any of those characters without sending them as special keys, you must enclose them in curly brackets. For example, {!} sends a literal !.
You can also send normal Windows keys by enclosing them in curly brackets. For example:
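(In standard AutoIt notation, for example, the Enter key:)

    {ENTER}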
The name used in the brackets can be most normal windows keyboard designations.
If you need to repeat a few keystrokes, you can do this by entering the key name followed by a count. For example:
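(In standard AutoIt notation:)

    {DEL 5}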
Will result in the delete key being hit 5 times.
So we could in theory expand on our example to make Notepad try to close once the text was entered. The keystroke for closing a window is Alt-F4. We would do this in formatted text as follows:
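(Using ! for Alt and curly brackets for the F4 key:)

    !{F4}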
Doing this results in the following outcome:
You can try this if you wish, just remember to switch it back to “Hello World!!” and “Raw” afterwards to continue this exercise.
It is one thing being able to send keystrokes, but more often than not for automation, you will need to know the content of specific fields or you may need to be able to set the value of specific named fields without just using keyboard navigation.
This is where the tool from AutoIt (that you installed earlier) comes into play. Start up the correct version of Au3Info:
Click on the Finder tool and drag it onto the main Notepad window:
You will see that the tool provides you with the basic Windows information (Title and Class). It also provides us with the Basic Control Info, which tells us that the field is of the Class “Edit” and is instance “1”.
What we need at this stage is the ability to identify a specific field in a specific window. The best and safest way to do this is to click on the “Control” tab:
And then double-click on the “Advanced Mode” entry. This copies the identifier [CLASS:Edit; INSTANCE:1] to the clipboard for us so that we can use it easily.
So all we need now is to add a “Get Control Text” rule (and a List Variables rule so we can see what’s going on):
The properties for the Get Control Text would be as follows:
The control identifier is easily set by entering two double quotes and pasting the clipboard content from the AutoIt tool in between them.
A quick run and a peek at our console will confirm that this is working:
There are certain circumstances where text is not necessarily linked to a specific control. The Windows Calculator is one such example. It actually stores the result not in a control, but in the window itself. If you need to get to this text, you can use the Get Window Text rule instead of the Get Control Text rule.
Hint: When you extract text from the window itself, it is often formatted across multiple lines. An easy way to get visibility of control characters in text is to escape them as if they were going into a URL; a carriage return and line feed, for example, will show up as %0D%0A. You can do this with the Escape rule.
It is now time to close our window. This is simply done with the Close Window rule:
The properties should look familiar now:
The result of adding this rule will inevitably be:
Now, we wish to wait for this dialog to appear and then hit Enter to save the file we just created. Once again, this should now be familiar territory:
With the properties being set as follows:
Note the use of Window text in the “Wait for save box” rule. It is conceivable that Notepad may put out many dialogs that are simply labeled “Notepad”, so the extra check for the word “Save” somewhere on the dialog box helps us confirm we are in the right place.
But now things are getting a little tricky. Once we hit enter, we need to wait for the “Save As” dialog to appear:
On the surface, this may look quite simple. We wait for the dialog box to appear, we find the controls for the directory and file name, put in some values and hit Save.
The first step is not too hard:
Next, we discover (using the AutoIt tool) that the directory control is named “ToolbarWindow32”:
However, it quickly becomes obvious that you can’t just set the control value to “Address: MyDirectory” using the Set Control Text rule. It simply has no effect. So, we need to introduce a workaround. Some experimentation shows that if you click on the far right corner of the control, you can actually enter a directory name:
And the text is preselected, so if we can just do the same mouse clicks in rules, we will be able to override the text in the control and continue. This requires a few steps.
We start by getting the control position so that we can figure out where to click within it:
Adding a List Variables rule and running this results in the following output in the console:
So now we know the position and dimensions of the control. The next step is to figure out the correct position to click. A simple calculation rule will take care of that:
All that remains now is to “click”:
Most of the above should now be clear. We are basically clicking near the far right side of the control, using the left mouse button. If you don’t provide an X or Y position, the center of the control’s respective axis will be clicked.
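To make the calculation concrete, suppose (made-up numbers purely for illustration) the control position rule reported X=160, Y=40, WIDTH=500 and HEIGHT=30 for the directory control. Calculating an X click offset of WIDTH - 10 = 490, and leaving Y blank so the vertical centre is used, lands the click just inside the right edge of the control, which is exactly where the editable text appears.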
All that remains now is to set the control value by sending the right key strokes:
Notice that we hit the Enter key as part of this exercise. This is because the Save As dialog box changes to the directory entered, once the Enter key is hit.
If you are following this example, make sure that you pick a directory that actually exists. In our example, we have created C:\DemoData purely for this exercise.
The next job is to set the actual file name. Using AutoIt, we discover that the control name for this is “[CLASS:Edit; INSTANCE:1]”. So this looks pretty straightforward. However, setting the control text by itself:
Does not work well. The resulting file name actually becomes “mydemo.txt*.txt”.
So formatted text once again comes to the rescue. We preface the new file name with a Ctrl-a (select all) followed by Delete to clear the field (in formatted text: ^a{DEL}mydemo.txt):
Note that there are other ways you could achieve the same goal. This is just an illustrative example.
All that remains is to hit the Save button. Any old Windows keyboard warrior will know that an underlined character in a Windows dialog box can be invoked using Alt+[underlined key]:
In this case, Alt+S will save the file. So we go ahead and invoke it:
If you run this complete example, you will now have a file in your designated folder called “mydemo.txt”.
Of course, if you run our scenario twice, you will encounter another message dialog telling you that the file already exists:
It is important to handle these kinds of exceptions, as otherwise your automation project may become unreliable. In our case, we wait for the “Already exists” dialog to appear, with a timeout telling us whether we need to handle it or not:
In the above example, the file will simply be replaced if it already exists.
A significant problem with Windows automation is interference. Essentially, the automation rules are sending keystrokes and mouse clicks to applications. If someone (usually a human being) tries to enter keys or click the mouse at the same time, the automation is likely to fail. For this reason, automations should always run on a dedicated machine with no other activity.
When running a cluster of Tomorrow Software Server instances as a REST service, you need to prevent interfering traffic from impacting automation requests. For example, a load balanced, clustered web-based service may receive heartbeat health check pings to confirm service availability, or other unwanted requests; such traffic needs to be filtered (not necessarily blocked) so that it never reaches the automation rule sets.
A final issue to be aware of when running automations is that multiple concurrent automations also interfere with each other. For this reason, the best approach is to queue automations if they need to run on the same server. The easiest way to do this is with the “Launch Queued Process” rule:
This rule will ensure that, X Engine wide, only one automation rule set will run at any one point in time. The calling rules, however, are not held up whilst these automation requests are queued.
If you need to wait for an automation process to complete before continuing, the best rule to use is “Wait for Queued Process”. This rule will place the automation request on the queue and will not continue until the automation has completed.
Given you can only run one automation process at any one point in time, you may need a load balanced setup to share automation requests over multiple servers.
The best way to do this is by wrapping the automation request into a REST service and deploying it to multiple virtual server instances behind a load balancer in round robin mode.
Using this approach, the load balancer will find the next available server and distribute the load evenly.
A core virtual server instance should be created so that it can be cloned whenever more capacity is needed.
The default BaseApp Tomorrow Software Server service instance is suitable for running as a REST service; please refer to the instructions file Read me.txt located in [Tomorrow-Software-Server-10.0.0]/BaseApp/ for setup. Also refer to the Product Reference.pdf section entitled “Removing other unnecessary components” to remove the Tomorrow Software Console and other unwanted demo applications and server instances that are not required.
Please note that Windows Automation instances cannot be run as a service. They must be started using a bat file in the Windows startup group.
Example to run at startup (Windows Server 2012):
In the Local Group Policy Editor, modify Administrative Templates > System > Logon > Run these programs at user logon.
Enable this option, press Show, enter the following value, and apply/OK this configuration.
Where c:\Tomorrow\Tomorrow-Software-Server-10.0.0 is this example directory path.
When using this option you need to edit the default Tomorrow.bat file to add the following three lines before cd server to accommodate the startup directory path, once again where c:\Tomorrow\Tomorrow-Software-Server-10.0.0 is this example directory path.
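The actual lines are shown in the product documentation; purely as a hypothetical sketch of what such lines typically look like (using the example directory path above):

rem Switch to the installation drive and directory at user logon
c:
cd \Tomorrow\Tomorrow-Software-Server-10.0.0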
A significant limitation with Windows automation (like most GUI automation tools) is that it requires an active desktop to run. So, when you log out of a remote desktop connection or lock the computer, automation is paused or stuck until you reconnect. It is therefore impractical to retain open RDP connections for multiple Tomorrow Software Server instances when running as a REST service with high availability demands. The following is a working example to overcome this limitation.
Example: In Windows Server 2012 set the Turn off the display option to Never in Control Panel Power Options.
You still need a way for the remote server’s head/desktop to be unlocked and active. The best way to do this is to use the VNC protocol rather than RDP. There are numerous VNC server and client packages available, many of them free and/or open source.
RealVNC VNC Server uses modes to provide remote access to computers in different circumstances, to meet different needs.
VNC Server needs to be installed on the Tomorrow Software Server instance, and VNC Viewer needs to be installed on a ‘controller’ server.
Given the Tomorrow Software Console server will have access to the server instances, this server is a good candidate to run VNC Viewer, although a dedicated server with access to the instances can perform this connectivity too.
VNC Server installs and runs on default port 5900, so ensure any security group policies have been amended to permit connection using this port, together with ports that are running the REST service. The BaseApp to use as a REST service runs as default on port 10001 as defined in the rulesengine.properties settings.
During the standard RealVNC installation process, ensure you select the appropriate components for your REST service instance and Console Server or controller.
There is also an install option to add an exception to the Windows firewall during installation, but if you experience connection problems you may still need to inspect your server firewall settings.
Before starting the VNC Server service, it’s useful to know that all VNC applications are controlled by VNC parameters, set out-of-the-box to suitable default values for most users.
The easiest way to set the authentication scheme and credentials for the VNC Viewer controller in order to connect to VNC Server is to start the VNC Server (User Mode) desktop application.
For example, set the simple authentication scheme using a VNC password in the VNC Server – Options > Users & Permissions option as follows.
Once the authentication scheme and access credentials have been set, and Licensing updated if required, ensure you stop the running VNC Server (User Mode) by pressing the More button, followed by Stop VNC Server as follows.
The parameter IdleTimeout specifies the number of seconds to wait before disconnecting users who have not interacted with the host computer during that time. The default value for IdleTimeout is 3600 seconds, so you need to set this parameter to 0 in order to never disconnect idle connections. You need to add the IdleTimeout parameter in Windows Registry Editor when running VNC Server as a Windows service as follows.
Using Registry Editor, navigate to HKEY_LOCAL_MACHINE\Software\RealVNC\vncserver.
Select New > String Value from the shortcut menu and create IdleTimeout.
Select Modify from the shortcut menu, and set the Value data to 0.
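Alternatively, the same setting can be applied by importing a .reg file; a minimal sketch based on the key and value described above:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\RealVNC\vncserver]
"IdleTimeout"="0"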
With VNC Server successfully installed and parameters set, amend the VNC Server service with Startup Type set to Automatic.
Also, the Allow service to interact with desktop option must be checked as follows.
With the IdleTimeout parameter set to 0 as a minimum, restart the server and start the VNC Server service.
You are now ready to connect to the Tomorrow Software server instance running VNC Server from the controller running VNC Viewer.
Connect to the Tomorrow Software Console server (or controller) using a standard Windows remote desktop connection; install the default VNC Viewer components, and start the VNC Viewer application from the desktop shortcut.
The VNC Viewer application will then prompt to enter the host name or IP address of the REST Service server instance running VNC Server.
With the ‘Let VNC Server choose’ option for encryption selected, you will be prompted as follows for the password set on VNC Server earlier in the VNC Server – Options > Users & Permissions option.
If the connection is successful, VNC Viewer will launch a connected window to the server, at which point you can log in using your Windows user credentials. You can then repeat the process to make a VNC connection to all VNC Server instances if operating in a scaled cluster.
VNC Viewer is simply a relay of the host’s screen to your desktop (it works differently than RDP); when you disconnect, it just stops relaying, and that is all. The relay works like a splitter connection: both the local head/monitor and the VNC Viewer have access.
By this design, VNC will continue to retain an active desktop even though you’re not connected over VNC, as long as the host desktop is logged in and not locked.
The environment – the Tomorrow Software Console server and the multiple connected REST service server instances defined in its server definitions – is now ready for use. The VNC Viewer windows residing on the Tomorrow Software Console server (or controller) can be closed, the remote desktop connection closed, and the desktop will remain unlocked and active.
Hello, World!
As with all new programming languages, a "Hello, World!" program is a computer program that outputs or displays the message "Hello, World!". Such a program is very simple in most programming languages and is often used to illustrate a language's basic syntax. It is often the first program written by people learning to code.
Now step inside and follow these steps to complete your very first composition with Composable Architecture Platform.
A running local or cloud-hosted instance of X.
Installed console version 10.0.0.21050 or later.
Chrome or Firefox browsers are supported.
Ports 80 and 443 are required to be available to run the console and X Engine.
For the purposes of these instructions [your server name] = localhost
For example: http://[your server name]/console/ = http://localhost/console/
You need access to a console login screen like this:
You’ll see a simple HTML content file called hello.html has already been pre-deployed and is served up to the browser by the running X Engine.
Go ahead and enter your name in the form and press the Say Hello button. The form submission responds with “Hello”.
The X Engine loads hello.html, prompting the user to enter a name and to click a button labelled Say Hello. When the button is clicked, the text entered should be appended to "Hello". For example, if the text entered is "World!" then the result will be "Hello World!".
The user experience needs improving because any text entered is currently ignored. Can you follow this guide to improve the user experience?
First up, let’s go and see where the hello.html file lives…
User ID: admin
Password: admin
Once logged in, press Start followed by Repositories.
We typically call these “repos”. It’s the home, or workspace, in the console where your work lives.
Now Click on the Hello World repository folder (no need to expand the folder tree just now, as that’s where you can save and restore your repository backups – we’ll get to that soon enough!).
Now press View and then expand the Content Files folder.
Content files can be HTML, XML, images, or any other binary content that may be required to be served when requested.
Content files can also be dynamically modified by content rule sets; we’re not covering those in this example. Content files live within a content path that must map to the content path of the application. In our simple example, hello.html is served under localhost as the root directory, so it resides in the top-level Content Files folder.
The Hello World configuration has already been deployed from the console to the target server X Engine, which is why the page loads when requested.
So, let’s inspect the html file. Click on hello.html, and a new portal window will open for the file. Click on the Update button as follows.
A new browser window opens to show an HTML editor for the hello.html content file. Note the input parameter name on the form is set to Name. We don’t need to make any changes to the HTML file, so you can close this window.
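For orientation, a minimal page carrying that input parameter might look like the sketch below. This is illustrative only; the actual hello.html in the repository may differ in method, markup and styling:

<html>
  <body>
    <!-- The field name "Name" is the input parameter read by the rule set -->
    <form method="POST">
      <input type="text" name="Name"/>
      <input type="submit" value="Say Hello"/>
    </form>
  </body>
</html>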
So that’s a small introduction to Content Files. Next, let’s take a look at Rule Sets.
With the Hello World repository open, expand the Rule Sets folder, then click the SendResponse rule set and press Update in the portal window that opens.
The rules editor is the graphical design tool for composing and maintaining rule sets. The rules editor is launched as a separate browser window from within the console application when you press Update.
Go ahead and browse the vast catalogue of what we describe as “digital blocks” on the left-hand side. The catalogue is grouped into collections. To use any block in the catalogue, expand the group folder, then click and drag a block onto the main canvas as shown.
In this example, you can expand the Alert group folder and drag the Send Kapow SMS block onto the canvas.
Now click to select the Send Kapow SMS block on the canvas, and the left-hand side catalogue will switch to the Properties tab.
Each block has properties you need to set when composing, along with adding a more meaningful description (like adding comments in code).
In this example you can set the properties to two variables called MESSAGE and MOBILE. The block requires these properties in order to perform its intended function; the variables would need to contain the SMS message text and the phone number to send the SMS message to.
Everything else is taken care of.
Each block has additional online help you can access by right-clicking over the selected block and pressing Help.
Give it a try.
So, let’s get back to our example. Click to select the first block called Set Variable and view its Properties.
A selected block’s banner colour turns grey.
The block does exactly what it says on the tin. It sets a new variable. In this example we’ve set the variable name to RESPONSE, with the value set to a snippet of HTML code. We enclose this snippet in quotes.
Note how this value has been constructed in three parts.
You’ll remember from earlier that the form submission responds with “Hello”. That’s because the NAME value hasn’t been defined or “passed into” this rule, so NAME is processed as a blank value, and the value of RESPONSE would look like this on exit.
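For illustration only (the exact snippet and concatenation syntax in the repository may differ), a three-part value of this kind could be composed along these lines:

"<html><body>Hello " + NAME + "</body></html>"

With NAME blank, this collapses to a page that simply reads "Hello", matching the behaviour we saw earlier.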
Click to select the second block called HTTP Response and inspect the Properties. The selected block’s banner colour turns grey.
You can also COPY / CUT / DELETE / PASTE block(s) with a simple right click.
How easy is that?!
Guess what!?
This block also does exactly what it says on the tin. It responds to an HTTP request with the response data that has been set in its property. In this case the variable RESPONSE is the HTML snippet value set in the preceding Set Variable block.
You’ll see this block also requires an HTTP Status code and a Content Type to be set.
Click on the fourth tab called Rule Info for the SendResponse rule set.
The Export to Group and Short Description represent this rule set as a new block that can then be (re-)used in other compositions. We will use the Send Response rule that lives in the Hello World Grouped folder in the next steps.
Note it has the Parameter Type set to Input, Parameter Name set to NAME, and has been given a Label of Name.
We’ve finished looking at the SendResponse rule set now, so go ahead and close it by closing the Rules Editor window.
Do NOT save any changes if prompted to do so.
So, let’s create a new rule set that will pass the html form’s Name value into the response.
Click on the Rule Sets folder in the Hello World repository. In the portal window that opens, set the File Name to SayHello (case sensitive) and press the Create button.
Now open the newly created SayHello rule set for editing. Click on the SayHello rule set that has now appeared in the rule set folder of the Hello World repository and press Update just as you did to inspect the SendResponse rule set.
Go to the search tab and search for “Response” and drag the Send Response block onto the rules editor canvas.
Alternatively, you can find the same block in the catalogue from the first Grouped tab, located in the Hello World group folder. This is because the Rule Info tab of the SendResponse rule set has an export group defined as Hello World.
Either method is fine to search the catalogue and drag blocks onto the canvas.
Click on the Send Response block (yes, we’ve turned a rule set into a new block in the catalogue for re-use) and once again just set the properties. So, now set the Name property to Name (case sensitive, no quotes).
Remember, this was the input parameter set in the hello.html content file we looked at earlier.
Click and hold over the orange cog, then click-release over the green dot to “wire” the first block into the rule set in a right to left direction. Incidentally, all subsequent blocks are wired from the block exit chain point (right hand side) to the input of the next block (left hand side).
Press SAVE and close the rules editor window as shown.
That’s all you need for your new rule set.
The HelloWorld configuration defines the input into the X Engine and the rule sets to run.
Expand the configurations folder and click the HelloWorld configuration. The General tab is the default view, and ensure you now select the SayHello rule set from the dropdown list of available rule sets. This is the “initialising” rule set that is processed by the X Engine on the very first transaction it receives.
Embedded (dependent) rule sets that have been wired within the SayHello rule set are deployed along with their parent, so you only need to set the top-level rule set. Any dependent rule sets will therefore get deployed along with the configuration without having to define them.
You’ll note here that there are three other types of rule set that can be set to initialize and run when processing data. These are for (1) CONTENT, (2) STARTUP, and (3) COMPLETION. They are not required in this example.
Just to mention in passing, there is a fifth rule set you can set in the Timers tab of the configuration. These are rule sets that are initiated and run (as the name suggests) on a timed basis, for example when a rule set is required to perform a defined process, say, every 24 hours.
Click on the Input Source tab and inspect the different sources of data options available.
For this example, we are configuring the X Engine to process web application data, but as you can see this is just one of a multitude of available options to define in the configuration, dependent on the composition and data sources being processed.
Click on the Databases tab. It’s here where you define the databases being made available to the X Engine. You are not required to define a database for this example, so there’s no need to configure one.
Example only:
If you are interested, database connectivity specifying JDBC driver, connection string and schema credentials is an administrator set-up task in the console. You don’t need to complete that right now.
With the new SayHello rule set defined as the rule set in the configuration, you can go ahead and press the Deploy button.
Select X Engine as the target server and press the Deploy button.
Wait a few seconds for the deployment to complete and the server to restart, and you’ll see the X Engine server details shown.
Enter World! then press the Say Hello button, and if successful you’ll receive a Hello World! response.
[the crowd erupts into wild applause 👏🏻🍾]
Want some more?
Then read on….
With the Hello, World! example now working successfully, let’s give you a glimpse under the hood of the X Engine.
Go back to the console and click Get performance data in the X Engine server portal window you have open.
On the next window, click View Rules Performance.
The rules editor opens in a new window. Double click the Send Response block.
Place a probe on the Set Variable block. Right click over the green exit chain point and click New probe…
Click the Create button. Live probes are triggered by variables and values, and occurrences thereof. We can leave these blank to just trigger on the next transaction.
The exit chain point turns yellow to show the probe is set.
Now go to the browser tab of the demo page showing the SayHello output and click the back button so that the input hello.html page shows.
Input a new name, Probe, into the input field and click Say Hello; the page responds as expected with Hello Probe. Go back to the rules editor window with the probe set and you’ll see the exit chain point has turned red to show the probe has been triggered.
Right click on the red exit chain point and click View probe.
You can now see the transaction data that has just been processed by the X Engine: the contents of the two variables NAME and RESPONSE.
Aside from helping you view live data to assist with composing or troubleshooting your solution, probes also provide a superior debugging tool that can even be used on production servers without the need for logging.
Parameter | Value |
Logon | The console user ID for the new user |
Name | The full name of the user |
Password | The initial password for the user |
Email | The email address of the user |
Type | The user type. Valid values are: 0 = Administrator 1 = Standard User 2 = Super User 3 = Security User |
Role | The role name for the user. Can be blank if no role is required. |
Time Zone | The new user's time zone. Must correspond to the time zone list found in the appendixes of this manual. |
Additional Auth | The class name of any additional authentication settings. Can be blank if no additional authentication is required. Please note that only basic authentication selections are available. Overrides (such as the number of digits for one-time emails) are not supported. Currently the following are valid additional auth classes: software.tomorrow.authenticate.OneTimeEmailPlugin software.tomorrow.authenticate.LocalHostPlugin |
Method | Return value |
getAttributeLabels | An array of strings with the input field labels of the configuration |
getAttributeNames | An array of strings with the input field names of the configuration |
getAttributeValues | An array of strings with the input field values of the configuration |
getContentRuleSet | The file name of the content rule set |
getDatabaseAliases | An array of strings with the database aliases of the configuration |
getDatabaseDrivers | An array of strings with the database drivers of the configuration |
getDatabaseNames | An array of strings with the database names of the configuration |
getDatabaseSchemas | An array of strings with the database schemas of the configuration |
getDatabaseSystems | An array of strings with the database system names of the configuration |
getDescription | The description of the configuration |
getDirectory | The directory where the configuration is located |
getDoneRuleSet | The file name of the completion rule set |
getFileName | The file name of the configuration |
getInitRuleSet | The file name of the startup rule set |
getInputClass | The class name of the input adaptor used by the configuration |
getInputParms | A string with the input parameters passed to the configuration upon startup |
getLoopPrevent | The maximum number of chain point interactions before a rule set is considered looping |
getName | The configuration name |
getPerformanceLevel | The level of performance data collection 0 = Transaction counts 1 = Transaction count and inline time 2 = Transaction count, inline time and URI statistics 3 = All counters |
getRuleSet | The base rule set file name |
getServerType | The server type 0 = Production 1 = Test |
getTestDataDepth | The maximum number of test data collected |
getTimerDelays | An array of strings with the timer delay in seconds for each timer rule set of the configuration |
getTimerNames | An array of strings with the timer rule set file names for each timer rule set of the configuration |
getTimerTypes | An array of strings with the timer rule set types for each timer rule set of the configuration 0 = Real time 1 = Pause |
isAutoStart | Set to 1 if this configuration is auto starting, 0 otherwise |
isCollectTestData | Set to 1 if this configuration collects test data by default, 0 otherwise |
isEchoOut | Set to 1 if this configuration provides an echo of console messages to System.out, 0 otherwise |
isFailOpen | Set to 1 if this configuration fails open, 0 otherwise |
Method | Return value |
getAuth | The class name of any additional authentication. Can be blank. |
getCreated | The time the user was first created in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT. |
getEmail | The email address of the user |
getLastLogon | The time of the user's last logon in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT. |
getLogon | The user ID of the user |
getName | The name of the user |
getRole | The role set for the user (if any) |
getTimeZone | The user's time zone. Will contain a value from the time zone list found in the appendixes of this manual. |
getType | The user type. Valid values are: 0 = Administrator 1 = Standard User 2 = Super User 3 = Security User |
Method | Return value |
getBuild | The base rules build number for the server |
isCollectTestData | A flag to indicate if the server is collecting test data. 0 = No 1 = Yes |
getConfiguration | The name of the configuration currently deployed on the server |
getConfUser | The name of the user that created the current configuration used on the server |
getConfVersion | The version of the configuration currently deployed on the server |
getDeployErrorCode | The error code (if any) issued when attempting to deploy the last configuration to the server |
getDeployFrom | The repository name from which the configuration was deployed |
getDeployTime | The time the current configuration was deployed to the server in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
getDeployUser | The name of the user that deployed the current configuration to the server |
getDescription | The description of the configuration currently deployed on the server |
getErrorCode | Any error codes detected on the server. The corresponding error messages are found in the "translation.properties" file for the console application. |
getFlightRecorders | A string array with the IDs of any flight recorders in use by the currently deployed configuration |
getHost | The host name of the server |
getInputAdapter | The class name (identifier) of the input adaptor used for the current configuration |
getInputParms | The input parameters provided to the configuration to be used in conjunction with the input adaptor. This is mainly used for file polling servers and in that instance provides the directory that is polled for files. For test servers it provides the input file name to the configuration. |
getJavaVersion | The current version of Java used by the server |
getLastStarted | The time the X Engine was last started in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
getLastStopped | The time the X Engine was last stopped in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
getLastTransaction | The time the X Engine was last invoked in the format: CCYY-MM-DD hh:mm:ss.fff based on GMT |
getMajorVersion | The major version number of the X Engine |
getMinorVersion | The minor version number of the X Engine |
getOperatingSystem | The operating system and version of the server |
isPolling | If the server is polling for data (feed servers) |
getPort | The port the server is accepting instructions from |
getRevisionVersion | The revision version number of the X Engine |
getRuleset | The name of the currently deployed rule set |
isRunning | Whether the server is currently running (started). |
getStatus | The status of the server. 0=Offline, 1=Online |
getTestData | The number of available test data lines |
isTraceData | If the server has trace data |
isTraceMode | If the server is in trace mode |
getTransactions | Number of transactions processed since last server start |
getVersion | Full version number of the X Engine in text format |
ID | Function |
webpushSupportedNotSubscribed | This ID is used to identify a section that is displayed when push notifications are supported, but the browser is not yet subscribed. |
webpushNotSupportedNotSubscribed | This ID is used to identify a section that is displayed when push notifications are not supported, but the browser is not yet subscribed. |
webpushSupportedButBlocked | This ID is used to identify a section that is displayed when push notifications are supported, but the user has previously declined permission to send notifications. |
webpushSupportedButError | This ID is used to identify a section that is displayed when push notifications are supported, but an unexpected error was encountered when trying to register. |
webpushSubscribed | This ID is used to identify a section that is displayed when the user is already subscribed to notifications |
webpushGroups | This ID is used to identify a section that displays a list of groups that the user can choose to subscribe to alongside the individual subscription |
webpushChecking | This ID is used to identify a section that is displayed while the browser is checking the availability of push notifications |
webpushTooOld | This ID is used to identify a section that is displayed if the browser is too old to support the push notification syntax. |
webpushStyleDisplayBlock | This hidden ID is used to identify a value that will be used to turn items with display="none" to visible. The default is "block", but should you need other values (such as "inline-block") you can change this value to achieve that effect. |
webpushSubscriber[target] | These hidden IDs are used to identify a value that will be used as the target value for any alternative notification methods. For example, if both Email and SMS are available, the IDs: webpushSubscriberEmail webpushSubscriberSMS Must exist with appropriate values (email address and phone number). |
webpushSubscribeButton | This ID is used to identify the subscribe button. The button can in theory be something other than a button, but it must support the disabled property. |
webpushUnsubscribeButton | This ID is used to identify the unsubscribe button. The button can in theory be something other than a button, but it must support the disabled property. |
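As a minimal sketch of how these IDs might be laid out on a page (the structure, wording and styling here are illustrative only):

<div id="webpushChecking">Checking push notification support...</div>
<div id="webpushSupportedNotSubscribed" style="display:none">
  <button id="webpushSubscribeButton">Subscribe</button>
</div>
<div id="webpushSubscribed" style="display:none">
  <button id="webpushUnsubscribeButton">Unsubscribe</button>
</div>
<div id="webpushNotSupportedNotSubscribed" style="display:none">
  Push notifications are not supported by this browser.
</div>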
Parameter | Value |
user | The console user ID under which the script will be executed |
password | The password for the user |
script | The script to execute |
This guide assumes the Tomorrow Portal repository has already been installed following the steps provided in the installation guide.
Upon first login to the portal application, you may choose to edit the Unassigned Company and role, which are created by default. This is not required; if you prefer, you may leave them with their default settings. Please note that you cannot delete this entry. The unassigned company acts as a fallback for registered users who have yet to be assigned a company and role.
Permissions are grouped by Company/Role/User.
When you first create a resource (e.g. a page), you will see that a permission is already assigned for it. The default permission is always a global one, meaning that everyone can view, access, edit or download that resource; however, the resource is not yet active or visible to anyone. You will need to activate the resource in question before a user can view, access, edit or download it.
Possible Permission Combinations
When no company, role or specific user is selected, a global permission is assigned and no other permission is allowed to exist.
When only a specific company is assigned, all roles and users within that company can view, access, edit or download that resource.
When a specific company and a specific role are assigned, all users of that company and role can view, access, edit or download that resource.
When a specific company, role and user are assigned, only that user within the selected company and role can view, access, edit or download that resource.
The above combinations can exist in parallel between companies, roles and users in one resource.
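For example (the company, role and user names are illustrative):

Company   Role      User     Who can access the resource
(none)    (none)    (none)   Everyone (global permission)
Acme      (none)    (none)   All roles and users within Acme
Acme      Analyst   (none)   All Acme users with the Analyst role
Acme      Analyst   jsmith   Only jsmith within Acme/Analyst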
Actions
Navigate to Companies > Add New or alternatively use the Add New button on the Company's page.
Complete the required fields and submit the form.
Fields
Company Name
Enter the company name, e.g. TomorrowX Limited
Domain check
Enter the company’s domain name, e.g. tomorrowx.com (without www)
Unique/Short Name
Enter a unique name for the company, e.g. tomorrowx
Approver(s) email
Enter a high-level e-mail address for the company. Used for approving employee actions, e.g. purchasing a training course or software trial license.
Accounting email
Enter an e-mail address for the company’s accounts dept.
Currency
Enter the 3-letter currency code for the company, e.g. USD
Actions
Navigate to Companies > View All and select the company you would like to edit by clicking the Edit link to the right of the company name.
Make any relevant changes to the fields and submit the form.
Fields
See Fields from 2.0 above +
Assign Menu
Select a menu to assign to this company - see 5.1 Menus - Add a New Menu
Actions
Navigate to Companies > View All and select the company you would like to assign a support agent to by clicking the Support Agents link to the right of the company name.
Make any relevant changes to the fields and submit the form.
Fields
Assign a New Agent
Support Agent
Select a user from the list to assign as a support agent
Make Head of Support
Select this option to set the user assigned above as the Head of Support of a company. Only one agent may be assigned to a company as a Head of Support. To assign a new Head of Support agent, delete the existing one first and then add a new agent.
Assigned Agents
View/delete assigned agents
Actions
Navigate to Companies > View All and select the company you would like to assign an IP Range to by clicking the IP Ranges link to the right of the company name.
Make any relevant changes to the fields and submit the form.
Fields
Assign a New IP Range
Start IP
Enter the starting IP Address. If no IP range is to be used, simply enter a single IP Address here, e.g. 10.10.10.10
End IP
Enter the ending IP Address, e.g. 10.10.10.255. If no IP range is to be used, leave this field empty.
Assigned IP Ranges
View/delete assigned IP Addresses
Actions
Navigate to Companies > Roles > Add New or alternatively use the Add New button on the Roles page.
Complete the required fields and submit the form.
Fields
Role Name
Enter a role name, e.g. Analyst
Assigned to Company
Select a company to assign the role to
Actions
Navigate to Role > View All and select the role you would like to edit by clicking the Edit link to the right of the role name.
Make any relevant changes to the fields and submit the form.
Fields
See Fields from 3.1 above +
Actions
Navigate to Users > Add New or alternatively use the Add New button on the Users page.
Complete the required fields and submit the form.
NOTE: Before creating a user, be sure that both the company and the role that the user will be assigned to have already been created. See 2.1 Companies - Adding a new company.
A newly created user account’s status is always set to Inactive and needs to be updated to an Active state before it can be used. See 4.2 Users - Edit an existing user below.
Fields
First Name
Enter the user's first name
Last Name
Enter the user's last name
Email Address
Enter the user's company e-mail address (must use the same company domain as below)
Company Domain
Select the user's company domain name
Mobile Number
Enter the user’s mobile number (used for receiving device verification codes, OTP codes etc.)
Company
Assign a company to the user - see note above.
Role
Assign a role to the user - see note above.
Actions
Navigate to Users > View All and select the user you would like to edit by clicking the Edit link to the right of the user.
Make any relevant changes to the fields and submit the form.
Fields
User Details
See Fields from 4.1 above +
User Internal Type
Select the appropriate user type, i.e. Client, Admin, Partner or Unassigned
User Account Status
Enable/disable the user’s account
Trusted Devices
View/delete user’s trusted devices
Actions
View user login stats on this page.
Actions
Navigate to Config > Menus and select the Add New button on the Menus Overview page.
Complete the required fields and submit the form.
A newly created menu’s status is always set to Inactive and needs to be updated to an Active state before it can be used. See 5.2 Menus - Edit an existing menu below.
Fields
Menu Name
Enter a name for the menu, e.g. Internal User’s Menu
Actions
Navigate to Config > Menus and select the menu you would like to edit by clicking the Edit link to the right of the menu name.
Complete the required fields and submit the form.
Fields
See Fields from 5.1 above +
Status
Select whether this menu is active or inactive from the drop down menu
Actions
Navigate to Config > Menus and select the Menu Links button.
On the Menu Links Overview page, select the Add New button.
Complete the required fields and submit the form.
Fields
Link Name
Enter a name for the menu link, e.g. Downloads
Link
Enter the URL path of the link, e.g. user/downloads
Link CSS Class
Add a custom CSS class name to be used on the <li> element of the menu.
Link CSS Icon
Add a custom CSS class name to be used on the <i> element of the menu which displays the icon, e.g. fa fa-cog. (See fontawesome.io for compatible icons)
Sub Links
Select whether this menu item will have child menu items (Active) or not (Inactive)
Actions
Navigate to Config > Menus and select the Menu Links button.
On the Menu Links Overview page select the menu link you would like to edit by clicking the Edit link to the right of the menu link name.
Complete the required fields and submit the form.
Fields
See Fields from 5.3.1 above +
Actions
Navigate to Config > Menus and select the Menu Links button.
On the Menu Links Overview page select the menu link you would like to add sub links to by clicking the Sub Links link to the right of the menu link name.
Complete the required fields and submit the form.
Fields
Assign New Sub Links
Sub link Name
Enter a name for the sub link, e.g. white papers
Sub link URL
Enter a URL for the sub link, e.g. user/downloads/white-papers
Assigned Sub Links
View/delete sub links from a main menu link
Actions
Navigate to Config > Menus and select the Menu Links button.
On the Menu Links Overview page select the menu link you would like to set link permissions for by clicking the Permissions link to the right of the menu link name.
Complete the required fields and submit the form.
Fields
Assign New Permission
Company
Select a company from the drop down list to allow access to this menu item.
Role
If you would like to further restrict access to this menu link select a specific role for the company selected above.
User
If you would like to further restrict access to this menu link select a specific user for the company and role selected above.
Assigned Permissions
View/delete assigned menu link permissions
Actions
Navigate to Config > Menus and select the menu you would like to set up by clicking the Setup link to the right of the menu name.
Complete the required fields and submit the form.
Fields
Assign New Links
Links
Select a link from the drop down to assign it to the currently selected menu
Sort Order
Enter a number in order to set the position of the menu item within the menu
Assigned Links
View/delete menu links
Actions
Navigate to Config > Pages and select the Add New button on the Pages Overview page.
Complete the required fields and submit the form.
Fields
Page Name
Enter a name for the page you are creating, e.g. Company Profile
URI Collection
Enter the URI collection (group of pages), e.g. profile
URI Controller (Page)
Add a URI controller name (page), e.g. pages
URI Method (Action)
Add a URI method (action), e.g. view
Actions
Navigate to Config > Pages and select the page you would like to edit by clicking the Edit link to the right of the page name.
Complete the required fields and submit the form.
Fields
See Fields from 6.1 above +
Status
Select whether this page is active or inactive from the drop down menu.
Actions
Navigate to Config > Pages and select the page you would like to set permissions to by clicking the Permissions link to the right of the page name.
Complete the required fields and submit the form.
Fields
Assign New Permission
Company
Select a company from the drop down list to allow access to this page.
Role
If you would like to further restrict access to this page, select a specific role for the company selected above.
User
If you would like to further restrict access to this page, select a specific user for the company and role selected above.
Assigned Permissions
View/delete assigned page permissions
Actions
Navigate to Downloads > Add New or alternatively use the Add New button on the Downloads Overview page.
Complete the required fields and submit the form.
Fields
File Name
Enter a name for your download, e.g. whitepaper 1
No. Of Downloads Allowed
Enter the maximum number of downloads allowed. The Limit Downloads field below must be set to Limited for this limit to be active.
Limit Downloads
In order to limit a file to a certain number of downloads, set this to Limited and set the total number of downloads allowed above.
File Upload
Choose a file to upload.
Mark as Featured Download
Select this in order to mark your file as a “Featured Download”. Featured downloads are displayed throughout the site in certain locations.
Actions
Navigate to Downloads > View all and select the download you would like to edit by clicking the Edit link to the right of the download item name.
Complete the required fields and submit the form.
Fields
See Fields from 7.1 above +
Visibility
Set whether the download is visible or not.
Actions
Navigate to Downloads > View all and select the download you would like to set permissions for by clicking the Permissions link to the right of the download item name.
Complete the required fields and submit the form.
Fields
Assign New Permission
Company
Select a company from the drop down list to allow access to this download.
Role
If you would like to further restrict access to this download, select a specific role for the company selected above.
User
If you would like to further restrict access to this download, select a specific user for the company and role selected above.
Assigned Permissions
View/delete assigned file download permissions.
Actions
View file download stats on this page.
Actions
Navigate to Solutions > Add New or alternatively use the Add New button on the Solutions Overview page.
Complete the required fields and submit the form.
NOTE: Before creating a new solution, be sure that you have created the appropriate solution categories and solution industries. See 8.4.1 Add solution category and 8.5.1 Add solution industry.
Fields
Solution Name
Enter a name for your solution
Solution Image
Enter the filename of the image to use as the solution image. Be sure to include the file type extension.
Synopsis
Enter a short synopsis for the solution. Used on sub-solution pages.
Description
Enter a longer description for the solution which will be displayed on the individual solution page. Use the WYSIWYG editor to style the information entered.
Deployment Requirements
Enter any deployment requirements for the solution which will be displayed on the individual solution page. Use the WYSIWYG editor to style the information entered.
Category
Select a category which the solution belongs in.
Industry
Select an industry which the solution belongs in.
Mark as Featured Solution
Select this to mark the solution as a Featured Solution. Featured solutions are displayed throughout the site in certain areas.
Actions
Navigate to Solutions > View all and select the solution you would like to edit by clicking the Edit link to the right of the solution item name.
Complete the required fields and submit the form.
Fields
See Fields from 8.1 above +
Status
Set whether the solution is active or inactive.
Actions
Navigate to Solutions > View all and select the solution you would like to set permissions to by clicking the Permissions link to the right of the solution name.
Complete the required fields and submit the form.
Fields
Assign New Permission
Company
Select a company from the drop down list to allow access to the solution.
Role
If you would like to further restrict access to a solution, select a specific role for the company selected above.
User
If you would like to further restrict access to a solution, select a specific user for the company and role selected above.
Assigned Permissions
View/delete assigned solution permissions.
Actions
Navigate to Solutions > Categories and select the Add New button.
Complete the required fields and submit the form.
Fields
Category Name
Enter a name for the solution category.
Category Description
Enter a description for the category. Used on solution category pages.
Category Image
Enter the filename of the image to use as the solution category image.
Actions
Navigate to Solutions > Categories and select the solution category you would like to edit by clicking the Edit link to the right of the solution category name.
Complete the required fields and submit the form.
NOTE: Setting a solution category to Inactive will hide all solutions assigned to it.
Fields
See Fields from 8.4.1 above +
Status
Set whether the solution category is active or inactive.
Actions
Navigate to Solutions > Industries and select the Add New button.
Complete the required fields and submit the form.
Fields
Industry Name
Enter a name for the solution industry.
Industry Description
Enter a description for the industry. Used on solution category pages.
Actions
Navigate to Solutions > Industries and select the solution industry you would like to edit by clicking the Edit link to the right of the solution industry name.
Complete the required fields and submit the form.
Fields
See Fields from 8.5.1 above +
Actions
Navigate to Web Stats.
View user web stats in the table displayed.
Clicking on table rows will expand the data displayed for a user.
You may add an alias to a specific entry to keep track of it.
View User's Click Path displays individual pages which the user has visited.
Clicking on table rows will again expand the row and display more Geo IP information for the specific page which the user has visited.
Actions
Navigate to Web Stats > Settings.
Complete the required fields and submit the form.
Fields
Support Email
Enter an email address to use as a support email sender/receiver.
Info Email
Enter an email address to use as an informative email sender/receiver.
Auto Respond Email
Enter an email address to use as an auto response email sender/receiver.
Maintenance Mode
Enable/disable site maintenance mode.
Trusted Device Limit
Set the max number of trusted devices for users.
Before explaining how to combat CSRF (Cross Site Request Forgery), a quick explanation of the technique behind it is in order.
A cross site request forgery relies on a user visiting a malicious site, shortly after they have logged into a genuine site, and whilst they still have a session cookie active with the genuine site.
By making the user's browser send malicious requests directly back to the genuine site, the malicious site can exploit the fact that the user is already logged in, to effectuate such things as placing orders in the user's name, sending emails using the user's credentials or posting comments to other users in what may well be a trusted user's name. The list of exploits is endless and only really subject to the vulnerabilities of the site being attacked.
Ways to make the user visit the malicious site whilst still being logged into the genuine site include phishing, posting links in comments on the genuine site, or even just "trial and error" by posting links on sites that may also be frequented by users of the genuine site.
The limitation of the CSRF attack is that it is always "blind". The attacker cannot see what the application responds with, or what the current state of the session is, due to restrictions imposed by browser security models, which prevent a page served from one server (domain) from reading responses returned by another.
How to best protect your site against CSRF attacks depends on how it was written. Generally speaking, most applications perform actions as a result of an HTML form being posted to the site. Some sites also respond with actions to a GET request, for example: "http://www.mysite.com/delete.jsp?orderToDelete=12345".
This example will focus on protecting applications that use a form POST. This is done by adding a hidden field to every form presented by the application. This hidden field contains a random value that is unique to the specific session of the user. We will require that this field is always present on a form POST, making it virtually impossible for a malicious site to second guess what a valid POST request might look like.
The technique for protecting a site that uses GET requests is similar, simply requiring an additional URL parameter on every URL that takes parameters, instead of a hidden form field.
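For illustration, a protected form would be served with the hidden field injected like this. The form fields, action and token value below are made up; the field name matches the "CSRF" name used later in this example:

<form method="POST" action="placeOrder">
  <input type="hidden" name="CSRF" value="8f41c02e7d5a49b1"/>
  <input type="text" name="quantity"/>
  <input type="submit" value="Place order"/>
</form>

The GET variant would carry the same token as an extra parameter, e.g. "http://www.mysite.com/delete.jsp?orderToDelete=12345&CSRF=8f41c02e7d5a49b1".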
The first step in implementing our CSRF defense is to create a simple plan of action, i.e. what we intend to do and how we wish to go about doing it. It is a good idea to write this down in plain English and then use that text as a guide whilst designing the rule structure. In this case, the plan reads as follows:
If a POST request comes in whilst there is an active session, then make sure it has our hidden field, and that it is the hidden field we have generated for that session. If the field is not present, or does not match, we should respond to the user with an HTTP Status code of 403 (Forbidden).
Whenever a new page is provided by the application, make sure we add a large random number as the hidden field to every form presented by the application. The large random number we use should be generated once for the session and then be stored in it for easy reference and good performance.
That sounds easy enough; so, let's begin...
Create a new repository named "CSRF Example" and add a new rule set named "CSRF".
Filter out static content before it hits the core rules using a Name Splitter and Switch rule as shown:
The Name Splitter conveniently extracts the extension of the object being requested using the following properties:
The Switch rule operates on the EXT variable. By adding new chain points for each type of static content they are eliminated from reaching the rule set.
As we are dealing with web applications, we need to know information such as the method used (POST/GET), so the first step is to add an HTTP Request Tracker rule from the HTTP group in the rules catalog to the CSRF rule set:
A good technique for rule writing is to start by determining the "flow" of events or pages that will subsequently have rules applied to them.
In our case we have two flows:
The verification of the forms
The addition of the form fields.
So, our next action is to add a Sequencer rule from the Flow group in the rules catalog:
Now, the first step in our written plan is to check if we are dealing with a POST request within an active session, and if the form posted has our hidden field. The first part is very easy:
Only the If Condition requires some properties:
The next step is simple. We need to look up the current hidden field from the session:
Once again, there are not many properties:
The variable names and values we have chosen are arbitrarily selected, although they should be meaningful and memorable.
In this example, we have decided that the hidden field is stored with a session key named "CSRF.key" and that the hidden field on all forms is named "CSRF". We could have chosen any names as long as we use them consistently when we add the field to the form and store the session key.
All that is left for the first step is to make sure that if the key doesn't match, then the user receives a 403 error.
Once again, the properties are very simple:
We use a Set Completed rule after the response, as once we have decided that the user should be rejected, there is no need to proceed with the rest of the rule set. Instead we simply terminate the flow.
We are now ready to implement the second part of the plan. The first step in doing so is getting the actual response from the server so that we can add the hidden field if we need to.
The HTTP Server Execute rule takes care of this, even if you are writing rules using a built-in forwarding proxy.
Once again, the properties are very simple as we are just interested in the application response:
Once again, we need to check if a session is present, but after the HTTP Server Execute rule, as that rule may in fact result in a session being created:
If there is a session, then we need to add our unique CSRF key to it. The first step in doing that is to see if we already have that key:
Once again, not many properties:
If we don’t have it, we need to create it, which is easy:
The properties for these rules are as follows:
The session key we use is the same "CSRF.key" that we used in step 1.
All that remains now is to add the field to the form and send the response back to the user.
Thankfully there is a dedicated rule that handles the first problem, the "Insert Hidden Field" rule.
Note that we are handling various loose ends too: connecting a Session not found to the HTTP Response, and connecting the existing session key to the Insert Hidden Field rule.
The final properties that must be set are as follows:
Our rule set is now complete, and we are ready to test it. A good sample application for this test is the Qwerty application. Create a configuration for the test named "CSRFTest" and set it as follows:
(Only relevant sections shown)
Once you have set up your configuration, deploy it to the Qwerty demo server and try testing it.
You will see in the Qwerty application, in the "Set up 3rd Party Accounts" page, that there is now a CSRF hidden field added to the page:
Use the performance data to further verify that everything is working as you expected.
If you look further through the page source of the Qwerty application, you may also notice the following link:
This is a classic case of a GET request that can be exploited using CSRF. In this basic case study, we only protect the POST requests of forms. However, if your application also performs actions on GET requests, you can fairly easily amend the rule set to cover those too.
This involves manipulating any URL parameters in the pages that are used for actions.
You can do this using the String Replacer rule, especially if your application uses ".jsp", ".do" or ".aspx" as URL identifiers for active content.
For example, you could replace ".jsp?" in every page with ".jsp?CSRF=0123456789&" and then check for the field on every URL that ends in ".jsp" and has PARAMETER_NAMES (from the HTTP Request Tracker rule) not equal to blank. If you do that, you will achieve the same result as the Insert Hidden Field rule does in this case study.
The above example is based on implementing the CSRF defense as a single rule set.
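Before moving on, here is the GET variant from the previous paragraph sketched in pseudocode (hypothetical helper names; in the rule set this would be the String Replacer rule on the way out and an If Condition rule on the way in):

    // Outbound: rewrite active-content links to carry the session token
    page = page.replace(/\.jsp\?/g, ".jsp?CSRF=" + session.get("CSRF.key") + "&");

    // Inbound: verify the token on every parameterised .jsp request
    if (url.endsWith(".jsp") && PARAMETER_NAMES != "") {
        if (request.parameter("CSRF") != session.get("CSRF.key")) {
            respond(403);
        }
    }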
Frame busting refers to the ability of an application to avoid being encapsulated within an IFRAME. Framing can be used not only to make one site impersonate the capabilities of another but, more sinisterly, to overlay a different user experience on top of an IFRAMEd site and allow events to flow through to the IFRAME.
Using this approach, a user can inadvertently be tricked into performing actions within an application without even knowing that they are interacting with it.
A July 2010 study by Gustav Rydstedt, Elie Bursztein and Dan Boneh of Stanford University and Collin Jackson of Carnegie Mellon University named: "Busting Frame Busting: A Study of Clickjacking Vulnerabilities on Popular Sites", explores the risks and problems associated with framing. It can be found here:
http://seclab.stanford.edu/websec/framebusting/
The study mentioned above forms the basis of the following case study.
The defenses we will introduce in this case study are rather simple; we will add some JavaScript and a few extra HTTP headers to the logon page of the Qwerty app. Depending upon the application, it may also be relevant to add this code to other pages, but for now we will just select the logon page for simplicity.
The JavaScript we will add looks as follows:
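(The snippet below is the defense recommended by the study; if your copy differs slightly, prefer the version published on the study's site.)

    <style>html { display: none; }</style>
    <script>
      if (self == top) {
        document.documentElement.style.display = 'block';
      } else {
        top.location = self.location;
      }
    </script>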
The above script has been placed in the public domain by the authors of the study.
In simple terms, it sets the entire page invisible through use of a CSS directive and only makes it visible if the page itself is the top frame and JavaScript is enabled.
In addition to the above code, we will add a couple of HTTP Headers that take advantage of built in frame busting defenses in certain browsers. The headers to set are as follows:
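    X-FRAME-OPTIONS: SAMEORIGIN
    X-Content-Security-Policy: allow *; frame-ancestors 'self'

(The names and values above are the same ones entered into the rule properties later in this section.)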
The rules required for this case study are extremely simple. Our plan is to:
Determine whether we are on the logon page.
If yes, add the frame busting code.
The very first step as always is to create a repository. In this case we will name it "Frame Busting Example".
Once done, copy and paste the JavaScript code into a text file named "framebust.js" and upload it to the data folder in the repository.
Then create a new blank rule set named "FrameBust".
The first rules we need simply determine if we are on the logon page:
These rules are the same as in most of our other examples, so we will just list the properties here for quick reference:
Once the properties are set, simply add a chainpoint to the Switch rule and name it "logon.jsp".
We next add the rules to inject the JavaScript and headers:
We read the framebust.js file into a variable, we then set a couple of variables to the header values we need, and finally we add the JavaScript and headers to our response. The properties look as follows:
Header field names are: X-FRAME-OPTIONS,X-Content-Security-Policy
Values are (respectively): SAMEORIGIN,allow *; frame-ancestors 'self'
That is it, save the rule set and create a configuration to test it.
The configuration for this rule set is very simple; we create one named "FrameBustTest". The following shows the relevant sections that need to be defined:
Qwerty is a suitable test application for this case study because it uses frames to encapsulate the logon and other internal pages.
When navigating to the Qwerty landing page URL in the browser, you will see the following:
To test the new rule set, deploy the configuration to the Qwerty demo server and start it. Then refresh the Qwerty logon page.
Whilst you will not see any visual differences in the appearance of the Qwerty application, the Qwerty landing page URL in the browser will now look like this:
http://localhost/qwerty/logon.jsp
We can proceed to navigate to other pages in the Qwerty application outside of the main Qwerty frame.
For example, these pages would normally all be loaded from within the Qwerty frame, but are now visible in the main browser address bar:
We have successfully "Busted" out of the frame.
This case study will show you how to inject a random customer satisfaction survey into the user experience on a site.
We will use a flight recorder to graph the responses and collate comments from the users.
The first step in implementing our customer satisfaction survey is to create a plan of what we intend to do, and how we wish to go about it. It is often a good idea to write this down in plain English and then use that text as a guide whilst designing the rule structure. In this case, the plan reads like this:
We want to ask random customers about their experience with our site.
We want to have the survey appear on our main page after log-in.
We want to make the survey experience as quick and painless as possible to get the maximum potential responses.
We are going to use the Tomorrow Software flight recorder feature to graph and view the responses.
In this case study we are going to split the decision points into three discrete components, following the recommendations mentioned elsewhere in this manual. So, start by adding a new repository called "Customer survey" and create three blank rule sets:
"SurveyLoad", which is the rule set that will pre-check our survey and make sure all of the data we need is collected before we start the survey process.
"SurveySelection", which is the rule set that will determine if a user is selected for a survey.
"Survey", which is the rule set that will contain the survey logic itself.
The plan involves injecting a customer survey on top of the user experience. We can do this as a pop-up window or we can simply overlay it on top of the site using JavaScript. Given that most users block pop-ups these days by default, the latter seems like the better option. We want to keep the survey itself as pure HTML, so a little bit of basic JavaScript will take care of it:
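(The script below is a minimal sketch of such an overlay, assuming a semi-transparent DIV plus an IFRAME; the showsurvey.js shipped with your repository may differ in detail.)

    // showsurvey.js (sketch): grey out the application and overlay survey.html
    var shade = document.createElement('div');
    shade.style.cssText = 'position:fixed;top:0;left:0;width:100%;height:100%;' +
                          'background:#000;opacity:0.5;z-index:9998;';
    var survey = document.createElement('iframe');
    survey.src = 'survey.html';
    survey.style.cssText = 'position:fixed;top:10%;left:30%;width:40%;height:70%;' +
                           'background:#fff;border:1px solid #000;z-index:9999;';
    document.body.appendChild(shade);
    document.body.appendChild(survey);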
You can copy and paste the above JavaScript code into a file named "showsurvey.js" and upload it to the "Data Files" section of the "Customer survey" repository.
The above JavaScript will essentially grey out the application itself and overlay an HTML file named "survey.html" on top.
Now we need to create the survey HTML itself. Once again this involves basic web design skills. The end goal is a page that looks something like this:
The easiest way to create the HTML is to follow these steps:
Create a subfolder named "Qwerty" under the "Content Files" section of the "Customer Survey" repository.
Add a new file under the "Qwerty" folder named "survey.html".
Copy the following HTML code to your clipboard:
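(The markup below is a bare-bones sketch; the field names match the flight recorder index fields used later, and the form posts to the intercepted survey.jsp page. Your own survey.html may of course differ.)

    <html>
    <body>
      <h3>How are we doing?</h3>
      <form action="survey.jsp" method="post">
        <p>Product range: <input type="text" name="ProductRange" size="1"/> (1-5)</p>
        <p>Pricing: <input type="text" name="Pricing" size="1"/> (1-5)</p>
        <p>Ease of use: <input type="text" name="EaseOfUse" size="1"/> (1-5)</p>
        <p>Delivery: <input type="text" name="Delivery" size="1"/> (1-5)</p>
        <p>Comments:<br/><textarea name="Comments" rows="4" cols="40"></textarea></p>
        <input type="submit" name="Submit" value="Submit"/>
        <input type="submit" name="Decline" value="No thanks"/>
      </form>
    </body>
    </html>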
Update the "survey.html" file from the console. The embedded HTML editor will open.
Click on the HTML button to go to the HTML text.
Paste the HTML shown above into the editor and click Save.
The page should now look something like this:
Now we have all of the components we need and are ready to begin writing rules to present our survey.
We will begin by creating the survey selection rules. In this case, the rules are very simple. We use a random number generator to determine if a user should be asked to complete the survey or not.
In this example we want the opportunity to complete a survey to be fairly frequent.
So, we start by updating the "SurveySelection" rule set to look as follows:
The properties for these rules are:
Effectively we create a random number between 0 and 9 (1 digit) and provided the number is below 4, we proceed to perform the survey.
The purpose of the "SurveyLoad" rules is to prepare any data that may be needed by the other rule sets in the repository. It is often beneficial to do it this way to isolate or prepare data needed by other rule sets, yet at the same time keeping those other rule sets as generic as possible.
In our case, there are a couple of generic things we need to do and check:
We need to start the usual HTTP Request tracking.
We need to ensure a session has been started (meaning a user is logged on).
We need to obtain the customer account number so we can log it (no anonymous data here!).
Once everything is done, we need to proceed with the survey itself.
All of these tasks are very simple, so we show them here as a single step:
The only rule that has any non-default properties is the HTTP Session Object reader. This rule allows us to read the customer account number from the Qwerty session. The properties are as follows:
We are now ready for the actual core process itself, with all data and user interface components prepared. So, let's update the "Survey" rule set.
The first issue is to place the survey in the right place in the navigation process which, in our plan, is to inject the survey on top of the main page.
We start by finding out the name of the page being requested:
The name splitter rule is extremely useful for this as it allows us to split a text string based on a separation character. The separation character in a URL is always "/", so we can find the requested page by using the following properties:
Then we can use a Switch rule to determine how to direct flow:
In this case the Switch variable is URL and adding new chain points to the switch rule determines when logic flows down a certain path.
Note the use of survey.jsp. That page does not exist in the Qwerty application. It is the name of the page that the HTML form in "survey.html" posts its data to. The X Engine simply intercepts this request and deals with it before it ever reaches the application itself.
We are now ready to determine what happens when the user reaches the main page. The first step is to make sure we haven't already presented a survey to the user in the current session:
The properties for this look as follows:
Basically, we check the session to see if a flag named "DoneSurvey" has already been set. If not, we proceed to see whether we need to present the survey by using the already created "SurveySelection" rule set:
If the response comes back that we need to perform the survey, the next action is very easy. We read the already prepared JavaScript "showsurvey.js" file and add it to the response being sent back to the user:
Once again, the properties are shown here:
This takes care of presenting the survey to the user. Now we just need to handle the response from the user to the survey, the first step of which is to record whether the user has in fact responded to (or declined to take part in) the survey:
We record this in the session using the following properties:
Next, we check if the user hit the "Submit" button and if yes, we record the answers in the flight recorder. If no, we simply return the user to the main page, using a little bit of JavaScript to remove the survey.
The properties are as follows:
Optional index fields: ProductRange,Pricing,EaseOfUse,Delivery,Comments
Response data: "<script>parent.document.location='main.jsp';</script>"
The little piece of JavaScript used here reloads the "main.jsp" page. As the survey flag is now set to "X", the survey will not re-appear, and the user can continue as normal.
The configuration for this example is very easy. Simply create a new configuration in the Customer Survey repository and name it "SurveyTest". The following shows all of the relevant parts that must be completed for the configuration:
You are now ready to test the survey rule set. Deploy your new configuration to the Qwerty demo server and start it. Then log into Qwerty. There is roughly a 40% chance of being selected for a survey (a one-digit random number below 4). To quickly invoke a survey, click on the "Set up 3rd party" button and then "I'm finished", until a survey request appears. Once you have completed or rejected a survey request, log out and log back in to be presented with another one.
Make sure you answer 4 or 5 surveys at this point.
We now have some data in the flight recorder, so we need to set up a definition for it in order to view the data from within the console.
The following shows the definition used in this example:
Once you have done this, select Flight Recorders from the console menu and click on SURVEY:ANSWERS. Leave all of the fields as default and click on Search (tip: if you only wish to see survey answers with comments, put an uppercase "A" into the "Comments:" from field and a lowercase "z" into the to field).
The survey results submitted will be shown:
You can now click on the graph of one of the questions. The result is a pie chart showing you the answer distribution:
Using the flight recorder search filters, you can now use the responses to better understand your customer satisfaction ratings. For example, you can see if Firefox users generally rate the ease of use of your site higher than Internet Explorer users or vice versa.
The sample created here is fully functional, but for production purposes, you may wish to add a few things. Some possible improvements are:
JavaScript validation to ensure the customer has completed the form before submitting it.
Logic in the SurveySelection rule set to ensure that the same customer does not get the survey more than once every 6 months (The History Summary rule or the History Recorder rule are both useful for this purpose).
Google Analytics lets you do more than measure sales and conversions. It also gives insights into how visitors find and use your site, and how to keep them coming back.
This case study demonstrates Tomorrow Software as an easy integration option for adding tracking code to web pages, a task typically done outside of the normal software development life cycle (SDLC). Not only does this allow easy and rapid deployment of such third-party services, it also ensures that, as and when new pages are introduced, the tracking code will be appended to each and every page the web application sends back to the user's browser.
This example is a common method whereby you can simply read a JavaScript file containing the required tracking code, insert your account ID and append it to any web page.
For information regarding the Google Analytics service please refer to:
https://www.google.com/analytics/web/
The first step of any rule writing is to determine what we want to do and how it can be accomplished.
Before you begin, you will need to ensure that you have a valid Google Account email address and password for using the service, or alternatively sign up at https://accounts.google.com; it only takes a couple of minutes.
We will discuss tracking code throughout this case study, which is only accessible once you have logged in to Google Analytics.
To access your tracking code:
Log in to Google Analytics https://www.google.com/analytics/web/.
From the Admin page, select the .js Tracking Info property from within the list of accounts. Please note that tracking code is profile-specific.
The tracking code can be copied and pasted from the Website Tracking text box from the Tracking Code menu item.
The code will be similar to the below (where x replaces your specific account code 'UA-xxxxxxx-x' ):
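(The standard analytics.js snippet of the period is shown as a reference sketch; always copy the exact code from your own Google Analytics console.)

    <script>
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
    ga('create', 'UA-xxxxxxx-x', 'auto');
    ga('send', 'pageview');
    </script>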
Replace the account code with 'UA-xxxxxxx-x', as we can set the account ID in Tomorrow Software rules later, which makes managing the rules and different Google Analytics accounts much easier.
Copy and paste the above JavaScript code into a file named "google.js" and save it somewhere local, e.g. your desktop, for use later on in the exercise.
It is this tracking code that performs the task of collecting the browser data of visitors.
Start by creating a new repository called “Google Analytics Example”.
It’s recommended that the process involved in adding the Google Tracking code be split into two:
setting a variable which holds the unique Google User account 'UA-1234567-1’.
and then inserting this value into the tracking code itself.
This means that you can subsequently update the account or the code separately in future deployments, or when Google amend their tracking code.
Keeping this in mind, you should create the following blank rule sets:
GoogleAnalytics: this rule set will be responsible for creating the new UA variable plus reading the tracking code and adding it to the page.
Qwerty_test: this rule set will allow you to test how a deployment can work in the demonstration Qwerty example application.
The two new blank rule sets will now be visible within the repository.
In the Tomorrow Software console select the Data Files folder, then upload the ‘google.js’ file you created above and saved to your desktop.
Ensure you upload to the newly created “Google Analytics Example” repository that will now be available in the drop-down list of available folders.
Press upload and the file will now be visible in the repository in data files for the rules to use.
Using a Set Variable rule, set a new variable called Google_UA with the value "UA-1234567-1", where 1234567-1 is replaced with your specific Google Analytics user account.
Then, using the File Reader rule, read the google.js file into a variable named 'GOOGLE_ADD'.
Next use a String Replacer rule to insert the newly created Google_UA variable into the tracking code .js file, followed by the HTTP Response Addition rule to append the Google tracking code to the response.
The String Replacer rule will look through the tracking code (now held in 'GOOGLE_ADD') and replace the placeholder account ID with the value of the 'Google_UA' variable we have defined.
The HTTP Response Addition rule then appends the amended google.js content to the page response, activating it in the user's browser.
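Conceptually, the rule chain behaves like this (hypothetical helper names; the actual work is done by the rules just described):

    var Google_UA  = 'UA-1234567-1';                              // Set Variable
    var GOOGLE_ADD = readDataFile('google.js');                   // File Reader
    GOOGLE_ADD = GOOGLE_ADD.replace('UA-xxxxxxx-x', Google_UA);   // String Replacer
    appendToResponse(GOOGLE_ADD);                                 // HTTP Response Addition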
The final step for this rule set is to add a couple of Exit rules called "OK" and "Fail", which will show in the rules performance data whether the rule set is working, and which will help when embedding this rule set within another rule set.
This rule set will allow you to see an example deployment to the Qwerty demo application.
Of course, with every response from the application there is static content which you don't wish to add Google tracking code to, so take a couple of simple steps to filter out transactions which don't require code appending.
For example, a jpg image may be served up each and every time a user navigates to a page, so adding tracking code to that request will not provide any additional customer insight.
Using the Name Splitter rule to identify the URI extension is a useful way to filter out unwanted data before reading the Google Analytics rule set.
Variable Name: URI
Last Name Variable: we are only interested in the last part of the URI so we name this variable EXT.
Split Pattern: "." is the separator character, which tells the rule where to split the value.
Using the Switch rule, set the Switch Variable property to EXT as created above and proceed to 'Add Chain Points' for the static content you wish to ignore, such as gif, css, html, js and jpg.
The final step is to connect the newly created GoogleAnalytics.xml rule set now located in the Rule Sets folder.
Finally, you can set up the configuration file. Click the Configurations menu, select the “Google Analytics Example” repository from the drop-down list, and enter some basic information about the rule to load.
The following screen shots show the information required for the “General”, and “Input Source” tabs.
You can now click the “Create” button to create your configuration file. Once created, click the “Deploy” button to deploy it to your Qwerty demo server.
The above case study shows how to implement Google Analytics tracking code in a specific environment, though of course each individual application will be different.
You will be able to log into your Google Analytics and select real-time traffic reports within the reporting dashboard, to validate the tracking code has been inserted, and is working correctly on your website.
You can also right click the page in the browser to view source code to verify the Google tracking code has been correctly inserted into the target application page.
With online fraud levels ever-increasing, most if not all companies are introducing additional methods of identifying their customers. One popular approach is via a method known as two-factor authentication (or 2FA).
Two-factor authentication consists of requiring online users to identify themselves through an additional method after they’ve logged in with their standard username or password. This could be via the use of a random token generating device or app, or by sending a one-time password to the user’s email address or mobile phone.
Two-factor via an SMS token sent to a user’s mobile phone remains popular, and the cost to company and customers is minimal.
One point to be aware of, though, is that the organization must be reasonably confident that the mobile number data they hold does in fact belong to their customers. It would be prudent to create additional rule sets triggered when a customer attempts to change their mobile phone number; however, this is outside the scope of this case study.
In this case study we will outline what is required to deploy a two-factor SMS authentication request seamlessly into an existing application using in-built rules that ship with Tomorrow Software.
The first step of any rule writing is to determine what to do and how it can be accomplished. Drawing flow charts can be extremely helpful.
Below is a basic example flow chart of how Tomorrow Software may implement a two-factor SMS request.
Before beginning, you will need to answer the following:
Where is the login page and where does it go to authenticate the user?
Where is the data that holds the user’s mobile phone number?
What should the rule set do if there is no mobile phone number for a user?
What are the technical details for sending SMS messages?
How long should the X Engine wait for a correct response?
How many times should the rules allow someone to enter an incorrect response, and what should happen once that limit is reached?
In this case study we will use the in-built SMS aggregator Kapow to send our messages. Your own environment may use internal SMPP calls or different aggregators, which may require you to write your own extension.
Extension writing is outside the scope of this case study but is relatively straight forward for a Java developer.
Start by creating a new repository called “Two Factor Example”.
It’s recommended that the processes involved in sending a two-factor message, checking the existence of a two-factor request and checking the response against the stored value, be separated into different rule sets. This provides ease of maintenance in the future, and also allows you to turn two-factor authentication on and off, or change out functionality quickly and easily.
So, keeping this in mind, you should create the following blank rule sets:
TwoFactorLoad – this rule set will be loaded initially and determine whether a two-factor request should be made based on the user's login status.
TwoFactorCheck – this rule set will check whether there is an existing two-factor request in place and display the embedded two-factor response page if required.
TwoFactor – this rule set will generate the random token and embed it into the message template.
TwoFactorLookup – this rule set will look up the user's mobile phone number from the database.
TwoFactorSend – this rule set will send the message to the user's mobile phone via Kapow.
With our two-factor authentication, we need to provide a page that will allow users to enter the token they receive via SMS. This page only needs to be very simple, with an introduction explaining what the user needs to do and a form field for them to enter their token. We will also need two additional pages:
One for an incorrect two-factor response,
And one for a two-factor time out, since the user will be given a limited time to complete the task.
Within your own web application environment, you will wish to design your pages to fit in with the site’s look and feel, but for this example we will keep it very simple.
You can use the inbuilt content editor to create your pages. To do so, follow the steps below.
Expand the “Content Files” menu item and select “Two Factor Example”.
Create a new file called “twofactor.html”.
Copy the below HTML to your clipboard:
Update the "twofactor.html" file from the console. The embedded HTML editor will open.
Click on the HTML button to go to the HTML text.
Paste the HTML shown above into the editor and click "Save".
The page should now look something like this:
Continue the above process for the following two files. Create new content files called:
twofactorerror.html
twofactortimeout.html
As per above, update each file, click the HTML button and paste the following HTML for each file:
twofactorerror.html
twofactortimeout.html
Save your files. Your file structure within Content Files should now look as follows:
In our example, File Reader rules will be used to read these HTML files. Therefore, download each file from Content Files and upload it separately to the Data Files section of the repository. All files used by File Reader rules must be accessible to the X Engine from the Data Files location.
Before we begin writing our rule sets, there is one more data file we will create. This file will be a plain text file that will contain the token and SMS message that will be sent to our users.
Begin by creating a new text document in Notepad. Copy and paste the following text into your blank document.
Your two factor token for XYZ Company is [token]. Please enter this token into our website to continue. If you are not currently logging into our website, please contact our customer service team on 01234 5678.
Save the text document as "twofactor.txt".
Next, go to the “Data Files” section of your Tomorrow Software console. Select the “Two Factor Example” repository from the drop-down list and click the “Browse” button to select the file just created.
Next, click the “Upload” button to upload your file to the console. All files should now be saved within Data Files as follows:
As mentioned above, we have five rule sets to deal with a two-factor authentication request. Although all functionality could be contained within the one rule set, we decided to split them out into discrete chunks that all handle a different aspect of the process.
This rule set will handle looking up the user’s mobile phone number from our local database.
To begin with, use the SQL Lookup rule to look up the user’s mobile number in our USERS database. In your web applications, of course, the database, table and field names will differ, but in this example, we are using a database called USERS with a table called “Users” looking for a field called “mobile” where the field “userid” is equal to the variable “userId”.
Examine the above image to see how we have stored the result from the field “Mobile” into a variable called “MOBILE”. If the record is found, we use the If Condition rule to check that there is actually a value in the MOBILE variable – if there is, we exit the rule set with the value “Continue”. Otherwise we exit with the value “Not Found”.
You can find the Exit Rule in the “Flow” group of rules.
This rule set handles sending the token to the user’s mobile handset. This token will be set in the TwoFactor rule set in the variable we will name TOKEN.
The user’s mobile number, as you have seen, has been set in the TwoFactorLookup rule set.
We will use the File Reader rule to read the twofactor.txt file we created earlier into a variable.
Next, we will replace the [token] placeholder with the actual token created by our "TwoFactor" rule set, by using the String Replacer rule.
Then we will use Kapow to send the message to the mobile number we found in the “TwoFactorLookup” rule set.
IMPORTANT: You will need your own Kapow username and password in the credentials vault to use the service.
Next, we exit the rule set with either “Continue” for a successful send, or “Failed” for a failed send.
This rule set will initialize a two-factor request and save the following variables to the system: a flag that a two-factor request is in progress, what the token actually is, and what the time limit is for the request.
To begin this rule set, we need to set a time stamp as an expiry and create a random token. Next, we need to pass through the TwoFactorLookup and TwoFactorSend rule sets we created earlier.
Use the Timestamp rule found in the “Variable Marking” group followed by the Calculation rule found in the “Math” group to create a time limit.
Note that timestamps are in milliseconds, so we need to add 300,000 to the current TIMESTAMP variable to get a time five minutes into the future.
Next, we will create a random numeric token by using the Random Number rule, also found in the “Variable Marking” group. Create a random number with 8 digits and save it to a variable named TOKEN.
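In conventional code terms, the three rules so far amount to something like this (a sketch only; the actual implementation is the rules described above):

    var TIMESTAMP = Date.now();        // Timestamp rule: current time in milliseconds
    TIMESTAMP = TIMESTAMP + 300000;    // Calculation rule: 5 min x 60 s x 1000 ms
    var TOKEN = String(Math.floor(10000000 + Math.random() * 90000000)); // 8 digits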
Now we can look up the user’s mobile number and send the SMS message to their phone. To do this, use the TwoFactorLookup and TwoFactorSend rule sets from the “Rule Set” group.
We must remember to set the session variables that tell us a two-factor request has been sent, what the time limit is, and what the token is.
First though, we need to set a variable TWOFACTOR to “Y” to tell us that we are in the middle of a two-factor request. Use the Set Variable rule to do this.
Next, we can use the HTTP Session Writer rule to assign the three variables to the session.
Finally, we need to display the two-factor response page to the user.
To do this, we must first save the HTTP request so that later on, if the user enters the correct token in a timely manner, we can restore the application to its normal flow. Use the HTTP Request Saver rule to do this.
Next, we use the File Reader rule to read our “twofactor.html” file into a variable for display. We will call this variable RESPONSE.
Finally, we just need to display this content back to the user, followed by a Set Completed rule to tell the system not to go any further.
This rule set will check whether or not a two-factor request is in progress, and deal with any responses or timeouts the system may encounter. This rule set will use a combination of rules we have previously encountered.
The first thing to check is whether or not the time limit has passed.
To do this, we create a new timestamp called TIMESTAMP_NOW and subtract it from the stored expiry TIMESTAMP; if the result, TIME_REMAINING, is greater than zero, we know the two-factor session is still valid. If not, we will read the "twofactortimeout.html" file and respond back to the user.
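As a sketch (assuming TIMESTAMP holds the expiry stored earlier, and using hypothetical helper names):

    var TIMESTAMP_NOW  = Date.now();                 // Timestamp rule
    var TIME_REMAINING = TIMESTAMP - TIMESTAMP_NOW;  // Calculation rule
    if (TIME_REMAINING > 0) {
        // still valid: go on to check the token response
    } else {
        respondWith(readDataFile('twofactortimeout.html'));
    }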
If there’s still time left on the authentication process, we then need to check whether or not a response has been entered, and if it has, whether or not it is the correct one.
In our HTML form we set the field name to “tokenresponse” so this is the name of the variable we must check.
If there is a value, then we check it against the variable we set earlier called "TOKEN". If there is no value, or the value is incorrect, we will use the File Reader rule to read the "twofactorerror.html" file and display it back to the user.
Additionally, we will reset the TWOFACTOR variable so that the system knows not to check again.
Optionally, we may redirect the user to a specific logout page, but in this example, we will not do this.
If the user has entered the correct response, we will reset the TWOFACTOR variable to “X” so that the rule sets know that the user has already been authenticated.
Finally, we will use the HTTP Request Restorer to place the user back into the original application flow.
Lastly, we will create the TwoFactorLoad rule set, which will bring together all of the previous rule sets. This rule set will determine whether or not we need to check for a two-factor request, which only needs to be done if a user has been authenticated by the system, and only on non-media content (for example, not images, stylesheets, JavaScript, et cetera).
Using the Name Splitter rule we can split the URI variable to determine the extension.
In our example we are running JSP pages, so we only want the rule set to continue if the content has the extension “jsp” and the user is currently logged in.
There are several ways to determine if a user is logged in, and which method you use will be dependent upon your specific web application. There may be a cookie or session variable that we can read, or perhaps your web application has a specific URI or query string for pages that are available to logged in users only.
In this case study we will assume that a cookie with the user’s id has been set on login.
We will use the HTTP Request Tracker rule to expose all cookies. The rule actually exposes all request information into separate variables, but in this case, we are only interested in the "userId" cookie.
If the userId cookie is set, then we need to check if a two-factor request is in progress, otherwise we will simply exit the rule set.
If the userId cookie is set, then we must find out whether we need to initiate a two-factor request, check a two-factor request, or ignore as the two-factor request has already been successfully processed.
First, we will use the HTTP Session Reader rule to place the relevant session variables into variables our rule sets can query. We will store the TWOFACTOR, TIMESTAMP and TOKEN session variables into local variables.
Next, we use the Switch rule to check the contents of the TWOFACTOR variable.
This is the variable that tells us exactly what we should do.
If the variable is not set, then we need to initiate a two-factor request. If the variable is set to “Y” then a request is already in progress, so we need to look for a token response or time out. If the variable is set to “X” then we know the user has already successfully performed the two-factor authentication, and we can pass them back to the application.
Use the “Add Chain Point” button to add the “Y” and “X” points to the Switch rule.
Then, connect each chain point to the relevant rule set (found in the “Rule Sets” group) or set completed for already authenticated users.
Before you can deploy your rule set, you need to ensure that your database server is set up correctly, assuming that you need to retrieve the user’s mobile number from an external database.
In the following example, we will connect to a MySQL database – however, the process is similar for all JDBC drivers.
The Tomorrow Software Server ships with the Derby database driver, but you can easily add new database drivers to the application. The first thing you need to ensure is that the driver to the database is available in the class path of the program or application that is running Tomorrow Software.
For the Tomorrow Software Server itself, the location is /server/lib/ext/jdbc (we recommend that you create a folder in that location named mysql and that the driver jar file is placed in there).
The MySQL JDBC driver is available from http://dev.mysql.com/downloads/connector/j/
Next, you need to create the Database Connector in Tomorrow Software by clicking the Database Connectors link on the menu.
Simply enter in the class name, URL prefix (e.g., the location of the primary server to access), username and password required to access the database.
Click “Create” and your database is ready to access.
Finally, you can set up your configuration file. Click the Configurations menu and select the "Two Factor Example" repository from the drop-down list. Enter some basic information about the rule to load and the databases required.
The following screen shots show the information required for the “General”, “Input Source” and “Databases” tabs.
For the “Databases” tab, click the “+” icon to add a database, type the name of your database and select our newly created MySQL driver from the list.
You can now click the “Create” button to create the configuration file. Once created, click the “Deploy” button to deploy it to the server.
The above case study shows how to implement two-factor in a specific environment, though of course each individual application will be different.
You will also need to consider how you wish to handle users for whom you do not have a mobile number – alternatives could include email, or perhaps you have some kind of external token generator.
In the following case study, we will explore adding a new protocol (DNS) to the capabilities of Tomorrow Software.
For simplicity, we will restrict this to just a single DNS A record.
We will show how to proxy the protocol, how to modify the data coming back from the DNS server and how to capture a network packet and use it later as a template for requests from non-Multi-Protocol input adaptors.
This case study assumes that you intend to work with a brand new protocol; if you are using a predefined protocol (such as MySQL or Telnet), you can skip this section.
Before you can begin to work with a new protocol, you need to define it. In this case study we will create a basic DNS A Record protocol interpreter. It is not a complete DNS example, but it will serve well as an example of how to use the multi-protocol capabilities of Tomorrow Software.
The DNS protocol was chosen for this case study due to its simplicity and because it is well documented.
A simple internet search for “DNS Packet Format” will provide the complete details, but the following is a simplified primer.
At its core, it has the following structure in both the request and response:
A header block:
Bits 0..7 - 8..15 | Bits 16..23 - 24..31
Message ID: a unique number that the sender can use to tie a response to a request | Flags: 16 bit flags, the most important of which is the first bit, 0 for a query and 1 for a response
Number of questions: a simple 16 bit count of questions | Number of answers: a simple 16 bit count of answers
Number of authoritative answers: a simple 16 bit count of answers that are authoritative | Number of additional answers: a simple 16 bit count of additional answers
Followed by the actual questions or answers block. Questions contain the domain being queried, followed by two 16 bit fields, the first of which is the question type (1 = A record, 2 = NS record and so on) and the second of which is the question class (always 1).
The domain name being queried will have its dots removed and each section of the name is supplied with a leading byte providing the section length, followed by a zero byte to indicate all sections have been provided. For example:
labs.tomorrow.eu will be turned into: [4]labs[8]tomorrow[2]eu[0]
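The following sketch shows the same encoding in code form (Node.js is used purely for illustration):

    // Encode a domain name in DNS wire format: length-prefixed labels, zero-terminated
    function encodeName(name) {
        var bytes = [];
        name.split('.').forEach(function (label) {
            bytes.push(label.length);                // leading byte: section length
            for (var i = 0; i < label.length; i++) {
                bytes.push(label.charCodeAt(i));     // the section itself
            }
        });
        bytes.push(0);                               // zero byte: no more sections
        return Buffer.from(bytes);
    }
    // encodeName('labs.tomorrow.eu') => [4]labs[8]tomorrow[2]eu[0]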
Before we can start doing anything with the DNS packets, we need to break them down and make them available to our normal rules. We do this in the administration section under “Protocols”.
Just like normal rules, start by creating a rule set named dns_in (as shown) and open it in the rule editor.
You will notice that the rules catalogue for protocols is much smaller than the regular rules catalogue:
You can explore these rules to get a feel for what is available.
Before starting to write the rules, it is important to understand streams, protocol variables, VAO variables, VAO stream variables and stream windows.
Whenever a packet is read using the Multi-Protocol server version of the X Engine, it will be read in the form of a stream. For almost all protocols there are two streams: request and response. It is the job of the Multi-Protocol server to break down the binary content of the stream into variables that can be used and manipulated by the regular X Engine.
The regular X Engine is then capable of modifying the content of the stream before proxying it to the real target server. Upon a reply from the real server, the reply will also be treated as a stream and can equally be broken down and manipulated or simply returned to the original requester.
Setting a VAO variable directly refers to setting variables in the input for the regular X Engine, when the Multi-Protocol server hands over control to the regular X Engine.
To help the protocol rule writer control the workflow around breaking down a protocol, a set of variables known as protocol variables are used. These are basically String objects, and unlike the regular rules, can be treated as such. This means that assignments to a protocol variable via the Set Variable rule can use all of the regular Java language conventions such as:
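(The examples below are hypothetical; any valid Java String expression will do:)

    name = "dns";                                   // plain literal assignment
    name = domain.substring(0, 4);                  // any java.lang.String method
    count = "" + (Integer.parseInt(count) + 1);     // ""+ turns the int back into a String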
Notice the use of “”+ in the last example. This is a convenient way to convert a Java integer into a String object.
VAO Stream variables on the other hand are directly tied to the request or response stream. If you modify a VAO Stream variable within the regular X Engine, then the underlying stream will also be modified. VAO Stream variables use format converters so that the underlying stream can be a binary field, but it will be presented as a regular integer (or some other valid representation) in the regular X Engine.
VAO Stream windows are used to handle the very common occurrence where part of a protocol stream may contain information such as the length of another part.
A classic example of this would be the "Content-length" header in an HTTP response stream.
If you designate that a VAO stream variable is also a stream window, any modifications that you make to the content of the stream window will automatically be reflected in the value of the variable.
We are now ready to create our first protocol rules. Return to the dns_in rule set we opened earlier.
According to the protocol definition, the first field we need to read from the stream is the message ID. We do this by adding a “Read Fixed Data Type” rule:
And setting the properties as follows:
Let’s examine what is going on here:
We are using a “Fixed” data type. This refers to data types that have a fixed unchangeable length within the stream. In our case we pick an unsigned integer, MSB first (MSB referring to Most Significant Byte).
We set the length to 2 bytes.
We picked a variable name of messageid. This is the protocol variable name.
We specified that the stream we are going to work with is the request stream. This is optional as the Multi-Protocol server version of the X Engine is smart enough to know the main stream being worked with, however for clarity, it is recommended to specify it.
We specified a Stream Variable name of MessageID. This means that when the regular X Engine is invoked, it can access and modify the MessageID variable, which will have a direct impact on the stream.
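In code terms, the rule is doing no more than this (Node.js Buffer shown purely for illustration, assuming the raw packet bytes are at hand):

    var messageid = packet.readUInt16BE(0);   // 2-byte unsigned integer, MSB first
                                              // e.g. bytes 0x04 0xD2 -> 1234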
Next, we wire the rule up to the rule set entry point, and also add the “Abort Connection” rule so that we can handle protocol failures gracefully.
The next couple of bytes contain the DNS flags. Since they are bit level, we will read the two bytes as a binary string of 0s and 1s. We once again use the Read Fixed Data Type rule, but this time set the following properties:
This will ensure that the value contained in these 16 bits is represented as a string, looking something like this: "1000010110000000", where each bit signals a particular meaning as per the DNS protocol specification.
We follow this with 4 simple rules to read the question and answer count:
Each of these new rules reads a 2 byte unsigned integer and is wired to the "Abort Connection" rule on failure.
Next, we need to deal with the actual query payload. For a simple query, this means handling the variable number of elements in the domain name being queried. Theoretically, more than one query could be included in a single DNS request, however for the sake of simplicity, we are ignoring that for now.
The full construct of breaking down the domain name looks like this:
What is happening here is as follows:
The Count and DomainElement variables are each being set to the value “1”.
The while loop is then created using the following properties:
This is followed by a Read Data Type, which is capable of reading a set of bytes with a variable length. In this case, it is the length prefixed String. The properties to perform this read are as follows:
Finally, the Count variable is incremented, using the technique described earlier:
All that remains in breaking down the protocol request is to read the query type and class. As both are simple 2 byte unsigned integers, we can do this with ease:
Finally, we tell the Multi-Protocol server to hand over to the X Engine:
The complete rule set looks like this:
Before we can use the protocol rule set in rules, we need to give it a short description and check the box that allows rules access:
We now have everything we need to perform a test of our protocol breakdown.
The next step is to create the regular rule sets that are going to set up a port to listen on and receive the packet.
Start by creating a new repository and create a rule set called DNSStart. It will only have one rule:
Save the rule and then create another rule set called DNSMain. It will also only have one rule:
The configuration for these two rule sets is also very simple:
It is now time to start up the stand-alone Multi-Protocol server instance. It is found in the Multi-Protocol folder. The easiest way to do this is to execute either the tomorrow.bat file or the tomorrow.sh file.
Once the instance is running, you can deploy your new configuration to it and start it.
Note: If you are not seeing any Multi-Protocol server instances when you try to deploy, please check that you have a server defined with the server type Multi-Protocol, and that it is configured to the correct management port of your Multi-Protocol server instance.
If you check the log for the Multi-Protocol server instance, you should see the following message:
We now need a tool that can send DNS packets to any given DNS server easily. There are many options available on the net, and DNSDataView was chosen for this example.
The first thing we will do is trigger a simple DNS A Record retrieval packet against our Multi-Protocol server instance:
This will obviously not generate a reply yet, but you will see the packets generated in the console output. There will be several, because DNSDataView retries 5 times:
As you can see, the query is broken down into stream variables that are all on the request stream. Also notice how the protocol rules have sliced the query neatly into its three parts: www, testing and com.
Now that we know the request stream is working, we can proceed to create the protocol rules for the response stream as well. Fortunately, for DNS this is very simple, at least if only dealing with a single DNS A record as in this case study. The first part of the response stream is essentially an echo of the request.
So, start by copying dns_in to a new protocol ruleset named dns_out.
Then we modify each rule to point to the “response” stream and provide a new name for each stream variable by adding the letter R in front of each name:
Once done, we can proceed to read the actual response data.
Things get a little funny here, because the designers of the DNS protocol lived in a time where bandwidth was a scarce resource, so they built “compression” into the protocol.
The way they did it was by manipulating the first bit of the length field of the reply. If this bit is set, then the actual site name (www.testing.com) being replied to can be found by using the rest of the bits + the following byte to create an offset to where the name can also be found in the packet. However, given that in this example we only have one query, we will ignore that and just read the bytes:
With that in mind, the rest of the dns_out protocol rule set becomes fairly simple:
All of the above rules simply read unsigned integers, MSB first, though not all have the same length: RPointer, RType, RClass and RLength are all 2 bytes long, RTTL is 4 bytes long, and RIP1 – RIP4 are each 1 byte (one part of the IP address).
The final step to complete the rule set is to name it and allow it to be used in rules:
We are now ready to proxy our protocol packet and do something useful with it. We need to return to the regular DNSMain rule set and make some changes:
The first rule you see above is actually the “Proxy Input Request” rule. However, once you change the selected protocol, it automatically changes its name to the protocol it is using.
The complete properties used are:
The Host name/IP shown above is Google’s DNS server. You could choose to use your own to complete this case study.
Deploy the dns_example configuration to the Multi-Protocol server instance and restart the rule set. Then go back to DNSDataView and get ready to launch another query. Since we are using Google’s DNS server, we are going to query “www.google.com”.
This time, we get a reply:
And the console shows that the proxy worked:
Looking through the various stream variables, the significant ones are RIP1-4, which tells us that “www.google.com” can be found at 216.58.199.36.
We will now use the regular rule set to manipulate the response stream.
You may notice that the “Set Variable” rule is used to change the RIP4 value to 100. As the Multi-Protocol server version of the X Engine is a two-way mapping of the variables to the stream, changing one of the variables also changes the stream.
We will demonstrate this by deploying the rule sets and re-launching the DNS request:
As the tool shows, we have just changed the output of a DNS request in real time.
The usefulness of this is probably limited (given the recursive nature of DNS), but one example could be making the DNS server respond with a different IP address based on the requester's physical location, or setting up internal honeypots.
Proxying packets using the Multi-Protocol server instance is one way to use the protocol packets. There may be times when you wish to use a protocol to access an external service. However, crafting network packets by hand is incredibly time consuming and fraught with error risks.
To get around this, Tomorrow Software includes a feature to capture a packet and use it as a template. Capturing a packet is incredibly easy. Simply modify the rule set to write the stream to a file:
Once you have a captured packet, you can easily modify it using simple stream variables. The following shows how the captured packet is read before being sent to the test DNS server using the “Write Stream to Server” rule:
Using this approach, we have added DNS lookup capability to rules using no code whatsoever.