Composable Architecture Platform servers are each instances of the X Engine or a database server. There are six types of servers: test, production, Multi-Protocol, database, template, and production with forwarder.
Test servers are used to validate a rule set before it is deployed into production. Test servers can take and process a data file containing test data that has been extracted from a production server or created manually.
Afterwards, performance data from the test server can be used to validate that a particular rule set performed as planned.
Production servers are capable of taking a data feed from any number of sources, such as a Web Service, an HTTP request or a data file placed in a folder. Production servers run continuously once started and are always waiting for data.
Multi-Protocol servers are servers that are capable of intercepting one or more standard network protocols and breaking them down for analysis by the Composable Architecture Platform.
They differ from Production servers in that they can take data directly from the network layer without interpretation, pass that data through a set of protocol rules, and proxy the same (or altered) data to a protected server.
The response from the protected server follows a similar path (with retained visibility of the original request).
Database servers do not have X Engines installed and do not show up as Composable Architecture Platform Servers as such. They can however be defined as servers so that JDBC connections to them can be specified.
Template servers are used as "parents" for other servers. This allows settings such as web proxy settings and mail server settings to be inherited by other servers and reduces the maintenance load for large clusters.
Production servers with forwarder are production servers that have a built-in forwarding proxy controlled by Composable Architecture Platform. These servers also have the capability to enable a browser proxy so that the forwarders can be used for testing rules against applications.
It is important not to confuse a Composable Architecture Platform server instance with a hardware server instance. A single hardware server can run many Composable Architecture Platform servers at once.
Introducing CAP: The Composable Architecture Platform
TomorrowX's Composable Architecture Platform (CAP) revolutionises the way developers build complex applications. Inspired by the connected agile methodology, CAP breaks down applications into independent, reusable building blocks. Imagine constructing intricate systems with pre-built components, where each component seamlessly integrates and functions together.
Fine-grained Control: Unlike pre-built packages, CAP grants developers complete control over every aspect of the components and their interactions. This enables the creation of highly customised applications tailored to specific needs.
Effortless Scalability: Applications built with CAP are inherently scalable. New features can be effortlessly added by introducing new components, while existing functionalities can be modified or replaced without affecting the entire system.
Enhanced Agility: CAP's modular approach fosters a more agile development process. Developers can focus on building specific features without getting bogged down in managing complex application structures. This allows for faster development cycles and easier adaptation to changing requirements.
Distinct from Low-code/No-code Platforms: While low-code/no-code platforms offer rapid development through pre-built components, they often limit flexibility and control. CAP, on the other hand, empowers developers to build components from scratch, ensuring a perfect fit for the application's requirements.
Surpassing Traditional Applications: Monolithic applications are difficult to scale and adapt. CAP's modular approach using independent components overcomes this limitation, enabling effortless scaling and modification as the application evolves.
Composable Components: Build applications by assembling independent, reusable components that can be easily customised and combined.
Improved Maintainability: Reduce code duplication and complexity, making your codebase easier to understand, modify, and debug.
Enhanced Scalability: Effortlessly scale your applications by adding or removing components as needed.
Boosted Developer Efficiency: Focus on building features instead of managing intricate application structures.
Modular and Scalable Applications: Build applications that are easily adaptable and can grow as needed.
Maintainable and Testable Codebases: Reduce code complexity and improve code quality for easier maintenance and testing.
Adaptable to Changing Requirements: Build applications that can evolve alongside your project's needs.
We have worked tirelessly and relentlessly to empower you, as a core part of the team and as a critical part of the Composable Architecture model. But empowerment brings with it responsibility.
We will introduce, describe and demonstrate the various capabilities of Composable Architecture Platform.
We are excited to share this with you. We are delighted to have the opportunity to work with you and support you in your endeavour to create amazing digital solutions and systems, as part of the TomorrowX community and as a Composable Architecture Platform user.
As a vibrant and global community, we value your ideas, thoughts and suggestions, which you can share with us via your medium of choice. Please visit Get Help for options.
Experience Tomorrow. Get started with CAP by TomorrowX, today!
Composable Architecture Platform is continuously developed. As a result, some screen captures may differ from the current product version. This will only be the case where the educational value of the image is not affected by the difference.
The rules editor is a graphical design tool for creating and maintaining rule sets. The rules editor is launched as a separate browser window from within the console application.
Projects are collections of tasks that need to be completed to fully create a solution. Tasks are assigned to users and are listed as To Do items for each user. All tasks result in a work output of some kind.
By using predefined project definitions, it is easy to set up and track all the basic tasks required to complete a full enterprise deployment (such as rule writing, firewall rules, SSL certificate ordering and so on).
This document refers to rules, the rule catalogue and rule sets throughout. It is important to understand the distinction between them to avoid confusion.
A rule is a fundamental building block created by a Java programmer. Rules can be dynamically added to the system by downloading them from the TomorrowX website or other sources. In-house programmers can also add them.
Analysts generally do not create rules; they use them to build rule sets.
Examples of rules are: If Condition, History Recorder and Name Splitter.
In the rules editor, all of the rules installed are represented in the rule catalogue. It is a tree of all the rules that are available for an analyst to use. Rules are grouped into functional areas.
Protocol rules are a different type of rule, used to break down elements of a given network protocol. Protocol rules can only be managed by an Administrator and use a different rule catalogue than the standard rules.
A protocol rule is specified against any new supported protocol as well as against the response supplied from a proxied protocol request.
Rule sets are combinations of rules put together to perform a given task or strategy. They are created using the rules editor and are typically a combination of rules and other rule sets. Rule sets created by analysts show up in the rule catalogue in the rule sets section.
Once a rule set is completed, it can itself be used as a rule.
A rule set mode determines which rule set is executed when a data set hits the X Engine. By default, there is only one mode, but additional modes can be created to deal with specific situations (such as stand-in, promotion etc.). Modes can be changed without redeploying rules.
Configurations hold the additional information a server needs to be able to execute a rule set. For example, if a server is reading from a CSV file, the configuration will define which column in the CSV file is stored in which variable, before being processed by the rule set.
Input formats, initial variable values, global variables, database information etc., are also all held in configurations.
Case Managers are databases and workflows that contain cases to be investigated and followed up by fraud or security experts. Cases can be inserted into the case manager either manually or through rules. Alongside each case are tasks that need to be completed and task queues that facilitate the workflow.
Tasks are connected to cases in the case manager. Each case can have one or more tasks that need to be completed. Once a task is completed, the action of completing that task may result in the case changing status (for example from “Open” to “Closed”) or yet another task being generated for that case.
Tasks are always contained in a queue. Queues are prioritised, and when a user picks the next task to perform, the queues assigned to that user will be checked consecutively for the next most urgent task.
Content files are a structured set of files served up by the X Engine in a manner similar to an HTTP server. The structure mirrors the structure of the files on the protected server and can be used to overlay images and pages on top of an existing application.
The Composable Architecture Platform X Engine is highly extensible. New rules are developed regularly. To allow use of them within the rules editor, an extension may need to be installed (manually or via the update server). This will add the new rules to the rules catalogue.
Test data can either be in the form of manually created CSV, XLS or XML files, or it can be files captured during the execution of a rule set on a server (TST files).
All production servers allow the download of a set of test data at any time to use for further development of rules on a test server.
The credential vault provides a central location (only accessible to administrators) for storing user IDs, passwords, access codes and other data that is required to access external services. The information stored in the vault is automatically transferred to the X Engine upon deployment, eliminating the need for rule set writers to know any credentials.
Once a rule set has been executed on a server, performance data can be retrieved (in graphical form) to see how rules are performing.
The console is used to administer the system. It is a browser-based application that provides the user with a complete view of all installed servers, rules, configurations and so on. The following is a sample view of the console after the user has logged on.
The console has four distinct panes as described below:
The top banner contains simply the logo and the log out button. This banner is visible on all console pages to facilitate easy log out from the product.
The administration tree is used to navigate the console application. To view the properties for a particular function, click the icon or folder in the tree in front of that function. To get back to the main login page, click on Console at the root of the tree. Increase or decrease the vertical space taken up by the tree by dragging the divider left or right.
Whenever a function is selected in the administration tree, a corresponding page is shown in the Action window.
When a server is processing data, it has the ability to output information to an internal console. The console for each server is replicated in real time to the administration console and can be viewed in the pane at the bottom.
Any error messages from the Composable Architecture Platform server will also be visible here. Toggle the visibility of the Server console viewer by clicking the Hide Console or Show Console tab. Enlarge or reduce the size of space taken up by the viewer by dragging the bar across the top up or down.
The console can be cleared so that only new messages will be shown by clicking the button in the bottom left-hand corner.
Custom functions are features that can be added to the Composable Architecture Platform console using rules. These same custom functions are subject to the normal Composable Architecture Platform console roles configuration and can be used to add simple maintenance tasks (such as blacklists) directly to the console so that they are accessible to normal console users without needing any other applications.
Trace data is captured during the execution of a rule set on a server (TRC files). Trace files contain detailed information about the flow of data through a rule set. All servers allow a trace to be started at any time for use in debugging rules.
Protocols are network specific, low-level interpretations of the data flowing over a network. Examples of protocols include HTTP, SMTP and ISO8583. Composable Architecture Platform has a special X Engine (accessible via administrator privileges) that allows for the definition of protocols.
Flight recorders are databases containing log records of specific activity. A flight recorder starts recording once a trigger event has happened and will then record for a specific number of records, until a time limit is reached, forever, or until it is manually closed. This can be used to track specific user activity.
Data from flight recorders can be converted to test data at any time.
Databases are another fundamental concept of Composable Architecture Platform. In Composable Architecture Platform terminology, a database is anything that can be connected to via a JDBC driver.
JDBC drivers are connectivity modules for the Java language.
The term database refers to a system that is capable of storing data in a structured relational manner.
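As a simple illustration (a sketch only; the driver, URL, credentials and table below are hypothetical, and in practice connections are defined in the console and used by rules), anything reachable through a JDBC driver can be treated as a Composable Architecture Platform database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Any JDBC URL works here, provided its driver is on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost:5432/production", "user", "password");
             Statement stmt = conn.createStatement();
             // Read rows from a table, as a rule might do.
             ResultSet rs = stmt.executeQuery("SELECT ACCOUNT_ID FROM ACCOUNTS")) {
            while (rs.next()) {
                System.out.println(rs.getString("ACCOUNT_ID"));
            }
        }
    }
}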
Alongside databases, there are a couple of terms that are important, which are listed below (Tables through Keys).
A table is the name of an individual file within the database where a given set of data is stored. Sample table names are CUSTOMERS and ACCOUNTS. Some Composable Architecture Platform rules have the ability to create tables or read/write data to them.
A schema is a way to segment a database into multiple entities i.e. there can be two schemas on the same database containing the same table names. For example, there can be a PRODUCTION schema that contains an ACCOUNTS table as well as a TEST schema that contains an ACCOUNTS table.
Rows and columns refer to the individual elements inside a table, much like a spreadsheet in which every column has a name.
Typically, all tables have keys that map to one or more columns in the table. In most cases tables have unique keys, which is enforced at the database level.
Data files are files used to assist the X Engine in making decisions.
Examples include CSV files of undesirable individuals, IP geo location databases and blacklists.
Input adaptors define how the X Engine interprets data that is sent to it. A large number of different adaptors, covering XML, CSV, TST, HTTP and other data formats, are shipped out of the box. New adaptors can be added if required.
Access rules can be used to define conditions of user access. Access rules can, for example, restrict a user to only be able to sign in from the local physical system, require a second-factor logon or call out to a single sign-in system for password validation. Access rules are plug-ins and the system can be extended with new rules on demand.
Not all users signing in to the console will have the same roles. It is possible for all non-administrative users to have their access restricted to specific console functions.
Repositories are collections of configurations, rule sets and data. They work much like folders in a normal file system, logically separating and controlling access to important assets. A typical set of repositories for any given system would be Test, Staging and Production, reflecting the lifecycle of the configurations, rule sets and data involved in a deployment.
Each X Engine is controlled from a Composable Architecture Platform Console. The console is a web application. Any information pushed to the X Engine from the console is stored in the X Engine’s home folder. This includes any software required to execute the rules.
It is important to note that the X Engine and the console are autonomous entities. They do not need to be connected for the X Engine to execute rules.
Composable Architecture Platform is a very network-centric product. As such, there are a number of network-level definitions that can be confusing, especially given the large number of potential proxies involved. The following is a list of the terms used for the various proxies within and used by the product.
A web proxy is a proxy server that is installed as a performance optimizer or security feature between Composable Architecture Platform and the World Wide Web. Typically, this is a corporate proxy that Composable Architecture Platform must traverse to access services such as the update server, MaxMind, SMS services etc.
The built-in Composable Architecture Platform proxy, called Proxy Server in the console, is a forwarding proxy used to protect sites that are unable to take advantage of the inline filter (e.g. all non-J2EE sites). It is also used to test Composable Architecture Platform rules against any site, without performing any installation. The latter is achieved using the built-in browser proxy.
The browser proxy is a feature of the built-in forwarding proxy. It allows a proxy to be configured within the browser settings so that requests from that browser are sent through the Composable Architecture Platform built-in proxy. This provides a convenient method for testing rules against sites without installing any additional software.
This scenario is largely the same as the API Transformation scenario from an installation perspective. The only difference is that the requests in this scenario come from end users rather than server applications, and the targets for the requests are existing web applications or Software as a Service applications.
This scenario lends itself to a myriad of use cases:
Digital transformation, where an existing application (often beyond the control of the business) is functionally enhanced, without the explicit need for the existing application being aware of these changes
Bot management, where the X Engine detects bots and adds policies for how those bots are able to access the underlying application
Robotic process automation where requests to the existing application results in data also being entered into secondary systems
Orchestration of multiple existing applications that are joined together to form a new experience
… and many more
This scenario is especially useful for regaining control of web applications that are otherwise difficult, expensive or impossible to change.
Users, in the context of Composable Architecture Platform, are the users signing in to the console.
Installing the X Engine as a Servlet filter in a Java Application Server (such as Jetty, WebSphere, JBoss etc.) is a common approach. In this scenario, the X Engine acts as a Servlet filter: it sees all requests coming in and has the ability to modify each request before it reaches the web application. Similarly, it also sees every response coming back from the web application and has the ability to modify these responses on the fly (a minimal sketch follows the examples below).
Use cases for this approach mostly centre around making temporary changes to an existing web application, out of band of release cycles, or when the web application is third party and not able to be modified.
Examples of such temporary changes include:
Adding security such as CSRF or SQL Injection protection
Frequently changed compliance rules that can be required on short notice
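To make the interception model concrete, the sketch below shows a generic Java Servlet filter using the standard javax.servlet API. It is illustrative only, not the actual X Engine source; the class name and header used here are hypothetical:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ExampleInterceptFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
        // One-time initialisation would happen here.
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The filter sees every request before the web application does,
        // and could modify or reject it here.
        String uri = request.getRequestURI();

        // Pass control on to the protected web application.
        chain.doFilter(request, response);

        // The response can be inspected on the way back. Headers can still be
        // added if the response is not yet committed; modifying the body would
        // require wrapping the response object before the call above.
        response.addHeader("X-Example-Filtered", uri);
    }

    public void destroy() {
        // Clean-up would happen here.
    }
}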
In this scenario the X Engine is configured to work as an application server with the ability to create client-side rules for validation, data storage and other mobile device features.
The client-side X Engine runs entirely in JavaScript and is portable across any mobile device with a browser. It creates native look-and-feel mobile experiences using pure HTML5, CSS and JavaScript.
Use cases include:
Rapid mobile application prototyping
Offline/Online mobile data capture
Command and Control
In this scenario the X Engine is installed alongside the Composable Architecture Platform Application Server and they work as an integrated whole, turning the X Engine into a high-performance HTTP proxy with the ability to provide SSL termination and on the fly transformation of requests and responses between the existing applications and the existing APIs.
Use cases for this approach include:
Vendor abstraction, which provides the ability to create generic APIs for things such as text messages, geo-location, two-factor authentication and use those APIs in the existing applications instead of vendor specific APIs
API Sunsetting, where the X Engine is capable of transforming the structure of an API call between different versions, such as to facilitate the removal of older version code from the API server source code
API Accounting, enabling chargeback of API calls that have a monetary cost to the respective users of that API
In this scenario, the X Engine is configured as a proper web application server. It is capable of serving up content, including multi-media and other web assets. Rules are used to create a dynamic user experience.
Use cases include:
Web forms that capture data on a single page and disperse it into one or more locations
Dashboards that capture data from multiple different sources and serve up a singular view for all those sources
Fully fledged stand-alone web applications
In this scenario, the active proxy and the web application server are used in combination: the X Engine is configured to act as a proxy that also has the ability to add content.
Use cases include:
Customisation of the user experience of a SaaS application, without the knowledge of the SaaS provider
Mobile enablement of an existing application without any changes
Adding two-factor authentication to an existing insecure application
Short lived campaigns and surveys that would otherwise clutter the target application code
In its simplest form, the X Engine receives data from a file (CSV, XML, Spreadsheet) and processes it by interacting with databases, APIs or other servers.
The X Engine in this form requires very little other than a software platform with Java and the X Engine core installed, a configuration file and a home folder. Upon deployment of rules, the console will deploy all other required dependencies along with the rules.
The internal audit log provides a view of all events relating to administrative objects. Events such as logins, logouts, user profile changes and so on are all logged here.
Should you forget your password, click on the Forgot user ID or password? link. This will bring up the access recovery page as shown:
If you have forgotten both your user name and your password, enter your email address in the top box and click submit. If you are registered in the system, an email will be sent with your user name but no password.
If you have only forgotten your password, enter your user name in the bottom box and click submit. Your password will be reset to a random text sequence and an email will be sent to you containing the new password only.
Once you have signed in, you can change your password by clicking on the Click here to change your password link on the main administration page. This will redirect you to the password update page.
To update your password, enter your current password, the new one twice and click on submit.
The Asynchronous multi-protocol scenario operates at the network packet layer and expands the reach of the X Engine from HTTP into other protocols. TCP and UDP packets are supported.
To facilitate this approach, a secondary protocol-level engine breaks down the packet into elements the main X Engine can understand. The X Engine can then modify these elements, and the underlying protocol packet will be changed accordingly. The X Engine is then able to forward the modified packet to the designated network endpoint and can, in some instances, even commence an exchange with the endpoint before forming a response packet for the initiating computer. As always, the X Engine can rely on secondary data sources (APIs, data, other systems) to help form the modified request and response packets.
Use cases for this scenario include:
Database field level security
ATM stand-in
SCADA security
Advanced DNS
… and many more
Before you start using the product, there are a number of important tasks that you should perform to get the most out of the product, and to secure the console from unauthorized access.
The first step is to click on the "Administration" section of the console and select "Console Setup".
You will see a page that allows you to configure a number of console settings:
The first and most important thing to decide is the console type. Composable Architecture Platform ships as a single distribution, but it can be used in a variety of configurations. Based on the selected console type, Composable Architecture Platform will delete elements that are unnecessary for the specific installation. The Demo Server is the most suitable type for training and testing, whereas the other listed types are all production configurations. Once you have selected a type, there is no undo; if you select the wrong type, you will have to reinstall the product.
Unless the system where you have installed the console has a direct connection to the internet, you will need to configure the console's web proxy. If you leave your console without internet access, then you will be unable to receive product updates, new extensions, case studies and fixes.
If you are behind a web proxy that uses Microsoft NTLM authentication, you must also set the Web Proxy Domain value. For Microsoft NTLM to work correctly, the Web Proxy Host should be the fully qualified network name of the proxy, for example mywebproxy.mycompany.com, and the Web Proxy Domain should be the simple name for the domain.
Having an email server defined for the console is an essential step in ensuring that you can reset lost passwords and recover forgotten user IDs. It is also a requirement if you intend to use the email second factor login method.
Please note that unless a mail server is defined, there is no way to recover a lost password. It is often important to set the email sender as many SMTP servers are configured to reject email from unknown senders.
Once you have defined your console type, web proxy and email setup, make sure you click the save button to store the settings. If these settings have been edited, you must restart the console server.
After the console has been configured, you should ensure that the list of authorized users is correct by clicking on the "Administration" section of the console and selecting "Users".
As a minimum, you should set up a new personal account. The User ID is required to be at least 6 characters long and may not contain spaces.
Supply your real name and email, and set the user type to Administrator. Administrators are not required to have a user role. It is a good idea to provide your time zone, as this will make reports and search queries match your local time.
Finally supply a strong password twice and click on Create.
At this point we recommend that you log out of the console and log in as your new user.
By default, the console ships with three active accounts: admin, super and security. All of these accounts have elevated access to manage the console and should not be left with their default passwords (which for all of them are the same as the user ID).
If you decide to keep these accounts, as a minimum you should change their passwords and supply them with a valid email address that can be used for a password reset.
Once you have logged in, you will have visibility of all of the Composable Architecture Platform servers defined within the console. You can see the status of each server by expanding the Servers section of the administration tree as shown below.
Servers with a green tick in front of them are recognized as being online and available. Servers with a red exclamation mark are offline and unavailable. To see the status of an individual server, click on the icon in front of it. An example of a server’s status is shown below.
If you have a lot of servers, you can filter them by host name, port number or description.
The filter stays in place, not just for the active servers but also filters the list of servers accessible for deployment.
Now we can verify the browser and proxy configuration. In the browser you chose for browsing via the proxy, type www.google.com in the URL (address) bar and press Enter. You will see the country-specific main Google page:
Now switch back to the browser running the console. You should see some activity in the server console viewer. You can enlarge the server console viewer to get a better look:
Without going into too much detail at this stage, what you are seeing is the browser request for each interaction that the browser had with the requested host. You can see items such as the IP addresses, User agent (Browser), Request URL, request method, cookies, protocol scheme etc. This is by no means an exhaustive list of the data Composable Architecture Platform can detect but gives you a general idea.
The thing to take note of at this stage is that you can see all requests, including requests for images as well as JavaScript, CSS and other page elements. This is an important thing to be aware of when writing rules.
In addition to protecting internal websites from attacks and fraudulent activity, Composable Architecture Platform can also be used to monitor employees' interactions with external websites (such as social networking sites, blogs and wikis). The most common use of this feature is to limit internal users' access to specific sites (for example Facebook and Twitter) for business purposes.
The following diagram illustrates how Composable Architecture Platform can be configured to monitor all traffic going to one or more nominated sites:
The key to this feature is to introduce a second DNS server within the company infrastructure. This second DNS server provides an override IP address for sites that are monitored by Composable Architecture Platform, ensuring that all the traffic is visible to the appliance. The monitoring appliance can be hosted and managed by an external service provider or can be installed in-house.
After you have completed the console setup, click on Console at the very top of the administration tree:
If there are updates, fixes, new rules, case studies or other material ready on our update server, there will be a message about those updates.
To see the updates available, simply click on the notification.
Please note that the updates available depend on the type of server licenses you have installed.
The update screen will appear as shown below.
Depending on how your console was shipped, downloads will include new or updated rule examples, new or updated extensions and updates to demo and the console application itself.
Note that brand new updates are marked with the BETA tag for 7 days after their release. This is done to allow you to apply updates conservatively.
To install a given update, simply select it and click on Install selected (alternatively you can install all updates available by clicking on Install all).
If you choose to download a console update, please be patient, as these can exceed 25 MB in size and can take several minutes to download and install. Once the update download is completed, simply log out of the console, wait around 30 seconds, and log back in to get the new version. Any users who don't log out will simply remain on the old version until they do.
Note that if you install a new application version, you should always clear your browser cache to pick up any changes.
In some instances, the console does not have direct internet access. To still facilitate the download of updates, Composable Architecture Platform can use a fallback mechanism based on CORS (Cross-Origin Resource Sharing). This essentially allows the console to reverse proxy through the browser used to access it. At the time of release, only the latest versions of Chrome and Firefox support this web technology.
Every X Engine receives data in the form of variables. These variables are initially supplied by an input adaptor. The most commonly used input adaptor receives web application input, but other adaptors receive XML data, CSV data or other more complex input.
For the purpose of understanding the above rule set, the web application input adaptor supplies the variables REQUEST_URL, URI and REQUEST_TIMESTAMP. It also supplies as variables any parameters provided by GET or POST requests. To obtain more detailed information about the HTTP request, the HTTP Request Tracker rule is used.
The reason for this separation is that you may not need all of the detailed information for most requests (such as images). This example provides a quick window into the world of Composable Architecture Platform. The next step is to create a configuration that will have a more interactive result.
The first step in our example is to prepare the browser proxy so that all traffic to and from Google is successfully routed via the Composable Architecture Platform Proxy Server. This will give us visibility of the data and provide all of the information we need to manipulate it.
Many browsers have built-in security features that prevent user access to websites whenever there is an untrusted SSL certificate, and will block the request without exception. In our example, because it is not possible to install Google's SSL certificate on the Proxy Server, we overcome this by using redirection settings within the Proxy Server. In Administration > Server Definitions, click on the Proxy Server as follows.
Click on the Forwarding tab and set the Request redirection properties for Google as follows. Our example is for a UK IP address request, which follows the redirect of Google.com to Google.co.uk based upon the IP geolocation from the originating browser.
The first line entry is for example format use only and has no impact on the Proxy Server:
http://thishost>http://thishost:8001
http://google.com>https://google.com
http://www.google.com>https://www.google.com
http://google.co.uk>https://google.co.uk
http://www.google.co.uk>https://www.google.co.uk
Once you have input the redirection settings, scroll to the bottom of the page and save the modified Proxy Server definition.
The Proxy Server will now successfully handle the HTTP to HTTPS protocol redirection and allow the browser to access the website even without a correct SSL certificate.
Next, deploy a configuration to the Proxy Server. The configuration we will use in this example is the one named BasicWebTrial, which is under Configurations->Product Trial in the administration tree:
When you click on it, you will be presented with a number of options:
At this stage we are not going to make any changes to the configuration, only the changes made earlier to the Proxy Server definition.
So now deploy it and start the Proxy Server by clicking on Deploy.
You will see a choice of servers you can deploy to:
Select the Proxy Server as shown, check Restart immediately and then click Deploy. You will then see the action window switch to the server view showing the configuration and all of its dependencies being deployed to the proxy:
Once complete, you will see that the Proxy Server is started and ready to use:
Start by actually running a query. For the purpose of this example, go to www.google.com and query the word dishwasher. You will get a country-specific page similar to the one shown below. If you don't see any ads at the top or on the right, look at the bottom of the page. In our example, we are using www.google.co.uk.
The goal is to remove the ads along the top, and the ads along the right-hand side.
The next step is to work out how to go about removing the ads.
The first thing that is required for a new configuration is a new repository. All data, rules, content and so on, live within Composable Architecture Platform in a repository.
To create a new repository, click on Repositories, enter the name as “Google Ad Remover” and click Create.
This will create the repository. The next step is to figure out what our rules should do. This requires a closer look at what Google does with their search results.
In this section we assume that the Composable Architecture Platform console application has been installed or activated and that you have access to a URL that brings up the login screen. If this is not the case, please refer to Installation and Configuration for instructions on how to install and configure the product.
When you first access the Composable Architecture Platform console you are presented with a login screen as shown:
You can select the preferred language to use for accessing the system. If you change the language, the login page will change accordingly. The language you select will be stored as a cookie within your browser, so you only have to select it once.
You can now sign in using the user ID and password provided by the system administrator. If you are the system administrator and this is the first time that you are signing into the system, you can use the word admin for both the user ID and password.
Note that both user IDs and passwords are case sensitive.
It is now time to take a closer look "under the hood" to give you an understanding of what just happened. The first thing to look at is the configuration that we just deployed. Select it again for a closer look:
Configurations are what tie a solution together.
Each solution consists of a number of building blocks which can include several rule sets, data files, content files, database configurations, field settings, input source definitions and much more:
To learn what this configuration does, you can review each of the various tabs and look at each rule set. Alternatively, click on Document, and select a target server:
Select the Proxy Server and click on Document.
A new page will appear that contains a complete summary of the configuration:
This page is specifically designed for printing a given configuration for audit purposes, but it is also an excellent way to get a quick understanding of what is going on in a rule set. Just focusing on the rules in this case, scroll to the bottom of the document:
The rule set shown (BasicWebLister) is executed whenever a request is sent from the browser to the server. The rule set is effectively a flow chart, executing from the green dot on the left through the rules towards the right. This is a very simple rule set with no decisions, so the flow should be very clear.
The summary page below the rule set shows the properties set for each rule, but for the sake of understanding, we will elaborate a little further:
The first rule executed is the HTTP Request Tracker rule. This rule takes a basic HTTP request and extracts all of the common header attributes from it (header names, request URL, tracking cookies etc.) and places that information in variables. It also sets tracking cookies (if Use cookies is set to Yes).
The second rule is the MaxMind Geo Info rule. It uses the IP address supplied on the HTTP request and attempts to convert it to a physical location (country and city) using the MaxMind Geo Location database. In this case, the rule returns nothing, as the localhost IP address (127.0.0.1) doesn't resolve to any country in the data lookup.
Finally, the List Variables rule sends all of the variables that have a value to the server console viewer so that the user can examine them, which is what you saw earlier.
The purpose of the configuration we just deployed and tested is to obtain the HTTP request data, augment it with Geolocation and then send the information to the console. If you scroll down the server console viewer, you will notice the various requests coming in, including requests for images, style sheets, icons and so on.
This rule will be used to filter out all the non-search content. As mentioned earlier, the web application input adaptor provides the variable URI. This variable contains just the path part of the request (without the hostname), for example /search for a Google search request, and is very suitable for this test.
So, click on the If Condition rule. You will see the rules editor change to show the properties for the rule on the left-hand side:
At the very top of the list is always the Label, Rule Class and Description. Label and Description are the short and long descriptions, respectively. The label defaults to the rule name (If Condition in this case). The label is the rule name given in error messages if a problem occurs while starting or executing a rule set. If required, you can change the label to be any short text that you can use to identify the rule.
For each rule, you can also set a description. This should be a short note explaining what the rule is supposed to do in the context of where it is placed.
For now, complete the properties as follows:
You may have noticed that the Value property was entered as ".". In general, values within the rules editor are treated as follows:
Number (example: 1234): A decimal number that can be used for calculations.

"Text" (example: "Hello World!"): Text is always enclosed in double quotes. If not, it will be treated as a variable (see below).

Variable (example: FROM_ACCOUNT): A variable is a field that contains data. It can be numeric or text. By convention, variables should be typed in UPPER CASE, although this is not enforced. Variables may not have commas or double quotes in their name.

Array variable (example: HEADERS): An array variable is essentially a text variable formatted to contain keyed arrays in a format that is readily recognized by applications and browsers (JSON). There are rules available to convert between JSON and CSV formats too.

CSV (example: A,B,C): A list of values separated by commas. If the values are strings, double quotes around them are not required (unless they have a comma in them).
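For example, an array variable such as HEADERS might hold a JSON value along the following lines (the header names and values shown are purely illustrative):

{"Accept": "text/html", "User-Agent": "Mozilla/5.0", "Host": "www.google.com"}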
Note: Sometimes it can be difficult to know if a property value requires a constant (no quotes required) or a value (quotes, a number or a variable required). To assist you in knowing what to put, property values are light orange input fields, whereas constants are white.
We now have our condition in place, and we can connect it up to the data flow. All data arrives into a rule set from the green dot:
To connect the If Condition to the incoming data, click the ? image on the rule, drag a line to the green dot and then release. An arrow will appear:
All rules work this way. They get their input on the left and exit through one or more "chain points" on the right.
Now that the Proxy Server is running, the browser needs to be configured. There are a number of different ways of doing this, depending on the browser of your choice.
Our preferred method is to use one browser (e.g. Chrome or IE) for managing the console and another browser (e.g. Firefox) for browsing via the Proxy Server. The advantage of this approach is that Firefox has its own local proxy settings allowing us to run basic queries and other web browsing unrelated to our testing in the non-proxied browser.
Note: When using the Composable Architecture Platform browser proxy for accessing secure web sites over HTTPS, you will encounter certificate warnings in the browser. These warnings are relatively easy to get around by clicking on the Advanced button and adding an exception. However, with the advent of HTTP Strict Transport Security (HSTS), this has become impossible to do, as the browser will refuse to add the exception.
The Browser Certificate Installation Guide (in the documents folder) provides instructions on how to overcome this problem by installing a trusted certificate authority into your browser that Composable Architecture Platform in turn will use to generate valid replacement certificates for each SSL site on the fly.
The following shows how to configure the browser proxy in Firefox Quantum 60.0 on Windows 2012 Server: select Options, then click on Network Proxy > Settings:
Set the proxy options as shown below:
Normally the starting point for a new rule set is using the New rules wizard. We will cover that later, but for the purpose of simplicity this exercise will instead build a new rule set from scratch. Return to the console and click on Rule Sets:
In the action window select your new repository and give the rule set a name (in this case NoAds) and click on Create:
Note: The rule set name should always be a single word with no spaces.
A new rule set is created, ready for us to edit:
Click on Update to start editing the rule set. A pop-up window will appear showing the rule set in the Rules Editor:
Note: If no pop-up appears, check your browser's pop-up blocker. Pop-ups (though blocked by some users) are useful for the rules editor: they allow you to have many rules editing windows open at the same time and edit them all concurrently (including copying and pasting between them).
We encourage you to expand some of the elements of the Rule Catalog to see what is available. The complete rules reference is also available as a PDF document from the main console page.
At this stage you should also add a short description of what your rule set is going to do. Do this by clicking on the Rule Info tab and keying in a short description of the purpose of the rule set:
The next step is to start building some rules to handle the search result. The first consideration is that the rules should only apply to search results, not to items such as images and CSS files. Normally the New rules wizard would insert a special rule to take care of that problem, but with Google there happens to be a very simple solution: any request that has a dot (.) in it is sure to be non-HTML.
Other sites may use some other consistent extension for pages (such as .php, .jsp or .html), but for Google it is pages with no extension at all.
Therefore, our first task is to filter out all requests with a dot (.) in them, and to do this, we need a condition. Expand the Conditions group and drag an If Condition from the tree onto the canvas:
At this point, move your mouse over the If Condition on your canvas, right click it and select Help. The expanded help for the rule appears:
All rules have this help available. In addition, in the bottom left corner of the rules editor you will also see a summary help notice:
These help features are often useful when trying to find the best rule to suit a specific purpose (as some rules may sound very similar).
The next step is to change the actual server response before it is sent to the user. In our case this change consists of the html string replacement we identified earlier. The rule for a string replacement is called String Replacer. Locate it and drag and drop it onto the canvas. How to connect it up should be easy now:
Notice that we connect both the Found and NotFound chain points to the following rule.
We do this because not all Google pages display ads. This time the properties are set as follows:
There are a number of different ways to work out what actions need to be performed in the rules. In this case, the only action is to alter the response, so we need to determine where to make changes. Browsers like IE, Chrome and Firefox all provide developer tools to help identify specific elements in the page source code. In all of those browsers on Windows, press F12 to access the debugging tool. For other platforms, please check the browser help instructions for how to access the tool. They all work in a similar fashion, but we will just cover Firefox Quantum version 60.0 operating on Windows 2012 Server in this example.
Press F12 to open up the Inspector:
Click on the html inspector tool:
Now select the sponsored ads box:
This is where it is useful to know HTML, especially when dealing with a multinational site such as Google, as the tags tend to change from country to country. In our example, it is worth noticing that there are various advertising tags output within the source of the page.
There is a DIV with the ID "rcnt". To make the ads disappear, you need to hide that tag using an inline CSS style, as shown below.
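For illustration (the exact markup differs between Google versions and countries), the opening tag:

<div id="rcnt">

becomes:

<div id="rcnt" style="display:none">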
With this information to hand, the next step is to start building a rule set.
IMPORTANT NOTE: Individual versions of Google will differ depending upon operating system, browser, and country. Make sure to work out the right way to make this modification in the version being used.
The next step is to forward the request from the browser to Google's server so that we can get a result to work with. The rule to use for this is called HTTP Server Execute. Rather than trying to locate the group the rule is in, this time we will search for it. In the rule editor, click the Searchable tab:
In the search box type execute. The search list updates for each character typed and quickly locates the rule as shown:
Once again, drag the rule onto the canvas and then connect the False (URIs that do NOT contain a ".") chain point from the If Condition rule to the input of the HTTP Server Execute rule:
Once again set the properties as follows:
What we are doing here is requesting the response from the server. The response will be loaded into the RESPONSE variable; the content type of the response will be supplied in the CONTENT variable. We have chosen not to override any part of the request (although in theory you could override the "dishwasher" query to be something else entirely).
Finally, we have elected to obtain the headers and status code from the server as well. We will need all of this information later to send back a proper response to the user.
Now that your console is fully operational, we are ready to take you through a basic example that illustrates how to use it. Our example will show you how to remove all advertising from Google search results.
Even though this example has limited real world practical use (unless you wish to run it on your corporate internet gateway), it provides a basic case study that shows many fundamental features.
We have only one final step to complete the rule set: we must return the changed response to the user. This is done with the HTTP Response rule:
The final properties are set:
Our rule set is now complete, so save and exit the editor.
IMPORTANT: If you are using Google Chrome to edit the rules make sure to hit the Save button in the rules editor before closing the pop-up window.
Our rule set is now complete, and the next step is to get it running on the Proxy Server. This requires a configuration. Configurations provide all of the instructions for how rule sets obtain their input, how they connect to databases, under what circumstances rule sets are run and so on.
To create a new configuration, click on Configurations in the console administration tree:
The create new configuration page is shown. Select your Google Ad Remover repository, enter the file name RemoveAds and a short description:
Click Create to create the configuration. You will notice that the configuration automatically selects the NoAds rule set. However, if you have multiple rule sets you will need to ensure that you set the correct one.
The final step in this case is very simple. We go back to the Firefox browser and hit refresh on the Google search. Remember that before the result looked like this:
And after the refresh it now looks like this:
So, with a bit of preparation and just 4 rules, we have transformed the Google search result.
Our configuration is now complete, and we can deploy it to the Proxy Server to test it. Within the configuration page, click on Deploy:
Clicking Deploy does two things:
The configuration is automatically saved
The configuration is automatically verified for errors
If everything is OK, you will see the server selection window. Just like you did with the BasicWebTrial configuration, select the Proxy Server, check the Restart immediately box, and click on Deploy.
Once again you will see the deployment process followed by the status screen showing ready:
Our configuration is almost complete and ready to deploy. The last step is to define the input source for the X Engine. Click on the Input Source tab and set the From Server Type to Production, and the Source of data to Receive web application input:
Each configuration is divided into 6 tabs with settings. In this section we cover these tabs in detail.
The General tab contains all of the basic information about a configuration.
The file name should be a single word (no spaces). You can rename the configuration by changing the name and clicking "Save".
The description provides an easy way for other people to understand what your configuration does and serves as basic documentation.
Every configuration must have a rule set. This is a mandatory field.
Content rule sets allow you to manage specific content (new pages, images and so on), that you can introduce to the application. If the target server definition has a context path set for serving content files, you can optionally check the Use server context path checkbox to use that as the defined directory path.
If you have rules that should be executed before your main rules, or immediately after they have all completed, you can define them in the configuration.
Please note that if you set a startup or completion rule set, the X Engine will be restricted to running in a single thread to ensure the correct order of events.
Modes are named collections of rule sets and content rule sets that can be used to replace the default rule sets that are running. For example, if you wish to take a website offline for maintenance, you can create a “Maintenance” mode and assign it to rules that display a maintenance page instead of your normal website.
The input source tab provides details to the X Engine about where input is coming from and how to deal with it.
The server can be either Production, Multi-Protocol, or Test. This makes the configuration target a specific server type and determines which input adaptors are available to select from.
A critical part of the configuration is the input adaptor or “source of data”. The options available depend on the type of server selected. As a general rule, the file name or URL being processed will be made available by the input adaptor as the variable URI (Uniform Resource Identifier).
For file names, this includes the full file path in the file system dependent format.
Input adaptors are frequently added via extensions. At the time of writing, the following input adaptors are available by default:
Execute a load test against a server: This input adaptor is only available for production servers. It allows you to take a stress test rule configuration generated by the “New rules wizard”, modify it to suit the application, and use it to generate a load against a website. You can control ramp up times and total threads as well as think times.
Please see the “Using the New Rules Wizard for stress testing” section in this manual for more information.
Process multi-protocol input: This input adaptor is only available for Multi-Protocol servers.
It allows you to take input from any protocol defined within the administration section of the console and control the input, proxy and output of that protocol.
Protocols supported include (but are not limited to): MySQL, DNS, Telnet, FTP, ISO8583 and SMTP.
Transports include: SSL, TCP and UDP.
Process a single CSV file: This input adaptor is only available for test servers. It allows you to define each column in a CSV file that you wish the server to process. The file must be present amongst the test data files uploaded to the console.
Process a single multiline CSV file: This input adaptor is only available for test servers. It allows you to process CSV files that have records spread over more than one line. Typically, this could be a file that looks as follows (the columns being Record ID, Record Type, Value Column 3 and Value Column 4):

12345,R1,John,Doe
12345,R2,Melbourne,Australia
23456,R1,Bob,Smith
23456,R2,London,United Kingdom
34567,R1,Jane,Doe
34567,R2,Auckland,New Zealand

To process the above file, you would need the following definition in the Input Fields tab of your configuration:
The break column (BREAK_COLUMN) defines which column number is used to identify a unique record. The record column (RECORD_COLUMN) defines which column number contains the record type.
Each individual field for an entire record is then defined as a field name, with the label indicating which record and column number that field can be found in.
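To make the layout concrete, consider the sample file shown above: both lines of a record share the break-column value (column 1), and the record column (column 2) tells the adaptor which line supplies which fields. The following is a minimal Java sketch of that assembly logic (an illustration only, not the product's code; the field names are hypothetical):

import java.util.LinkedHashMap;
import java.util.Map;

public class MultilineCsvDemo {
    // Field name -> { record type, 1-based column number }, mirroring an Input Fields definition
    static final Map<String, String[]> FIELDS = Map.of(
            "FIRST_NAME", new String[] {"R1", "3"},
            "LAST_NAME",  new String[] {"R1", "4"},
            "CITY",       new String[] {"R2", "3"},
            "COUNTRY",    new String[] {"R2", "4"});

    public static void main(String[] args) {
        // Two physical lines with the same break-column value form one logical record
        String[] lines = {"12345,R1,John,Doe", "12345,R2,Melbourne,Australia"};
        Map<String, String> record = new LinkedHashMap<>();
        for (String line : lines) {
            String[] cols = line.split(",");
            for (Map.Entry<String, String[]> field : FIELDS.entrySet()) {
                String recordType = field.getValue()[0];
                int column = Integer.parseInt(field.getValue()[1]) - 1;
                if (cols[1].equals(recordType)) {
                    record.put(field.getKey(), cols[column]);
                }
            }
        }
        // Prints the four variables for the record (iteration order of Map.of may vary)
        System.out.println(record);
    }
}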
Process a single identifier delimited file: This input adaptor was designed to rapidly traverse files that contain somewhat structured data, where each piece of data is preceded by a recognizable identifier, and all of the identifiers are in the same order (although missing identifiers are tolerated).
This adaptor relies on the data in the file containing a format where an identifier can be used to spot breaks in the data. The following example illustrates how this adaptor can be used:
Sample data that could be provided to this adaptor could be:
The first label listed in the input fields for the configuration MUST be the break for each new record. A record can be on more than one line in the file.
Processing the above shown data would result in two records, the first with the variables set as follows:
The second record contains:
Strings are always trimmed of leading and trailing blanks but can contain more than one word. If you have identifiers in the file that you wish to ignore, you must still specify them in the list, or they will be considered part of the value of a previous identifier.
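The following Java sketch illustrates the traversal logic described above, using hypothetical identifiers and data (the product's actual parsing is internal to the adaptor):

import java.util.*;

public class IdentifierDelimitedDemo {
    public static void main(String[] args) {
        // The first identifier is the record break; the data is assumed to start with it
        List<String> identifiers = List.of("ID:", "Name:", "City:");
        String data = "ID: 1 Name: John Smith City: London ID: 2 Name: Jane Doe City: Auckland";

        List<Map<String, String>> records = new ArrayList<>();
        Map<String, String> current = null;
        String pendingId = null;
        StringBuilder value = new StringBuilder();

        for (String token : data.split("\\s+")) {
            if (identifiers.contains(token)) {
                if (pendingId != null) {
                    current.put(pendingId, value.toString().trim());
                }
                if (token.equals(identifiers.get(0))) {
                    current = new LinkedHashMap<>();   // break identifier starts a new record
                    records.add(current);
                }
                pendingId = token;
                value.setLength(0);
            } else {
                value.append(token).append(' ');       // values may span more than one word
            }
        }
        if (pendingId != null) {
            current.put(pendingId, value.toString().trim());
        }
        // Two records: {ID:=1, Name:=John Smith, City:=London} and {ID:=2, Name:=Jane Doe, City:=Auckland}
        System.out.println(records);
    }
}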
Process a single XML file: This input adaptor is only available for test servers. It allows for an XML file to be processed by the X Engine. Each XML tag in the file (and its attributes) will be converted to a unique variable name. For example, the following XML document:
results in the following variables being generated:
It is important to note that all tags below the root tag will have a counter attached to them to ensure uniqueness. This is what results in the "_1" being added to the "name" tag in the example above.
If more than one “name” tag is present, the conversion will be as follows:
Which results in the following variables being generated:
As this process can result in some rather long variable names (especially when processing XML documents such as SOAP requests), the use of the Alias rule is encouraged to simplify rule writing.
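As a hypothetical illustration of the naming convention (the manual's own example is shown as a screenshot), an XML document such as:

<customer>
    <name>John</name>
    <name>Jane</name>
</customer>

would generate variables along the lines of:

customer_name_1 = John
customer_name_2 = Jane

with the exact variable names depending on the document structure.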
Process all CSV files in a directory: This input adaptor is only available for production servers. It will look for files in a folder/directory. When one is present, it will process it and then delete the file. Each field in any supplied CSV file must be defined in the configuration.
Process all identifier delimited files in a directory: This input adaptor is only available for production servers.
This input adaptor will look for files in a folder/directory. When one is present, it will process it and then delete the file. The data within the supplied file is converted into unique variable names as outlined in the “Process a single identifier delimited file” adaptor.
Process all multiline CSV files in a directory: This input adaptor is only available for production servers.
This input adaptor will look for files in a folder/directory. When one is present, it will process it and then delete the file. The data within the supplied file is converted into unique variable names as outlined in the “Process a single multiline CSV file” adaptor.
Process all XML files in a directory: This input adaptor is only available for production servers.
This input adaptor will look for files in a folder/directory. When one is present, it will process it and then delete the file.
The tags within the supplied XML document are converted into unique variable names as outlined in the “Process a single XML file” adaptor.
Process free format test data: This input adaptor is only available for test servers.
This adaptor is specifically designed to receive data from a file generated by the “Test Data Creation” rule (TST files). There is no need to define any input fields, as the data within the file is composed of a field definition list as well as a data value list for each record.
This adaptor is designed to process data captured from production servers that handle web application input, where the actual variable names change for each request.
The test server will be able to emulate what happens on an actual production application server without the requirement to simulate anything in a test environment.
This adaptor is also useful for pre-testing any new rule set to evaluate the impact of installing it into production.
Process on heart beat: This input adaptor is only available for production servers.
This input adaptor is used to process the same rule set at regular intervals. You can specify the delay between each run in milliseconds (for example, 60000 ms runs the rule set once a minute).
Process once and stop: This input adaptor is only available for production servers.
This input adaptor will run the rule set once upon startup and then stop. This is predominantly used for testing rules.
Receive input via HTTP POST: This input adaptor is only available for production servers.
This input adaptor is designed for high-speed processing of a specific HTTP POST (for example from a known JSP or HTML page). Each field that the X Engine is expected to process must be defined in the configuration, just as if the input came from a CSV file.
The field names listed must be the same name (case sensitive) as they appear in the form post from the HTML that submits the request.
It is important not to confuse this adaptor with the “Receive web application input” adaptor, which is slightly slower but much more flexible.
Receive web application data: This input adaptor is only available for production servers.
This input adaptor is probably the most flexible, but also the most complex. It is capable of receiving data from any HTTP request, be it a GET or a POST, and translating it into variables that can be used by the X Engine.
The adaptor understands and translates standard HTML, XmlHttpRequest (AJAX) and SOAP requests, as long as the appropriate content type is set in the HTTP request.
For HTML POSTs and GETs, the URL parameters and form fields are translated directly into input variables, with each variable name matching the corresponding parameter or field name. For XmlHttpRequest and SOAP requests, the tags within the supplied XML document are converted into unique variable names as outlined in the “Process a single XML file” adaptor.
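For example (a hypothetical request), a GET such as:

GET /account/main.jsp?user=jsmith&view=summary

would produce the input variables user (jsmith) and view (summary), along with the URI variable containing /account/main.jsp.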
This particular input adaptor allows you to enforce some web application security settings:
For HSTS:
At the very minimum, you must provide a ‘Max age’ value in seconds. The recommended value is 31536000 seconds (one year).
Optionally, you can check the box to include sub domains.
Preload is a method whereby the most common browsers ship with a list of sites that MUST use HSTS. Google maintains a list of sites that are preloaded as requiring HSTS, and that list is used by the Chrome, Safari and Firefox browsers. To have your site registered as preload, you must apply to Google for inclusion. Google will verify that you have the preload flag set against the HSTS header before adding you; if you do not have this flag, Google will reject your application to be added to the list.
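With all three options enabled, the response header sent to the browser follows the standard HSTS syntax:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload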
Note: Any of the above settings that modify cookies require that you are running on a Servlet Specification 3.0 or later web application server. For the standard installation that means Jetty 9 or later.
After the selection of an input adaptor in a configuration, there are a number of fields specific to that adaptor. The largest difference is typically between test and production adaptors, so we will show two examples here:
The above scenario is for a production input adaptor that processes all files "dropped" into a given directory. The input adaptor will poll that directory and whenever a new file is added it will be processed and then deleted.
The additional fields are as follows:
Selecting this option causes the X Engine to always collect test data by default. You can also start collecting test data on demand using the server status view.
This is the maximum number of records that the X Engine will keep in memory. For data with a large record size, set this value conservatively to avoid retaining too much memory.
This is the sub-directory from the home folder where the X Engine will look for files. For other input adaptors that do not use files (such as the web application adaptors) this can be a different field name providing different information.
Selecting this option will cause the X Engine to automatically start when the server is started. There is no need to selectively click the start button.
Test servers behave differently to production servers in that they always take a single file as input data, process that file and then stop. The following is an example of the settings for a test input adaptor:
Test data is the name of the file to process. This file must be in the test data section of the console tree in the same repository as the configuration.
The testing flags are used to control how the X Engine interacts with the environment around it:

Flag                    Usage
Update Internal Data    If selected, the rules in the X Engine will update data in tables that it can directly access. This includes the internal case manager.
Update External Data    If selected, the X Engine will write data to external systems that are not database connected. This could, for example, be an external case manager that receives cases via a web services call.
Send Alerts             If selected, the X Engine will send alerts such as emails, SMS messages and other forms of external messages.
The remaining input source fields are generic. The following is a list of their meanings:
This flag determines if messages written to the console (via any of the List rules) are also written to the system's standard out log.
Enable Debug Mode turns on the “List Variable for Debug” rules, as well as “Exit” rules with “List Variables on Exit” set to “Debug mode”, so that all variables will be listed at selected points throughout the rule sets.
The Fail Open setting determines how the X Engine deals with a fatal error. If selected, the X Engine will automatically stop and let all normal traffic proceed transparently should a fatal error occur. If unselected, the X Engine will attempt to recover from the failure and continue running.
This setting is used to control how the X Engine detects infinite loops. Effectively every connection (chain) between rules has a counter built into it. When the number of chain events reaches the count set here, the X Engine will consider itself looping and will terminate to avoid impacting other services.
This setting determines how much performance data is collected as part of the X Engine execution. Please see the performance data section for more information.
Input fields are used for a variety of purposes. They can be used to identify column settings for input adaptors and also to determine global settings that can be changed at the configuration level without changing any rules. The following shows such an example:
When the X Engine is running, it is possible to set global fields. These are fields that can be accessed by the X Engine at any stage and are not dependent on input from other sources. Global fields can be changed during the execution phase of a rule set, allowing you to potentially alter the flow of rules, set different thresholds or check for different conditions.
The following shows an example of defined global fields:
It is important to know that global fields are persistent. This means that the default value set in the configuration only applies for the very first time a global field is set. After that point, the global field retains its set value, even after the X Engine is restarted.
The field name is the global variable name set when the value is changed. The Label is the label that the user sees.
The field type for each global field is important, as are the allowed values. You can set field types as follows:
Text: This creates a simple text field that can be changed. The allowed values have no effect.
Number: This creates a simple text field for numeric input that can be changed. The allowed values have no effect.
Switch: This creates an on/off style switch that can be changed. The allowed values represent the values set for the ON condition, followed by the OFF condition.
Slider: This creates a slider that can be changed. The allowed values represent the min value, max value and optionally a third value representing the increment. Only integers can be used.
List: This creates a drop-down list of values that can be picked. The allowed values represent each of those selections.
The following shows an example of how the above defined values are displayed in the server settings change function:
Many of the rules available in Composable Architecture Platform are capable of accessing data in databases, either locally to Composable Architecture Platform or externally stored somewhere within the network.
As a user, you will need to know the name of the database that you want to connect to, and in some cases also the table and schema names.
Configurations need to list all of the databases that the rules within them are capable of accessing. This allows the deployment system to provide additional information to the Composable Architecture Platform servers about how to access those databases at a technical level. The databases themselves are normally defined by a system administrator.
The following shows a configuration of a sample database:
You must enter the database name and the type of database (driver). The list will vary depending on the types installed on the network.
If you are writing rules that may be used on different systems where the database names may differ, you can use a database alias name. The database alias name in the rule sets will be mapped back to the database name defined in the configuration.
There may be times where you wish to access a database with a schema name that varies between test and production systems. To allow this, you can override the schema name in the configuration. If you leave the schema name blank, the default value configured by the system administrator will be used.
The system defines where the actual database is located. You can combine this with the defined servers to allow JDBC drivers to connect to any given location.
You have the ability to set a list of rule sets that execute at a given time interval. These rule sets are independent of any input data and simply run on a repetitive cycle. There are two types of cycles: Delay and Real Time:
Delay timers simply execute the rule set, then sleep for the delay period and then execute again.
Real time rule sets will run at a precise interval, regardless of the time it takes to execute the rules.
An example of a timer setting is as follows:
In the above example, the OnTimer rule set will be executed, then pause for 30 seconds and then repeat.
Please note that the data object used within the timer stays the same. This means that you can set variables within the timer rule set and use those same variables the next time the rule set executes (for example variables for a counter).
Note that for web applications, the timers will not start until the first real transaction has been processed.
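For readers familiar with Java, the distinction between the two cycle types mirrors the two scheduling styles of ScheduledExecutorService; a minimal sketch (not product code) using the 30 second interval from the example above:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimerStyles {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        Runnable ruleSet = () -> System.out.println("rule set executed");

        // Delay timer: the next run starts 30 seconds after the previous run finishes
        scheduler.scheduleWithFixedDelay(ruleSet, 0, 30, TimeUnit.SECONDS);

        // Real time timer: runs every 30 seconds regardless of how long each run takes
        scheduler.scheduleAtFixedRate(ruleSet, 0, 30, TimeUnit.SECONDS);
    }
}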
For each configuration you create, you have the option of producing a complete documentation set. To create this, click on the Document button:
Then select the target server and click on the Document button.
A pop-up window will appear that lists a complete view of the configuration, the selected server, JDBC drivers, databases, rule sets, data files and so on.
Optionally, you can include the actual contents of data files and content files. For each of the respective files, simply tick the check box as shown below and click on Save.
IMPORTANT: The output in the pop-up window is very browser dependent and the quality of the results may vary. We recommend using the print preview option in individual browsers to see the final result.
At the time of writing the following conditions applied:
Firefox 11.0 produced a fairly faithful representation of the intended output but was slow.
Internet Explorer 9 produced a reasonable representation of the intended output, was faster than Firefox, but did not always respect page breaks.
Chrome 18 produced a terrible graphical look as it does not print background images in pages but was otherwise fast and true to page formatting.
The previous section gave you a quick introduction to working with the X Engine. We will now go into much more depth and provide a deeper description of the various features of the product.
This section covers how to simplify the rule sets and keep them readable by dividing them into smaller blocks. You do this by using inclusions and exit points.
To illustrate this, please find below a rule set that has the ability to do some basic fraud prevention analysis on incoming HTTP requests:
If you were to copy all of the rules into your rule set every time you needed to perform these checks, you would end up with very large unwieldy rule sets. To avoid this problem, Composable Architecture Platform gives you the ability to include (or embed) existing rule sets into new rule sets as if they were just another rule. The following example illustrates this:
The BrowserInfoCheck rule set, shown here as part of the Qwerty AccountMain rule set, behaves just like any other rule. This was done by dragging it from the Rule Sets folder in the editor and onto the editor canvas. The key to making this approach work is to use Rule Set Exit rules in the embedded rule set. The rule set exit rules in any given rule set determine what chain points will be available for that rule set when it is embedded into another page.
If you look back to the BrowserInfoCheck rule set, you can see the OK, Error and Warning exits.
There are no limitations on the number of rule sets that can be embedded within one another.
Embedded rule sets have many advantages:
Complex rule sets can be built for a given function, keeping the logic central
You can share rule sets between functional areas without the need to copy
You can have experts build rule sets for novice users
If you have rule sets that perform a unique discrete function, it can be advantageous to wrap them up so that they look like a new rule that can be included at the repository level. You can specify the group, name and a set of input and output parameters for that rule. This makes the rule set show up in a specific group and parameters subsequently show up as properties of the included rule set.
To define parameters, go to the rule set's “Rule Info” tab:
Enter your group and name and click on the Add Parameter button to create a new parameter:
The Parameter Name defines the name of the variable used internally by the rule set for this parameter.
The Parameter Label fields determine what is displayed as the property description when the rule is included.
The Parameter Type determines how the parameter is used. There are three options:
An input parameter is set when the rule set is called. So, a top level rule set could use the variable “Name” within it and you could define a parameter equally named “Name”. Once a rule set has parameters, it can no longer “see” or access variables outside its own boundaries.
So, at the top level the variable “Name” could contain the value “Smith”, yet a value of “Jones” could be passed to the parameterized rule set (also with the variable “Name”) and the embedded rule set will only see the value “Jones”.
Input parameters to a rule set can be constants (in quotes), a variable or a value from a drop-down list. To make the input parameter a drop-down list, supply a CSV list of values in the Set values field.
An output parameter is set when the process flow exits the rule set at the top level. So, when the embedded rule goes down a chain point or returns from its function, the original set of variables will come back into effect and only variables set against the output parameter will be transferred from the embedded rule set to the top level rule set.
Output parameters can only be variables.
I/O parameters are variables that are read on input and set again on return.
I/O parameters can only be variables.
To illustrate how parameters can be used, consider the following parameter definitions against a rule set:
And the following rule set:
The above rule set will simply add to or subtract from a value.
When this rule set is included into another rule set, the following parameters become visible:
The user can now pass any given variable name to this rule set, and the included rule set will perform the expected math on it (using its own internal variable names) and return the result in the passed variable.
This approach enables the building of generic rules that are guaranteed to not have a variable name overlap with other rule sets.
You can now go to the repository level and select the individual rule sets that should be exported from that repository:
When another repository includes this particular repository, the only rule set available will be named “Quick Add” and will be found in the “Math” group, as specified on the Rule Info tab.
Rule sets can be compared to a visual programming language. What you can do with the rule sets and custom rules is almost unlimited. In the following sections we will cover rule sets and the rules editor in more detail.
As already shown in the introduction, you can add documentation information to each of your rule sets. As rule sets are often interlinked, we strongly recommend doing this.
The file name for your rule set should be a single word (no spaces). It can be mixed case. To rename a rule set, simply change the name and click on rename.
To avoid more than one person working on the same rule set at the same time, the rule editor employs a locking mechanism. Once a rule set is opened in the rule editor, another user can only open it for viewing. You will receive notification messages if you attempt to open a rule set for editing in more than one window at a time.
The “Maintain Rule Set” page (shown above) will show which user currently holds a lock.
Data files represent any kind of additional data that you may need to have deployed on the server for it to be able to execute a given rule. Examples of data files are CSV files detailing ISO country codes and geo location databases.
You can upload these data files to the console server and from there deploy them out to any Composable Architecture Platform server on your network in a single operation.
As a rule, those files will be installed into the Composable Architecture Platform HOME folder.
Most rules will register a listener to detect if a data file they depend on has been changed. This means that a changed data file will be picked up by a rule set automatically without the need for a restart. This detection typically happens within 30 seconds of deployment.
The rules editor is designed to be easy to use. There are however a few tricks that will make using the rules editor even easier. The following covers a few of these tricks.
If you need to move multiple rules around in unison or select them so that you can copy them to another rule set, hold down the CTRL key whilst clicking and selecting rules. Alternatively, you can hold down the mouse key and drag a rectangle around the rules you wish to select.
Rules can be cut, copied or pasted within the same rule set or between open rule sets by right-clicking whilst the rules are selected. To paste the rules into a new position, right-click on the canvas where the rules should be placed and select Paste.
Note: Not all browsers support the right-click feature. For this reason, edit options can also be obtained by holding down Shift and left-clicking.
If you mistakenly connect one rule to another, you can remove the connection by right-clicking on the chain point and selecting Disconnect.
Many of the rules in the system take a variable, text or a number as a parameter. Generally speaking, variables may only contain letters, numbers and the characters: ‘_’, ‘:’ and ‘.’, and they must start with a letter.
There will be times where a variable can be confused with a text literal, and for that reason text literals should always be enclosed in double quotes. For example: ABC is a variable whereas “ABC” is the text ABC.
Numbers on the other hand are unambiguous and can just be keyed as numbers.
When entering a CSV list of values, there is no need to enclose the entire block in double quotes. Since CSV text has commas in it, it will automatically be detected as a list of string elements.
There is no need to close a rule set to test it. You can keep multiple rule sets open in multiple windows, deploy your rules, test and then return to the already open windows to continue editing. Click the save button rather than the close button to stay on the page.
Several of the standard rules make use of a string matching feature known as regular expressions. Books have been written about regular expressions and it is beyond the scope of this manual to cover more than the very basics.
In its most basic form, you can use regular expressions to see if a certain text is available within another, to count characters and to look for certain pieces of text at certain positions within words.
An example of a regular expression would be: (ab|cd)
This expression will check if a text contains the character sequence “ab” or the sequence “cd”. So both “baby” and “lcd” would be a match.
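If you wish to experiment with expressions outside the product, any standard regular expression engine behaves the same way. A minimal Java sketch using the example above:

import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("(ab|cd)");
        for (String text : new String[] {"baby", "lcd", "trees"}) {
            // find() checks whether the text contains a match anywhere
            System.out.println(text + " -> " + pattern.matcher(text).find());
        }
        // Prints: baby -> true, lcd -> true, trees -> false
    }
}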
The following tables list some of the common uses of regular expressions and how they can be used to validate text.
Character(s)   Description                                                Example   Matches                   Does not match
[]             Matches any of the enclosed characters                     AB[cC]    ABC, ABc                  abC, def
[^]            Does not match any of the enclosed characters              AB[^cC]   ABD, ABz                  ABc, ABC
[a-z]          Range                                                      AB[C-F]   ABC, ABD, ABF             ABG, ABc
.              Any character except newline                               N.        N1, N2                    N
\d             Any digit between 0 and 9                                  N\d       N1, N2                    NA
\D             Any non-digit character                                    N\D       NA                        N1, N2
\w             Any letter, digit or underscore (equal to [a-zA-Z0-9_])    ab\wd     abcd, abCd, abad          Ab/1d
\W             Any character other than a letter, digit or underscore (equal to [^a-zA-Z0-9_])   ab\Wd   Ab/1d   abcd, abCd, abad
\s             Any single white space character (space, new line etc.)    ab\sde    ab de                     abde
\S             Any single non-white space character                       ab\Sde    abcde                     ab de
Character(s)   Description                                             Example        Matches                  Does not match
*              Zero or more occurrences                                Tes*           Test, Television, Tess   Two, Trees
?              Zero or one occurrence                                  Colou?r        Color, Colour            Colouur
+              One or more occurrences                                 Tes+           Test, Tess               Television
{x}            Exactly x occurrences of the previous character         Hel{2}o        Hello                    Helo, Helllo
{x,}           x or more occurrences of the previous character         Hel{2,}o{2,}   Helllloooo               Hello, Heloo, Howdy
{x,y}          Between x and y occurrences of the previous character   Hel{2,3}o      Hello, Helllo            Helo, Hellllo
Character(s)   Description      Example         Matches          Does not match
()             Group together   (abc){2,}       abcabc           abc
x|y            Either x or y    (abc|def){2,}   defabc, abcabc   abc, ghjabc
Character(s)   Description                           Example   Matches                   Does not match
^              Match at beginning of string          ^hello    hello world               say hello
$              Match at end of string                hello$    say hello                 hello world
\b             Match at beginning or end of a word   \bhel     say hello, hello world    Chellos
\B             Match inside a word                   \Bhel     Chellos                   say hello, hello world
(?=x)          Only if followed by x                 ab(?=c)   abc                       ab, abd
(?!x)          Only if not followed by x             ab(?!c)   ab, abd                   abc
Lists and data sets provide an efficient way to work with keyed data that needs to be stored either in memory or a database.
Lists are capable of storing normal variables, lists or data sets, allowing effectively for multi-dimensional arrays.
There are two types of lists: Regular lists and fixed sized lists. When you insert a value into a list, the list will automatically be created as a regular list if it hasn’t already been created.
Fixed size lists are extremely useful for memory caching. Elements inserted into a fixed size list will stay there until the maximum size of the list is reached. Once that happens, the oldest element that has not been accessed (read or updated) will be removed from the list.
Elements inserted into a list must have a fixed key. If a new element is inserted with an existing key, the existing element will be replaced.
It is possible to create global lists by creating the list in a startup rule set and then setting it as a global variable. When the global list is read from a normal rule set, any changes made to that list as a local variable will directly affect the global list. This provides a means for caching data at a global level.
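Conceptually, a fixed size list behaves like an access-ordered LRU cache. The following Java sketch illustrates the eviction behaviour described above (an analogy only, not the product's implementation):

import java.util.LinkedHashMap;
import java.util.Map;

public class FixedSizeListDemo {
    // Access-ordered map that evicts the least recently used entry once maxSize is exceeded
    static class FixedSizeList<K, V> extends LinkedHashMap<K, V> {
        private final int maxSize;

        FixedSizeList(int maxSize) {
            super(16, 0.75f, true); // true = order entries by access, not insertion
            this.maxSize = maxSize;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    }

    public static void main(String[] args) {
        FixedSizeList<String, String> list = new FixedSizeList<>(2);
        list.put("a", "1");
        list.put("b", "2");
        list.get("a");       // touching "a" makes "b" the oldest unaccessed element
        list.put("c", "3");  // exceeds the maximum size, so "b" is evicted
        System.out.println(list.keySet()); // [a, c]
    }
}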
Data sets provide a way to create a collection of correlated data. For example, you can define a data set called “Fruit” with the properties Name, Color and Shape. Once you have defined a data set, you can create instances of it in a database or in a list. For example: Apple, Red and Round.
The X Engine will automatically handle the correct storage of the data set in a database and properties of the data set can be added and/or removed at any time in its lifecycle. So, if at a later stage you need to store another property in your Fruit data set, you can simply add it to the definition.
Data sets should be defined in a start-up rule set and can only be defined once within the life cycle of a deployment.
Data sets can optionally contain a number of lists. However, lists stored within a data set may only be single dimensional (you cannot have lists within lists).
Once a data set is stored within a database or a list, you can read it by key, delete it, update it etc. To update a data set within a list, you simply create it again with the same key name.
A quick way to get started building rules for a new web application or for stress testing is to use the rules wizard. The rules wizard uses live test data to extract URIs visited and build a structured collection of rule sets.
The rules wizard creates a large number of files, so we strongly recommend that you create a new repository to write the new files to.
Before using the rules wizard, you must first deploy and start either the RuleWizardConf or the StressTestConf configuration found in the Rules Wizard repository to a test server that is protecting the target application (alternatively you can use the zero installation test method which is covered later in the manual). Once done, simply begin navigating the various components of the target web application.
Once you have visited all of the pages you wish to cover with your initial rule set, return to the console and go to the server status screen:
In this example the Qwerty demo application was chosen and there are 19 test records ready to be processed.
Now click on the "New rules wizard" button. You will be presented with the following page that controls the wizard:
At this point you can have the rules wizard create a filter for the rules that automatically excludes static content. It is a comma separated list, and you can easily add new elements.
Make sure that you select either New rule set or Stress test, depending on your requirements.
When you have selected an appropriate repository (in our case, we have used the repository ‘Name’) and reviewed the exclusion list, click on "Create".
After a brief pause, the X Engine will write out a complete configuration and collection of rules. The following pages show how these rule sets are structured for a new rule set:
In keeping with best practice for rules writing, the rules wizard always creates a "Load" rule set. This contains rules that are generic for all URIs. The name of the page will be the name of the repository followed by the word "Load".
This rule set will first check for malformed HTTP requests and, if found, will reject them. Subsequently it filters out static content, adds a tracker rule and then proceeds to the main rule set.
The main rule set page contains a basic structure that determines the URI being visited and then uses a switch to re-direct to a rule set covering that URI.
Each one of those rule sets, in turn, is blank, but is already created and ready to have rules added to it. The new rules wizard creates a quick foundation to get you started with writing rules for your application.
When the rules wizard creates each of the blank templates it includes sample information of the fields gathered and their values in the page description:
You can use this information to identify the fields available to your rules.
As described above, an alternative use of the New Rules Wizard is to create a set of rules for stress testing an application. To do this, you must first deploy the StressTestConf configuration from the Rule Wizard repository to the web application on the browser proxy.
Once deployed, you should work through the application to be stress tested, step by step. Try to complete pages as normally as possible, making sure not to pause unnecessarily during the process, as wait times are recorded.
During the new rules wizard creation, select Stress test instead of New rule set.
The following pages show how the rule sets are structured for a stress test:
In keeping with best practice for rules writing, the rules wizard always creates a "Load" rule set. This contains rules that are generic for all URIs. The name of the page will be the name of the repository followed by the word "Load".
This rule set allows you to set up a multitude of things. First of all, the target server identity, and also the user agent to use. It is very common to modify the Load rule set to pick up randomized values for elements such as users. The following shows a modified Load rule set to include diversified User ID and Password configurations for a Qwerty stress test:
In this scenario, the users and their passwords are picked from a CSV file of valid entries. The settings for the CSV Line Picker rule are as follows:
It is important to notice how the THREAD_NO variable is used to pick a line in the CSV file. The THREAD_NO is incremented for every stress testing agent and should the number exceed the number of entries in the CSV file, the rule will start reading from the beginning again.
The main rule set page contains a basic structure that determines the URI being visited and then uses a Number Sequencer rule to re-direct to a rule set covering that URI.
The wait rule set page is used to define wait times. This delay can be fixed, or it can be a function of the recorded wait time during rule set creation.
The following shows a wait time delay:
The wait time properties can be set as follows:
The default for the percentage random is zero, but it can be randomized to create a more realistic user load.
An alternative is to either remove the wait time completely or insert a fixed delay:
The fail page is used to define any action to be taken if a stress test page invocation fails. The default is to abort the flow and stop the thread:
Each page that is detected will be assigned a unique page name and will be given a unique rule set. The following shows the main page in the Qwerty application:
In this case, the page is a GET and the invocation is relatively simple. For POST requests, the structure looks slightly different:
In this case, the recorded POST variables are set up in a single rule. You can override these POST variables to use pseudo random values (as shown previously in the Load rule set).
The individual pages are where you should include test data, and possibly review the response data from significant pages to ensure that the stress test is progressing well.
There may be times when you wish to test rules against a site without affecting that site in any way and without installing anything. Typically, this applies to an initial first trial of Composable Architecture Platform, tests against websites external to your organization, or for convenience where you are waiting on the installation of a built-in proxy or inline filter.
The zero installation rules testing approach relies on the forwarding proxy and browser proxy built into the console application. The following example will reiterate the sample from the earlier section, Quick product introduction using Google's search page.
The first step in this example is to prepare the browser proxy so that all traffic to and from Google is successfully routed via the Composable Architecture Platform Proxy Server. This will give you visibility of the data and provide all of the information needed to manipulate it.
Many browsers have built-in security features that prevent access to websites with an untrusted SSL certificate, and will block the incoming request without exception. Because it is not possible to install Google’s SSL certificate on the Proxy Server in this example, you need to adjust the Server Definitions for the Proxy Server instead. In Administration, click on the Proxy Server and proceed as follows.
Click on the Forwarding tab and set the Request redirection properties for Google as follows. This example is for a UK IP address request that follows the redirect of Google.com to Google.co.uk based upon the IP geolocation from the originating browser.
The first line entry illustrates the format only and has no impact on the Proxy Server:
http://thishost>http://thishost:8001
http://google.com>https://google.com
http://www.google.com>https://www.google.com
http://google.co.uk>https://google.co.uk
http://www.google.co.uk>https://www.google.co.uk
Once the redirection settings have been input, scroll to the bottom of the page and save the modified Proxy Server definition.
The Proxy Server will now route the HTTP to HTTPS redirection correctly and allow the browser to access the website, even without a correct SSL certificate.
To prepare for the test, start by deploying and starting the BasicWebTrial configuration from the Product Trial repository to the Proxy Server. This will allow you to see the data flowing from the website.
Depending on the type of web browser being used, you should now set the manual proxy configuration to tell the browser to go through the Composable Architecture Platform built in proxy.
The following shows how to configure the browser proxy in Firefox Quantum 60.0 on Windows 2012 Server:
Select Options, then click on Network Proxy > Settings:
Set the proxy options as shown below:
Please note that if you have changed the ports for the browser proxy in the server settings for the built-in proxy, then these settings may differ.
Now go to the website you wish to test (in this example: http://google.com). The data will quickly start flowing to the console:
You can now start writing rules, deploy them to the Proxy Server, and test them. The only place this will have any effect is for transactions coming from the browser with the browser proxy set. For example, you could create a simple content rule that replaces the Google logo, adds a link to your intranet, or scans for bad language used in searches. The possibilities are endless.
By default, the built in Proxy Server settings will only accept connections from the local host (127.0.0.1). This restriction is in place to ensure a given Composable Architecture Platform Server installation does not become an open proxy. The various server settings can be controlled via the server definition of the Proxy Server.
There are two ways that Composable Architecture Platform can be used in a Web Services environment. It can operate as an inline filter, just as it would for a normal HTTP request intercept.
Alternatively, it can be configured to act as a Web Service itself by installing a SOAP servlet into a new (or existing) web application and placing the filter in front of it. To facilitate this, a default servlet is provided named:
software.tomorrow.engine.server.RulesEngineSOAPServlet
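How the servlet is wired in depends on your web application, but a typical deployment descriptor (web.xml) entry would look like the following; the servlet name and URL pattern here are arbitrary examples:

<servlet>
    <servlet-name>RulesEngineSOAP</servlet-name>
    <servlet-class>software.tomorrow.engine.server.RulesEngineSOAPServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>RulesEngineSOAP</servlet-name>
    <url-pattern>/soap/*</url-pattern>
</servlet-mapping>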
This servlet will always provide a valid SOAP response regardless of the input provided to it. The precise response is as follows:
The Composable Architecture Platform inline filter and built in proxy offer the ability to wrap and control the behaviour of a Web application. Usually this is done in conjunction with the Receive Web Application data input adaptor.
Rules in this environment can see all of the post data from the Web browser. By using rules such as the HTTP Request Tracker, HTTP Header Reader and other various rules for interacting with the application’s session object, it is possible to do a lot of additional pre-checking of the request being forwarded to the application before it is even processed.
Rules such as the HTTP Request Saver and HTTP Request Restorer also allow requests to the backend server to be temporarily placed on hold, pending actions by the X Engine.
When writing rules, it is a good idea to keep in mind that you may wish to test your rules without the application server being present. To do this, it is usually a good idea to create a Load rule first, which reaches out to all of the elements of the Web application (the request, dynamic databases, and so on) to collect all of the data required for a decision before making that decision. You can then insert a Test Data Creation rule at the point where all of the data is ready, allowing you to properly test the business functions of your rule set separately on a test server, with all of the relevant data available.
If you need to look at the output from a given request, insert an HTTP Server Execute rule. This will forward the current request to the server for processing and bring back the resulting response in a single variable. You can then scan that variable for information (for example, look for a balance value or a name) using the Scanner rule. This rule allows you to define the text surrounding information that you are interested in, and then extract that information into a variable.
Similarly, you can manipulate the response before forwarding it to the Web browser. As the response is simply stored in a variable, you can use the Replacer rule to modify text within given tags or specific locations inside the response page.
Once you have finished working with the output from a request, it is imperative that you forward it to the Web browser using the HTTP Response rule.
Finally, there may be situations where you wish to simply append additional data to the end of a Web page before it is transmitted to the user. Typically, this would be in the form of JavaScript appended to the end of the page. The best way to do this is to store the data in a data file, upload it to the server, read it into a variable using the File Reader rule, and append it to the server response using the HTTP Response Addition rule.
This section showcases a number of common rule set patterns used when working with Web applications.
One of the first problems always facing you when you start working with a new application is the ability to understand which data flows where under each circumstance. The following rule set pattern, which includes comments, provides a good starting point:
First, the HTTP Request Tracker rule takes care of getting the browser information and adding tracking cookies. Second, a fast lookup to the MaxMind geo location database (which requires a subscription) identifies the origin of the request. Third, the request is written to the console so that you can monitor it in real time. Finally, the data is written to the test data queue so that you can download it for analysis.
This rule is best deployed to a test system during the initial rule writing phase to better understand what variables and pages are available. It is prudent to deploy it initially during a live install to retrieve a large portion of live test data and play it through the desired rule set on the Composable Architecture Platform test server.
A common issue in dealing with application servers that not only serve dynamic content, but also static data (in the form of images, style sheets, fixed HTML), is to filter this content before it even hits the core rule set. This is best done in the Load rule using a Name Splitter and Switch rule as shown:
The name splitter conveniently extracts the extension of the object being requested using the following properties:
The Switch rule operates on the EXT variable. By adding new chain points for each type of static content, you prevent that content from ever reaching the main rule set.
It is often a good idea to know the time a user has spent on a form. This is the foundation for filtering and/or slowing down “screen scrapers” or “data extraction” bots. This is an example of what a browser timing rule set looks like:
The basic concept is to first check if a session is present. If not, this rule set does not proceed. In some sites, this may be overly simplistic and may require modification, but for most sites it will be valid.
The rule set then goes through a series of checks. It reads the last time any request was made to the application, timestamps the current request and stores it. If it is possible to measure a time delay (via the previous timestamp), the method is a POST and the delay (in this case) is less than a second, an attack is assumed since no human can complete a form in less than a second.
A more sophisticated version of this rule set would include a CSV lookup to a list of known forms and the estimated time required to complete them. Based on that, a very effective defense against scripted readers can be mounted.
For reference, the rule set properties are listed below:
In many instances, it is preferable to collate data for decisions over the course of multiple pages. The best way to do this is to use the HTTP Session Writer rule. The rule allows you to specify a list of variables and a list of corresponding key names so that they can be stored in the application server’s session.
The application server’s session provides a convenient place to store data that should only live for the time of the user’s online experience. As the application itself also has access to the session and can set its own keys, it is a good idea to choose key names that are unlikely to conflict with the application. For example, do not use keys such as “user” or “balance”. Instead, use “tomorrow_user” or “tomorrow_balance” (or some other unique prefix).
When the time comes to obtain all of the data in a single request, use the HTTP Session Reader rule. Specify all of the key names you wish to read and the corresponding variables to restore them to, and you will have all of the available data required.
At times you may wish to serve up a Web page that is not known to the application. Examples of this include a two-factor request page, a challenge page or an information page for a rejected request. The easiest way to do this is to use a content rule set in the configuration, which will handle the delivery for you.
An alternative to using a content rule set is to create a template HTML document, upload it as a data file and deploy it to the target server.
Once it is deployed, it can be read using the File Reader rule. Next, have dynamic contents inserted using the Tag Replacer rule or the String Replacer rule. Finally, the HTTP Response rule can be used to serve the page back to the user. The following pattern shows this in action:
There will be many times where you may wish to create a specific link to a page that does not form part of the application. You will only need to do this for application servers that do not allow you to control content via the content delivery rules. If you are in that unfortunate situation you can use the following approach:
You will need to “piggy-back” onto an existing page using URL parameters.
For example, the main page of an application could be “main.jsp”. However, by appending URL parameters to a link (for example as http://myapplication/main.jsp?ShowGif=penguin), you can use the following rules pattern to detect not just images and display them on request, but also additional pages that you may need to link to.
This pattern effectively sits ahead of the normal rule set for that page and allows you to serve up anything you need. The Switch rule makes it easy to handle multiple different files.
The properties for the first rule are:
The Switch then operates on the ShowGif variable. The File Reader then reads the correct file and sends it back to the user.
Once your application starts to deliver custom content, you would generally want it to “look and feel” the same way as the application it becomes a part of. The best way to manage this is to use style sheets. Many applications already have style sheets in place, and provided your new page is served up within a frame, it will automatically be applied to the new page.
However, if your page must stand alone or if it contains specific structures that are not covered by the standard style sheet, you may need to add style sheet tags to the template or an import reference to the application's style sheet. The HTML syntax for this is as follows:
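A standard stylesheet reference looks like this (the file name is a placeholder):

<link rel="stylesheet" type="text/css" href="application.css">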
The name (href) of the style sheet will depend on the actual application, and some applications have more than one style sheet. The best way to find the ones that apply is to view the source of one of the pages within the application.
Sometimes you may wish to allow the server to execute the request from the user so that you can look at the response it provides. The HTTP Server Execute rule provides the means to do that, as shown in the following example:
Note that this example also inserts extra data into the response. The next section covers this in more detail.
Once the response data from a HTTP Server Execute rule has been obtained, you may wish to alter it before it is forwarded to the user. Examples include removing high-risk features if the user is coming into the application from an anonymous proxy or a country known for high levels of risk, or adding a picture (such as for advertising).
Composable Architecture Platform includes a number of string manipulation rules to make this task easy. The following shows the properties for the example mentioned above. It alters the response by adding an image to the page:
You can’t see it in the above example, but the full replacement text property is:
This maps back to the much earlier example of creating links to pages or content not already known to the application. The above will cause the browser to make a second request on the application server for the page URL shown. You can intercept the request and use it to return the image to the browser.
In this particular case, the image is inserted into the HTML in a spot that looks like this:
When you detect a condition that requires action or further user input (such as a two-factor input request), your best option is to redirect to a “piggy-back” page as described above. You can do this by using an HTTP Redirect rule.
Please note that this cannot be done after an HTTP Server Execute rule. This is because the server has detected content already being written to the response and will no longer allow redirection.
The workaround for this is to send a response to the browser that causes it to redirect instead. This can be done with the HTTP Response rule, sending back a line of text as follows:
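One common approach is to return a small HTML fragment whose script instructs the browser to navigate to the new location; for example (with a hypothetical target URL):

<script>document.location.href="/myapplication/main.jsp?ShowPage=challenge";</script>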
This pattern once again illustrates the “piggy-back” pattern in action.
Flight recorders can be used for more than logging of critical events for forensics. The following is an example of a flight recorder used simply to record web stats for a specific page:
The properties for the Flight Recorder Trigger are as follows:
The rule will trigger a single record into a flight recorder, giving you information about the user, browser, country of access, referrer and any other fields that you may wish to store. This can later be used to graph access to your application and give you valuable development and marketing feedback. Please see the “Working with Flight Recorders” section later in this book for more details.
The console contains a basic editor for managing most non-image content files. To edit a given content file, simply select it and click the Update button.
The editor will appear as shown:
You can use this editor to make changes to the file, before saving it and deploying it to the target server. The editor is fully functional with the most common keyboard short-cuts (such as Ctrl-F to Find and Ctrl-H to Find and Replace). Tab and Shift-Tab are also supported for indents and tag elements can be collapsed to improve code visibility. The editor has syntax highlighting and checking for most common file types.
You can download test data to view it. In its base form, test data (TST) is very compact, with each line containing a list of fields followed by the data for each field. This can be hard to read for humans, so an alternative is to download the test data in a format compatible with Microsoft Excel. To do this select a test data file and click the “Download as XLS” button.
Similarly, you may wish to upload test data received from other sources for analysis, or you may wish to process a file created in Microsoft Excel. Click on the test data folder you wish to upload to, browse for the appropriate file to upload, and submit it. If the file is in Microsoft Excel compatible format, it will automatically be converted to the Composable Architecture Platform test data format (TST) during the upload.
The console has a file previewer for most common web-based content (HTML, images and so on). To preview a file, select it and click the Preview button.
Unlike data files, content files cannot be deployed individually. This is because content files often have interdependencies. To avoid a conflict, the console will always deploy the entire content file structure wrapped up in a single file. You can do this on demand, or it will happen automatically when a configuration is deployed.
As part of the deployment, any JavaScript file with the following setting ticked will result in an automatic minimization and a map file being generated on the deployed target:
This allows you to work with full source JavaScript in development and minimized source in production that is still easy to debug.
In terms of benchmarking, outcomes will obviously depend on the rule sets used. During benchmarking testing for the product, 8.4 million transactions per hour were achieved for a rule set that included:
Geo location lookup for each IP address using the MaxMind Geo Location rule
Keyed CSV lookup of a blacklist using the CSV Lookup Rule
Storing of the last five and retrieval of the last two IP addresses using the History Recorder rule
Numeric comparison using the Value Comparer rule
Text substring comparison using the Value Comparer rule
Console listing for 0.8% of the test data entries using the Attribute Lister rule
The throughput was achieved on an IBM 8-way 1.8 GHz POWER5 system running AIX, with the help of the Composable Architecture Platform's in-built load balancer.
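For context, 8.4 million transactions per hour corresponds to roughly 2,300 transactions per second (8,400,000 / 3,600 ≈ 2,333).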
These performance numbers are not intended as a promise of what can be achieved in any specific configuration. They are provided merely to give a feel for the overall performance capabilities that may be achieved.
Content files can be HTML, XML, images, Flash files or any other binary content that may be required to be served up to support a rule set. The content files live within a content path that must map to the content path of the application being protected. You can add arbitrary sub-directories below the initial content path, but the first one MUST match the application name. Alternatively, there is the Use server context path option in the configuration to use the Optional Context Path as defined for the server definition.
Content files can only be created within a repository. They cannot be created at root level.
In the configuration, you can nominate a specific rule set to be used for content only. This rule set will receive a copy of the served-up content in the CONTENT variable, along with a URI variable containing the path and name of the item being served.
Using the content rule set, you can modify the CONTENT variable with dynamic content before it is transmitted to the client.
Test data usually consists of TST or CSV files (unless you have another input adaptor for a different file type). The CSV files can be manually created by you and uploaded to the console server, whereas TST files can be downloaded from a production server based on real live input to those servers.
Once you have test data available, you can select it in your configuration and upon deployment of that configuration to a test server, have the test data automatically sent along with it.
If your performance collection level is set to "Transaction count and inline time" in your configuration, you will get additional information about the time spent inline in each rule. The following is an example from the Qwerty rules:
The time shown is always in ms (milliseconds) per transaction. If the time measured is less than 1 ms, 0 is shown.
Performance numbers below 10 ms are shown in green, 10-50 ms in yellow, and more than 50 ms in red.
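As a trivial illustration of those bands (a sketch only, not the console's actual code):

    // Illustrative only: map a per-transaction time in milliseconds
    // to the colour band used by the console display.
    static String colourFor(long ms) {
        if (ms < 10) return "green";    // below 10 ms
        if (ms <= 50) return "yellow";  // 10-50 ms
        return "red";                   // more than 50 ms
    }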
A key component of the output from any rule set execution is the performance data. The performance data tells you exactly what happened inside your rules.
It is important to understand what influences rule performance. For example, database delays, the use of external or networked services, and the need to call the underlying application (as done by the HTTP Response Addition and HTTP Server Execute rules) can all add considerable transaction delays beyond the control of Composable Architecture Platform.
The performance information for each URI and each of the three distribution diagrams is the same. The details of each element are as follows:
The distribution diagrams show transactions below 100 ms, below 1 s and up to 20 s. Transactions taking 20 seconds or more are all included in the >=19 seconds group. These distribution diagrams are especially useful when combined with the transaction counts in the rule sets.
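One way the seconds-scale distribution could be bucketed is sketched below (illustrative only; this is not the product's implementation):

    // Illustrative only: count transactions into one-second groups,
    // with everything at 19 seconds and above landing in the ">=19 seconds" group.
    static int[] bucketBySecond(long[] transactionTimesMs) {
        int[] buckets = new int[20];
        for (long ms : transactionTimesMs) {
            int group = (int) Math.min(ms / 1000, 19);
            buckets[group]++;
        }
        return buckets;
    }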
The complete performance report is available when the performance collection level is set to "All counters". The full report includes all of the statistics described in the previous sections, plus a URI level report and database pool statistics.
The database pool report provides a view of the size of the database pool and the number of live connections (connections handed out to the X Engine), spaced out over the life of the X Engine. The timeline will compress depending on the time the X Engine has been running.
The URI level report is divided into two sections. The first provides a summary of the highest X Engine delays measured for each URI, in descending order from highest to lowest. This measurement does NOT include the very first transaction for each URI. It is a useful measure for quickly identifying rule set hot spots in very large rule sets.
The second section provides a quick view of individual URI response times (or all URIs combined if "*" is selected). There are up to three of these distribution diagrams:
Performance with X Engine stopped. This diagram is only present if data has been collected with the X Engine stopped. This can be achieved by stopping a configuration and continuing to undertake transactions.
Performance with X Engine started. This diagram provides information about the combined delay of the application and the X Engine.
Performance of X Engine alone. This diagram provides information about the X Engine alone. Application response times are automatically deducted from the result. However, any calls to external services or databases are still included.
The amount of data you see will depend on the performance collection level set in the configuration deployed to the server.
For production environments with stable rule sets, it is highly recommended to set the level to "Transaction count only", as collecting performance information has an overhead of its own.
To obtain the performance data for a given server, select it in the Composable Architecture Platform Servers folder and click on Get performance data.
The default name for the performance data is the name of the server. Enter a name and click the Retrieve button. Once the download is complete, you can see the performance data by clicking on the Performance Data folder and selecting the file name.
Click the View Rules Performance button to see the details. The diagram below shows an example for Qwerty:
The numbers shown next to the inputs and outputs for each rule represent the number of times a set of data entered and exited the rule. In the above example, 25 transactions entered the Initialization rule set, and the rule set filtered out 2 of those before sending 23 further through the rule set.
For content rule sets, a similar view of the transaction counts for that rule set can be viewed by clicking on the View Content Performance button.
For timer rule sets, a similar view of the transaction counts for any individually selected timer rule set from the drop-down list can be viewed by clicking on the View Timer Performance button.
Each element has the following meaning:

Transaction count: The total number of transactions that have been processed for the section.

First transaction: The time spent in the very first transaction for this URI, or the very first transaction overall in the case of all URIs ("*"). The first transaction is always isolated because startup of underlying services may cause delays that are not consistent beyond the first transaction, which can result in misleading data on small stress-testing runs.

Lowest recorded: The lowest transaction time recorded for the selected URI.

Max recorded after first: The highest transaction time recorded for the selected URI, excluding the first transaction. Please note that for the combined URIs ("*") only the very first transaction of any given URI is excluded; this value can therefore be the second-largest first transaction of a secondary URI.

Average: The average response time for the selected URI, including the first transaction.

Average without first: The average response time, excluding the first transaction. For small stress-test runs, this is a more accurate reflection of the real end-user experience.
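For a run of n transactions with total response time T and time F spent in the first transaction, "Average without first" is presumably computed as (T - F) / (n - 1), although the exact computation is internal to the product.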
For complex debugging, it can at times be easier to combine performance views with trace data. To do this, download the performance data after a trace has been started. You will be given the option to include the trace:
The result of doing this is a new list box showing up in the performance view, immediately after the rule name:
The list box offers either the regular performance data or a view by transaction:
For assistance, the URI variable used by almost all input adaptors is displayed to help determine a relevant transaction. Once a transaction is selected, the performance display will change:
You are now only seeing the entry count for each rule (for data volume reasons, exit points are not shown).
At any point you can now see the state of the variables at each entry point, simply by clicking on it:
If you have loops in your rule sets, more than one transaction can be seen at the entry point of a rule:
Clicking on that entry point allows you to iterate through each invocation, one by one:
If you have access to trace data you will also have access to system failure reports. System failure reports are normally emailed as well, but depending on the restrictions in place in your organization they may or may not contain an enumeration of the variables set at the time of failure.
If you have access to trace data you also have access to system failure information in the server status. This is represented at the bottom of the server status page as follows:
With all the relevant variables at the bottom of the page:
This information is especially useful for gaining insight into the cause of the X Engine failure.
Please note that only the first failure will be recorded for performance reasons. If the X Engine is not set to fail open, many more failures may occur. Those failures will be logged in the relevant system log instead.
If you wish to see the state of data at a given chain point, right-click the chain point and select New probe…
The following dialog box appears:
To set a probe, you can specify CSV lists of variables and values that must be matched (or you can leave the fields blank to have the probe trigger on the next transaction through that chain point).
A variable name can be surrounded by special characters to indicate additional matching criteria. For example, consider a variable named VAR:
!VAR: means that VAR is not equal to the selection value
VAR*: means that the VAR content starts with the selection value
*VAR: means that the VAR content ends with the selection value
*VAR*: means that the VAR content contains the selection value
VAR<: means that the VAR content is less than the selection value
VAR>: means that the VAR content is greater than the selection value
For the < and > characters, an attempt is first made to convert the value to a number. If that succeeds, a full numeric comparison is performed; if not, a lexical comparison takes place.
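A minimal Java sketch of these matching rules, assuming string-valued variables (the method names are illustrative and not part of the product's API):

    // Illustrative sketch of the probe matching rules described above.
    static boolean matches(String expr, String content, String selection) {
        if (expr.startsWith("!")) return !content.equals(selection);        // !VAR
        if (expr.startsWith("*") && expr.endsWith("*"))
            return content.contains(selection);                             // *VAR*
        if (expr.endsWith("*")) return content.startsWith(selection);       // VAR*
        if (expr.startsWith("*")) return content.endsWith(selection);       // *VAR
        if (expr.endsWith("<")) return compare(content, selection) < 0;     // VAR<
        if (expr.endsWith(">")) return compare(content, selection) > 0;     // VAR>
        return content.equals(selection);                                   // plain VAR
    }

    // Numeric comparison when both values parse as numbers; lexical otherwise.
    static int compare(String a, String b) {
        try {
            return Double.compare(Double.parseDouble(a), Double.parseDouble(b));
        } catch (NumberFormatException e) {
            return a.compareTo(b);
        }
    }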
You can also select the occurrence number of the probe to see. This is especially useful when debugging loops.
By default, only the first 100 characters of any variable are recorded.
Once the probe is set, the chain point changes color to yellow:
It will stay yellow until a matching transaction reaches the chain point, at which point it will turn red:
You can now see the variables as they were when they reached the probe. To do this, right-click on the chain point again and select View probe:
A dialog box appears with the variable information:
To close the dialog box, click the X in the top right corner.
The trace has a number of simple text entries. The following tags are samples of the informational messages and help determine the flow:
Tells you that a new transaction was started and the time it happened.
Tells you that the variable MAXPAY was set to 25.
Tells you that a variable named ERROR was deleted.
Tells you that the rule set entered a given rule.
Tells you which chain point was used from a given rule.
Tells you that the rule set returned from a nominated chain point.
Tells you that the Set Completed rule was called and the scope set.
Defining the attributes of the flight recorder in the console is an administrative function and must be performed prior to accessing the flight recorder. However, once that is done, the flight recorder information can be accessed and searched. To do this, simply click on Flight Recorders in the console tree.
You will be presented with the following search screen:
You can select ranges of fields for searching or simply click the Search button to view all Flight Recorders:
In this case there is one Flight Recorder in the ATTACK table in the HISTORY database. You can see the DEVICE_ID used to trigger the attack, when it was started, and its status (Open meaning that it is still recording, Closed meaning that recording for the specified DEVICE_ID has been stopped). Following this is the number of records recorded so far for the given DEVICE_ID, the reason for the trigger, the IP address, the User ID (if known), the Case Number if a case has been triggered, and the Browser in use at the time of the trigger.
We can deduce that someone attempted an SQL injection attack in the password field on the QwertyLogon page.
To view a log, click on View server logs and select a file from the drop-down list:
Please note that only files from the last 7 days will appear in the drop-down. If you require access to older logs, you will have to access the server using a different method.
It is highly recommended not to attempt to view very large logs this way. Extremely large logs may exceed the memory constraints of either the server or the console, and they will also be very slow to render over a low-bandwidth connection.
Given that the live view of the rules polls the target server for updates every 5 seconds, it is not recommended to keep many live-view rule sets open at the same time. The performance impact on the target X Engine can be significant.
Provided the user has access to trace and performance data for a given server, the user can explore the live performance of a running server and insert probes to see the live data at any given chain point throughout the rules. This is an excellent debugging tool that can even be used on production servers.
To obtain live performance data, go to the server status and click to retrieve the server performance as normal. If you have the ability to trace and view performance for the server, some additional options will show:
In the View live section, click on View Rules Performance to see the live performance of your rules:
You can not only see the number of transactions flowing through each rule, but also the live properties of the rule set in use. The data and rule sets visualized for the live view are all extracted from the server. The transaction counts update every 5 seconds.
Trace data can be retrieved from any long-running server. It creates a step-by-step trace of how your rule set is being processed by the X Engine. This is particularly useful for finding problems in web application rule sets.
To start a trace, simply click the Start trace button on the server status page as shown below.
Once the trace is in progress, you will see the status screen change as shown.
The trace mode will be active for a maximum of 20 transactions, after which it will automatically turn off.
To retrieve and view the trace data, simply click the Get trace data button and supply a repository and file name.
Once the trace data has been retrieved, you can view it by selecting it in the administration tree.
The view of the trace can also be downloaded or deleted (as shown below).
Provided that you have a Base Rules Build equal to or higher than 20200, you can obtain the server's native logs. By default, the log location is set relative to the home folder. Provided that the log location is a valid folder and the user has both tracing and performance data retrieval permissions, the logs can be viewed.
To set a different log location, add the following lines to your magic.properties file:
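The exact property keys are installation-specific; purely as a hypothetical illustration, the entries might take a form such as:

    # Hypothetical keys for illustration only; consult your installation's
    # documentation for the actual properties expected in magic.properties.
    log.location=../logs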
Please note that the above example applies to instances running under the standard Jetty installation of the software. If you are running on a different application server then you will need to modify the location accordingly.
Provided that you have the correct Base Rules Build deployed you can view the logs:
This applies whether the server is started or not.
Flight recorders are essentially on-demand log files. Their most common use is to gather data for later study or forensic requirements.
A good example of the use of a Flight Recorder is the following snippet adapted from a banking demo:
In the above example, all of the data about the particular access to the application has been obtained with the HTTP Request Tracker. The next rule is an Injection Checker rule that checks all submitted fields for JavaScript, IFRAMEs, SQL injections and the like. Should one be found, a Flight Recorder Trigger is invoked, recording the field in which the attack was found, its contents and the URL where the attack occurred. In this case, the data is keyed by DEVICE_ID. The full properties for the Flight Recorder Trigger are shown below:
When this rule is hit, a number of things happen:
An open record is written to an index table (whose name will start with "ATTACK" and which will be found in the "HISTORY" database; should the table not exist, it will be created). A record of all the fields presently available to rules is also stored in a data table (created under the same conditions as the index table). A conceptual sketch of such an index table follows below.
Up to 5 user-defined index fields are also written to the index table, alongside a number of default data elements, such as the browser type, IP Address and User ID (if known).
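Conceptually, based on the columns visible in the Flight Recorder search screen described earlier, the index table might resemble the hypothetical sketch below (the real schema is created by the product and may differ):

    -- Hypothetical sketch only; the actual schema is product-defined.
    CREATE TABLE ATTACK (
        DEVICE_ID   VARCHAR(64),   -- key used to trigger and match records
        STARTED     TIMESTAMP,     -- when recording started
        STATUS      VARCHAR(8),    -- Open (still recording) or Closed
        RECORDS     INTEGER,       -- number of records captured so far
        REASON      VARCHAR(255),  -- reason for the trigger
        IP_ADDRESS  VARCHAR(45),
        USER_ID     VARCHAR(64),   -- if known
        CASE_NUMBER VARCHAR(32),   -- if a case has been triggered
        BROWSER     VARCHAR(255)   -- browser in use at the time of the trigger
    );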
So now we have a complete record of what happened at this specific point in time. We then close the rule set by logging the user off (since attempting an attack is a pretty serious violation of terms of use).
However, we may wish to continue recording any further activity performed by this user, since it may yield useful forensic information. So, we also insert a Flight Recorder Add rule.
The index table is keyed by DEVICE_ID, meaning that if the same DEVICE_ID is subsequently found by a Flight Recorder Add rule, another record of all of the fields presently available is added to the Flight Recorder's data table. The properties for the Flight Recorder Add rule are as follows:
This essentially maps the key fields to the trigger.
Now we not only have the data from the point in time of the attack, but we have also started a specific log of all subsequent activity coming from that DEVICE_ID.
This data is recorded in the same format as test data and can be downloaded and used in the same way (as a Microsoft Excel compatible spreadsheet or by running rules over it on the test server).
We may now wish to investigate this further. To do this we can download the events recorded by the Flight Recorder as test data. Simply click on the number in the Records column to be presented with a page identical to the one used to download test data from an active server:
Simply nominate a repository and file name to place the test data in and click on Retrieve.
Once the test data has been retrieved, you can view it by selecting it in the administration tree.