Configuration Settings
Each configuration is divided into six tabs of settings. In this section we cover each of these tabs in detail.
The General tab contains all of the basic information about a configuration.
The file name should be a single word (no spaces). You can rename the configuration by changing the name and clicking "Save".
The description provides an easy way for other people to understand what your configuration does and serves as basic documentation.
Every configuration must have a rule set. This is a mandatory field.
Content rule sets allow you to manage specific content (new pages, images and so on) that you can introduce to the application. If the target server definition has a context path set for serving content files, you can optionally check the Use server context path checkbox to use that as the defined directory path.
If you have rules that should be executed before your main rules, or immediately after they have all completed, you can define them in the configuration.
Please note that if you set a startup or completion rule set, the X Engine will be restricted to running in a single thread to ensure the correct order of events.
Modes are named collections of rule sets and content rule sets that can be used to replace the default rule sets that are running. For example, if you wish to take a website offline for maintenance, you can create a “Maintenance” mode and assign to it rule sets that display a maintenance page instead of your normal website.
The Input Source tab provides the X Engine with details about where input is coming from and how to deal with it.
The server can be either Production, Multi-Protocol, or Test. This makes the configuration target a specific server type and determines which input adaptors are available to select from.
A critical part of the configuration is the input adaptor or “source of data”. The options available depend on the type of server selected. As a general rule, the file name or URL being processed will be made available by the input adaptor as the variable URI (Uniform Resource Identifier).
For file names, this includes the full file path in a file-system-dependent format.
Input adaptors are frequently added via extensions. At the time of writing, the following input adaptors are available by default:
Execute a load test against a server: This input adaptor is only available for production servers. It allows you to take a stress test rule configuration generated by the “New rules wizard”, modify it to suit the application, and use it to generate load against a website. You can control ramp-up times and total threads as well as think times.
Please see the “Using the New Rules Wizard for stress testing” section in this manual for more information.
Process multi-protocol input: This input adaptor is only available for Multi-Protocol servers.
It allows you to take input from any protocol defined within the administration section of the console and control the input, proxy and output of that protocol.
Protocols supported include (but are not limited to): MySQL, DNS, Telnet, FTP, ISO8583 and SMTP.
Transports include: SSL, TCP and UDP.
Please see the “Case study: Multi-Protocol” section in this manual for more information.
Process a single CSV file: This input adaptor is only available for test servers. It allows you to define each column in a CSV file that you wish the server to process. The file must be present amongst the test data files uploaded to the console.
Process a single multiline CSV file: This input adaptor is only available for test servers. It allows you to process CSV files that have records spread over more than one line. Typically, this could be a file that looks as follows:
The columns are Record ID, Record Type, Value Column 3 and Value Column 4:

12345,R1,John,Doe
12345,R2,Melbourne,Australia
23456,R1,Bob,Smith
23456,R2,London,United Kingdom
34567,R1,Jane,Doe
34567,R2,Auckland,New Zealand
To process the above file, you would need the following definition in the Input Fields tab of your configuration:
The break column (BREAK_COLUMN) defines which column number is used to identify a unique record. The record column (RECORD_COLUMN) defines which column number contains the record type.
Each individual field for an entire record is then defined as a field name, with the label indicating which record and column number that field can be found in.
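For the sample file above, the definition could look roughly as follows (the layout and label wording shown here are illustrative; the console may present these entries differently):

BREAK_COLUMN set to 1 (the Record ID column)
RECORD_COLUMN set to 2 (the Record Type column)
FIRST_NAME labelled as record R1, column 3
LAST_NAME labelled as record R1, column 4
CITY labelled as record R2, column 3
COUNTRY labelled as record R2, column 4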
Process a single identifier delimited file: This input adaptor was designed to rapidly traverse files that contain somewhat structured data, where each piece of data is preceded by a recognizable identifier, and all of the identifiers are in the same order (although missing identifiers are tolerated).
This adaptor relies on the data in the file following a format where an identifier can be used to spot breaks in the data. The following example illustrates how this adaptor can be used.
Sample data provided to this adaptor could look as follows:
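For illustration, assume the input fields are defined with the labels “Name:”, “Address:” and “City:” (mapped to the field names NAME, ADDRESS and CITY), with “Name:” listed first. The identifiers and values below are hypothetical:

Name: John Smith Address: 1 Main Street City: Melbourne
Name: Jane Doe Address: 2 High Street
City: London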
The first label listed in the input fields for the configuration MUST be the break for each new record. A record can be on more than one line in the file.
Processing the above shown data would result in two records, the first with the variables set as follows:
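(Using the hypothetical sample and field names above:)

NAME = John Smith
ADDRESS = 1 Main Street
CITY = Melbourne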
The second record contains:
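(Again using the hypothetical sample above:)

NAME = Jane Doe
ADDRESS = 2 High Street
CITY = London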
Strings are always trimmed of leading and trailing blanks but can contain more than one word. If the file contains identifiers that you wish to ignore, you must still specify them in the list; otherwise they will be treated as part of the value of the previous identifier.
Process a single XML file: This input adaptor is only available for test servers. It allows for an XML file to be processed by the X Engine. Each XML tag in the file (and its attributes) will be converted to a unique variable name. For example, the following XML document:
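(The document below is purely illustrative; any small XML file can be used.)

<customer>
  <name>John Smith</name>
  <city>Melbourne</city>
</customer>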
results in the following variables being generated:
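(Illustrative only; the exact variable names, including any root-tag prefix or separator, depend on the X Engine's conversion.)

name_1 = John Smith
city_1 = Melbourne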
It is important to note that all tags below the root tag will have a counter attached to them to ensure uniqueness. This is what results in the “_1” being added to the “name” tag in the example above.
If more than one “name” tag is present, the conversion will be as follows:
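(Again illustrative:)

<customer>
  <name>John Smith</name>
  <name>Jane Smith</name>
  <city>Melbourne</city>
</customer>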
Which results in the following variables being generated:
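(Assuming the same illustrative naming as above:)

name_1 = John Smith
name_2 = Jane Smith
city_1 = Melbourne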
As this process can result in some rather long variable names (especially when processing XML documents such as SOAP requests), the use of the Alias rule is encouraged to simplify rule writing.
Process all CSV files in a directory: This input adaptor is only available for production servers. This input adaptor will look for files in a folder/directory. When one is present, it will process it and then delete the file. Each field in any supplied CSV file must be defined in the configuration.
Process all identifier delimited files in a directory: This input adaptor is only available for production servers.
This input adaptor will look for files in a folder/directory. When one is present, it will process it and then delete the file. The data within the supplied file is converted into unique variable names as outlined in the “Process a single identifier delimited file” adaptor.
Process all multiline CSV files in a directory: This input adaptor is only available for production servers.
This input adaptor will look for files in a folder/directory. When one is present, it will process it and then delete the file. The data within the supplied file is converted into unique variable names as outlined in the “Process a single multiline CSV file” adaptor.
Process all XML files in a directory: This input adaptor is only available for production servers.
This input adaptor will look for files in a folder/directory. When one is present, it will process and then delete the file.
The tags within the supplied XML document are converted into unique variable names as outlined in the “Process a single XML file” adaptor.
Process free format test data: This input adaptor is only available for test servers.
This adaptor is specifically designed to receive data from a file generated by the “Test Data Creation” rule (TST files). There is no need to define any input fields, as the file contains a field definition list as well as a data value list for each record.
This adaptor is designed to process data from production servers that handle web application inputs, where the actual variable names change with each request.
The test server will be able to emulate what happens on an actual production application server without the need to simulate anything in a test environment.
This adaptor is also useful for pre-testing any new rule set to evaluate the impact of installing it into production.
Process on heart beat: This input adaptor is only available for production servers.
This input adaptor is used to process the same rule set at regular intervals. You can specify the delay between each run in ms.
Process once and stop: This input adaptor is only available for production servers.
This input adaptor will run the rule set once upon startup and then stop. This is predominantly used for testing rules.
Receive input via HTTP POST: This input adaptor is only available for production servers.
This input adaptor is designed for high-speed processing of a specific HTTP POST (for example from a known JSP or HTML page). Each field that the X Engine is expected to process must be defined in the configuration, just as if the input came from a CSV file.
The field names listed must be the same name (case sensitive) as they appear in the form post from the HTML that submits the request.
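For example (the page and field names here are hypothetical), an HTML form such as:

<form method="POST" action="/checkout">
  <input type="text" name="cardNumber"/>
  <input type="text" name="amount"/>
  <input type="submit" value="Pay"/>
</form>

would require input fields named cardNumber and amount (with matching case) to be defined in the configuration.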
It is important not to confuse this adaptor with the “Receive web application data” adaptor, which is slightly slower but much more flexible.
Receive web application data: This input adaptor is only available for production servers.
This input adaptor is probably the most flexible, but also the most complex. It is capable of receiving data from any HTTP request, be it a GET or a POST, and translating it into variables that can be used by the X Engine.
The adaptor understands and translates standard HTML, XmlHttpRequest (AJAX) and SOAP requests, as long as the appropriate content type is set in the HTTP request.
For HTML POSTs and GETs, the URL parameters and form fields are translated directly into input variables, with each variable name matching the corresponding parameter or field name. For XmlHttpRequest and SOAP requests, the tags within the supplied XML document are converted into unique variable names as outlined in the “Process a single XML file” adaptor.
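For example (the request below is hypothetical), an HTTP GET such as:

GET /search?q=shoes&page=2 HTTP/1.1

would make the variables q (set to “shoes”) and page (set to “2”) available to the rules.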
This particular input adaptor allows you to enforce some web application security settings:
For HSTS:
At the very minimum you must provide a ‘Max age’ value in seconds. The recommended value is 31536000 seconds (one year).
Optionally you can check the box to include sub domains.
Preload is a mechanism whereby the most common browsers ship with a list of sites that MUST use HSTS. Google maintains the list of sites that are preloaded as requiring HSTS, and that list is used by the Chrome, Safari and Firefox browsers. To have your site registered for preloading, you must apply at https://hstspreload.org/. Google will verify that you have the preload flag set on the HSTS header before adding you; if you do not have this flag, your application to be added to the list will be rejected.
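With a max age of 31536000 and both options enabled, the response header sent to the browser follows the standard HSTS form:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload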
Note: Any of the above settings that modify cookies require that you are running on a Servlet Specification 3.0 or later web application server. For the standard installation that means Jetty 9 or later.
After selecting an input adaptor in a configuration, there are a number of fields specific to that adaptor. The largest difference is typically between test and production adaptors; we will show two examples here:
The above scenario is for a production input adaptor that processes all files "dropped" into a given directory. The input adaptor will poll that directory and whenever a new file is added it will be processed and then deleted.
The additional fields are as follows:
Selecting this option causes the X Engine to always collect test data by default. You can also start collecting test data on demand using the server status view.
This is the maximum number of records that the X Engine will keep in memory. For data with a large record size, this value should be set with care to avoid retaining too much memory.
This is the sub-directory from the home folder where the X Engine will look for files. For other input adaptors that do not use files (such as the web application adaptors) this can be a different field name providing different information.
Selecting this option will cause the X Engine to start automatically when the server is started. There is no need to manually click the start button.
Test servers behave differently to production servers in that they always take a single file as input data, process that file and then stop. The following is an example of the settings for a test input adaptor:
Test data is the name of the file to process. This file must be in the test data section of the console tree in the same repository as the configuration.
The testing flags are used to control how the X Engine interacts with the environment around it.
Update Internal Data: If selected, the rules in the X Engine will update data in tables that it can directly access. This includes the internal case manager.
Update External Data: If selected, the X Engine will write data to external systems that are not database connected. This could, for example, be an external case manager that receives cases via a web services call.
Send Alerts: If selected, the X Engine will send alerts such as emails, SMS messages and other forms of external messages.
The remaining input source fields are generic. The following is a list of their meanings:
This flag determines if messages written to the console (via any of the List rules) are also written to the system's standard out log.
Enable Debug Mode turns on “List Variable for Debug” rules, as well as “Exit” rules with “List Variables on Exit” set to “Debug mode”, so that all variables are listed at selected points throughout the rule sets.
The Fail Open setting determines how the X Engine deals with a fatal error. If selected, the X Engine will automatically stop and let all normal traffic proceed transparently should a fatal error occur. If unselected, the X Engine will attempt to recover from the failure and continue running.
This setting is used to control how the X Engine detects infinite loops. Effectively every connection (chain) between rules has a counter built into it. When the number of chain events reaches the count set here, the X Engine will consider itself looping and will terminate to avoid impacting other services.
This setting determines how much performance data is collected as part of the X Engine execution. Please see the performance data section for more information.
Input fields are used for a variety of purposes. They can be used to identify column settings for input adaptors and also to determine global settings that can be changed at the configuration level without changing any rules. The following shows such an example:
When the X Engine is running, it is possible to set global fields. These are fields that can be accessed by the X Engine at any stage and are not dependent on input from other sources. Global fields can be changed during the execution phase of a rule set, allowing you to potentially alter the flow of rules, set different thresholds or check for different conditions.
The following shows an example of defined global fields:
It is important to know that global fields are persistent. This means that the default value set in the configuration only applies for the very first time a global field is set. After that point, the global field retains its set value, even after the X Engine is restarted.
The field name is the global variable name that is set when the value is changed. The label is what the user sees.
The field type for each global field is important, as are the allowed values. You can set field types as follows:
Text: This creates a simple text field that can be changed. The allowed values have no effect.
Number: This creates a simple text field for numeric input that can be changed. The allowed values have no effect.
Switch: This creates an on/off style switch that can be changed. The allowed values represent the values set for the ON condition, followed by the OFF condition.
Slider: This creates a slider that can be changed. The allowed values represent the min value, max value and optionally a third value representing the increment. Only integers can be used.
List: This creates a drop-down list of values that can be picked. The allowed values represent each of those selections.
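As an illustration (the field names, labels and the comma-separated allowed-values format shown here are assumptions), global fields might be defined as follows:

Field name: MAINTENANCE; Label: Maintenance switch; Type: Switch; Allowed values: ON,OFF
Field name: THRESHOLD; Label: Alert threshold; Type: Slider; Allowed values: 0,100,5
Field name: REGION; Label: Region; Type: List; Allowed values: APAC,EMEA,Americas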
The following shows an example of how the above defined values are displayed in the server settings change function:
Many of the rules available in Composable Architecture Platform are capable of accessing data in databases, either locally to Composable Architecture Platform or externally stored somewhere within the network.
As a user, you will need to know the name of the database that you want to connect to, and in some cases also the table and schema names.
Configurations need to list all of the databases that the rules within them are capable of accessing. This allows the deployment system to provide additional information to the Composable Architecture Platform servers about how to access those databases at a technical level. The databases themselves are normally defined by a system administrator.
The following shows a configuration of a sample database:
You must enter the database name and the type of database (driver). The list will vary depending on the types installed on the network.
If you are writing rules that may be used on different systems where the database names may differ, you can use a database alias name. The database alias name in the rule sets will be mapped back to the database name defined in the configuration.
There may be times when you wish to access a database with a schema name that varies between test and production systems. To allow this, you can override the schema name in the configuration. If you leave the schema name blank, the default value configured by the system administrator will be used.
The system defines where the actual database is located. You can combine this with the defined servers to allow JDBC drivers to connect to any given location.
You have the ability to set a list of rule sets that execute at a given time interval. These rule sets are independent of any input data and simply run on a repetitive cycle. There are two types of cycles: Delay and Real Time:
Delay timers simply execute the rule set, then sleep for the delay period and then execute again.
Real time rule sets will run at a precise interval, regardless of the time it takes to execute the rules.
An example of a timer setting is as follows:
In the above example, the OnTimer rule set will be executed, then pause for 30 seconds and then repeat.
Please note that the data object used within the timer stays the same. This means that you can set variables within the timer rule set and use those same variables the next time the rule set executes (for example variables for a counter).
Note that for web applications, the timers will not start until the first real transaction has been processed.
For each configuration you create, you have the option of producing a complete documentation set. To create this, click on the Document button:
Then select the target server and click on the Document button.
A pop-up window will appear that lists a complete view of the configuration, the selected server, JDBC drivers, databases, rule sets, data files and so on.
Optionally, you can include the actual contents of data files and content files. For each of the respective files, simply tick the check box as shown below and click on Save.
IMPORTANT: The output in the pop-up window is very browser dependent and the quality of the results may vary. We recommend using the print preview option in individual browsers to see the final result.
At the time of writing the following conditions applied:
Firefox 11.0 produced a fairly faithful representation of the intended output but was slow.
Internet Explorer 9 produced a reasonable representation of the intended output, was faster than Firefox, but did not always respect page breaks.
Chrome 18 produced a terrible graphical look as it does not print background images in pages but was otherwise fast and true to page formatting.