Input adaptors define how a server receives input; examples include CSV files, HTTP and XML. Setting up input adaptors is an administrative function.
To set up a new input adaptor, you will need to obtain relevant details from the programmer who wrote the code for the adaptor. The following section details the individual settings.
The class name is the fully qualified class name of the adaptor. It must exist in the classpath of the server that is going to use it.
Input adaptors come in two forms: test and production.
Test adaptors only work from file data that is pushed to the adaptor from the console.
Production adaptors either wait or poll for data. An HTTP request adaptor would wait for data, whereas an adaptor that reads files from a directory would be polling.
The optional parameter label sets the lead text for a configuration field that becomes available when configurations are defined. This allows you to specify custom data to pass to the adaptor when it is loaded (for example, a directory or file name).
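To make the class name and parameter settings concrete, the following is a purely illustrative Java sketch of what an adaptor class might look like. The package, class and method names are assumptions for illustration only; the real contract is defined by the programmer who wrote the adaptor.

    package com.example.adaptors;

    // Hypothetical adaptor class. The fully qualified class name entered in the
    // console would be "com.example.adaptors.DirectoryPollingAdaptor", and this
    // class must be present on the classpath of the server that uses it.
    public class DirectoryPollingAdaptor {

        private String directory;

        // Hypothetical initialisation hook: the custom data entered in the
        // configuration field (see "parameter label" above) is passed to the
        // adaptor when it is loaded, for example a directory name to poll.
        public void initialise(String parameter) {
            this.directory = parameter;
        }
    }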
Log adaptors are used to customize how the X Engine writes log information. Log information includes status messages, stack traces and failure notifications. To be able to use log adaptors, your X Engine must be at version 6.0.1 or later.
Setting up log adaptors is an administrative function.
By default, log adaptors for the most common log providers are installed:
Additional log adaptors can be installed via extensions.
Custom functions are features that can be introduced into the console via rules. The basic concept behind custom functions is that they add a new entry to the console tree, and the behaviour of that entry can then be changed via rules.
The key thing to note is that due to the way this is implemented, all of the standard security features of the console remain intact and the custom functions can be managed via user roles.
The following shows a custom function setup for a blacklist:
To work with the data in the flight recorder, the console must have access to it. This requires that the console can access the database used over JDBC.
Provided this is the case, an administrator can simply add a new Flight Recorder Setup as shown below:
This defines the database parameters (from already defined database information set in the JDBC driver section) and the optional index fields (columns).
The credential vault is used to hold user IDs, secret keys, passwords and other information used to access external services from rules.
The main benefit from using the vault is that rule writers have no exposure to any of those account details, and it also avoids sensitive data being stored as clear text.
Each set of credentials in the vault is installed at the same time as the rules themselves are installed via an extension.
The following shows sample credentials for the Kapow SMS service:
By default, passwords are hidden and user IDs visible. You can manually hide the user IDs as well by clicking the Hide Value button. However, please note that there is no "unhide" feature.
Credentials stored in the credential vault are automatically sent to the X Engine when a configuration is deployed.
With large clusters comes the headache of keeping each server in the cluster up to date and synchronized. This not only applies to the applications installed on the cluster servers, but also to the rules that are being executed on each of those servers.
Composable Architecture Platform solves this problem by delegating it to a small cluster of slave consoles. Each slave console manages its own section of the cluster. It keeps track of which servers have which X Engine installed and will automatically deploy, stop, start and manage servers in its assigned segment of the cluster.
Each slave console is assigned a repository on the master console from which to deploy rules and configurations. One important rule applies to this repository:
Avoid editing configurations or rule sets directly in the repository: any change, including any mistake, is propagated and deployed to the entire cluster instantly. Instead, promote rules from a UAT or staging repository into the cluster repository once they are approved for deployment (using Configuration copy with dependent rule sets).
In the master console, a Cluster Node Definition is used to set up a cluster node.
The node name is an identifier that is specified, alongside a password, in the slave console's configuration.properties file. These are used to authenticate the slave console to the master console. For additional security, you can also lock access from a given slave console to a single IP address.
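For illustration only, the relevant entries in the slave console's configuration.properties file might look like the lines below. The property names shown here are assumptions, not the product's documented keys; consult the console server configuration reference for the exact names.

    # Hypothetical property names - check the configuration reference
    node.name=cluster-node-1
    node.password=changeit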
The link URL is the link to the slave console’s web application. This is used to allow the master console to “drill down” to the slave console.
The feed repository is the repository from which all servers in the cluster will receive updates and the feed configuration is the configuration that will be deployed from that repository.
Once a new Cluster Node Definition has been created, a new element appears at the top of the administration tree:
Clicking on this element allows you to see the status of your cluster node:
Once your cluster node shows up in the administration tree, you can manage all of its defined servers as a single unit. This means that you can start and stop all of the X Engines in the servers attached to a node at once.
If a given server in the node is offline for maintenance when you issue a start or stop command, the slave console will remember the desired state of the cluster and set it appropriately when the server comes back online.
Please be patient after issuing a start or stop command as it can take some time to be reflected in the master console (20-30 seconds is normal).
You can also drill down to the individual slave console directly from the master console. To do this, simply click on Maintain. The console for the slave console appears in a pop-up window.
You will notice that you are not required to log on, and you see a reduced set of features. The logon is handled by a single sign on process, and the reduced feature set reflects the limited tasks that can be performed on a slave console (no rule edits, user edits etc.). Of course, you can still get trace data, test data and performance data from individual servers in the cluster.
There are in fact only four items to configure on the slave console: the database connectors, the log adaptors, the credential vault and the servers themselves. These items all need to be manually defined within each slave console.
Even though JDBC drivers could in theory be replicated down from the master console, they are kept separate in the slave consoles for segmentation purposes.
This segmentation comes into play when individual slave consoles are deployed into a cluster that is geographically separated. Under these circumstances, the actual databases themselves may also be separated, simply replicating to each other.
Within these geographical locations, the actual addressing of the database servers may therefore be different (in some configurations). To address this problem, each slave console has its own addressing information for each database connector. This allows the rules in the master console to be written without specific knowledge of the location into which they may be deployed.
Setting up connectors in a slave console is identical to setting them up in the master console.
Log adaptors and credentials are also not replicated (for the same reasons as database connectors). These items are set up the same way as in the master console.
Defining servers in a slave console is identical to doing it in the master console. Please note that any given server should only ever be configured in a single slave console at any time.
Work output is used to set up tasks for projects. You can add additional work output if you have a special need.
If you define any prerequisite work output, this will be taken into account as the project is created. The four "Produce by" options are used to help create a meaningful text for each task.
The code used for the work output has no particular significance, it is simply an identifier. If you create your own, we suggest that you do NOT prefix them with “wo” as we may add new identifiers later.
Project definitions are used by the Project assistant during project creation. These templates greatly simplify the job of creating a new project.
The code used for the project definition has no particular significance, it is simply an identifier. If you create your own project definitions, we suggest that you do NOT prefix them with “pr” as we may add new definitions later.
The rules deployed on a system out of the box are not permanently fixed. New rule packages can be installed at any time by a system administrator.
An extension package is stored in a zip file containing a manifest that describes the attributes of any new rules, their functions and chain points. It must also contain a file with the error messages that the rules can produce.
In addition to this, an extension package contains the actual Java code for each new rule, input adaptor and any code libraries they may depend upon.
As discussed in the section “Keeping the product current”, you can install new packages at any time and have the system automatically deploy them to the relevant servers. Please refer to that section for an in-depth example on how to install a new package.
An alternative to managing users locally is to use LDAP authentication. LDAP authentication is set up manually by providing an access manager plugin in the console’s configuration.properties file. Please see Console server configuration below for more information.
Within the LDAP server itself, the following attributes must be set for each user:
In addition, each user must be a member of (memberOf) one of the following groups:
Optionally, the user can also be a member of the following group:
For example, if a role named Tester exists, then the user can be enrolled into that role by setting:
Creating the database itself is beyond the scope of this manual as it depends largely on the type of database used and its vendor.
Composable Architecture Platform supports a very large collection of databases through its use of JDBC drivers for connectivity. However, configuring the JDBC driver correctly to work with a given database is normally the job of an experienced database administrator, since it requires the understanding of security, network and connectivity issues. If you are not experienced in these matters, we suggest you refer this section to someone who is.
To set up a new driver and thereby allow connection to a database, follow these steps:
Make sure that the driver class is available in the class path of the program or application server that is running Composable Architecture Platform. In the case of Jetty, the location for the driver JAR file is in server/lib/ext/jdbc/[database name].
Enter the fully qualified driver class name as the JDBC driver class. If the settings differ between systems, make sure that you set up a blank system name (this represents the default settings) as well as settings for each system that differs.
Enter the URL prefix and suffix for the driver. These map to the parts of the URL string sent to the driver. The complete URL sent will be in the form of: [URL prefix][database name/alias][URL suffix].
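As an illustration of how these parts combine, the following sketch uses plain JDBC to build the complete URL from prefix, database name and suffix and opens a test connection. The Derby values are taken from the driver list below; the database name is an example.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DriverUrlCheck {
        public static void main(String[] args) throws Exception {
            // Values as entered in the console (Derby example from the list below)
            String urlPrefix = "jdbc:derby:";
            String databaseName = "testdb";     // the database name/alias from the configuration
            String urlSuffix = ";create=true";

            // The complete URL sent to the driver is the concatenation of the three parts
            String url = urlPrefix + databaseName + urlSuffix;

            // The driver class must be on the classpath,
            // e.g. in server/lib/ext/jdbc/derby for Jetty
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");

            try (Connection c = DriverManager.getConnection(url)) {
                System.out.println("Connected to " + c.getMetaData().getDatabaseProductName());
            }
        }
    }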
A common method used to define a database connection within a J2EE application server is to use a JNDI defined data source. Composable Architecture Platform supports this through the use of JDBC driver class names that start with the letters "JNDI". By default, a single JNDI data source driver is created.
To set the actual JNDI name for the data source, supply the JNDI name in the database alias field in the rules configuration and select the JNDI Datasource as the driver.
There may be times when you wish to add another JNDI data source driver, for example, to use a database that requires specific overrides for BIGINT or VARCHAR(1024). In which case, you can create a database specific driver (for example: JNDI_ORACLE), and then make sure that specific driver is selected in the rules configuration for the relevant database.
Please note that if using JNDI data sources for items such as Flight Recorders and Case Managers, you will still need to define the correct JDBC drivers within the console for it to be able to access the data.
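For reference, resolving a JNDI data source at runtime is a standard J2EE lookup along the lines of the sketch below; the JNDI name shown is only an example.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class JndiLookupExample {
        public static void main(String[] args) throws Exception {
            // The name supplied in the database alias field, e.g. "jdbc/MyDatabase";
            // inside a J2EE container it is typically bound under java:comp/env
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/MyDatabase");
            try (Connection c = ds.getConnection()) {
                System.out.println("Connected via JNDI data source");
            }
        }
    }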
The following is a list of common JDBC drivers and their equivalent configuration.
IBM DB2
More information: http://www.ibm.com/
Driver class: com.ibm.db2.jcc.DB2Driver
URL prefix: jdbc:db2:
Apache Derby
More information: http://db.apache.org/derby/
Driver class: org.apache.derby.jdbc.EmbeddedDriver
URL prefix: jdbc:derby:
URL suffix: ;create=true
Microsoft SQL Server / Sybase (jTDS)
More information: http://jtds.sourceforge.net/ (open source driver – others exist)
Driver class: net.sourceforge.jtds.jdbc.Driver
URL prefix: jdbc:jtds:<server_type>:<server url> (where <server_type> is either 'sqlserver' or 'sybase')
URL suffix: Valid suffixes can be found at http://jtds.sourceforge.net/faq.html#driverImplementation
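For example (host, port and database name are placeholders), a SQL Server connection with an empty suffix could use the URL: jdbc:jtds:sqlserver://dbhost:1433/mydb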
MySQL
More information: http://dev.mysql.com/downloads/connector/j/
Driver class: com.mysql.jdbc.Driver
URL prefix: jdbc:mysql:<server url>
If your database is configured to use UTF-16, you also need to force UTF-8:
URL suffix: ?characterEncoding=UTF-8&useUnicode=true&characterSetResults=UTF-8
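For example, with a server URL of //dbhost:3306/ and a database name of mydb, the complete URL sent to the driver would read: jdbc:mysql://dbhost:3306/mydb?characterEncoding=UTF-8&useUnicode=true&characterSetResults=UTF-8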
PostgreSQL
More information: http://jdbc.postgresql.org/
Driver class: org.postgresql.Driver
URL prefix: jdbc:postgresql:<server url>
Oracle
More information: http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
Driver class: oracle.jdbc.OracleDriver or oracle.jdbc.driver.OracleDriver (depending on version)
URL prefix: jdbc:oracle:thin:@<server url>
Type override – BIGINT: NUMBER
For Oracle RAC Server or Amazon Oracle RDS connectivity, the settings will typically look something like:
URL prefix: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<host name>)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=
URL suffix: )))
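For example, with a host name of dbhost and a database name (service name) of ORCL, the complete URL sent to the driver would read: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)))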
Server maintenance is an administrative function and is not available to standard users.
For each server instance that is deployed, you must create a server definition entry in the administration console so that it is visible to the users. The following shows an example of the information required to create a server:
The server name and description are arbitrary and can be anything you like. The license key should be a valid key provided by your supplier (refer Server Licensing section).
The host name and port number are given during the installation of the server instance into the application server. See the “Installation and Configuration” section for more information.
The server encryption key is optional. If you specify a key, it must match the EncryptionKey defined in the magic.properties file on the target server. It is important to note that you will always see a value in this field, even if you have chosen to leave it blank; this avoids accidentally exposing whether a console connection is encrypted or not. To disable encryption, simply blank out the field.
The actual server type is designated here. There are six options, as described below:

Server type | Description
Production | This is the standard production server accepting any input
Multi-Protocol | This is the server for managing protocols other than HTTP
Test | This is a test server that takes test data as input
Database | This is a database server. This server type is used in configurations.
Template | This is a server template that other servers can inherit advanced settings from
Production with Forwarder | This is a production server with a built-in forwarding proxy
From an installation perspective, there is no difference between a test server and a production server. They run the same code. However, by defining here which type a server is, you can limit which type of execution it can undertake.
If you designate the server as a database server, it will not show up in the tree of Composable Architecture Platform servers, but you can select it during configuration of databases (configuration, flight recorders and case managers).
A template server is defined only to allow advanced settings to be inherited from a nominated server. Like a database server, it also does not show up in the tree of Composable Architecture Platform servers. See Advanced Server Configuration below for more information.
The optional Amazon Instance ID is used to automatically manage proxy instances that are located behind one or more Amazon Web Services (AWS) Elastic Load Balancers. If the ID is provided and Amazon credentials are set in the advanced settings (or inherited), then the server will automatically be deregistered and re-registered with the load balancers during deployment.
Number of CPUs is used to control the load on the target hardware. While running, the X Engine will never use more than the CPU limit specified here for actual processing; other load (such as communications and other maintenance tasks) may still occur on additional installed processors. You can specify a number exceeding the physical number of CPUs, in which case the server simply starts that many execution threads; on hyper-threading CPUs this may provide some performance benefit.
The console depth is used to control the memory set aside on the server for the console. The depth refers to the number of lines of text kept before they are discarded.
If you nominate a server to inherit advanced settings from, then the "Advanced" tab will disappear, and all of the settings will be derived from the nominated parent/template server.
In addition to the basics of setting up a server, there are some advanced settings that determine how the server operates. The following shows the advanced settings:
The advanced settings determine parameters such as the web proxy the X Engine must traverse to obtain internet access, the name of an SMTP server the X Engine can use to send emails, the details of the email message to send, and the recipients of the notification email sent when the X Engine fails. It is also possible to modify the server encryption algorithm; please do not enter anything in that field unless specifically instructed to do so by our support team.
To avoid keying these parameters repeatedly for many servers, you can set up a Template server and inherit the settings from the Template server definition.
The rules in the X Engine that use web access (such as the HTTP invocation rule) will obtain the proxy settings from the server definition. Please see the section on server definitions for more details. It is important to note that some rules using web access may still require direct access to the internet. This is typically dependent upon how the vendor API for the particular function is implemented.
The console supports the ability to dynamically add and remove a selected server from one or more AWS Elastic Load Balancers during deployment. This approach serves to reduce the load contention on a very busy server if deploying whilst live. To make this work, you will need to specify an Amazon Instance ID in the basic settings, and either directly specify the region, load balancer name and credentials in the advanced settings or on the inherited template server settings.
Logging by default will go to System out. However, some environments have configurations where it is convenient to have the X Engine log information and errors to other places.
Apache Commons Logging is an open source logging feature that allows for logging to a variety of locations.
By default, Apache Commons Logging will auto-detect the correct log mechanism to use (Log4J, Avalon LogKit, JDK). However, it is possible to provide specific log factory parameters if required.
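For orientation, writing to Apache Commons Logging from Java code uses the standard LogFactory API shown below; the class name is an example.

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class LoggingExample {
        // Commons Logging auto-detects the underlying implementation
        // (Log4J, JDK logging, ...) unless a specific log factory is configured
        private static final Log log = LogFactory.getLog(LoggingExample.class);

        public static void main(String[] args) {
            log.info("Status message");
            log.error("Failure notification", new RuntimeException("example"));
        }
    }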
File out logging provides the ability to log to a custom file name in a specific directory.
The configuration consists of a template file name (which may include a relative or absolute path) and optionally the number of days to retain log files.
The template file name itself will be pre-pended with a time stamp in the form "CCYY_MM_DD.". For example, "logs\tests.log,60" will result in files being saved in the folder logs, with log files named CCYY_MM_DD.tests.log, and the files will be retained for 60 days.
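The naming scheme can be illustrated with the short Java sketch below, which derives today's file name from the example template above. This is only an illustration of the convention, not the product's implementation.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class LogFileNameExample {
        public static void main(String[] args) {
            // Template from the example: path "logs\tests.log", retained 60 days
            String template = "logs\\tests.log,60";
            String[] parts = template.split(",");
            String path = parts[0];
            int retentionDays = Integer.parseInt(parts[1]);

            // The file name is pre-pended with a CCYY_MM_DD time stamp
            String stamp = new SimpleDateFormat("yyyy_MM_dd").format(new Date());
            int sep = path.lastIndexOf('\\');
            String fileName = path.substring(0, sep + 1) + stamp + "." + path.substring(sep + 1);

            System.out.println(fileName + " (retained " + retentionDays + " days)");
            // Prints e.g. logs\2024_05_01.tests.log (retained 60 days)
        }
    }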
IMPORTANT: If you have more than one X Engine installed on the same server, they may NOT write to the same log file.
System out logging will go to the standard System out configuration of the JDK or Application Server. No specific configuration is required.
If you nominate a server as a Production with Forwarder, then you will have one additional (Forwarding) tab to configure:
It is extremely important to take care configuring these settings as an incorrect configuration can result in the creation of an open proxy.
The protected hosts and schemes refer to a list of hosts and their access scheme. This is a definitive list of requests that will be allowed to go through the proxy. For example, it could be configured as:
http://mysite.myhost.com
https://mysecuresite.myhost.com
This will ensure that only http requests can go to the mysite.myhost.com site and that only SSL requests can go to mysecuresite.myhost.com.
The request redirection option allows you full control over where incoming requests are redirected. This replaces the host file manipulation of earlier versions and also allows for port redirection and same server co-existence. Essentially, incoming requests for any given host can be redirected to any other host and/or port. The redirection is a list of hosts and schemes, followed by the ">" and the target host and port. For example, to redirect the site mysite.myhost.com to port 8080 on the same server, you would create an entry that reads:
http://mysite.myhost.com>http://mysite.myhost.com:8080
Composable Architecture Platform can be configured to be an SSL terminator by redirecting the protocol from https to http.
The allowed client IP addresses let you control where requests coming to the forwarding proxy are allowed to originate from. This is predominantly useful in ensuring that a proxy server set up for testing does not become an open proxy for the entire corporation to use to bypass internet controls. The default setting is to only allow access from the loopback address of 127.0.0.1.
The browser proxy port allows you to set up a proper browser level proxy that can be configured in Internet Explorer, Firefox, Chrome or other web browser. This type of proxy correctly manages how the browser connects to the forwarding proxy for the purpose of SSL connections. It is especially useful for configuring the browser for testing new rules against sites that do not have Composable Architecture Platform installed (refer to section Zero Installation Rules Testing).
The maximum size for cached objects determines how large objects are handled by the built-in proxy's accelerator cache. It is a performance setting and should only be modified by a qualified performance professional.
The maximum total client connections sets a limit on how many client connections to the proxy are allowed at any one point in time. It is a performance setting and should only be modified by a qualified performance professional.
The maximum client connections to one host determines how many client connections the proxy is allowed to make to a single host at any point in time. It is a performance setting and should only be modified by a qualified performance professional.
Close client connections to host enforces the closure of TCP/IP connections after each request. It is a performance setting and should only be modified by a qualified performance professional.
Clean cookie path is a feature required to ensure PHP sites operate correctly behind the proxy. For most sites this setting can remain on. However, if you experience cookie path problems, you can try setting this to off.
Trace enables a detailed level trace of every transaction going through the proxy. It is a performance setting and should only be modified by a qualified performance professional.
Use web proxy allows you to force the proxy to connect to other hosts using the same web proxy as the X Engine. This is predominantly useful if you are doing a "reverse protection" (that is: using the X Engine to manage sites external to the local network, such as social networking or other data sensitive sites).
Composable Architecture Platform is a licensed product, and the terms of your license are contained in a license key that you obtain from your supplier. The license key is rather long, so we suggest you copy and paste it directly into the server definition when you receive it rather than attempting to type it.
Once a valid license key is in place, the server will show the correct license terms on the server status screen as shown below:
An invalid license key does not prevent the server from being used, but the server will display INVALID if the license is invalid or missing. If your license has expired, you will see a bold red EXPIRED notification, but the product will not stop running.
It is your responsibility as the customer to ensure that you adhere to the license terms of your purchase. You may also be asked to provide your license key when obtaining product support.
Another alternative to managing users locally is to use SAML authentication, where an Identity Provider (IdP) is the entity providing the identities, including the ability to authenticate a user.
SAML authentication is set up manually by providing an access manager plugin in the console’s configuration.properties file. Please see Console server configuration below for more information.
In the SAML Identity provider (IdP) you need to specify the single sign on URL as:
You have the option of passing the following parameters along in the sign on:

Parameter | Values
UserType | Admin/User/Super/Security
UserLocale | Any valid locale. Default is en_US
UserTimeZone | Any valid time zone. Default is GMT
UserName | User full name. Default is the SAML ID
UserEmail | User email. Default is the SAML ID
UserRole (can be multiple) | Any valid role
UserUI | Classic/Portal. Defaults to the console default

One of UserType or UserRole MUST be provided. If a role is provided but no type, the type will be set to User.
Access rules are plug-ins used to alter the way the logon process for the console is managed for individual users. Examples of access rules are local system only and email one-time passwords.
Some of these plug-ins may require configuration. The following example shows the configuration for the email one-time password setup:
Once this Access Rule is enabled for a user, the logon experience changes and a new step is visible to the user just after they have performed the initial logon. This step allows the user to enter the one-time password that was emailed.
Access rules can be set against each user when they are created.
Each case manager has a number of core components that can be created through the console. This section lists the various parts and how they fit into the overall picture of the case manager.
Before you can define anything for a case manager, it must have a connection to a database. This is achieved using a JDBC connection. Select the Case Manager Setup section in the administration tree.
Make sure that the database exists, if necessary (the Derby-based demo database will auto-create it, but most other database systems will not). Then specify the database name and driver and click Create. The database will appear in the administration tree and the case manager management page will be presented.
From the previous page you can also upload a previously saved definition. Definitions are stored in XML files with the extension ".cms".
Once the database definition has been created, it will normally not contain any definitions at all (unless you are connecting to an existing case manager database). So, you will be presented with the following page:
At this point you can import an existing definition (often a good starting point to avoid beginning from scratch). If so, you will see all of the case manager components from that definition appear in the tree below the case manager. We will now cover those components in detail.
Central to the case manager is the concept of queues. Whenever a new case is created, it is assigned a task that in turn is connected to a work queue. Users of the case manager can organize their work and manage basic workflow by using the queues. For example, the default definition has queues for emergencies, customer calls, quality assurance, rules review and so on.
Through the user roles, queues can be designed to be auto-picked from or simply be a holding pattern for specific tasks. At times it may be relevant to create queues for each staff member or role, to allow these staff members (or a supervisor) to assign specific tasks to specific individuals.
To add or update a queue, select the Queues section under the name of the case manager in the Case Manager Setup section in the administration tree.
Queues have priorities (where the lowest number means the highest priority), so that an auto-pick always picks from the lowest numbered queue first.
As a good rule when assigning priorities to queues, make the lowest 100, the next one 200 and so on. This will allow you to later comfortably insert another queue into the priority chain (for example, at 150) without having to change too many queues.
Also notice the small “globe” icon next to the Default description. Clicking this icon allows you to edit the multi-lingual parts of a case manager definition. This concept is replicated for each case manager component that may require translation and clicking the icon presents you with a screen similar to this:
The languages shown are each of the languages available for the console.
Status codes are used to indicate whether a case has reached a conclusion or is still a work in progress. As a minimum, a blank status code MUST exist to indicate cases that are open. Other status codes are mainly used for statistical and reporting purposes.
Even if a case is closed, it is still possible to have active tasks against that case (for example quality assurance or rule reviews).
Since a case manager is multi-lingual, any list boxes showing a selection of values must have a translatable description associated with each list entry.
These list entries are used in fields.
Fields are associated with tasks and are used to contain structured or important mandatory information that must be captured for each task. For example, for a “Call Customer” task, it may be relevant to record the name of the person called and the phone number used.
Fields can either be mandatory or optional, contain an input field or a list and can have a validation rule. The following shows a free format input field for an email address, which in turn is validated using a regular expression. For more information on regular expressions, see the rules reference.
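As an illustration of such a validation (the actual expression shipped with the product may differ), a simple email pattern in Java could look like:

    import java.util.regex.Pattern;

    public class EmailFieldValidation {
        // Illustrative pattern only; real-world email validation rules vary
        private static final Pattern EMAIL =
                Pattern.compile("^[\\w.+-]+@[\\w-]+(\\.[\\w-]+)+$");

        public static void main(String[] args) {
            System.out.println(EMAIL.matcher("jane.doe@example.com").matches()); // true
            System.out.println(EMAIL.matcher("not-an-email").matches());         // false
        }
    }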
Alternatively, fields can contain a list of list entries:
The list values are the list entries created previously.
Selections refer to the name of actions that can be taken when a task is completed. These selections can have actions linked to them (see below).
A single selection can result in one or more actions being taken.
Actions are what add dynamic workflow to the case manager and allow for a predetermined, predictable case management experience. Actions are taken when a user makes a selection to complete a task (see below).
Actions also allow for a dynamic review of the performance of the user of the case management system. By setting a percentage of cases that needs to be reviewed, the case manager can randomly pick these cases out and assign them to a quality assurance queue for review.
Actions can also delay when the next task becomes due for action. This way, tasks that require a later follow up will automatically come to your attention when required, and then flow back into the normal queue priorities.
Tasks are the final part of the case manager setup. A task is a representation of something that needs to be done to move the case to its next step.
A given case can have more than one task pending. For example, you may have both quality assurance and rules review pending on the same case at the same time.
A task can have fields associated with it. These are the fields defined earlier. Each field serves as a means of collecting data relevant for that task.
Finally, each task has a number of selections. These selections represent the actions that can be performed once a given task has been completed. Any one of these selections can result in one or more actions (see Actions above).
Once you have completed the setup of your case manager definition, you can export it to an XML file. The file will appear under the Case Manager Setup with the same name as the case manager.
If you select it, you can download it for future reference, or to define another case manager.
Just like data files, and other files in repositories, case manager definition files are versioned automatically within the console and you can restore a previous version if required.
At times (if you are comfortable with XML files and you have a lot of definitions to enter), it can be quicker to edit the XML file directly and upload it to the console.
If you import a new definition file, be aware that it will update/add itself to any existing definitions and will not remove any existing entries.
The user roles can be used to set very specific authorities for the access of a user to the various parts of the case manager. Please see the Managing User Roles section for an in-depth discussion on roles.
The following shows a sample role setup:
This is a user role that shows a basic fraud analyst role. Some tasks (such as quality assurance and staff training) are omitted, as are certain queues. This distinction makes it possible, as an example, for a supervisor to manage those same tasks within the case manager, without the analyst being able to see the result of QA reviews or training requirements.
The following sections detail functions that are only available to administrative users. Please note that any of these functions that have security implications are tracked in the audit log.
The audit log provides a comprehensive audit of all changes to security sensitive objects within the Composable Architecture Platform console. The audit log contains information about security changes to users, roles, extensions, server definitions and more.
The following shows the search feature of the audit log:
The search result will provide details of every object modified and a summary of the changes as shown:
User roles are used to define the individual access rights of a standard user.
Very fine-grained authorities can be set in a user role, which can then be applied to any number of users.
Note: When you create a new server, you must update each role that will have access to that server to include the relevant authorities. If you do not, the role will automatically be excluded from accessing the new server.
Note: When you create a new repository, you must update each role that will have access to that repository to include the relevant authorities. If you do not, the role will automatically be excluded from accessing the new repository.
If you have case managers defined, they will show alongside the ability to manage which tasks and queues can be accessed under which circumstance. Please see “Working with Case Managers” earlier in this manual for more details.
The Composable Architecture Platform system has its own built in security and auditing. As a result, you need to manage the users of the system and their type. This is an administration task. Alternatively, you can manage users with LDAP. Please see the Authenticating via LDAP section for more information on LDAP configuration.
To create a new user, specify the user ID, full name, email, type, console view preference (classic/portal), role(s) and password. Only standard users are required to have a role assigned to them. Administrators and super users automatically have full access.
Note: Once a user’s password has been set, you can no longer see it or change it from within the application.
Administrators and super users essentially share the same abilities, with the exception that super users cannot administer user accounts. User administrators, in turn, can only administer user accounts and roles, and cannot perform any other functions unless these are specifically assigned via a role.
When the system is first installed, it automatically creates a user called admin with the password admin. We strongly urge you to change the password for this user immediately.
Composable Architecture Platform stores its passwords in a table alongside other user information. To ensure that no one can read or extract a user’s password, it is encrypted using the Triple-DES algorithm. The key to the encryption is the password itself. Essentially, this means that there is no simple way to decrypt a password. In fact, Composable Architecture Platform never decrypts a password. Instead, it encrypts the password entered by the user and compares the result of the encryption to the one stored in the database. If there is a match, the authentication is considered valid.
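The comparison approach can be sketched as follows. This is a minimal illustration only: it assumes a Triple-DES (DESede) key filled cyclically from the password bytes, whereas the product's actual key derivation is not documented here.

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    public class PasswordCheckSketch {
        // Encrypt the password using itself as the key; the ciphertexts are
        // compared, so the stored password never needs to be decrypted.
        static byte[] encrypt(String password) throws Exception {
            byte[] pw = password.getBytes(StandardCharsets.UTF_8);
            byte[] keyBytes = new byte[24];          // DESede requires a 24-byte key
            for (int i = 0; i < keyBytes.length; i++) {
                keyBytes[i] = pw[i % pw.length];     // assumed key derivation
            }
            Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "DESede"));
            return cipher.doFinal(pw);
        }

        public static void main(String[] args) throws Exception {
            byte[] stored = encrypt("s3cret");              // value kept in the user table
            byte[] attempt = encrypt("s3cret");             // value computed at logon
            System.out.println(Arrays.equals(stored, attempt)); // true -> valid logon
        }
    }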