The Tool Integration concept of RCE is used to integrate external tools, e.g. for calculations or simulations, into RCE and to use them as components in a workflow. The tools must fulfill these requirements:
The external tool must be callable via command line
It must have a non-interactive mode which is called via command line
Input for the tool must be provided through command line arguments or files
If these requirements are fulfilled, a configuration file can be created that is used for the integration.
If you use RCE with a graphical user interface, this can be done with the help of a wizard that guides you through the settings. This wizard can be found in the menu Tool Integration -> Integrate Tool.... Required fields are marked with an asterisk (*). When the wizard is finished and everything is correct, the integrated tool will automatically show up in the Workflow Editor palette.
The wizard has dynamic help, which is shown by clicking the question mark at the bottom left or by pressing F1. It will guide you through the pages of the wizard.
When executing an integrated tool, a certain directory structure is created in the chosen working directory. This structure depends on the options you have chosen in the integration wizard. The two options that matter are "Use a new working directory each run" and "Tool copying behavior".

Root Working Directory: This is the directory you choose in the "Tool Integration Wizard" as "Working Directory" on the "Launch Settings" page.
Config Directory: By default, the configuration file that may be generated by the tool integration is created in this directory. Configuration files can be generated from the properties defined for the tool on the "Tool Properties" page.
Input Directory: All inputs of type "File" and "Directory" will be copied here. They will have a subdirectory that has the same name as the name of the input (e.g. the input "x" of type "File" will be put into "Input Directory/x/filename").
Output Directory: All outputs of type "File" and
"Directory" can be written into this directory. After that, you can use the placeholder
for this directory to assign these outputs to RCE outputs in the post execution script.
To write, e.g., the output directory into an output "x" of type "Directory" the following line
in the post execution script would be required: ${out:x} = "${dir:output}"
Tool Directory: This is the directory where the actual tool is located. If the tool should not be copied, it will be exactly the same directory that you choose, otherwise it will be the same as the chosen directory but copied to the working directory.
Working Directory: The working directory is the location where all the other directories are created. If the option "Use a new working directory on each run" is disabled, it is always identical to the "Root Working Directory". Otherwise, a new directory (whose name is the run number) is created for each run and serves as the working directory of that run.
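Assuming "Use a new working directory on each run" is enabled and the tool is copied, the resulting layout might look like the following sketch (directory names besides the run number are illustrative):

```
Root Working Directory/      (chosen on the "Launch Settings" page)
└── 1/                       (working directory of run 1)
    ├── Config Directory/
    ├── Input Directory/
    │   └── x/               (one subdirectory per input of type File/Directory)
    ├── Output Directory/
    └── Tool Directory/      (copy of the tool, only if copying is enabled)
```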
When a component is created in the integration wizard, a configuration file is created.
All configuration files from the tool integration are stored in the directory
<profile folder>/integration/tools/
In this directory, the different kinds of integration are separated into one
subdirectory each. The common folder always exists.
In these subdirectories, the integrated tools are stored, again in one subdirectory per tool. The name of each tool's directory is the integration name of the tool.
If an integrated tool is copied to another RCE instance or another machine, the
tool's directory, containing a configuration.json and some optional files, must be
copied. It must be placed in the equivalent integration type directory of the
target RCE instance. After that, RCE automatically reads the new folder and, if
everything is valid, the tool is integrated right away.
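The resulting layout inside the profile folder might look like the following sketch (the tool names are illustrative; the cpacs subdirectory only exists if CPACS tools are integrated):

```
<profile folder>/integration/tools/
├── common/
│   └── MyTool/                 (directory name = integration name of the tool)
│       ├── configuration.json
│       └── ...                 (optional files, e.g. documentation)
└── cpacs/
    └── MyCpacsTool/
        └── configuration.json
```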
Deleting a tool folder that contains documentation can cause an error. If you run into this problem, first empty the documentation folder, then delete the now-empty documentation folder, and afterwards delete the tool folder.
The tools are executed via a command line call on the operating system, defined in the "Execution Script". When the tool has finished executing (with or without error), its exit code is handed back to the execution script and can be analyzed there. If nothing else is done in the script, the exit code is handed back to RCE. If the exit code is not "0", RCE assumes that the tool crashed and lets the component fail without executing the "Post Script". The option "Exit codes other than 0 is not an error" prevents the component from failing immediately. With this option enabled, the post script will be executed in any case, and the exit code from the tool execution can be read using the placeholder from "Additional Properties". The post script can then run any post-processing and either not fail the component, so the workflow continues as normal, or let the component fail after writing some debugging information, using the Script API call RCE.fail("reason").
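As an illustration, a post script combining these mechanisms might look like the following sketch. The placeholder name exitCode is hypothetical — copy the actual exit-code placeholder from the "Additional Properties" list in the wizard; the output placeholders follow the syntax shown above:

```
exit_code = int("${addProp:exitCode}")  # hypothetical placeholder name
if exit_code != 0:
    # write debugging information, then fail the component explicitly
    RCE.fail("Tool exited with code " + str(exit_code))
else:
    ${out:x} = "${dir:output}"
```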
Extending the common Tool Integration concept, the CPACS Tool Integration has some additional features.
Parameter Input Mapping (optional): Substitutes single values in the incoming CPACS content, based on an XPath configured at workflow design time as a dynamic input of the component
Input Mapping: Generates the tool input XML file as a subset of the incoming CPACS file XML structure, specified by a mapping file
Tool Specific Input Mapping (optional): Adds tool specific data to the tool input file, based on a mapping file and a data XML file
Output Mapping: Merges the content of the tool output XML file into the origin incoming CPACS file, based on a mapping file
Parameter Output Mapping (optional): Generates output values as single values of the CPACS result file, based on an XPath configured at workflow design time as a dynamic output of the component
Execution option to only run on changed input: If enabled, the integrated tool will only run when its input has changed. To detect this, the content of the generated tool input file is compared to the content from the last run. Additionally, the data on the static input channels is compared to the previous values.
All the features listed above can be configured in the tool integration wizard on the dedicated CPACS Tool Properties page.
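Conceptually, the "only run on changed input" option behaves like the following sketch. RCE's actual comparison is internal; the hashing used here is only illustrative:

```python
import hashlib

def should_run(tool_input_content, previous_digest):
    """Execute the tool only when the generated tool input differs from
    the previous run's input. Returns (run?, digest of current input)."""
    digest = hashlib.sha256(tool_input_content.encode("utf-8")).hexdigest()
    return digest != previous_digest, digest

run1, d1 = should_run("<toolInput><data><var1>1.0</var1></data></toolInput>", None)
run2, d2 = should_run("<toolInput><data><var1>1.0</var1></data></toolInput>", d1)
print(run1, run2)  # True False -- unchanged input skips the second run
```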
The mappings can be specified in XML or XSLT, as shown in the following examples. RCE differentiates between these methods based on the file extension (.xml or .xsl).
For XML mapping, the following mapping modes are supported (see the mapping mode definitions in the mapping examples below):
append: Elements in the target path that have no equivalent in the source path are retained and are not deleted. Otherwise the elements in the target path are replaced by the corresponding elements in the source path. Two elements in the source and target path are considered to be the same if they have the same element name, the same number of attributes and the same attributes with the same values.
delete: Before copying, all elements that are described by the target path are deleted in the target XML file. This is also the standard behavior if no mapping mode is explicitly set in a mapping rule.
delete-only: All elements that are described by the target path are deleted in the target XML file.
If a target element described by the target path is not available in the XML file, it is created including all of its parent elements.
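The element-equality rule used by the append mode can be sketched in a few lines. This is a conceptual illustration, not RCE's actual implementation:

```python
import xml.etree.ElementTree as ET

def same_element(a, b):
    """Element equality as used by the 'append' mode: same element name
    and the same attributes with the same values."""
    return a.tag == b.tag and dict(a.attrib) == dict(b.attrib)

x = ET.fromstring('<var unit="m" id="1"/>')
y = ET.fromstring('<var id="1" unit="m"/>')   # same attributes, different order
z = ET.fromstring('<var unit="mm" id="1"/>')  # different attribute value
print(same_element(x, y))  # True
print(same_element(x, z))  # False
```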
Example of an input or tool-specific XML mapping:
<?xml version="1.0" encoding="UTF-8"?>
<map:mappings xmlns:map="http://www.rcenvironment.de/2015/mapping" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <map:mapping mode="append">
        <map:source>/path/to/your/element</map:source>
        <map:target>/toolInput/data/var1</map:target>
    </map:mapping>
    <map:mapping mode="delete">
        <map:source>/path/to/your/element</map:source>
        <map:target>/toolInput/data/var2</map:target>
    </map:mapping>
    <map:mapping mode="delete-only">
        <map:target>/toolInput/data/var3</map:target>
    </map:mapping>
    <map:mapping>
        <map:source>/path/to/your/element</map:source>
        <map:target>/toolInput/data/var4</map:target>
    </map:mapping>
    <xsl:for-each select="$sourceFile/result/cases/case">
        <map:mapping mode="delete">
            <map:source>/path/to/your/case[<xsl:value-of select="position()" />]/element</map:source>
            <map:target>/toolInput/data/condition[<xsl:value-of select="position()" />]/var</map:target>
        </map:mapping>
    </xsl:for-each>
</map:mappings>
Example of an input or tool-specific XSLT mapping:
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="cpacs_schema.xsd">
    <xsl:output method="xml" media-type="text/xml" />
    <xsl:template match="/">
        <toolInput>
            <data>
                <var1>
                    <xsl:value-of select="/path/to/your/element" />
                </var1>
            </data>
        </toolInput>
    </xsl:template>
</xsl:stylesheet>
Example of an output XML mapping:
<?xml version="1.0" encoding="UTF-8"?>
<map:mappings xmlns:map="http://www.rcenvironment.de/2015/mapping" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <map:mapping>
        <map:source>/toolOutput/data/result1</map:source>
        <map:target>/path/to/your/result/element</map:target>
    </map:mapping>
</map:mappings>
Example of an output XSLT mapping:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" exclude-result-prefixes="xsi">
    <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
    <!-- Define variable for toolOutput.xml -->
    <xsl:variable name="toolOutputFile" select="'./ToolOutput/toolOutput.xml'"/>
    <!-- Copy the complete source file to the result file -->
    <xsl:template match="@* | node()">
        <xsl:copy>
            <xsl:apply-templates select="@* | node()"/>
        </xsl:copy>
    </xsl:template>
    <!-- Modify the value of an existing node -->
    <xsl:template match="/path/to/your/result">
        <element>
            <xsl:value-of select="document($toolOutputFile)/toolOutput/data/result1"/>
        </element>
    </xsl:template>
</xsl:stylesheet>
Please make sure to use the proper namespace for map (xmlns:map="http://www.rcenvironment.de/2015/mapping") in XML mapping files, and the proper namespace for xsl (xmlns:xsl="http://www.w3.org/1999/XSL/Transform") in both types of mapping files.
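To sanity-check a mapping file outside of RCE, you could verify the root namespace with a short Python script like the following sketch (this only checks the namespace declaration, not the mapping semantics):

```python
import xml.etree.ElementTree as ET

MAP_NS = "http://www.rcenvironment.de/2015/mapping"

def uses_mapping_namespace(xml_text):
    """Return True if the root element is map:mappings in the expected namespace."""
    root = ET.fromstring(xml_text)
    return root.tag == "{%s}mappings" % MAP_NS

sample = (
    '<map:mappings xmlns:map="http://www.rcenvironment.de/2015/mapping">'
    '<map:mapping>'
    '<map:source>/toolOutput/data/result1</map:source>'
    '<map:target>/path/to/your/result/element</map:target>'
    '</map:mapping>'
    '</map:mappings>'
)
print(uses_mapping_namespace(sample))  # True
```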
The figure below illustrates how the additional features are used in the run process of a user-integrated CPACS tool.
Start RCE as Client
Open the Tool Integration Wizard by clicking Integrate Tool... in the File menu.
You will always find further help by clicking the ? in the bottom left corner of each page of the wizard or by pressing F1.
Choose the option Create a new tool configuration from a template.
The CPACS templates delivered with RCE are designed to match the conventions of the old CPACS tool wrapper (respectively ModelCenter tool wrapper). Most of the properties are preconfigured and do not need to be changed.
Select one of the CPACS templates. Click Next.
Fill in the Tool Description page. Click Next.
On the Inputs and Outputs page you will find preconfigured static inputs and outputs that match the old tool wrapper conventions. If your tool needs additional inputs or outputs, feel free to configure them. Click Next.
Skip the page Tool Properties by clicking Next since it is not relevant for tools that match the conventions of the old CPACS tool wrapper.
Add a launch setting for the tool by clicking the Add button on the Launch Settings page. Configure the path of the CPACS tool and fill in a version, then click OK. If you would like to allow users of your tool to choose that the temporary directory is not deleted after workflow execution, check the property Never delete working directory(ies). Keeping the working directory can be very useful for debugging purposes, at least if the users have access to the server's file system. However, this option can result in disk space issues, as the required space grows with each workflow execution. It is recommended to check this option while integrating the tool and to uncheck it before publishing the tool. Click Next.
The CPACS Tool Properties are preconfigured to match the folder structure defined for the old CPACS tool wrapper. In most cases you do not have to change this configuration. If you are using XSLT mapping, please select the corresponding mapping files. If your tool does not work with static tool specific input, please deselect this property. Click Next.
In the Execution command(s) tab on the Execution page, you need to define the execution command itself as well as optional pre- and post-commands. Commands are processed sequentially, line by line. An example of a typical Windows command including pre- and post-commands looks like the following:
rem pre-command
pre.bat
rem tool-execution
YourTool.exe ToolInput/toolInput.xml ToolOutput/toolOutput.xml
rem post-command
post.bat
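On Linux, assuming the tool is started the same way, the equivalent commands might look like this (the script names are illustrative):

```
# pre-command
./pre.sh
# tool-execution
./YourTool ToolInput/toolInput.xml ToolOutput/toolOutput.xml
# post-command
./post.sh
```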
Click Save and activate and your tool will appear immediately in the palette, ready to use.
If not already done, do not forget to publish your tool (cf. Section 3.5, “Tool publishing and authorization”) after testing it locally. To check whether your tool was successfully published to the RCE network, open the Network View tab at the bottom and check Published Components after expanding the entry of your RCE instance.
The way to integrate a CPACS tool on a server running RCE in headless mode is as follows: Perform the steps to integrate a CPACS tool on a client instance and make sure that the path of the CPACS tool configured on the Launch Settings page (step 8) matches the absolute tool path on your server system. Afterwards, you will find the configuration files inside your RCE profile folder at the following location:
/integration/tools/cpacs/[YourToolName]
Copy the folder [YourToolName] to the same location inside the
profile folder of your headless server instance. Use the "auth" commands
(cf. Section 3.5, “Tool publishing and authorization”) to publish your tool. If
the server instance is already running, your tool will be available immediately
after publishing.
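The copy step can also be scripted. The sketch below uses illustrative temporary paths in place of the real client and server profile folders — substitute your actual locations:

```python
import pathlib
import shutil
import tempfile

# Illustrative paths only -- substitute the real profile folders.
base = pathlib.Path(tempfile.mkdtemp())
client_profile = base / "client_profile"
server_profile = base / "server_profile"

# The integrated CPACS tool folder on the client (with its configuration.json).
tool_dir = client_profile / "integration" / "tools" / "cpacs" / "YourToolName"
tool_dir.mkdir(parents=True)
(tool_dir / "configuration.json").write_text("{}")

# Copy the whole tool folder into the equivalent location of the server profile.
target = server_profile / "integration" / "tools" / "cpacs"
target.mkdir(parents=True)
shutil.copytree(tool_dir, target / "YourToolName")

print((target / "YourToolName" / "configuration.json").is_file())  # True
```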
In this section we describe how to integrate a workflow containing multiple components as a component itself. This feature is currently experimental and not recommended for productive use.
Consider a disciplinary tool that computes the value of some function f_c(x) for some
parameter c and some input value x and assume that the user has already integrated this
tool as the component DiscComp. Furthermore assume that in multiple
workflows the user would like to fix some value for c and find a minimum of f_c. She
implements this use case via the structure shown in the following figure.
In that workflow, the user opted to provide the parameter c via an input provider, while she used an optimizer to determine the optimal value of x. That optimal value is then written via an output writer. The user now wants to use this workflow as part of other, more complex workflows.
One approach would be to simply copy the part of the workflow that implements the
actual computation (i.e., the components Optimizer and
DiscComp) and paste it whenever she requires this functionality in
other workflows. This approach, however, is neither scalable nor maintainable: While this
example requires copying only two components, one can easily imagine situations in
which the functionality to be copied is implemented via dozens of components, which
leads to severe cluttering of the workflows in which the functionality is used.
Furthermore, if the user changes the original workflow, e.g., if she uses another
algorithm for the optimization, she would have to re-copy the changed parts to all
workflows that use the original parts.
Instead of manually copying and pasting, the user may instead opt to integrate the
workflow shown in the above figure as a tool to be used in other workflows. This allows
her to hide the details of the implementation (i.e., the use of an optimizer and of
DiscComp) from users of her component and to easily update that
implementation.
In the following, we first show how to integrate an existing workflow as a component before detailing the technical backgrounds of executing a workflow as a component. Finally, we discuss caveats and common questions about this feature. In all these sections, we will refer to an "inner" workflow and an "outer" workflow. These refer to the workflow that is integrated as a component and to the workflow in which that component is used later on, respectively.
Before integrating the workflow shown above, we assume that you have already
constructed a workflow that implements the behavior that you want to provide to
other users as a component. Moreover, we assume that this workflow contains some
input providers that feed initial data into the workflow and some output writers
that persist the results of the computation implemented by the workflow. In the
figure above, these input providers and output writers are situated to the left of
the component DiscComp and to the right of the optimizer, respectively.
Finally, the workflow to be integrated must not contain any placeholders (cf. Section 3.3.3.1, “Configuration Placeholder Value Files” ). Otherwise user input would be required at
execution time in order to assign values, which would prevent automated execution of
the integrated workflow.
You can easily determine whether your workflow contains placeholders by opening the workflow execution wizard (either via the green arrow in the upper bar in the GUI or via the shortcut Ctrl + Shift + X). If there exist any placeholders that are to be assigned values before the start of the execution, the wizard will show a second page that displays all such placeholders. If no such page exists, the workflow does not contain placeholders and is ready for integration as a component.
Integrating a workflow consists of nothing more than determining endpoints of
components in the inner workflow that will be exposed to the outer workflow by the
resulting component. In this case, we opt to expose the input c of
DiscComp as well as the output x_optimal of
Optimizer. In general, inputs will be exposed as inputs on the
component in the outer workflow, while outputs will be exposed as outputs. It is not
possible to expose an input of a component in the inner workflow as an output to the
outer workflow, or vice versa.
In order to integrate the above workflow as a component, we first remove the input providers and output writers that handle the inputs and outputs that are to be passed into the inner workflow by the outer workflow. In the example above, we simply deactivate the two components (e.g. via the keyboard shortcut Ctrl + D) and obtain the workflow shown in the following figure.
While previously, all endpoints of all components were connected, now there exist
two unconnected endpoints: The input c of DiscComp as well as the
output x_optimal of Optimizer. The workflow is now ready
for integration as a component.
Integration of workflows is performed via the command console and, in particular,
via the command wf integrate. This command has the following general
form:
wf integrate [-v] <component name> <absolute path to .wf file> [<exposed endpoint definition>...]
The optional argument -v enables verbose mode. If this parameter is
set, the command outputs detailed information about the endpoints that are exposed
to the calling workflows. This does not change the behavior of the command.
The parameter component name determines the name of the component
that is integrated, i.e., the name that will appear in the palette and in the
workflow editor. Since the purpose of the new component in our example is to
determine some optimal parameter x, we opt to call the component
FindOptimalX.
The parameter absolute path to .wf file is self-explanatory and
denotes the path on your local file system where the workflow file describing the
workflow to be integrated is located. In our example we assume that the workflow
file is located at /home/user/workflow.wf.
Recall that you can obtain the absolute path to any workflow file in the project explorer via a right click on the workflow and selecting Copy Full Path.
Furthermore, recall that parameters in the command console are separated
by spaces unless the parameter is surrounded by quotation marks. Hence, if
the path to your workflow contains spaces, enclose it in quotation marks.
Finally, recall that backslashes must be escaped, i.e., the path C:\My
Folder would have to be entered as "C:\\My
Folder".
Each succeeding parameter is interpreted as the definition of an exposed endpoint. Each such definition is of the following form:
--expose <component name>:<internal endpoint name>:<exposed endpoint name>
Here, component name refers to the name of the component in the inner
workflow whose endpoint is to be exposed. The parameter internal endpoint
name denotes the name of the endpoint of the component that is to be
exposed, while the parameter exposed endpoint name determines the name
of the endpoint on the resulting component. Make sure that each exposed
endpoint name is unique within the context of the resulting component, as
the behavior of a component with multiple inputs or outputs of the same name is undefined.
Instead of the names of the component and the endpoints that are displayed in the workflow editor, you may instead use the internal identifiers of these nodes and endpoints, respectively. These are not currently shown in the GUI of RCE but can, e.g., be determined by inspecting the workflow file via some text editor. While this should not be necessary when integrating workflows manually, it may prove useful when automating the creation and integration of workflows.
Recall that you do not need to specify whether the endpoint is exposed as an input or as an output, but that the underlying endpoint determines the configuration of the endpoint on the resulting component: Inputs are only ever exposed as inputs, whereas outputs are only ever exposed as outputs. This principle extends to the configuration of inputs: If the endpoint on the component in the inner workflow is, e.g., configured be required for component execution and to only expect a constant value, then the endpoint on the resulting component is configured analogously.
Furthermore, recall that we want to expose the input c of
DiscComp as well as the output x_optimal of
Optimizer. We want the former to retain its original name,
while we want to expose the latter as optimalX. In order to
integrate the example workflow prepared above as a component, we thus issue the
following
command:
wf integrate FindOptimalX "/home/user/workflow.wf" --expose DiscComp:c:c --expose Optimizer:x_optimal:optimalX
When enabling verbose mode via the switch -v, RCE writes the
following output:

Input Adapter : c --[Float,Constant,Required]-> c @ 03b5b758-3b44-4a53-b832-be9991321285
Output Adapter: x_optimal @ 402cac5e-2206-48cc-a62f-803bd320a15a --[Float]-> optimalX
where 03b5b758-3b44-4a53-b832-be9991321285 and
402cac5e-2206-48cc-a62f-803bd320a15a denote the IDs of the
component DiscComp and of Optimizer, respectively.
Once the execution of the command has finished, a new component named
FindOptimalX with a single input named c and a single
output named optimalX will be available for use in all other
workflows.
Recall that each workflow that RCE executes is controlled by some particular instance, i.e., by the workflow controller. Since executing an integrated workflow executes the underlying workflow, RCE requires a workflow controller for doing so. That workflow controller may or may not be the same as the one executing the outer workflow. Currently, the instance executing the component serves as the workflow controller for the execution of the inner workflow. That instance will execute a copy of the workflow that has been created when the workflow was integrated, i.e., changes made to the workflow after integration will have no effect on the behavior of the integrated component.
Furthermore, since the publishing instance serves as workflow controller for the
execution of the integrated workflow, the execution of the integrated workflow will
show up in the data management of the publishing instance under the name
<component name> running as component '<node name>' of workflow
'<outer workflow>', where <component name> denotes the
name under which the publishing instance published the component, <node
name> denotes the name under which the component is used in the outer
workflow, and <outer workflow> denotes the name under which the
calling workflow is stored in the data management of its workflow controller.
Nesting workflows, i.e., integrating workflows as components that already contain workflows integrated as components, can easily lead to unreadable names of workflow executions that are stored in the data management. This may significantly inhibit manual inspection of the resulting data. Keep this in mind when designing workflows.
Technically, before starting the integrated workflow, the workflow controller injects two additional components into the workflow, a so-called input adapter and a so-called output adapter. These components are not accessible to the user when constructing workflows. They are only used to transport data from the inputs of the component on the side of the calling workflow to the exposed inputs, as well as data from the exposed outputs of the workflow to the outputs of the component in the calling workflow.
Upon execution of the integrated component in the calling workflow, the instance
publishing the component first injects the input and the output adapter as described
above. It subsequently executes the workflow and collects the results via the output
adapter. The execution takes place as if the workflow were executed using the
command wf run.
Since the integration of workflows as components is currently under development and only released as a beta feature, there are some caveats and known issues that you should be aware of. We have alluded to these limitations and caveats throughout this section, but briefly list them here again for the sake of readability.
Workflow files are "frozen" at integration time. Changes to an integrated workflow file after integration do not change the behavior of the component. If you want to apply changes to the workflow file to the component, you will have to re-integrate the workflow.
Currently, no placeholder files (cf. Section 3.3.3.1, “Configuration Placeholder Value Files”) are supported, i.e., the integrated workflow must not contain placeholders. Moreover, the workflow is not checked for placeholders at integration time; instead, the execution of the component will fail at execution time.
The user cannot specify a version of the integrated component. If there is
demand, we will add the command line switch --version in order
to allow the user to have multiple versions of the same workflow integrated
simultaneously. Also, the user can currently not specify an individual icon
to be used for the integrated component. This may also be added in future
versions.
If some adapted output is written to multiple times during a single run of the integrated workflow, only the final values written to that output are forwarded to the calling workflow.
Due to this new implementation, there is duplicated functionality between the
command wf integrate and the command ra-admin
wf-publish. After the full release of the integration of
workflows as components, the latter command will be deprecated and its output
replaced by a message asking the user to use wf integrate
instead.
If the underlying workflow is paused during execution, this pause state is not reflected in the calling workflow. Instead the component is shown as running. Similarly, if the integrated workflow includes some result verification and the results are rejected, the component simply fails instead of indicating the rejection of results.
Component names passed to the command wf integrate are not
checked to satisfy the rules on component names. This will be fixed before
release and integration of a component with an invalid name will be refused
with an informative error message.
Furthermore, there are some common questions that may occur in the context of integrating a workflow as a component. We collect and answer these questions here again for the sake of readability.
Where is the integration of a workflow as a component stored? It is stored
in a profile in the folder
integration/components/workflows.
Is it possible to execute the component after the original workflow file has been changed or deleted? Yes, this is possible, since the integration folder contains a copy of the workflow file, produced at integration time. Also, you can publish integrated workflows to other instances just as you can publish common tools.
What happens if a component used in the inner workflow is unavailable? In that case, the component is still available as long as the instance publishing it is available. The availability of the components contained in the integrated workflow is only checked at execution time. If a component is unavailable at that time, the execution of the component fails.