Basic Components
Nodes work with data records. They can produce, fetch, send and collect data, or organize the flow. Each node has at least two parameters: `Name` and `Description`. The name has to be unique within a scenario; the description is free-form text of your choice.
Most nodes have an input and at least one output flow; source and sink nodes are the exception.
Sinks and filters can be disabled by selecting the `Disable` checkbox.
Variable
A Variable component is used to declare a new variable. In its simplest form, a variable declaration looks like the example below. Once a data record has been read from a data source, the `#input` variable holds the record's value. The record's (`#input`) value is then assigned to the newly declared `myFirstVariable` variable.
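A minimal sketch of such a declaration, as it could be filled in on the node's configuration form (the variable name `myFirstVariable` comes from the example above):

```
Variable Name: myFirstVariable
Expression:    #input
```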
As you can see in the `variable` configuration form below, Nussknacker has inferred the data type of the `#input` variable. Nussknacker can do this based on the information available from the preceding components.
In the next example, the `#input` variable is used to create an expression returning a boolean value. If the input source contains JSON objects and they contain an `operation` field, the value of the field can be obtained using the following pattern: `#input.operation`
Note that internally Nussknacker converts the JSON object into a SpEL record.
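Building on that pattern, a boolean expression over such a field might look like this (the compared value `"SELL"` is an illustrative assumption, not part of any fixed schema):

```
#input.operation == "SELL"
```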
RecordVariable
The specialized `record-variable` component can be used to declare a record variable (a JSON object). The same outcome can be achieved with a plain `Variable` component; just make sure to write a valid SpEL expression.
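As a sketch, a plain `Variable` component could declare the same kind of record with a SpEL record literal; the field names and values here are illustrative:

```
{operation: "SELL", value: 100}
```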
Filter
Filters let through records that satisfy a filtering condition.
You can additionally define a false sink. Records from the `source` which meet the filter's condition are directed to the `true sink`, while the remaining records end up in the `false sink`.
The Expression field should contain a SpEL expression defining the filtering condition; it must evaluate to a boolean value.
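For example, a filtering condition on a numeric field might be written as follows (the `value` field and the threshold are illustrative assumptions):

```
#input.value > 100
```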
Choice
Choice is a more advanced variant of the Filter component: instead of a single filtering condition, you can define multiple conditions in a specified order. Incoming records are distributed among the output branches according to the filtering conditions configured for those branches.
After a record leaves the `source` and arrives at the `choice` node, the record's attribute values are tested against each of the defined conditions. If `#input.color` is `blue`, the record ends up in the `blue sink`. If `#input.color` is `green`, the record is sent to the `green sink`. For every other value, the record is sent to the `sink for others`, because the condition `true` is always satisfied.
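The conditions from the example above could be configured as follows, listed in their order of evaluation; each line is annotated with the branch it routes to:

```
#input.color == "blue"     (routed to blue sink)
#input.color == "green"    (routed to green sink)
true                       (routed to sink for others)
```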
Conditions are evaluated in the order shown in the configuration form, top to bottom; you can change this order using drag & drop. The order is also visible on the designer graph as a number in the description of each edge (the arrow connecting the nodes). Be aware that the layout button can change the displayed order of the nodes, but it has no influence on the order of evaluation.
Split
The Split node logically splits processing into two or more parallel branches. Each branch receives all data records and processes them independently and in parallel.
In the Request - Response processing mode you can use this feature to parallelize and hence speed up processing. You must use a sequence of Union and Collect nodes to merge the parallelly executed branches and collect their results. A discussion of a Request - Response scenario with multiple branches can be found here. In the Streaming processing mode, the most typical reason for using a Split node is to define dedicated logic and a dedicated sink for each branch.
Example (Streaming processing mode): every record from the `source` goes to both `sink 1` and `sink 2`.
The Split node doesn't have any additional parameters.
ForEach
The `for-each` component transforms an incoming event into N events, where N is the number of elements in the Elements list.
This node has two parameters:
- Elements - list of values over which to loop. It can contain both fixed values and expressions evaluated during execution.
- Output Variable Name - the name of the variable to which the current element's value will be assigned.
For example, when:
- Elements is `{"John", "Betty"}`
- Output Variable Name is `outputVar`,

then two events will be emitted, with `#outputVar` equal to `John` for the first event and `Betty` for the second.
The `#input` variable is available downstream of the `for-each` node.
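The Elements list can mix fixed values with expressions evaluated at runtime; for example (the `#input.name` field is an illustrative assumption about the input record):

```
{"John", "Betty", #input.name}
```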
Union
Union merges multiple branches into one branch.
In the Streaming processing mode events from the incoming branches are passed to the output branch without an attempt to combine or match them.
In the Request - Response processing mode, only one response sink can return a value. If you have parallel processing branches, the Union node is used to merge them, and the Collect node is then used to collect the results of processing from each of the merged branches. Check Introduction to Scenario Authoring for details on how to interpret the scenario graph in different processing modes.
The `#input` variable will no longer be available downstream of the Union node; instead, a new variable, defined in the Union node, will be available.
Branch names visible in the node configuration form are derived from the names of the nodes preceding the Union node.
Example:
Entry fields:
- Output Variable Name - the name of the variable containing the result of the merge (replacing previously defined variables, in particular `#input`).
- Output Expression - there is one expression for each of the input branches. When an event arrives from a particular input branch, the expression defined for that branch is evaluated and passed to the output branch. The expressions defined for the respective branches need to be of identical data type. In the example above, it is always a record containing the fields `branchName` and `value`.
Note that the `#input` variable used in the Output Expression field refers to the content of the respective incoming branch.
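As a sketch, the Output Expressions for two incoming branches might look like this (the branch and field names are illustrative); note that both evaluate to a record of the same type:

```
branch "blue filter":  {branchName: "blue",  value: #input.value}
branch "green filter": {branchName: "green", value: #input.value}
```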