# Deployment Manager configuration
Deployment Manager deploys scenarios from the Designer to the engine on which they are processed. Check the configuration areas to understand where the Deployment Manager configuration should be placed in the Nussknacker configuration.
Below you can find a snippet of Deployment Manager configuration.
```
deploymentConfig {
  type: "flinkStreaming"
  restUrl: "http://localhost:8081"
  # additional configuration goes here
}
```
The `type` parameter determines the engine to which the scenario is deployed. It is set in the minimal configuration file (Docker image, binary distribution) and in the Helm chart, so you will not need to set it on your own.
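For a non-Flink example, the section could look like the sketch below. It assumes the `lite-k8s` type name of the Kubernetes Lite engine described further down; verify the exact value against your distribution:

```
deploymentConfig {
  # Assumed type name for the Kubernetes native Lite engine
  type: "lite-k8s"
  mode: "streaming"
}
```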
## Kubernetes native Lite engine configuration
Please check the high-level Lite engine description before proceeding to the configuration details.
Please note that the K8s Deployment Manager has to be run with properly configured K8s access. If you install the Designer in a K8s cluster (e.g. via the Helm chart), this comes out of the box. If you want to run the Designer outside the cluster, you have to configure `.kube/config` properly.
Except for the `servicePort` configuration option, all remaining configuration options apply to both the streaming and request-response processing modes.
The table below contains configuration options for the Lite engine. If you install the Designer with Helm, you can use the Helm values override mechanism to supply your own values for these options; as the result of Helm template rendering, a "classic" Nussknacker configuration file will be generated.
If you install the Designer outside the K8s cluster, the required changes should be applied under the `deploymentConfig` key, like any other non-K8s Nussknacker configuration.
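As a sketch, a Helm values override could look like the following. The top-level key under which the chart exposes `deploymentConfig` is an assumption here; check your chart version's values.yaml for the actual path:

```yaml
# Hypothetical values override for the Nussknacker Helm chart;
# the exact key path may differ between chart versions.
deploymentConfig:
  scalingConfig:
    tasksPerReplica: 8
```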
Parameter | Type | Default value | Description |
---|---|---|---|
mode | string | | Processing mode: either streaming or request-response |
dockerImageName | string | touk/nussknacker-lite-runtime-app | Runtime image (please note that it's not touk/nussknacker - which is designer image) |
dockerImageTag | string | current nussknacker version | |
scalingConfig (Streaming processing mode) | {tasksPerReplica: int} | { tasksPerReplica: 4 } | see below |
scalingConfig (Request - Response processing mode) | {fixedReplicasCount: int} | { fixedReplicasCount: 2 } | see below |
configExecutionOverrides | config | {} | see below |
k8sDeploymentConfig | config | {} | see below |
nussknackerInstanceName | string | {?NUSSKNACKER_INSTANCE_NAME} | see below |
logbackConfigPath | string | {} | see below |
commonConfigMapForLogback | string | {} | see below |
ingress | config | {enabled: false} | (Request-Response only) see below |
servicePort | int | 80 | (Request-Response only) Port of service exposed |
scenarioStateCaching.enabled | boolean | true | Enables scenario state caching in scenario list view |
scenarioStateCaching.cacheTTL | duration | 10 seconds | Time-to-live for scenario state cache entries |
scenarioStateIdleTimeout | duration | 3 seconds | Idle timeout for fetching scenario state from K8s |
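Putting several of the options above together, a request-response `deploymentConfig` section might look like this (all values are illustrative, and the `lite-k8s` type name is an assumption to verify against your distribution):

```
deploymentConfig {
  type: "lite-k8s"                     # assumed type name for the K8s Lite engine
  mode: "request-response"
  dockerImageTag: "1.3.0"              # defaults to the current Nussknacker version
  scalingConfig: { fixedReplicasCount: 2 }
  servicePort: 80
  scenarioStateCaching: { enabled: true, cacheTTL: "10 seconds" }
}
```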
### Customizing K8s deployment resource definition
By default, each scenario is deployed as the following K8s deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    nussknacker.io/scenarioVersion: |-
      {
        "versionId" : 2,
        "processName" : "DetectLargeTransactions",
        "processId" : 7,
        "user" : "jdoe@sample.pl",
        "modelVersion" : 2
      }
  labels:
    nussknacker.io/nussknackerInstanceName: "helm-release-name"
    nussknacker.io/scenarioId: "7"
    nussknacker.io/scenarioName: detectlargetransactions-080df2c5a7
    nussknacker.io/scenarioVersion: "2"
spec:
  minReadySeconds: 10
  selector:
    matchLabels:
      nussknacker.io/scenarioId: "7"
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        nussknacker.io/scenarioId: "7"
        nussknacker.io/scenarioName: detectlargetransactions-080df2c5a7
        nussknacker.io/scenarioVersion: "2"
      name: scenario-7-detectlargetransactions
    spec:
      containers:
        - env:
            - name: SCENARIO_FILE
              value: /data/scenario.json
            - name: CONFIG_FILE
              value: /opt/nussknacker/conf/application.conf,/runtime-config/runtimeConfig.conf
            - name: DEPLOYMENT_CONFIG_FILE
              value: /data/deploymentConfig.conf
            - name: LOGBACK_FILE
              value: /data/logback.xml
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: touk/nussknacker-lite-runtime-app:1.3.0 # filled with dockerImageName/dockerImageTag
          livenessProbe:
            httpGet:
              path: /alive
              port: 8080
              scheme: HTTP
          name: runtime
          readinessProbe:
            failureThreshold: 60
            httpGet:
              path: /ready
              port: 8080
              scheme: HTTP
            periodSeconds: 1
          volumeMounts:
            - mountPath: /data
              name: configmap
      volumes:
        - configMap:
            defaultMode: 420
            name: scenario-7-detectlargetransactions-ad0834f298
          name: configmap
```
You can customize it with the `k8sDeploymentConfig` settings, e.g. add your own volumes, change the deployment strategy, add a custom label or an environment variable to the container, or add a custom sidecar container:
```
spec {
  metadata: {
    labels: {
      myCustomLabel: addMeToDeployment
    }
  }
  containers: [
    {
      # `runtime` is the default container executing the scenario
      name: runtime
      env: [
        {
          name: CUSTOM_VAR
          value: CUSTOM_VALUE
        }
      ]
    },
    {
      name: sidecar-log-collector
      image: sidecar-log-collector:latest
      command: ["command-to-upload", "/remote/path/of/flink-logs/"]
    }
  ]
}
```
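In the Nussknacker configuration file such a fragment sits under the `k8sDeploymentConfig` key of `deploymentConfig`, e.g.:

```
deploymentConfig {
  k8sDeploymentConfig {
    spec {
      metadata: {
        labels: { myCustomLabel: addMeToDeployment }
      }
    }
  }
}
```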
This config will be merged into the final K8s deployment resource definition. Please note that you cannot override names or labels configured by Nussknacker.
### Overriding configuration passed to runtime
In most cases, the model configuration values passed to the Lite engine runtime are the ones from the `modelConfig` section of the main configuration file. However, there are two exceptions to this rule:
- there is an application.conf file in the runtime image, which is used as an additional source of certain defaults.
- you can override the configuration coming from the main configuration file; the paragraph below describes how to use this mechanism.
In some circumstances you want different configuration values to be used by the Designer and by the runtime, e.g. different accounts/credentials should be used in the Designer (for schema discovery, tests from file) and in the runtime (for production use). For those cases you can use the `configExecutionOverrides` setting:
```
deploymentConfig {
  configExecutionOverrides {
    special_password: "sfd2323afdf" # this will be used in the Runtime
  }
}
modelConfig {
  special_password: "aaqwmpor909232" # this will be used in the Designer
}
```