Version: 1.14

Installation

Nussknacker relies on several open source components, like Kafka and Grafana (or, optionally, Flink), which need to be installed together with Nussknacker. This document focuses on the configuration of Nussknacker and its integrations with those components; please refer to their respective documentation for details on their optimal configuration.

Nussknacker (both the binary package and the docker image) is published in two versions: built with Scala 2.12 and with Scala 2.13. As of now, Flink does not support Scala 2.13 (see the FLINK-13414 issue), so using Nussknacker built with Scala 2.13 requires some tweaks in the Flink installation. Nussknacker built with Scala 2.12 works with Flink out of the box.

Docker based installation

Nussknacker is available at Docker Hub. You can check an example usage with docker-compose in the docker directory of the Nussknacker Quickstart repository.

Please note that while you can install the Designer with plain Docker (e.g. with docker-compose) with the Lite engine configured, you still need a configured Kubernetes cluster to actually run scenarios in this mode; we recommend the Helm installation for that mode.

If you want to try the Streaming processing mode locally with plain Docker and the embedded engine, just run:

docker run -it --network host -e KAFKA_ADDRESS=localhost:3032 -e SCHEMA_REGISTRY_URL=http://localhost:3082 touk/nussknacker:latest

Note: --network host works only on Linux and is used here to connect to an existing Kafka/Schema Registry. On other operating systems you have to make them accessible from the Nussknacker container in a different way (e.g. start Kafka/Schema Registry and Nussknacker in a single Docker network).

If you want to see Nussknacker in action without Kafka, using the embedded Request-Response processing mode (scenario logic is exposed with a REST API), run:

docker run -it -p 8080:8080 -p 8181:8181 touk/nussknacker:latest

After it starts, go to http://localhost:8080 and log in using the credentials admin/admin. REST endpoints of deployed scenarios will be exposed at http://localhost:8181/scenario/<slug>. The slug is defined in scenario Properties; by default it is the scenario name.
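The single Docker network approach mentioned in the note above can be sketched with docker-compose. The service names, images and Kafka settings below are illustrative, not a tested setup; see the Nussknacker Quickstart repository for a complete configuration:

```yaml
# Hypothetical minimal docker-compose sketch: Kafka, Schema Registry and Nussknacker
# share the default compose network, so containers reach each other by service name.
services:
  kafka:
    image: bitnami/kafka:latest              # illustrative image choice
  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    environment:
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
  designer:
    image: touk/nussknacker:latest
    ports:
      - "8080:8080"
    environment:
      - KAFKA_ADDRESS=kafka:9092             # service name instead of localhost
      - SCHEMA_REGISTRY_URL=http://schema-registry:8081
```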

You can find more information at Docker Hub.

Base Image

As a base image we use eclipse-temurin:11-jre-jammy. See the Eclipse Temurin page on Docker Hub for more details.

Container configuration

For basic usage, most things can be configured using environment variables. In other cases, you can mount a volume with your own configuration file. See the configuration section for more details. NUSSKNACKER_DIR points to /opt/nussknacker.
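For example, a custom configuration file can be supplied by mounting a volume and listing it in CONFIG_FILE (the my-application.conf name is illustrative; later files in the list override values from earlier ones via HOCON merging):

```shell
docker run -it -p 8080:8080 -p 8181:8181 \
  -v "$(pwd)/my-application.conf:/opt/nussknacker/conf/my-application.conf" \
  -e CONFIG_FILE="/opt/nussknacker/conf/application.conf,/opt/nussknacker/conf/my-application.conf" \
  touk/nussknacker:latest
```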

Kubernetes - Helm chart

We provide a Helm chart with a basic Nussknacker setup, including:

  • Kafka - required only in streaming processing mode
  • Grafana + InfluxDB
  • One of the available engines: Flink or Lite.

Please note that Kafka (and Flink, if chosen) are installed in a basic configuration; for serious production deployments you will probably want to customize them to meet your needs.

You can check an example usage in the k8s-helm directory of the Nussknacker Quickstart repository.
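Installation then boils down to standard Helm commands; the repository alias, chart name and release name below are placeholders, check the chart's documentation for the actual repository location and available values:

```shell
# Placeholders: <chart-repo-url>, my-nussknacker. Review the defaults before installing.
helm repo add nussknacker <chart-repo-url>
helm show values nussknacker/nussknacker > values.yaml   # inspect/customize chart defaults
helm install my-nussknacker nussknacker/nussknacker -f values.yaml
```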

Configuration with environment variables

All configuration options are described in Configuration.

Some of them can be configured using already predefined environment variables, which is mostly useful in the Docker setup. The table below shows all the predefined environment variables used in the Nussknacker image. $NUSSKNACKER_DIR is a placeholder pointing to the Nussknacker installation directory.

Because we use HOCON, you can set (or override) any configuration value used by Nussknacker even if a predefined environment variable does not exist. This is achieved by setting the JVM property -Dconfig.override_with_env_vars=true and setting environment variables following the conventions described here.
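As an illustration: with this mechanism enabled, environment variables prefixed with CONFIG_FORCE_ map onto configuration paths. This naming convention comes from the underlying Typesafe Config library; the db.url value below is just an example:

```shell
# Enable environment variable overrides (a Typesafe Config feature):
export JDK_JAVA_OPTIONS="-Dconfig.override_with_env_vars=true"
# In the variable name, a single '_' becomes '.', '__' becomes '-', '___' becomes '_'.
# So this variable overrides the db.url configuration path:
export CONFIG_FORCE_db_url="jdbc:postgresql://mydb:5432/nussknacker"
```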

Basic environment variables

| Variable name | Type | Default value | Description |
|---------------|------|---------------|-------------|
| JDK_JAVA_OPTIONS | string | | Custom JVM options, e.g. -Xmx512M |
| JAVA_DEBUG_PORT | int | | Port for the remote JVM debugger. By default the debugger is turned off |
| CONFIG_FILE | string | $NUSSKNACKER_DIR/conf/application.conf | Location of application configuration. You can pass a comma-separated list of files; they will be merged in the given order, using the HOCON fallback mechanism |
| LOGBACK_FILE | string | $NUSSKNACKER_DIR/conf/docker-logback.xml | Location of logging configuration |
| WORKING_DIR | string | $NUSSKNACKER_DIR | Location of the working directory |
| STORAGE_DIR | string | $WORKING_DIR/storage | Location of HSQLDB database storage |
| CLASSPATH | string | $NUSSKNACKER_DIR/lib/:$NUSSKNACKER_DIR/managers/ | Classpath of the Designer; the lib directory contains related jar libraries (e.g. the database driver), the managers directory contains deployment manager providers |
| LOGS_DIR | string | $WORKING_DIR/logs | Location of logs |
| HTTP_INTERFACE | string | 0.0.0.0 | Network address Nussknacker binds to |
| HTTP_PORT | string | 8080 | HTTP port used by Nussknacker |
| HTTP_PUBLIC_PATH | string | | Public HTTP path prefix the Designer UI is served at, e.g. when using an external proxy like nginx |
| DB_URL | string | jdbc:hsqldb:file:${STORAGE_DIR}/db;sql.syntax_ora=true | Database URL |
| DB_DRIVER | string | org.hsqldb.jdbc.JDBCDriver | Database driver class name |
| DB_USER | string | SA | User used for the database connection |
| DB_PASSWORD | string | | Password used for the database connection |
| DB_CONNECTION_TIMEOUT | int | 30000 | Database connection timeout in milliseconds |
| AUTHENTICATION_METHOD | string | BasicAuth | Method of authentication. One of: BasicAuth, OAuth2 |
| AUTHENTICATION_USERS_FILE | string | $NUSSKNACKER_DIR/conf/users.conf | Location of the users configuration file |
| AUTHENTICATION_HEADERS_ACCEPT | string | application/json | |
| FLINK_REST_URL | string | http://localhost:8081 | URL of Flink's REST API, used for scenario deployment |
| FLINK_ROCKSDB_ENABLE | boolean | true | Enable RocksDB state backend support |
| KAFKA_ADDRESS | string | localhost:9092 | Kafka address used by Kafka components (sources, sinks) |
| KAFKA_AUTO_OFFSET_RESET | string | | See the Kafka documentation. For development purposes it may be convenient to set this value to 'earliest'; when not set, the default from Kafka ('latest' at the moment) is used |
| SCHEMA_REGISTRY_URL | string | http://localhost:8082 | Address of the Confluent Schema Registry used for storing the data model |
| GRAFANA_URL | string | /grafana | URL to Grafana, used in the UI. Should be relative to the Nussknacker URL to avoid additional CORS configuration |
| INFLUXDB_URL | string | http://localhost:8086 | URL to InfluxDB used by the counts mechanism |
| MODEL_CLASS_PATH | list of strings | ["model/defaultModel.jar", "model/flinkExecutor.jar", "components/flink/flinkBase.jar", "components/flink/flinkKafka.jar"] | Classpath of the model (jars that will be used for the execution of scenarios) |
| PROMETHEUS_METRICS_PORT | int | | When defined, JMX MBeans are exposed as Prometheus metrics on this port |
| PROMETHEUS_AGENT_CONFIG_FILE | string | $NUSSKNACKER_DIR/conf/jmx_prometheus.yaml | Default configuration for the JMX Prometheus agent. Used only when the agent is enabled; see PROMETHEUS_METRICS_PORT |

OAuth2 environment variables

| Variable name | Type | Default value |
|---------------|------|---------------|
| OAUTH2_CLIENT_SECRET | string | |
| OAUTH2_CLIENT_ID | string | |
| OAUTH2_AUTHORIZE_URI | string | |
| OAUTH2_REDIRECT_URI | string | |
| OAUTH2_ACCESS_TOKEN_URI | string | |
| OAUTH2_PROFILE_URI | string | |
| OAUTH2_PROFILE_FORMAT | string | |
| OAUTH2_IMPLICIT_GRANT_ENABLED | boolean | |
| OAUTH2_ACCESS_TOKEN_IS_JWT | boolean | false |
| OAUTH2_USERINFO_FROM_ID_TOKEN | boolean | false |
| OAUTH2_JWT_AUTH_SERVER_PUBLIC_KEY | string | |
| OAUTH2_JWT_AUTH_SERVER_PUBLIC_KEY_FILE | string | |
| OAUTH2_JWT_AUTH_SERVER_CERTIFICATE | string | |
| OAUTH2_JWT_AUTH_SERVER_CERTIFICATE_FILE | string | |
| OAUTH2_JWT_ID_TOKEN_NONCE_VERIFICATION_REQUIRED | string | |
| OAUTH2_GRANT_TYPE | string | authorization_code |
| OAUTH2_RESPONSE_TYPE | string | code |
| OAUTH2_SCOPE | string | read:user |
| OAUTH2_AUDIENCE | string | |
| OAUTH2_USERNAME_CLAIM | string | |

Binary package installation

Released versions are available at GitHub.

Please note that while you can install the Designer from the .tgz with the Lite engine configured, you still need a configured Kubernetes cluster to actually run scenarios in this mode; we recommend the Helm installation for that mode.

Prerequisites

We assume that java (recommended version: JDK 11) is on the path.

Please note that the default environment variable configuration assumes that Flink, InfluxDB, Kafka and the Schema Registry are running on localhost with their default ports. See the environment variables section for the details. Also, GRAFANA_URL is set to /grafana, which assumes that a reverse proxy like NGINX is used to access both the Designer and Grafana. For other setups you should change this value to an absolute Grafana URL.
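For a setup where these services are not on localhost, you might export the relevant variables before starting Nussknacker (the hostnames below are illustrative):

```shell
export FLINK_REST_URL="http://flink-jobmanager.internal:8081"
export KAFKA_ADDRESS="kafka.internal:9092"
export SCHEMA_REGISTRY_URL="http://schema-registry.internal:8082"
export INFLUXDB_URL="http://influxdb.internal:8086"
export GRAFANA_URL="http://grafana.internal:3000"   # absolute URL when no shared reverse proxy
./bin/run.sh
```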

The WORKING_DIR environment variable points to the base directory where Nussknacker stores its data, such as:

  • logs
  • embedded database files
  • scenario attachments

Startup script

We provide the following scripts:

  • run.sh - runs in the foreground; it is also suitable for a systemd service
  • run-daemonized.sh - runs in the background; the PID of the running process is stored in nussknacker-designer.pid

File structure

| Location | Usage in configuration | Description |
|----------|------------------------|-------------|
| $NUSSKNACKER_DIR/storage | Configured by the STORAGE_DIR property | Location of the HSQLDB database |
| $NUSSKNACKER_DIR/logs | | Location of logs |
| $NUSSKNACKER_DIR/conf/application.conf | Configured by the CONFIG_FILE property | Location of Nussknacker configuration. Can be overwritten or used next to other custom configuration. See the Configuration document for details |
| $NUSSKNACKER_DIR/conf/logback.xml | Configured by the LOGBACK_FILE property in the standalone setup | Location of logging configuration. Can be overwritten to specify other logging levels |
| $NUSSKNACKER_DIR/conf/docker-logback.xml | Configured by the LOGBACK_FILE property in the docker setup | Location of logging configuration. Can be overwritten to specify other logging levels |
| $NUSSKNACKER_DIR/conf/users.conf | Configured by the AUTHENTICATION_USERS_FILE property | Location of the users configuration file |
| $NUSSKNACKER_DIR/model/defaultModel.jar | Used in the MODEL_CLASS_PATH property | JAR with the generic model (base components library) |
| $NUSSKNACKER_DIR/model/flinkExecutor.jar | Used in the MODEL_CLASS_PATH property | JAR with the Flink executor, used by scenarios running on Flink |
| $NUSSKNACKER_DIR/components | Can be used in the MODEL_CLASS_PATH property | Directory with Nussknacker Component Provider JARs |
| $NUSSKNACKER_DIR/lib | | Directory with Nussknacker base libraries |
| $NUSSKNACKER_DIR/managers | | Directory with Nussknacker Deployment Managers |

Logging

We use Logback for logging configuration. By default, the logs are placed in ${NUSSKNACKER_DIR}/logs, with a sensible rollover configuration.
Please remember that these are the logs of the Nussknacker Designer; to see or configure the logs of other components (e.g. Flink), please consult their documentation.
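For instance, to change the verbosity of a specific logger, you can add an entry like this to the Logback configuration file (the logger name below is illustrative):

```xml
<!-- fragment of conf/logback.xml: raise verbosity for one package only -->
<logger name="pl.touk.nussknacker.ui" level="DEBUG"/>
```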

Systemd service

You can set up Nussknacker as a systemd service using our example unit file.

  1. Download the distribution as described in [Binary package installation](#binary-package-installation)
  2. Extract it to /opt/nussknacker
  3. sudo touch /lib/systemd/system/nussknacker.service
  4. Edit the /lib/systemd/system/nussknacker.service file and paste in the content of the sample systemd unit file below
  5. sudo systemctl daemon-reload
  6. sudo systemctl enable nussknacker.service
  7. sudo systemctl start nussknacker.service

You can check Nussknacker logs with the sudo journalctl -u nussknacker.service command.

Sample systemd unit file

[Unit]
Description=Nussknacker

StartLimitBurst=5
StartLimitIntervalSec=600

[Service]
SyslogIdentifier=%N

WorkingDirectory=/opt/nussknacker
ExecStart=/opt/nussknacker/bin/run.sh
RuntimeDirectory=%N
RuntimeDirectoryPreserve=restart

SuccessExitStatus=143
Restart=always
RestartSec=60

[Install]
WantedBy=default.target

Configuring the Designer with Nginx-http-public-path

A sample nginx proxy configuration serving the Nussknacker Designer UI under the my-custom-path path. It assumes Nussknacker itself is available at http://designer:8080. Don't forget to set the HTTP_PUBLIC_PATH=/my-custom-path environment variable for the Nussknacker Designer.

http {
  server {
    location / {
      proxy_pass http://designer:8080;
    }
    location /my-custom-path/ {
      rewrite ^/my-custom-path/?(.*) /$1;
    }
  }
}

Configuration of additional applications

A typical Nussknacker deployment includes the Nussknacker Designer and a few additional applications:

Nussknacker components

Some of them need to be configured properly to be fully integrated with Nussknacker.

The quickstart contains a docker-compose based sample installation of all the needed applications (and a few more that are needed for the demo).

If you want to install them from scratch, or use instances already installed at your organisation, pay attention to:

  • Metrics setup (please see the quickstart for reference):
    • Configuration of the metric reporter in the Flink setup
    • Telegraf's configuration - some metric tags and names need to be cleaned up
    • Importing the scenario dashboard into the Grafana configuration
  • Flink savepoint configuration. To be able to use scenario verification (see the shouldVerifyBeforeDeploy property in scenario deployment configuration), you have to make sure that the savepoint location is accessible from the Nussknacker Designer (e.g. via NFS, as in the quickstart setup)
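For example, pointing Flink savepoints at storage shared with the Designer could look like this in flink-conf.yaml (state.savepoints.dir is a standard Flink option; the mount path below is illustrative):

```yaml
# flink-conf.yaml fragment: default savepoint target on storage shared with the
# Nussknacker Designer (e.g. an NFS mount), so the Designer can read savepoints
# during scenario verification
state.savepoints.dir: file:///opt/flink/data/savepoints
```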