ML model inference
Make ML models production-ready with zero-code effort. Empower your teams to visually design, integrate, and refine machine learning pipelines in a low-code interface.
What makes shipping ML models a headache?
Deploying ML models is complex: it requires multiple tools, from data pipelines to CI/CD workflows, which makes MLOps a challenge. Data scientists excel at building models, not managing infrastructure, yet they're often pulled into deployment hurdles: troubleshooting dependencies and aligning with evolving business needs.
Why isn't Python a cure for model inference?
While Python is a powerful and accessible tool for machine learning, it falls short when it comes to complex production deployment and model inference. From integration challenges to scaling constraints, real-world ML requires more than just Python scripts.
Key challenges of Python machine learning
- error handling: Python lacks native mechanisms for fault tolerance and automated recovery in production environments,
- limited tooling: observability, logging, and performance tracking require additional integrations and custom setups,
- external integrations: connecting Python-based models with real-time data sources, APIs, and message queues demands extra effort,
- infrastructure: scaling, optimizing resource usage, and ensuring high availability require DevOps expertise beyond typical data science workflows,
- data scientists: Python simplifies model development, but deployment requires a different skill set; let data scientists focus on models, not infrastructure.
So maybe configuration over coding?
To simplify ML model management, many teams turn to solutions like Seldon, shifting from coding to pipeline configuration using YAML/JSON constructs. While this reduces scripting, it introduces new complexities. In the end, YAML-based automation still demands technical expertise, proving that true no-code ML deployment remains an unsolved challenge.
Weaknesses of config-based ML model automation
- complexity: writing and managing long YAML-based pipelines can be just as complex as coding,
- rigidity: predefined structures limit flexibility, making real-time adjustments difficult,
- steep learning curve: YAML-based automation still requires technical expertise, making adoption challenging,
- debugging & error handling: troubleshooting issues in static configurations can be time-consuming,
- data scientists: configuration-driven deployment is still infrastructure work; let data scientists focus on models, not YAML pipelines.
Why stick to the old way when there is an alternative for real-time ML model inference & deployment?
Let people do what they love
Data Scientists should focus on what excites them most—training and experimenting with models. With tools like Jupyter Notebook for exploration and MLflow for tracking experiments and publishing model versions, they can work efficiently, compare results, and refine models without distractions.
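As a rough sketch of what that workflow looks like on the MLflow side (assuming scikit-learn and the standard MLflow tracking API; the experiment name, data, and hyperparameters below are illustrative, not taken from any real project):

```python
# Illustrative only: tracking an experiment and publishing a model version
# from a notebook. Assumes MLflow and scikit-learn are installed; the
# experiment name and hyperparameters are made up for this sketch.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("credit-scoring")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # Logging the model publishes a version that inference tooling can pick up.
    mlflow.sklearn.log_model(model, artifact_path="model")
```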
Nussknacker handles ML model inference seamlessly with ML enrichers, supporting in-process or dedicated ML runtime execution.
Effortless integrations
Running inference on ML models has never been easier! Nussknacker seamlessly integrates with various processing types: streaming, batch, and request-response, allowing flexible deployment for any use case.
It supports multiple data sources, including data warehouses, Apache Kafka, IoT devices, databases, REST APIs, and many more, ensuring smooth and efficient model execution. With Nussknacker, connecting your ML models to real-world data is fast, scalable, and hassle-free.
Drag-and-drop ML inference
Nussknacker makes ML model management incredibly easy with its drag-and-drop interface, allowing users to deploy models without writing complex code, while the Nussknacker engine takes care of smooth inference execution.
Simply load your trained model, set up the workflow visually, and let Nussknacker handle the real-time inferencing.
Support for multiple technologies
Python-based inference
Nussknacker's architecture enables the use of any Python-based ML model, seamlessly integrating with popular frameworks like TensorFlow, PyTorch and Scikit-learn.
This flexibility allows Data Scientists to work in their preferred environments without requiring extensive model modifications, ensuring smooth deployment and effortless inference.
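One way to picture that framework neutrality: MLflow's generic pyfunc wrapper exposes the same predict() call whether the underlying model is scikit-learn, TensorFlow, or PyTorch. A minimal sketch (the model URI and input columns are placeholders):

```python
# Illustrative: load any logged model through MLflow's framework-agnostic
# pyfunc interface and run a prediction. The registry URI and the input
# columns below are placeholders, not a real deployment.
import mlflow.pyfunc
import pandas as pd

model = mlflow.pyfunc.load_model("models:/credit-scoring/1")  # placeholder URI
batch = pd.DataFrame([{"feature_a": 0.3, "feature_b": 1.2}])  # placeholder inputs
print(model.predict(batch))
```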
Simple integration with ML Models
MLflow Enricher
Specifically designed for integrating ML models, this enricher allows users to invoke models directly within their scenarios. It provides type hints and error checking to ensure that the correct data is passed to the model.
Automatic Detection of Model Changes
When new models are added to the repository, Nussknacker automatically detects these changes and updates the building blocks available for use in scenarios. This minimizes errors related to input type mismatches and accelerates the deployment process by allowing rapid adjustments to decision logic.
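On the MLflow side, publishing such a change can be as little as registering a new model version; the sketch below shows only that publishing step (the run URI and model name are placeholders), while the detection itself is handled by Nussknacker:

```python
# Illustrative: publish a new model version to the MLflow Model Registry.
# "<run_id>" and the model name are placeholders for this sketch.
import mlflow

mlflow.register_model(model_uri="runs:/<run_id>/model", name="credit-scoring")
```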
Inputs and Outputs Validation
When a model is integrated into a scenario, Nussknacker checks for data type mismatches and ensures that the correct inputs are provided based on the model's signature. This automatic validation process minimizes errors during deployment.
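The signature that such validation relies on can be attached when the model is logged, for example with MLflow's signature inference; a self-contained sketch with a toy model:

```python
# Illustrative: log a model together with an input/output signature so that
# downstream tools can type-check the data fed to it. Toy data and model.
import mlflow
import mlflow.sklearn
from mlflow.models import infer_signature
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

signature = infer_signature(X, model.predict(X))  # records input/output schema
with mlflow.start_run():
    mlflow.sklearn.log_model(model, artifact_path="model", signature=signature)
```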
ML feature computation
Real-time feature computation is crucial for scenarios where immediate decisions are required. As part of a scenario, users can define steps to compute additional features based on incoming data. For instance, in a credit scoring system, real-time features might be calculated from user behavior or transaction history before being fed into the ML model.
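A minimal sketch of such a feature step in plain Python (the field names and the 24-hour window are invented for illustration; in Nussknacker this logic would be expressed as scenario steps rather than hand-written code):

```python
# Illustrative feature computation for a credit-scoring flow. The event
# fields ("timestamp", "amount") and the 24-hour window are made up.
from datetime import datetime, timedelta

def compute_features(transactions: list[dict], now: datetime) -> dict:
    """Derive real-time features from a customer's recent transactions."""
    recent = [t for t in transactions if now - t["timestamp"] <= timedelta(hours=24)]
    return {
        "tx_count_24h": len(recent),
        "tx_total_24h": sum(t["amount"] for t in recent),
        "tx_max_24h": max((t["amount"] for t in recent), default=0.0),
    }
```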
Nussknacker utilizes enrichers to enhance incoming data streams with additional features necessary for ML model inference.
These enrichers can access external data sources to retrieve relevant information, which is crucial for making informed predictions.
Key types of enrichers include:
- SQL Enrichers: These allow users to execute SQL queries against databases to fetch pre-calculated features or other relevant data, enabling real-time enrichment of the input data (see the sketch after this list).
- OpenAPI Enrichers: These facilitate integration with external APIs, allowing the system to gather additional data points that may be required for model inputs.
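As a rough Python equivalent of what an SQL enricher does under the hood (the table and column names are invented; in Nussknacker the lookup is configured declaratively, without code):

```python
# Conceptual sketch of an SQL-style feature lookup: fetch pre-calculated
# features for an incoming event. Table and column names are invented.
import sqlite3

def enrich(event: dict, conn: sqlite3.Connection) -> dict:
    row = conn.execute(
        "SELECT avg_monthly_spend, account_age_days "
        "FROM customer_features WHERE customer_id = ?",
        (event["customer_id"],),
    ).fetchone()
    if row is not None:
        event["avg_monthly_spend"], event["account_age_days"] = row
    return event
```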
ML model use cases
Credit Scoring with ML inference
Using Nussknacker with MLflow enabled data scientists to deploy ML models directly while letting business analysts manage credit rules. This resulted in faster updates, reduced developer dependency, and more efficient credit risk assessment for the telecom's 13 million customers.
Real-Time ML Recommendations
Nussknacker simplifies the integration of machine learning models into streaming data processes. Software teams can now build intelligent recommendation systems using Nussknacker, Snowplow, and MLflow.
ML model inference in fraud detection
How to simplify the integration of ML models into business applications, automate many of the technical complexities, and support advanced techniques like A/B testing and ensemble models, illustrated with a fraud detection example.
Blog
Telecom's credit scoring system with ML inference
Using Nussknacker with MLflow enabled data scientists to deploy ML models directly while letting business analysts manage credit rules. This resulted in faster updates, reduced developer dependency, and more efficient credit risk assessment for the telecom's 13 million customers.
Streaming SQL alternative
Many streaming applications require significant domain knowledge and continuous updates; however, SQL is neither up to the task nor user-friendly for domain experts.
Real-Time Recommendations: Using Machine Learning in Clickstream Processing Pipeline
Nussknacker simplifies the integration of machine learning models into streaming data processes. Software teams can now build intelligent recommendation systems using Nussknacker, Snowplow, and MLflow.