Can domain experts be so happy with a tool that the experience borders on nirvana? This provocative question is meant to start a discussion about the features needed to make real-time data products accessible to ordinary (non-technical) people. Zbigniew Małachowski presents his own list of required features. But are they enough to trigger nirvana-like relief?
Webinar video: Real-time data processing for the people
Abstract:
Acting on data in real time, in use cases such as fraud detection, next best action, streaming ML, clickstream analysis, or IoT sensor analysis, requires algorithms that are rarely trivial enough to be created with no code, just drag and drop.
In the streaming world, domain experts find themselves forced into the deep technical details of Kafka, Flink, Spark, REST and the like because, while many platforms are quite successful at abstracting these engines away, they are just simple visual overlays on (streaming) SQL and do not allow for much more. Authoring a serious actions-on-data algorithm takes more than that, as the sketch below illustrates.
What features are needed for these tools to truly succeed?
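To make the point concrete, here is a minimal sketch (not taken from the webinar, and not how Nussknacker itself implements anything) of the kind of per-key, stateful logic that a plain drag-and-drop overlay on streaming SQL struggles to express: counting a card's transactions over a rolling window and raising an alert on a suspicious burst. It uses the Flink 1.x DataStream API; the class name FraudBurstRule, the thresholds, and the alert format are hypothetical.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Hypothetical per-card fraud rule: alert when a card makes more than
// MAX_TXNS transactions within a rolling one-minute window.
// Keyed by card id (String); input is a transaction amount, output is an alert message.
public class FraudBurstRule extends KeyedProcessFunction<String, Double, String> {

    private static final int MAX_TXNS = 5;          // hypothetical threshold
    private static final long WINDOW_MS = 60_000L;  // hypothetical window length

    private transient ValueState<Integer> txnCount; // transactions seen in the current window

    @Override
    public void open(Configuration parameters) {
        txnCount = getRuntimeContext().getState(
                new ValueStateDescriptor<>("txnCount", Types.INT));
    }

    @Override
    public void processElement(Double amount, Context ctx, Collector<String> out) throws Exception {
        Integer current = txnCount.value();
        if (current == null) {
            // First transaction for this card: open a window and schedule its expiry.
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + WINDOW_MS);
            current = 0;
        }
        current += 1;
        txnCount.update(current);

        if (current > MAX_TXNS) {
            out.collect("Suspicious burst on card " + ctx.getCurrentKey()
                    + ": " + current + " transactions within one minute");
        }
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // The window expired: forget the per-card counter.
        txnCount.clear();
    }
}
```

Even this simple rule needs per-key state and timers, which is exactly the kind of capability the webinar argues a low-code tool must surface without dragging domain experts into Flink internals.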
Building Iceberg Lakehouse with AWS Glue
Leverage Nussknacker to build a modern data lakehouse on S3 using Apache Iceberg and AWS Glue.
Understanding Event Time in Nussknacker
Learn why event time is crucial in stream processing and how Nussknacker leverages it for reliable stateful computations.
Generation-Augmented Retrieval (GAR) for Product Recommendation
How to quickly launch a solution based on LLM/Foundational Models to engage customers.
