Why Streaming SQL Is Not the Right Tool for Authoring Event-Driven, Stream-Based Algorithms

TL;DR - Streaming SQL is not the right tool for building complex event-driven applications. While it works well for analytics and simple data pipelines, it breaks down when facing real-time decision logic, stateful workflows, or domain-specific intelligence. Stretching SQL to handle these use cases leads to bloated, unreadable logic structures and fails to unlock the full potential of streaming for broader audiences.

👉 Note on scope:
While this article focuses on streaming SQL in general, it uses Apache Flink SQL as the reference point - currently the most advanced and expressive implementation of stateful streaming SQL. Other platforms like KSQL, Spark Structured Streaming or RisingWave offer a narrower or less mature feature set. Therefore, the limitations discussed here reflect the best-case scenario, not the worst.

 

Streaming Isn’t One-Size-Fits-All

Not all streaming use cases are created equal. Broadly speaking, we can divide them into three types:

  1. Real-time analytics - computing KPIs, aggregations, dashboards.
  2. Streaming data pipelines - filtering, reshaping, or enriching data in motion.
  3. Event-driven applications - taking actions based on specific event patterns, typically with stateful, conditional, and time-sensitive logic.

 

Event-Driven, Stream-Based Algorithms

SQL-based tools can often handle the first two categories reasonably well - especially when the data is tabular and the desired transformations are relatively declarative. However, for event-driven applications - things like:

  • Recommendation systems
  • Anomaly detection
  • Fraud detection
  • Algorithmic trading
  • Hyper-personalization <-> surveillance
  • Real-time rating / billing (telco, cloud services, etc.)
  • Streaming ML

…SQL starts to fall apart. These applications demand procedural constructs, explicit state and time control, fine-grained observability - things fundamentally outside SQL’s comfort zone.

This is the core thesis of this article: while streaming SQL is useful for many tasks, it is inadequate as a primary tool for building expressive, stateful, and observable event-driven systems. Worse, our persistent efforts to stretch streaming SQL beyond its natural boundaries - by bloating it with constructs alien to its tabular roots - are unlikely to succeed. We will not turn it into a Swiss Army knife for building modern event-driven applications.

The more likely outcome is this: Flink’s DataStream API will remain the unavoidable choice for serious event-driven use cases, while approachable, low-code development of streaming logic - especially by domain experts - will remain elusive. As a result, acting on real-time data will continue to be far less widespread than acting on static data via tools like spreadsheets.

 

The Great SQL Paradox

So why do we keep trying to make SQL the answer to everything?

There’s a cultural belief - rooted in decades of data tooling - that SQL is simple, familiar, and safe. For batch processing and reporting, that belief made sense. But streaming changes the rules. As the use cases become more dynamic, time-sensitive, and stateful, SQL’s declarative model starts to show its limits.


Still, the streaming community clings to SQL’s legacy reputation. We tell ourselves: “SQL is easy, so let’s just extend it.”

The result is something caught between two worlds: a language that looks like SQL but behaves like a procedural DSL; a syntax extended with new clauses - HOP, MATCH_RECOGNIZE, changelog hints, PROCTIME - that few understand without reading engine internals. See, for example, this deep dive into changelog behavior in Flink SQL.
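For a flavor of how far this drifts from everyday SQL, here is a hedged sketch of a hopping-window aggregation using Flink's windowing table-valued functions (the table and column names are invented for illustration):

    -- Counting clicks per user over 10-minute windows that slide every minute.
    -- HOP, DESCRIPTOR and window_start/window_end are streaming-specific additions.
    SELECT window_start, window_end, user_id, COUNT(*) AS click_count
    FROM TABLE(
        HOP(TABLE clicks, DESCRIPTOR(event_time), INTERVAL '1' MINUTES, INTERVAL '10' MINUTES)
    )
    GROUP BY window_start, window_end, user_id;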

Individually, these features might seem manageable. But collectively, they create a growing divergence from SQL’s original strengths: clarity, approachability, and declarative abstraction. In trying to retrofit SQL with procedural power, we have lost the simplicity without gaining true expressiveness. Instead of a new universal tool, we end up with something neither elegant nor complete.

This is the paradox: in trying to make SQL do everything, we’ve made it worse at the things it once did well - and still not good enough for the things it was never designed to handle.

The result? A language that looks like SQL but behaves like a leaky abstraction over a complex stream processing engine.


This raises an important question:

Can SQL, even in its streaming variants, serve as an effective language for authoring complex stream processing algorithms?

In this article, we’ll explore where streaming SQL excels, where it breaks down, and why - despite ongoing efforts to extend it - it remains poorly suited as a language for building expressive, observable, and maintainable stream processing algorithms.

 

What Streaming Applications Really Need

Authoring complex stream processing logic goes beyond basic filtering and joins. The following pain points show why SQL often fails to meet the needs of real-world event-driven applications.

To make the landscape clearer, we group these pain points into two categories:

  1. Fundamental limitations that prevent SQL from expressing, debugging, or observing complex algorithms. These are hard stops - if SQL could handle them well, it might have become a general-purpose language for algorithms. But it hasn’t.
  2. Structural mismatches that stretch SQL into areas it wasn’t designed for. These are gray zones - technically solvable, but with awkward or brittle results. Yet areas like observability, branching logic, and complex enrichment are too central to stream processing to be left in a gray zone; they deserve first-class support, not gimmicky workarounds. A proper development tool should let users model such logic cleanly and fluently, not make them feel like they are carving marble with a spoon. When the tool fits the task, development becomes not only productive but deeply satisfying.

Group 1: Hard Stops


  1. Expressing Complex Algorithms with SQL Constructs ❌ SQL’s conditional logic - typically expressed with CASE - can only return single scalar values of compatible types.

One of SQL’s core limitations, especially in the context of stream processing, is that you cannot use CASE to express divergent logic paths that yield different sets of values, let alone different record structures or processing steps (see the sketch below). This makes SQL inherently ill-suited for modeling real-world algorithms, where different branches often involve distinct computations, transformations, or external calls. When nested queries, joins, and scoped variables (see further down) enter the picture, the resulting SQL can easily become an incomprehensible tangle. It’s no surprise that one company we spoke with maintains a Flink SQL statement that spans 8,000 lines.
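A minimal sketch (table and column names are invented) of what CASE can and cannot do:

    -- This works: CASE picks between scalar values of a compatible type.
    SELECT
        payment_id,
        CASE WHEN amount > 1000 THEN 'REVIEW' ELSE 'OK' END AS decision
    FROM payments;

    -- This has no SQL equivalent: branches that produce different record shapes
    -- or trigger different processing steps, e.g.
    --   if REVIEW  -> call the fraud service and emit a case record
    --   otherwise  -> just increment a counter
    -- Such divergence has to be simulated with unions of separate queries,
    -- nested subqueries, and repeated predicates.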
  2. Scoped Variables for Reuse and Debugging ❌ SQL lacks scoped persistent variables for reuse across steps.

In algorithmic stream processing, not all intermediate values are meant to be final results in a table. Often, they serve a transient but crucial role - they represent sub-results of transformations or condition checks that guide the flow of logic.

Scoped variables - whether you call them “named expressions,” “let-bindings,” or simply local variables - are key to:

  • Avoiding repeated computation of expensive or complex expressions, especially in high-throughput environments.
  • Improving readability by naming intermediate results rather than nesting deeply.
  • Supporting debugging and testability by giving users a handle to inspect intermediate values directly.

SQL lacks any concept of scoped, reusable variables. Instead, it relies heavily on copy-pasting expressions across the query plan. This leads to:

  • Duplication of logic in multiple expressions (e.g., the same transformation repeated in a CASE, a WHERE clause, and a SELECT).
  • Reduced clarity, especially as transformations grow in size or require composition.
  • Slower iteration during debugging, since you can’t isolate and observe intermediate computations.

These are not edge cases - they are intrinsic to how people think about algorithms. Algorithms are not just pipelines of filters and projections; they involve internal steps, logical pivots, and reusable pieces of logic. SQL’s tabular, set-oriented nature doesn’t provide a clean place to hold or name these steps.
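To make the duplication concrete, here is a hedged sketch (illustrative names) in which the same derived value has to be spelled out three times, because there is no way to bind it to a name short of wrapping the query in yet another subquery or CTE:

    -- The same normalization expression is repeated in SELECT, CASE and WHERE.
    SELECT
        user_id,
        (amount * fx_rate) / 100 AS amount_eur,
        CASE WHEN (amount * fx_rate) / 100 > 10000 THEN 'HIGH' ELSE 'NORMAL' END AS tier
    FROM payments
    WHERE (amount * fx_rate) / 100 > 50;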

  3. Observability of Intermediate Steps ⚠️ Only possible via workarounds (e.g., materializing temp views or sinks); not natively supported.

Observability is a cornerstone of real-time data engineering. You don’t just want to know the final result - you want insight into how it was produced. Each step in a streaming algorithm should ideally serve as an observation point:

  • How many events passed through this filter?
  • What conditions matched or didn’t match?
  • What were the intermediate values used to make decisions?

In SQL, these answers are difficult to get. SQL queries are monolithic and opaque - there’s no native way to peek inside sub-expressions or inspect values at intermediate steps. The only option is to rewrite parts of your logic into temporary sinks, or repeatedly materialize views with instrumentation added - solutions that are both brittle and cumbersome.

The problem becomes more pronounced when dealing with scoped variables or intermediate computations. If you want to inspect how a derived field was computed - or why a condition matched - you can’t just “observe” it in SQL. You’d need to manually replicate the transformation and expose it as a SELECT column or extra join field, which defeats the purpose of having reusable logic in the first place.
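As a rough illustration of that workaround (names invented): to observe a derived field, you end up cloning the expressions into a debug view with extra output columns:

    -- Debug variant of a production query: intermediate values are copied out
    -- as additional columns purely so they can be inspected downstream.
    CREATE TEMPORARY VIEW scored_payments_debug AS
    SELECT
        payment_id,
        amount * fx_rate AS amount_eur,                       -- intermediate value
        amount * fx_rate > 10000 AS is_high_value,            -- condition result
        CASE WHEN amount * fx_rate > 10000 THEN 'REVIEW' ELSE 'OK' END AS decision
    FROM payments;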

By contrast, systems designed with observability in mind (for example Apache NiFi) allow developers to track data as it flows, inspect variables in-flight, and debug complex pipelines one node at a time. In those systems, observability is a first-class feature, not an afterthought.

 

Group 2: Where SQL Starts to Stretch


 

  4. State and Timer Handling (e.g., FLIP-440) At the heart of event-driven applications is the need to manage keyed state, timers, and event lifecycles. Whether you’re tracking sessions, timeouts, windows, or sequence patterns, this requires the ability to store and react to stateful conditions in a precise and controllable way.

Streaming SQL lacks primitives for this. There is no concept of setting a timer, maintaining per-key context across events, or reacting asynchronously to external conditions. These limitations severely restrict the kinds of logic you can model.

FLIP-440 is an ambitious proposal to bring native state and timer support into Flink SQL. Conceptually, it aims to make procedural capabilities like setting and reacting to timers first-class citizens in SQL. But it faces a fundamental tension: can you express inherently imperative, side-effect-driven logic in a declarative language?

Even if implemented, SQL with timers and keyed state will likely look and feel unlike traditional SQL. It will either require engine-specific extensions or bend SQL syntax into unfamiliar territory - resulting in something that is technically SQL, but not conceptually simple or portable.

This isn’t a knock on FLIP-440 - it’s a recognition that stateful stream processing is fundamentally about sequences, causality, and time-aware logic. These things are hard to tame within SQL’s original, declarative model.
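Today, the closest approximation within SQL is pattern matching with a time bound - a hedged sketch below (names invented). Note that it only fires when the pattern completes; reacting to the absence of an event (the timeout case itself, which is what timers are usually for) remains awkward:

    -- Match a payment that is confirmed within 30 minutes of being started.
    SELECT *
    FROM payment_events
        MATCH_RECOGNIZE (
            PARTITION BY user_id
            ORDER BY event_time
            MEASURES A.event_time AS started_at, B.event_time AS confirmed_at
            ONE ROW PER MATCH
            PATTERN (A B) WITHIN INTERVAL '30' MINUTE
            DEFINE
                A AS A.status = 'STARTED',
                B AS B.status = 'CONFIRMED'
        ) AS T;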

This limitation is also acknowledged in the FLIP-440 proposal itself. Its Motivation section bluntly admits: “The SQL engine’s lack of extensibility leads to dead ends in SQL or Table API projects. [..]  Even basic stream processing operations that can easily be expressed in the DataStream API, force users to leave the SQL ecosystem.” 

  5. Integration with External Systems and ML Inference ❌ SQL-based systems have little to no support for integrating with external services or running real-time ML inference. Yet many event-driven applications critically depend on this.

In practical use cases, enriching a stream with context from external systems - such as REST APIs exposed via OpenAPI - is often essential. You may need to fetch user profiles, product availability, fraud signals, or recent transactions to make decisions. SQL has no natural abstraction for external service calls, especially under streaming constraints like latency, timeouts, retries, or rate limits.

Equally important is model inference: scoring an ML model per event to personalize offers, detect anomalies, or classify behavior. These models might be hosted elsewhere (e.g., via a model server or cloud function), and invoking them within a SQL pipeline is either unsupported or relegated to fragile UDFs with poor observability and error handling.
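In practice this typically ends up as a scalar UDF wrapping the remote call - a hedged sketch below, where fraud_score is a hypothetical user-defined function (not a built-in) that would have to be registered separately in Java or Python:

    -- fraud_score wraps a remote model endpoint. Latency, timeouts, retries and
    -- rate limits all live inside the UDF, invisible to the planner and to
    -- anyone reading the query.
    SELECT
        t.transaction_id,
        fraud_score(t.user_id, t.amount, t.merchant_id) AS risk
    FROM transactions AS t
    WHERE fraud_score(t.user_id, t.amount, t.merchant_id) > 0.9;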

In short, many real-time decision pipelines need access to “intelligence” - external knowledge and predictive signals - and SQL offers no coherent way to plug that in.

  6. First-Class Support for Multilevel JSON ❌ While SQL can technically work with JSON using functions like JSON_EXTRACT, ->, or JSON_TABLE, these features are often verbose, engine-specific, and clumsy when dealing with real-world JSON structures. Filtering, projecting, or transforming JSON arrays and nested fields becomes a chore, especially across different engines (Flink, BigQuery, etc.).

Modern event-driven systems frequently operate on deeply nested, semi-structured data - Kafka messages, API payloads, or CDC change events are rarely flat tables. Yet SQL remains optimized for relational data. Even basic tasks like filtering an array of objects by a condition or transforming nested fields often require multi-step transformations and complex expressions.
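A hedged sketch (payload shape and names invented) of what even simple extraction looks like with Flink's JSON functions:

    -- Pulling a few nested fields out of a JSON payload column.
    SELECT
        JSON_VALUE(payload, '$.user.id') AS user_id,
        CAST(JSON_VALUE(payload, '$.order.total') AS DECIMAL(10, 2)) AS order_total,
        JSON_QUERY(payload, '$.order.items') AS items_json   -- still just a string
    FROM orders_raw
    WHERE JSON_VALUE(payload, '$.order.status') = 'PAID';
    -- Filtering or reshaping the objects inside $.order.items typically requires
    -- UDFs or another round of parsing; there is no concise, portable way to do it.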

  7. Abstraction over Changelog Modes / Flink Internals ❌ Concepts like +I, -U, +U, -D leak internal state mechanics into the SQL layer.

Flink introduced changelog modes - +I, -U, +U, -D - to express how records are inserted, updated, and retracted during stream processing. While these mechanisms are essential to how Flink works internally, they shouldn’t be something an application author has to worry about.

And yet, when writing streaming SQL, you often must (this blog post is a clear manifestation of it). Features like joins, group-by aggregations, deduplication, and temporal tables expose you to these low-level modes. This is not just an implementation detail - it changes how your queries behave and what results you get. Suddenly, instead of focusing on your domain logic, you’re reverse-engineering plan details and dataflow diagrams to understand retract streams.

This breaks a fundamental abstraction: developers should be able to think in terms of append-only or upsert streams - models that are intuitive and map closely to business semantics. A streaming query should describe what should happen when new data arrives, not how state is shuffled around under the hood.

The problem becomes especially pronounced when using temporal joins or working with sinks that don’t support retractions. A seemingly innocent query can fail or produce unexpected results unless you deeply understand Flink’s changelog semantics - something many users shouldn’t be forced to do.
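For example (names invented), a routine "latest state per key" query silently turns the stream into an updating changelog:

    -- Keep only the latest status per order (Flink's deduplication pattern).
    SELECT order_id, status
    FROM (
        SELECT order_id, status,
               ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY event_time DESC) AS rn
        FROM order_events
    )
    WHERE rn = 1;
    -- The result is an updating stream (+I / -U / +U records). Writing it to an
    -- append-only sink, such as a plain Kafka topic, is rejected unless the query
    -- or the sink is reworked around Flink's changelog semantics.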

In short, streaming SQL surfaces too much of the engine’s plumbing, making the language feel like a leaky abstraction. This adds friction and risk, especially for teams trying to move fast without becoming Flink internals experts.

 

Conclusion

For streaming ETL and real-time dashboards, streaming SQL is often sufficient. But building event-driven applications requires more than tabular logic and declarative syntax. It demands tools that treat state, time, branching, external integrations, observability, and iteration as first-class concerns. Ignoring this leads to brittle workarounds, false simplicity, and stalled innovation.

For developers, the DataStream API poses a steep and often frustrating learning curve. It requires deep familiarity with Flink internals - and the documentation is sparse. Moreover, experimenting with streaming algorithms typically involves hundreds of small changes, which is not the kind of work developers enjoy doing manually and blindly.


Meanwhile, most domain experts will never be able to use the DataStream API. Very few have full-fledged development experience - and frankly, they shouldn't need it. Understanding Flink internals should not be a prerequisite for building streaming logic, just as one doesn’t need to understand CPU architecture to use a spreadsheet.

What the streaming world desperately needs is a set of tools that offer expressiveness, rapid iteration, explainability, and accessibility to a broader audience - much like how spreadsheets empower everyday number crunching, and Jupyter notebooks accelerate ML experimentation.

 

The Nussknacker Answer

At Nussknacker (nussknacker.io), we’ve built a tool that directly addresses the limitations outlined above - without abandoning the principles of simplicity and clarity.

  • We use the most natural way to represent algorithms: a visual flow-based model. Each step in the algorithm is fully observable - showing event counts, intermediate results, and precomputed values (i.e., variables).
  • These steps are composed from modular blocks (called components in Nussknacker), which encapsulate useful processing abstractions - like session windows, OpenAPI calls, or ML model inference. Components shield authors from internal details of Flink, REST APIs, etc., while still exposing exactly the controls they need.
  • We rely on SpEL (Spring Expression Language) to operate fluently on JSON. SpEL enables clean, expressive filtering, projection, and transformation of nested structures - where multiple lines of verbose SQL would otherwise be required. Expressions also guide component behavior, handling data shaping and conditional logic in a concise way.
  • Above all, we optimize for iteration. Testing, debugging, tweaking - everything is built for cycles as short as one minute.

You can find a real-world comparison of the SQL-based and Nussknacker-based approaches here: DEMO.

Final Take

Streaming SQL has its place - but using it as the primary authoring language for stream processing is likely a dead-end for anything beyond basic use cases. The complexity of state, time, and event-driven behavior demands more expressive and inspectable tools.

Just because it’s called SQL doesn’t make it simple anymore.

Interested in building smarter streaming apps without hitting complexity and limitations? Check out nussknacker.io.