When Does Data Become a Decision?

For years, the mantra was simple: “Land it in the warehouse and we’ll tidy later.” That logic shaped enterprise data strategy for decades: get the data in, then worry about modeling, quality, and compliance after the fact.

The problem is, these days “later” usually means “too late.” Fraud gets flagged after the money is gone. A patient finds out at the pharmacy that their prescription wasn’t approved. Shoppers abandon carts while teams run postmortems. By the time the data looks clean on a dashboard, the moment it could have made an impact has already passed.

At some point, you have to ask: If the decision window is now, why do we keep designing systems that only prepare data for later?

This was the crux of our recent webinar, Rethinking Real Time: What Today’s Streaming Leaders Know That Legacy Vendors Don’t. The takeaway: real-time everywhere is a red herring. What enterprises actually need is decision-time: data that’s contextual, governed, and ready at the exact moment it’s used.

Define latency by the decision, not the pipeline

We love to talk about “real-time” as if it were an absolute. But most of the time, leaders aren’t asking for millisecond pipelines; they’re asking for data that can support a decision inside a specific window of time. That window changes with the decision. So how do we design for that, and not for some vanity SLA?

For each decision, write down five things:

  • Decision: What call are we actually making?
  • Window: How long before the decision loses value? Seconds? Minutes? Hours?
  • Regret: Is it worse to be late, or to be wrong?
  • Context: What data contributes to the decision?
  • Fallback: If the window closes, then what?

Only after you do this does latency become a real requirement. Sub-second pipelines are premium features: buy them only where they change the outcome, instead of spraying them everywhere.
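
To make the worksheet concrete, here is a minimal sketch in Python of what a decision spec might look like once the five answers are written down. The DecisionSpec class and the card-authorization values are hypothetical illustrations, not anything from the webinar or a Striim artifact.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DecisionSpec:
    """One row of the worksheet: a decision and the window it must fit."""
    decision: str       # What call are we actually making?
    window: timedelta   # How long before the decision loses value?
    regret: str         # "late" or "wrong" -- which failure hurts more?
    context: list[str]  # Data that must be present to decide well
    fallback: str       # If the window closes, then what?

# Hypothetical example: card authorization at the terminal
card_auth = DecisionSpec(
    decision="Approve or block the card payment",
    window=timedelta(milliseconds=500),
    regret="wrong",  # a bad approval costs more than a slow one
    context=["recent transactions", "device fingerprint", "merchant risk score"],
    fallback="Fall back to issuer rules and flag for manual review",
)
```

Writing the spec down first is the point: the latency requirement falls out of the window and the regret, not the other way around.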

Satyajit Roy, CTO of Retail Americas at TCS, expressed this sentiment perfectly during the webinar. 

Three latency bands that actually show up in practice

In reality, most enterprise decisions collapse into three bands.

  • Sub-second. This is the sharp end of the stick: decisions that have to happen in the flow of an interaction. Approve or block the card while the customer is still at the terminal. Gate a login before the session token is issued. Adapt the price of an item while the shopper is on the checkout page. Miss this window, and the decision is irrelevant, because the interaction has already moved on.

  • Seconds to minutes. These aren’t interactive, but they’re still urgent. Think of a pharmacy authorization that needs to be resolved before the patient arrives at the counter. Or shifting inventory between stores to cover a shortfall before the next wave of orders. Or nudging a contact center agent with a better offer while they’re still on the call. You’ve got a small buffer, but the decision still has an expiration date.

  • Hours to days. The rest live here. Compliance reporting. Daily reconciliations. Executive dashboards. Forecast refreshes. They’re important, but the value doesn’t change if they show up at 9 a.m. sharp or sometime before lunch.

Keep it simple. You can think of latency in terms of these three bands, not an endless continuum where every microsecond counts. Most enterprises would be better off mapping decisions to these categories and budgeting accordingly, instead of obsessing over SLAs no one will remember.
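
As a rough illustration of that mapping, the sketch below buckets a decision window into one of the three bands. The cutover points (one second, one hour) are assumptions made for the example; where exactly you draw them is a judgment call for your own decisions.

```python
from datetime import timedelta
from enum import Enum

class LatencyBand(Enum):
    SUB_SECOND = "sub-second"                   # in the flow of an interaction
    SECONDS_TO_MINUTES = "seconds-to-minutes"   # urgent, but not interactive
    HOURS_TO_DAYS = "hours-to-days"             # important, not time-critical

def band_for(window: timedelta) -> LatencyBand:
    """Map a decision window to one of the three practical bands."""
    if window < timedelta(seconds=1):
        return LatencyBand.SUB_SECOND
    if window < timedelta(hours=1):   # assumed cutover; tune per decision
        return LatencyBand.SECONDS_TO_MINUTES
    return LatencyBand.HOURS_TO_DAYS

# Hypothetical examples drawn from the bands above
print(band_for(timedelta(milliseconds=500)))  # card authorization -> SUB_SECOND
print(band_for(timedelta(minutes=10)))        # pharmacy authorization -> SECONDS_TO_MINUTES
print(band_for(timedelta(hours=12)))          # daily reconciliation -> HOURS_TO_DAYS
```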

From batch habits to in-stream intelligence

Once you know the window, the next question is harder: what actually flows through that window? 

Latency alone doesn’t guarantee the decision will be right. If the stream shows up incomplete, out of context, or ungoverned, the outcome is still wrong, just… faster. And when an AI agent takes an action, the stream it sees is the truth, whether or not that truth is accurate, complete, or safe.

This is why streaming can’t just be a simple transport layer anymore. It has to evolve into what I’d call a decision fabric: the place where enough context and controls exist to make an action defensible.

And if the stream is the decision fabric, then governance has to be woven into it. Masking sensitive fields, enforcing access rules, recording lineage, all of it has to happen in motion, before an agent takes an action. Otherwise, you’re just trusting the system to “do the right thing” (which is the opposite of governance).
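
To ground what “governance in motion” means, here is a minimal, hand-rolled sketch of the idea: mask sensitive fields, enforce access rules, and attach lineage to each event before an agent ever sees it. The field names, policy sets, and the govern function are hypothetical; this is not Striim’s API.

```python
import hashlib
import time

SENSITIVE_FIELDS = {"ssn", "card_number"}           # hypothetical masking policy
ALLOWED_CONSUMERS = {"fraud_agent", "offer_agent"}  # hypothetical access rule

def govern(event: dict, consumer: str) -> dict:
    """Apply masking, access control, and lineage to one in-flight event."""
    if consumer not in ALLOWED_CONSUMERS:
        raise PermissionError(f"{consumer} is not cleared for this stream")

    governed = {}
    for key, value in event.items():
        if key in SENSITIVE_FIELDS:
            # Mask in motion: the agent never sees the raw value
            governed[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            governed[key] = value

    # Record lineage so the downstream action is defensible later
    governed["_lineage"] = {
        "source": event.get("_source", "unknown"),
        "consumer": consumer,
        "governed_at": time.time(),
    }
    return governed
```

The specific masking doesn’t matter; what matters is that the agent only ever sees the governed view, and every action it takes can be traced back to it.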

Imagine a customer denied credit because the system acted on incomplete data, or a patient prescribed the wrong medication because the stream dropped a validation step. In these cases, governance is the difference between a system you can rely on and one you can’t.

Still, it has to be pragmatic. That’s the tradeoff enterprise leaders often face: how much assurance do you need, and what are you willing to pay for it? Governance that’s too heavy slows everything down. Governance that’s too light creates risk you can’t defend.

That balance—enough assurance without grinding the system to a halt—can’t be solved by policies alone. It has to be solved architecturally. And that’s exactly where the market is starting to split. Whit Walters, Field CTO at GigaOm, expressed this perfectly while explaining this year’s GigaOm Radar Report.

A true decision fabric doesn’t wait for a warehouse to catch up or a governance team to manually check the logs. It builds trust and context into the stream itself, so that when the model or agent makes a call, it’s acting on data you can stand behind.

AI is moving closer to the data

AI is dissolving the old division of labor. You can’t draw a clean line between “data platform” and “AI system” anymore. Once the stream itself becomes the place where context is added, governance is enforced, and meaning is made, the distinction stops being useful. Intelligence isn’t something you apply downstream. It’s becoming a property of the flow.

MCP (the Model Context Protocol) is just one example of how the boundary has shifted: a function call like get_customer_summary is baked into the governed fabric. In-stream embeddings show the same move: they pin transactions to the context in which they actually occurred. Small models at the edge close the loop further still, letting decisions happen without exporting the data to an external endpoint for interpretation.
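
As a sketch of what “baked into the governed fabric” might look like, the example below exposes a hypothetical get_customer_summary tool that only ever assembles its answer from events that have passed through the govern step sketched earlier. The fetch_recent_events stub and the summary fields are invented for illustration; this is not the MCP SDK or any vendor’s implementation.

```python
from datetime import datetime, timezone

def fetch_recent_events(customer_id: str) -> list[dict]:
    """Stand-in for reading the customer's recent events off the stream."""
    return [
        {"timestamp": "2025-01-15T10:02:11Z", "type": "purchase",
         "card_number": "4111-1111", "_source": "pos"},
        {"timestamp": "2025-01-15T10:05:40Z", "type": "login", "_source": "web"},
    ]

def get_customer_summary(customer_id: str, consumer: str = "offer_agent") -> dict:
    """Tool surface for an agent: a summary built only from governed events."""
    # Reuses the govern() sketch above: masking, access, lineage happen in motion
    events = [govern(e, consumer) for e in fetch_recent_events(customer_id)]
    return {
        "customer_id": customer_id,
        "recent_event_count": len(events),
        "last_seen": max(e["timestamp"] for e in events) if events else None,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Every field is traceable to governed events,
        # so the agent's next action is defensible.
    }
```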

The irony is that many vendors still pitch “AI add-ons” as if the boundary exists. They talk about copilots bolted onto dashboards or AI assistants querying warehouses. Meanwhile, the real change is already happening under their feet, where the infrastructure itself is learning to think.

The way forward

Accountability is moving upstream. Systems no longer sit at the end of the pipeline, tallying what already happened. They’re embedded in the flow, making calls that shape outcomes in real time. That’s a very different burden than reconciling yesterday’s reports.

The trouble is, most enterprise architectures were designed for hindsight. They assume time to clean, model, and review before action. But once decisions are automated in motion, that buffer disappears. The moment the stream becomes the source of truth, the system inherits the responsibility of being right, right now.

That’s why the harder question isn’t “how fast can my pipeline run?” but “can I defend the decisions my systems are already making?”

This was the thread running through Rethinking Real Time: What Today’s Streaming Leaders Know That Legacy Vendors Don’t. If you didn’t catch it, the replay is worth a look. And if you’re ready to test your own stack against these realities, Striim is already working with enterprises to design for decision-time. Book a call with a Striim expert to find out more.