It looks fine until it doesn't

Modern tools, clean pipelines, dbt running smoothly, Snowflake humming along. Everything looks great on the surface.

Then someone asks a new question - something slightly different from what the team has answered before - and everything grinds to a halt. Not because the data isn't there, not because the tooling is broken, but because the way the data is modeled can't support the question.

That's the thing about data models. They sit underneath everything. And when they're wrong - or just rigid - they quietly constrain everything built on top of them. Every dashboard. Every analysis. Every metric. All of it is shaped by decisions someone made about how to organize the data, often months or years ago, often without fully understanding how the business would evolve.

AI agents won't fix this

As AI agents start taking on more analyst workflows, there's a real risk those structural constraints stay buried instead of getting fixed. An AI agent can write SQL. It can generate dashboards. It can summarize data in plain English. But it can't look at your data model and say "this is the wrong shape for the questions you're trying to ask."

If anything, AI makes the problem worse. Because now you've got a fast, confident system generating answers on top of a foundation that was never designed for those questions. The answers will look right. They'll be formatted beautifully. And some of them will be subtly, dangerously wrong - not because the AI made a mistake, but because the underlying model was the wrong tool for the job.

One model doesn't fit all

I used to be a strict dimensional modeling person. Facts and dimensions. Star schemas. It's clean, it's proven, and it works beautifully when the question is "what was total revenue by product category by region last quarter?"
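A minimal sketch of what that looks like, using an in-memory SQLite database. The table and column names here (`fact_sales`, `dim_product`, `dim_region`) are illustrative assumptions, not from any real warehouse; the point is that the quarterly-revenue question falls straight out of the star shape as a group-by across joins.

```python
import sqlite3

# Hypothetical star schema: one fact table, two dimensions.
# All names are illustrative, not taken from a real model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE dim_region  (region_id  INTEGER PRIMARY KEY, region   TEXT);
CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    region_id  INTEGER REFERENCES dim_region(region_id),
    sold_on    TEXT,
    revenue    REAL
);
INSERT INTO dim_product VALUES (1, 'Hardware'), (2, 'Software');
INSERT INTO dim_region  VALUES (1, 'EMEA'), (2, 'AMER');
INSERT INTO fact_sales VALUES
    (1, 1, '2024-04-02', 100.0),
    (2, 1, '2024-04-15', 250.0),
    (2, 2, '2024-05-20', 400.0);
""")

# "Total revenue by product category by region" is a simple
# group-by across the star -- no reshaping required.
rows = conn.execute("""
    SELECT p.category, r.region, SUM(f.revenue) AS revenue
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    JOIN dim_region  r USING (region_id)
    GROUP BY p.category, r.region
    ORDER BY p.category, r.region
""").fetchall()
print(rows)
```

For aggregate questions like this, the dimensional shape is doing the work: the query is short, obvious, and stable.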

Then I had to model email engagement data tied to purchase behavior for attribution questions. I tried to force it into a dimensional model, and it was painful. Stitching together clicks, opens, and purchases into fact tables that were never designed for event-level behavioral data. The model technically worked. But it was fragile, hard to maintain, and needed rework every time a request came in that the model hadn't anticipated.

That same problem became straightforward with an activity schema approach. Not because activity schema is inherently better than dimensional modeling. It's not. It was simply the right approach for that specific use case.
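To make the contrast concrete, here is a hedged sketch of the activity-schema shape, again with SQLite. The table name (`customer_activity`), its columns, and the campaign/order values are all invented for illustration. Every behavioral event lands in one narrow stream, so a question like "which email click most recently preceded each purchase?" becomes a self-join over the stream instead of a fact-table rebuild.

```python
import sqlite3

# Hypothetical activity stream: one narrow table, every event in
# the same shape. Names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE customer_activity (
    customer_id TEXT,
    activity    TEXT,   -- 'email_open' | 'email_click' | 'purchase'
    ts          TEXT,
    feature     TEXT    -- e.g. campaign name or order id
)""")
conn.executemany(
    "INSERT INTO customer_activity VALUES (?, ?, ?, ?)",
    [
        ("c1", "email_click", "2024-05-01T09:00", "spring_promo"),
        ("c1", "purchase",    "2024-05-01T10:30", "order_17"),
        ("c2", "email_open",  "2024-05-02T08:00", "spring_promo"),
        ("c2", "purchase",    "2024-05-03T12:00", "order_18"),
    ],
)

# Last-click attribution: for each purchase, the most recent
# email_click by the same customer before the purchase timestamp.
rows = conn.execute("""
    SELECT p.customer_id, p.feature AS order_id,
           (SELECT c.feature FROM customer_activity c
            WHERE c.customer_id = p.customer_id
              AND c.activity = 'email_click'
              AND c.ts < p.ts
            ORDER BY c.ts DESC LIMIT 1) AS attributed_campaign
    FROM customer_activity p
    WHERE p.activity = 'purchase'
    ORDER BY p.customer_id
""").fetchall()
print(rows)
```

The same "temporal join" pattern covers first-touch, windows, and funnels without restructuring anything, which is exactly the flexibility the fact tables above were fighting.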

That judgment - knowing which modeling approach fits which problem - is still a fundamentally human skill. Any LLM can describe the difference between a star schema and an activity stream. But choosing the right model requires understanding the business context: who is asking the question, what decisions they're making with the answer, how the data will be consumed, and what tradeoffs are acceptable.

That understanding comes from conversations with stakeholders. Not from prompts.

One modeling approach isn't enough anymore

Sticking to a single modeling approach is becoming a liability. The people who do this well can move across approaches - dimensional, relational, activity schema, wide event tables, whatever - and know when each one fits.

This extends directly to AI in analytics. AI agents and semantic layers are only as good as the data models underneath them. A poorly chosen modeling approach doesn't just create technical debt. It limits the questions the business can ask. It limits what AI can reliably answer. It puts a ceiling on your entire analytics capability, and most people in the org never see it.

What this actually looks like in practice

When the data model is the bottleneck, it usually looks like this: a new stakeholder question triggers weeks of rework, analysts stitch together fragile one-off workarounds, and every "quick" request quietly turns into a remodeling project.

This is more common than most people realize. And nobody outside the data team can see it happening.

The skill that matters most

The job isn't writing the most SQL. It's looking at a business problem and saying: "here is the right way to model this data so that both humans and AI can get trustworthy answers from it."

We're not just modeling data for human readability anymore. AI agents are consumers of our data models too. The shape of your data determines not just what your analysts can build, but what your AI tools can reason about.

That is judgment. And judgment is still a human skill.

If the data model is the wrong shape, everything built on top of it has a ceiling nobody can see.
