
The Fivetran-dbt Merger: Why Now and What Comes Next

The Fivetran-dbt merger marks a pivotal shift in data infrastructure. This post analyzes the reasons behind the merger and current industry trends, and argues that it signals the end of the Modern Data Stack era and the rise of AI-driven platforms.


The data infrastructure landscape is undergoing a fundamental shift. Two forces are reshaping how companies build their data platforms: open table formats like Iceberg are breaking vendor lock-in, and AI is demanding entirely different capabilities from data systems. The Fivetran-dbt merger announced this week, combining $600M in ARR, is a direct response to these market dynamics. It's also a clear signal that the Modern Data Stack era is ending and that the race to define what comes next is accelerating.

Why Now?

The timing of this merger makes sense. With $600M in combined ARR, Fivetran and dbt have reached the scale that makes an IPO viable and gives their investors a path to a successful exit. In a market where data infrastructure IPOs have been challenging, combining forces strengthens both the story and the valuation.

But there's more than financial engineering at play. Last June, Snowflake and Databricks both announced moves toward end-to-end data platforms, adding native ingestion and transformation capabilities that compete directly with Fivetran's and dbt's core products. When the giants start vertically integrating, smaller players face a choice: consolidate to compete or risk getting squeezed out.

The business logic is sound. Fivetran owns Extract and Load; dbt owns Transform; together they control the entire ELT pipeline. Add Fivetran's own data lake built on Apache Iceberg, and you have the foundation for an end-to-end data platform that could compete directly with Snowflake, Databricks, and the other giants. For customers, that could mean tighter integration and simpler vendor management. For the companies, it means they can compete with platform players on scope, if not on scale.

The Bigger Context: What's Actually Shifting

To understand what this merger really signals, we need to step back and look at what the Modern Data Stack was designed to solve. The cloud era introduced a simple idea: move all your data to one compute vendor (Extract and Load), then do everything there (Transform). Build models for humans writing SQL, create dashboards for business users, and optimize costs by separating compute, storage, and transformation. This architecture dominated the past decade.

But two major forces are now reshaping this landscape. First, Iceberg and the open table format movement are challenging the "lock data in one warehouse" model. Second, and more fundamentally, AI is changing what data infrastructure needs to do. AI agents don't need predefined dashboards; they need flexibility. They don't just need tables; they need semantic understanding and business context to answer questions reliably. They need real-time access to business concepts, not just historical batch data.

This raises an important question: can today's vendors (Snowflake, Databricks, and now the combined Fivetran-dbt) adapt their pre-LLM, BI-era platforms to serve AI? Or does the industry need something AI-native, built for this shift from the ground up?

What would "AI-native" data infrastructure actually require? The promise of AI is smart agents that can ask questions, answer them, build analyses, and proactively take action on data. But most importantly, we need to trust and understand what these agents are doing.

This shifts the architecture fundamentally. Instead of messy data pipelines and hundreds of transformation scripts, you need clear, rich, organized business context. Instead of building dashboards for humans, you're enabling agents to work directly with data. That requires three core capabilities:

Semantic Context: AI agents need to understand business concepts, not just database schemas. Metrics like revenue, customer lifetime value, and churn risk should be defined once, managed as code, with all their business logic and relationships encoded.

Governed Models: Business logic can't be scattered and duplicated across hundreds of transformation scripts. It needs to live in a central, governed layer where changes propagate consistently and everyone uses the same definitions.

Transparency & Trust: When an agent answers a question or takes an action, you need ways to verify its reasoning, trace its data sources, and trust its conclusions before they reach users or trigger decisions.
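To make the first two capabilities concrete, here is a minimal sketch of what "defined once, managed as code" could look like. The `Metric` class and every name in it are hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Metric:
    """A business concept defined once, as code, with its context attached."""
    name: str
    description: str  # business meaning an agent can read, not just a schema
    sql: str          # the single, governed definition of the logic
    sources: list = field(default_factory=list)  # lineage, for verification

# Revenue is defined exactly once; every consumer, human or agent, reuses it.
REVENUE = Metric(
    name="revenue",
    description="Recognized revenue net of refunds, in USD.",
    sql="SUM(amount) - SUM(refunded_amount)",
    sources=["payments.transactions"],
)

def describe(metric: Metric) -> str:
    """Render the context an agent would receive alongside the raw schema."""
    return f"{metric.name}: {metric.description} (from {', '.join(metric.sources)})"
```

The point of the sketch is the shape, not the syntax: the definition, its business meaning, and its lineage live in one governed object, so a change propagates everywhere and an agent can cite exactly where a number came from.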

Whether ELT consolidation addresses these needs, or whether the industry needs something built differently from the start, is the question data leaders should be considering.

What Comes Next

The Adaptation Challenge

dbt Labs has proven it can build products people love: dbt Core became the industry standard for analytics engineering (although its semantic layer hasn't achieved the same adoption despite similar ambitions). The question is whether the combined Fivetran-dbt can successfully adapt its platforms to meet AI-native requirements.

The challenge runs deeper than adding features. dbt's data modeling philosophy, that anyone with SQL skills can build a model, works well when curating specific gold tables for dashboards. But it often results in duplicated logic scattered across hundreds of scripts. For AI agents, this creates a trust problem: if humans can't easily understand where logic lives and how it connects, how can AI reliably use it? Context and semantic understanding aren't small additions to existing platforms; they're new paradigms requiring purpose-built tools for agents, not dashboards. Similarly, trusting agent outputs demands evaluation frameworks, trust scores, and verification systems that don't exist in today's BI-era stacks.

The merger itself is just the first step. To truly serve AI workloads, Fivetran-dbt would need to rethink its core data modeling philosophy and rebuild significant portions of its offerings, on top of the operational challenges of integrating two tech companies with different cultures, teams, and codebases.

This isn't just about merging companies and codebases; it's about inventing a new discipline. Call it semantic engineering: managing business context, evaluation rules, and agent behavior with the same rigor data teams apply to pipelines and transformations.
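As a rough illustration of what a semantic engineering artifact might look like, here is a sketch of evaluation rules treated as versioned code, checking an agent's answer before it reaches a user. The rules, the answer format, and `check_answer` are all hypothetical, not an existing framework:

```python
def check_answer(answer: dict, governed_metrics: set) -> list:
    """Return a list of trust violations for an agent's proposed answer."""
    violations = []
    # Rule 1: every metric the agent used must come from the governed layer.
    for metric in answer.get("metrics_used", []):
        if metric not in governed_metrics:
            violations.append(f"ungoverned metric: {metric}")
    # Rule 2: the answer must cite at least one traceable data source.
    if not answer.get("sources"):
        violations.append("no data sources cited")
    return violations

# An answer built on a governed metric but citing no sources fails Rule 2,
# so it would be blocked or flagged before triggering any decision.
answer = {"text": "Churn rose 2% QoQ", "metrics_used": ["churn_rate"], "sources": []}
print(check_answer(answer, governed_metrics={"revenue", "churn_rate"}))
```

The design choice this illustrates: the checks are code, versioned and reviewed alongside the semantic definitions themselves, rather than ad hoc judgment applied after an agent has already acted.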

Market Transition Dynamics

What comes next is likely what happens in most technology transitions: a period of parallel evolution. Established vendors will continue consolidating and adding AI features to existing platforms. Meanwhile, new entrants will build AI-native infrastructure from scratch. For a few years, both will coexist-some organizations extending their current stacks, others adopting new approaches. Eventually the market will settle on what actually works for AI at scale, and the Fivetran-dbt merger signals we're entering the transition phase.

But here's the key: current tooling and AI-native platforms can run alongside each other. Your BI workflows don't need to stop while you explore what AI can do. In a world changing this fast, the early adopters of AI-native platforms will be the ones who win, both as organizations building competitive advantages and as individual data practitioners leveling up with AI capabilities. The question isn't whether to wait for your current vendors to adapt; it's whether you can afford to wait while others are already building.

---

This merger signals more than vendor consolidation; it marks the transition from the Modern Data Stack to the AI Data Stack. For data teams willing to lead rather than follow, these are the most exciting times we've seen in years.
