Published 31 Mar 2026

Build vs. Buy: why building your own marketing attribution is harder than it looks

Before you build your own attribution platform: read this first.

(Yes, we sell a measurement platform. We’ll come back to that.)

There’s a moment in almost every marketing organisation where someone, usually someone smart, usually someone frustrated, says: “Why are we paying for this? We could just build it ourselves.”

And honestly? It’s a reasonable instinct.

You know your business better than any vendor does. Your data engineers are talented. You’re tired of black-box solutions you can’t fully control. And the price tag on enterprise marketing technology is hard to swallow when you’re convinced your team could replicate most of it in a few sprints.

We get it. We’ve heard this from hundreds of companies over 15 years. And we want to have an honest conversation about it, not to talk you out of building something, but to make sure you go in with your eyes open.

The elephant in the room

Let’s just say it directly: we are an attribution and marketing measurement company. We have an obvious commercial interest in you not building your own solution. You should factor that in as you read this.

What we’d ask in return is that you factor in your own biases too. The “build” decision often feels more strategic, more independent, more in control. Sometimes that feeling is accurate. Often, it isn’t! And the real costs only become visible 12 to 18 months later, when the team that built it has moved on, the model hasn’t been updated in a quarter, and three new privacy regulations have landed that nobody’s had time to address.

So: we’re not neutral. But we’ve also watched this play out enough times to have some honest things to say.

Why companies decide to build, and why it makes sense on paper

The instinct to build your own measurement isn’t irrational. Here’s what usually drives it:

“It’ll be cheaper.” Licensing fees for enterprise software are real and significant. An internal tool built on top of existing infrastructure looks, at first glance, like a fraction of the cost.

“We’ll have more control.” Vendor lock-in is a legitimate concern. If your measurement logic lives entirely inside someone else’s platform, what happens when they change the model? When they get acquired? When they sunset a feature?

“Our data is special.” Every business has unique data structures, custom conversion events, offline touchpoints, specific channel mixes. Off-the-shelf tools don’t always fit cleanly.

“We can iterate faster.” Internal tools can be adjusted to business needs without a change request or a product roadmap negotiation.

All of these are real. None of them are wrong. The question isn’t whether these motivations are valid; it’s whether the full picture supports the same conclusion.

What the build decision actually involves

Here’s the analogy that keeps coming to mind: building your own attribution platform is a bit like deciding to build your own house. You can make every design decision yourself. No compromises, no standard layouts, no paying someone else’s margin.

What gets underestimated is everything you don’t know you don’t know: the structural details, the specialist knowledge, the hundred small decisions that only become visible when something goes wrong.

Somewhere around month eighteen, you realise the professional builder wasn’t just charging for bricks and labour. They were charging for sixteen years of knowing exactly where things go wrong.

The difference with attribution: you also have to renovate every six months. Privacy regulations update the building code. Platforms change the foundations without notice. And the family living inside (your marketing, finance, and analytics teams) needs the lights to stay on the whole time.

What gets underestimated is everything you don’t know you don’t know.

— Katharina Thürer, Exactag

With attribution, the visible part is the initial build. The invisible part is everything else:

Ongoing model maintenance. Attribution isn’t something you build once and deploy. Customer journeys change, channel mixes shift, platform APIs update without warning. A model that was accurate in Q1 may be quietly drifting by Q3, and you might not notice until the numbers stop making sense.
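To make the drift point concrete, here’s a minimal sketch of the kind of monitoring an in-house team would need to run continuously. The channel names, shares, and the five-point threshold are illustrative assumptions, not any vendor’s actual methodology:

```python
# Toy drift check: flag channels whose attributed share has shifted
# more than a tolerance between two periods. Channel names, shares,
# and the 0.05 threshold are illustrative assumptions.

def drift_alerts(baseline, current, tolerance=0.05):
    """Return channels whose attributed share moved more than `tolerance`."""
    alerts = {}
    for channel in baseline:
        delta = current.get(channel, 0.0) - baseline[channel]
        if abs(delta) > tolerance:
            alerts[channel] = round(delta, 3)
    return alerts

# Attributed conversion share per channel, two quarters apart.
q1 = {"search": 0.40, "social": 0.30, "display": 0.20, "email": 0.10}
q3 = {"search": 0.48, "social": 0.22, "display": 0.20, "email": 0.10}

# search and social have drifted beyond the tolerance; display and email have not.
print(drift_alerts(q1, q3))
```

The check itself is trivial; the hard, ongoing work is deciding what counts as acceptable drift, investigating each alert, and recalibrating the model — which is exactly the maintenance burden that rarely makes it onto the build plan.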

Privacy compliance (and it keeps moving). GDPR, consent management, cookieless environments, Data Clean Rooms, iOS restrictions, Walled Garden APIs. Every one of these requires not just technical updates but legal review, methodology adjustments, and ongoing monitoring. This is a full-time job that never ends.

Platform bias doesn’t go away just because you built the tool. If your in-house solution is still calibrated against Google- and Meta-reported conversion data, you haven’t removed the bias; you’ve just internalised it. The fundamental challenge of neutral measurement requires an independent data collection layer, not just a different interface on top of the same platform inputs.

True measurement independence means independence from platform self-attribution: from Google grading its own homework, from Meta deciding what a conversion looks like. That kind of independence isn’t achieved by building internally. It’s achieved by having a data collection layer that doesn’t rely on publisher-reported data at all. That’s a harder problem than it looks, and it’s one most internal builds don’t fully solve.

The people problem. Who owns this in two years? What happens when the data scientist who architected it moves on? Internal tools are often remarkably fragile in ways that only become apparent under succession.

The costs that don’t appear on the initial spreadsheet

When companies model the build vs. buy decision, the initial spreadsheet usually captures licensing fees on one side and engineering salaries on the other. What often doesn’t appear:

  • The opportunity cost of data engineering time that isn’t going towards other priorities
  • The cost of measurement gaps: upper-funnel channels going unmeasured, app and web journeys disconnected, non-consented users excluded from attribution entirely
  • The cost of delayed or wrong decisions made on a model nobody fully trusts
  • The cost of rebuilding when a major platform changes its API (which happens regularly)
  • The cost of re-explaining your methodology to Finance, Legal, or external auditors every time someone challenges the numbers

None of these are hypothetical. They’re patterns we see repeatedly across organisations that went the build route and are now, a few years later, quietly re-evaluating.

So what are we actually saying?

We’re not asking you to trust us. We’re asking you to trust the complexity of the problem, and to make sure your build plan accounts for all of it.

If you want to pressure-test your thinking, whatever direction you’re leaning, we’re happy to have that conversation. No pitch, just clarity.

Exactag’s experts have specialised exclusively in marketing measurement and attribution since 2010, across 100+ brands and 115 markets.

FAQ

What does marketing attribution actually cost to maintain in-house? 

More than the initial build. The ongoing costs that rarely appear on the planning spreadsheet include: continuous model recalibration, privacy compliance updates (GDPR, cookieless, consent management), platform API changes, data quality monitoring, and the organisational overhead of explaining and defending your methodology internally. When teams account for all of this, the in-house route is rarely cheaper; the costs are just distributed differently and show up later.

What’s the difference between building on top of platform data vs. truly independent measurement?

This is one of the most important distinctions in the build vs. buy conversation. An internal tool built on top of Google- or Meta-reported conversion data inherits the bias of those platforms. True measurement independence requires a first-party data collection layer that tracks interactions independently, not a different interface on top of the same publisher-reported inputs. Most internal builds don’t achieve this, which means the “independence” they deliver is more apparent than real.

How do you handle measurement for non-consented users?

This is a question many internal builds never fully solve. With rising opt-out rates across Europe and beyond, a significant share of user journeys can’t be tracked at user level. Exactag applies a deterministic group-level methodology for non-consented users, maintaining approximately 90% accuracy compared to full user-level measurement. Without a dedicated solution for this, in-house builds often simply exclude this traffic, creating systematic gaps in attribution that compound over time.
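As a toy illustration only (this is not Exactag’s deterministic group-level methodology, and all field names and data are invented), here is the difference between dropping non-consented traffic and measuring it at cohort level:

```python
# Toy contrast between user-level measurement (consented sessions only)
# and group-level measurement (all sessions, aggregated per channel cohort).
# Field names and data are illustrative assumptions.
from collections import defaultdict

sessions = [
    {"channel": "search", "consented": True,  "converted": True},
    {"channel": "search", "consented": False, "converted": False},
    {"channel": "social", "consented": False, "converted": True},
    {"channel": "social", "consented": False, "converted": False},
]

def conversion_rates(sessions, include_non_consented):
    """Conversion rate per channel cohort."""
    totals, conversions = defaultdict(int), defaultdict(int)
    for s in sessions:
        if s["consented"] or include_non_consented:
            totals[s["channel"]] += 1
            conversions[s["channel"]] += s["converted"]
    return {ch: conversions[ch] / totals[ch] for ch in totals}

# User-level only: social disappears entirely and search looks perfect.
print(conversion_rates(sessions, include_non_consented=False))
# Group-level: both channels are measured, and the rates change.
print(conversion_rates(sessions, include_non_consented=True))
```

Even in this four-row toy, excluding non-consented sessions makes one channel vanish and inflates the other — which is the systematic gap the answer above describes, compounding as opt-out rates rise.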

What should we ask before deciding to build?

A few questions worth pressure-testing: Who will own and maintain this in two years? How will you handle privacy compliance as regulations evolve? What happens when a major platform changes its API? Can your solution measure upper-funnel and view-through impact, not just clicks? And critically: are you building on top of platform-reported data, or do you have a truly independent data collection layer?