Your MMM Provider Shouldn't Be Working Around Your Agency

The Measurement Silo Problem

Most marketing measurement engagements start the same way. A brand hires a measurement provider, shares some data files, and waits for results. The media agency, if they’re involved at all, might get a debrief at the end.

It sounds efficient. In practice, it’s a recipe for recommendations that miss the point.

The issue isn’t the model. Modern MMM is technically sophisticated and, when built well, genuinely accurate. The problem is context. The model can only explain what it can see. And the person with the clearest view of what’s actually happening in the ad accounts, the agency running the campaigns, is often the last one consulted.

What gets lost in the gap between measurement provider and media agency are the exact details that determine whether a recommendation is actionable or irrelevant.

What a Media Agency Knows That the Data Doesn’t Show

When a measurement provider looks at a brand’s marketing data, they see spend, impressions, clicks, and revenue. What they don’t see is why things were set up the way they were.

Take Google Search as a simple example. A brand might have two Search campaigns running: one targeting branded terms and one targeting generic, high-intent queries. From the outside, both show up as “Google Search spend.” An analyst who doesn’t know the structure of the account treats them as a single channel.

This becomes a problem when the model recommends scaling Search, or cutting it. If the model attributes branded search as incremental when most of it is simply capturing demand already generated by other channels, the ROI looks inflated. An analyst who doesn’t understand the branded vs. generic split might recommend scaling spend that is largely recapturing existing intent rather than creating new demand.
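A toy calculation makes the inflation concrete. All numbers below are hypothetical, purely for illustration: they assume branded search is only 20% incremental while generic search is 90% incremental, and show how a blended "Google Search" ROAS overstates what the spend actually created.

```python
# Illustrative only: hypothetical spend, revenue, and incrementality
# figures showing how branded search can inflate a blended ROAS when
# two very different campaigns are treated as one channel.

spend = {"branded": 10_000, "generic": 40_000}
revenue = {"branded": 120_000, "generic": 160_000}

# Assumed incrementality: most branded revenue would have converted
# anyway via demand created elsewhere; generic is mostly net-new.
incrementality = {"branded": 0.20, "generic": 0.90}

blended_roas = sum(revenue.values()) / sum(spend.values())

incremental_revenue = {c: revenue[c] * incrementality[c] for c in revenue}
incremental_roas = sum(incremental_revenue.values()) / sum(spend.values())

print(f"Blended ROAS:     {blended_roas:.2f}")      # 5.60
print(f"Incremental ROAS: {incremental_roas:.2f}")  # 3.36
```

Under these assumptions, the blended view overstates true incremental return by roughly two thirds, which is exactly the distortion the agency’s account knowledge would catch.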

A media agency knows this. They built the structure. They know which campaign is doing which job.

When the agency isn’t part of the measurement conversation, this context disappears entirely.

The Silo in Practice

Here’s how measurement silos typically play out. A brand engages a measurement provider directly. The agency gets a request for data exports. There’s a handover meeting where the data files are passed across, some high-level questions are answered, and everyone goes back to their separate lanes.

The measurement provider builds the model. The recommendations come back. The agency implements them, sometimes without being part of the conversation about why those specific changes were suggested.

When the results don’t match expectations, nobody has enough context to understand where things went wrong. The brand is caught in the middle, trying to translate between two parties who never really spoke to each other.

This structure doesn’t just produce suboptimal recommendations. It creates accountability problems. If the agency wasn’t consulted on the methodology and didn’t understand the rationale, they have no real reason to trust the output. And an MMM recommendation that the agency doesn’t trust will never get implemented.

A Better Model: Three Parties, One Table

The measurement setups that produce the most useful outcomes tend to have one thing in common: the measurement provider, the media agency, and the brand are working together throughout the process, not just at the handover.

This doesn’t mean the agency has to be in every technical modeling session. But it does mean:

  • The agency is present for the data review, explaining how campaigns are structured and why
  • The measurement provider can ask “what happened here?” and get a direct answer from the people who were running the campaigns
  • Recommendations are tested against agency knowledge before they’re finalised (“Does this hold up given what you know about this account?”)
  • The agency understands the methodology well enough to implement recommendations with confidence

This three-party structure (brand, agency, measurement partner) is harder to set up than a direct vendor relationship. It requires more coordination and a willingness from all three parties to share information across their usual boundaries.

But the output is measurably better. Recommendations that have been stress-tested against real account knowledge are recommendations that get acted on.

What Agencies Actually Want from Measurement

Performance agencies are often cast as the sceptical party in measurement conversations. And there is genuine scepticism, particularly around MMM, because the timelines are long and the outputs can feel disconnected from the day-to-day decisions agencies have to make.

But when agencies are genuinely involved in the process, the questions they ask reveal what they actually want from measurement. It’s not more data. Most agencies already have sophisticated internal dashboards with more data than they can act on.

What they want is a defensible, channel-neutral view of what’s driving revenue. Not platform-reported ROAS, which every platform calculates differently and which always flatters the platform reporting it. Not last-click attribution, which systematically over-credits the bottom of the funnel. A view that sits outside any single channel’s interest and tells you what’s actually working.

They also want predictive capability. If we move budget from Meta to Google, what happens to revenue? If we increase total spend by 20%, where does the marginal dollar go? These are questions platform data can’t answer. MMM, done well, can.
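The kind of what-if question above is usually answered by evaluating channel response curves fitted by the model. The sketch below is not a real MMM: it assumes a simple concave saturation curve (revenue proportional to the square root of spend) with made-up coefficients, just to show the mechanics of predicting a budget shift.

```python
import math

# Hypothetical response-curve coefficients; a real MMM would fit
# these (plus adstock and saturation shapes) from historical data.
SCALE = {"meta": 900.0, "google": 700.0}

def channel_revenue(channel: str, spend: float) -> float:
    """Diminishing-returns revenue at a given spend level."""
    return SCALE[channel] * math.sqrt(spend)

def total_revenue(budget: dict) -> float:
    return sum(channel_revenue(c, s) for c, s in budget.items())

current = {"meta": 60_000.0, "google": 40_000.0}
shifted = {"meta": 50_000.0, "google": 50_000.0}  # move $10k Meta -> Google

delta = total_revenue(shifted) - total_revenue(current)
print(f"Predicted revenue change from the shift: {delta:+,.0f}")
```

With these particular coefficients the shift predicts a small revenue loss, because Meta’s (assumed) marginal return at its current spend is still higher than Google’s. The point is the mechanism, not the numbers: diminishing-returns curves are what let a model answer "where does the marginal dollar go?"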

And they want to be able to ask those questions quickly. Not in a quarterly review cycle. In the middle of a planning conversation, or before a budget presentation to a client.

The Discovery Phase Is Where It Happens

One of the underrated parts of a well-run MMM engagement is the discovery phase. This is where the measurement provider asks detailed questions about how the marketing has actually been executed: what campaigns were running when, what the strategy behind each channel was, what changed and why.

For an agency, this is an unusual experience. Most measurement processes treat the agency as a data supplier. A thorough discovery process treats the agency as a knowledge source. The difference matters.

When an analyst asks “what happened in this spend period that would explain this spike?” and the agency can answer with specifics, the model becomes more accurate. When the model reflects the actual shape of the marketing activity, the recommendations are more likely to reflect reality.

The discovery phase also builds trust. When the agency sees that the measurement provider is actually trying to understand their work, rather than just processing a data file, the relationship shifts. The output starts to feel less like an external audit and more like a shared analysis.

The Takeaway

If your MMM engagement is a two-party conversation between measurement provider and brand, with the media agency occasionally looped in for data requests, you’re leaving accuracy on the table.

The measurement provider doesn’t have the account context. The brand doesn’t have the technical depth. The agency has both, but gets treated as a data supplier rather than a collaborator.

The brands that get the most value from MMM are the ones that build the three-way structure from the start. It takes more coordination upfront. The results are worth it.
