When Meta and Performance Max Move Together, Measurement Breaks

The Measurement Problem Nobody Warns You About
When a brand comes to MMM for the first time, they usually have one question: how much is each channel contributing to revenue?
It’s a reasonable question. It’s exactly what MMM is designed to answer. But there’s a scenario that comes up often enough to be worth flagging, where a technically sound model has to come back and say: we can’t answer that for two of your channels individually. We can only tell you what they’re doing combined.
That scenario is high channel correlation. And it’s more common than most marketers realise, particularly for D2C brands running both Meta and Performance Max (Pmax).
What Channel Correlation Actually Means
In MMM, the model works by identifying which changes in spend correlate with changes in revenue, after controlling for seasonality, promotions, and everything else that moves revenue independently of paid media.
To do that reliably, the model needs variation. It needs to observe periods where Meta spend went up but Pmax didn’t, or where Pmax ran a burst and Meta was steady. Those contrasting moments are what allow the model to give each channel its own contribution estimate.
When two channels move in perfect lockstep, that variation disappears. Every week that Meta spend increases, Pmax spend increases too. Every week that one is scaled back, so is the other. From the model’s perspective, you have one channel, not two, because there’s no data point that lets it separate their individual effects.
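To make that concrete, here’s a minimal sketch in Python. Everything in it is synthetic and the names are invented; the point is simply that when two spend series are nearly collinear, a regression returns individual coefficients with huge uncertainty, while independent variation makes them sharp.

```python
# A synthetic illustration: all channel names, spend levels, and effect
# sizes below are invented. Two near-collinear spend series produce a
# regression that can't pin down individual effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
weeks = 104

# Scenario 1: the channels move in lockstep (correlation ~0.99).
meta = rng.uniform(10_000, 50_000, weeks)
pmax = 0.6 * meta + rng.normal(0, 500, weeks)
revenue = 2.0 * meta + 3.0 * pmax + rng.normal(0, 20_000, weeks)

fit = sm.OLS(revenue, sm.add_constant(np.column_stack([meta, pmax]))).fit()
print("Correlation:", np.corrcoef(meta, pmax)[0, 1].round(3))
print("Coefficients:", fit.params[1:].round(2))  # true values are 2.0 and 3.0
print("Std errors:  ", fit.bse[1:].round(2))     # large enough to swamp the estimates

# Scenario 2: the same channels with independent variation.
pmax_indep = rng.uniform(5_000, 35_000, weeks)
revenue_i = 2.0 * meta + 3.0 * pmax_indep + rng.normal(0, 20_000, weeks)

fit_i = sm.OLS(revenue_i, sm.add_constant(np.column_stack([meta, pmax_indep]))).fit()
print("Coefficients:", fit_i.params[1:].round(2))  # close to 2.0 and 3.0
print("Std errors:  ", fit_i.bse[1:].round(2))     # an order of magnitude tighter
```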
As a rough threshold, a correlation coefficient of 0.85 or above between two channels is the point where most models can no longer reliably separate them. A correlation of 1.0 means they’re perfectly synchronised. Many brands are closer to 0.85 than they realise.
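If you want to know where your own account sits, the check is a one-liner over a weekly spend export. The file and column names below are placeholders, not a standard format.

```python
# A quick self-check. "weekly_spend.csv", "meta_spend", and "pmax_spend"
# are hypothetical names; substitute whatever your spend export uses.
import pandas as pd

df = pd.read_csv("weekly_spend.csv")
r = df["meta_spend"].corr(df["pmax_spend"])
print(f"Meta vs Pmax weekly spend correlation: {r:.2f}")
# Above ~0.85, expect a grouped estimate rather than two individual reads.
```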
Why Meta and Pmax Are Particularly Vulnerable
Meta and Performance Max end up highly correlated for a structural reason. Both are lower-funnel, performance-focused channels that brands tend to ramp up and scale back together. Sale periods drive both channels up. Quiet periods bring both down. The marketing logic is often identical: more budget when there’s a conversion opportunity, less budget when there isn’t.
The result is a measurement problem. You’re running two channels. They’re operating on the same rhythm. The model sees one pattern in the data and can only assign one contribution number.
This isn’t a failure of MMM as a methodology. It’s a reflection of how the marketing was actually run. The model can only work with the variation that exists in the data. If the variation isn’t there, the answer genuinely can’t be separated.
What Grouped Measurement Looks Like
When the model groups two correlated channels, you get a combined contribution number: “Meta plus Pmax together drove X% of your revenue over this period.” The combined number can still be accurate and useful. You know the total contribution of that cluster of channels. You can still make budget decisions at the combined level.
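Here’s a standalone synthetic sketch (all numbers invented) of why the combined number holds up: regress revenue on the sum of the two spend series, and the blended coefficient comes back stable even though the split coefficients were unrecoverable.

```python
# A standalone synthetic sketch (all numbers invented): the sum of two
# near-collinear spend series is identifiable even when the parts are not.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
weeks = 104
meta = rng.uniform(10_000, 50_000, weeks)
pmax = 0.6 * meta + rng.normal(0, 500, weeks)   # ~0.99 correlated with meta
revenue = 2.0 * meta + 3.0 * pmax + rng.normal(0, 20_000, weeks)

fit = sm.OLS(revenue, sm.add_constant(meta + pmax)).fit()  # one grouped regressor
print("Blended effect per combined dollar:", fit.params[1].round(2))
print("Std error:", fit.bse[1].round(2))  # tight, unlike the split-channel fit
```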
What you lose is the ability to optimise between them. If the combined ROI is high, you know you could be spending more on that cluster. But if you want to know whether it’s better to put the extra dollar into Meta or into Pmax, the model can’t tell you. That question requires data that doesn’t exist yet.
For brands where Meta and Pmax are two of the largest spend channels, this is a meaningful limitation. The channels together might represent 50% or more of paid media budget. Not being able to distinguish their individual contributions makes precise optimisation difficult.
The Fix: Deliberate Variation
The only way to separate correlated channels in a future model run is to introduce variation in how you scale them. This means deliberately running them differently for a period of time, so the data has contrast to work with.
There are a few practical ways to do this:
Time-based testing. Run a period of four to eight weeks where you scale Meta but hold Pmax flat, or vice versa. The variation in outcomes during that period gives the model something to learn from (a sketch in code follows this list).
Geographic splits. If the brand operates across multiple regions, run the channels at different ratios in different markets. The geographic variation provides the contrast even if the overall budget stays similar.
Campaign structure changes. If Pmax and Meta are currently managed with identical pacing logic (both go up and down at the same time), changing the pacing for one of them introduces variation without needing a formal test.
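As a rough sketch of the time-based option, with invented pacing numbers: hold one channel flat for a test window while the other follows its usual rhythm, then check how far the correlation falls. A geographic split creates the same contrast across markets instead of across weeks.

```python
# A synthetic sketch of a time-based test (invented pacing numbers). Both
# channels normally follow the same budget rhythm; Pmax is frozen for an
# eight-week window while Meta keeps scaling.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
pacing = rng.uniform(10_000, 50_000, weeks)  # the shared budget rhythm

meta = pacing.copy()
pmax = 0.6 * pacing
print("Before the test:", np.corrcoef(meta, pmax)[0, 1].round(3))  # exactly 1.0

pmax[20:28] = pmax[:20].mean()  # hold Pmax flat for eight weeks
print("With the test window:", np.corrcoef(meta, pmax)[0, 1].round(3))
# The longer or bolder the divergence, the further the correlation falls.
```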
None of these approaches are zero-cost. Testing periods have uncertainty. Geographic splits add operational complexity. But the alternative is continuing to measure these channels only as a combined unit, which limits the precision of every budget recommendation that follows.
The Broader Lesson
Channel correlation in MMM is a symptom of a broader pattern: the way you run your marketing determines what you can learn from it.
Brands that run all their performance channels on identical logic, scaling everything together for sales periods and pulling everything back in quiet periods, are making a measurement trade-off whether they realise it or not. The operational simplicity of a unified pacing strategy comes at the cost of measurement granularity.
This doesn’t mean every brand needs a complex multi-channel testing programme. But it does mean that if you want to understand the individual contribution of each performance channel, the channels need to have been run with at least some independent variation.
A measurement provider can build the best possible model with the data they have. They can’t manufacture variation that wasn’t there. The signal has to come from how the marketing was actually executed.
What to Do Before Your Next MMM
If you’re planning to run MMM in the next 12 months and you’re running both Meta and Performance Max, now is the time to start introducing variation.
You don’t need a formal holdout experiment. Even modest differences in scaling behaviour between the two channels, over enough weeks, will give the model something to work with. Small adjustments to pacing logic or campaign structure can create the variation needed to get individual channel reads.
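One way to keep yourself honest during those months is to watch a rolling correlation and confirm it’s drifting down. The file and column names below are placeholders for whatever your spend export looks like.

```python
# Track decorrelation over a trailing 26-week window. The file and column
# names ("weekly_spend.csv", "week", "meta_spend", "pmax_spend") are
# hypothetical placeholders for your own export.
import pandas as pd

df = pd.read_csv("weekly_spend.csv", parse_dates=["week"])
df["rolling_corr"] = df["meta_spend"].rolling(26).corr(df["pmax_spend"])
print(df[["week", "rolling_corr"]].tail(12))  # trending below ~0.85 yet?
```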
The brands that get the most precise MMM results are the ones that think about measurement before they need it, not after. The data you collect in the next six months is the data your next model will run on. Running it with measurement in mind makes a meaningful difference to what you can learn from it.