Your MMM Doesn't Know About Your PR Win - and That's a Problem

Free Media Isn’t Free in a Model
When a brand gets unexpected media coverage, the instinct is to celebrate. A product goes viral. A promotional stunt gets picked up by national news. A food item becomes a meme. Sales spike. Everyone’s happy.
The problem comes later, when you’re trying to understand what drove that spike. If you don’t tell your MMM about the earned media event, the model will look at that week, see a sales increase, and try to explain it using the data it has. Whatever paid channels were running at the time will get the credit.
This isn’t a bug. It’s the model doing exactly what it’s supposed to do. But it produces a result that’s wrong: an inflated ROI for a paid channel that didn’t actually cause the spike.
How Control Variables Work
In MMM, a control variable is a model input that explains variation in your dependent variable (usually sales or revenue) that isn’t driven by your paid media spend. Common control variables include:
- Seasonality and public holidays
- Promotional periods and discount depth
- Price changes
- Store openings and closures
- Weather events
Earned media fits into this category. If a PR event, viral moment, or unexpected media appearance caused a real change in consumer behaviour, the model needs to know about it. Otherwise it will misattribute the impact.
The practical implication: every meaningful earned media event needs to be logged, dated, and provided to the measurement team before modeling begins. Not as a channel with spend, but as a flag: something happened here that affected sales independently of our paid activity.
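To make this concrete, here is a minimal sketch of how such a flag enters a model. All data here is simulated and the variable names are hypothetical; a real MMM would use adstocked, saturated media terms, but the earned-media flag would still be just another column in the design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly data

# Hypothetical inputs: paid spend and a 0/1 earned-media flag.
paid_spend = rng.uniform(5_000, 20_000, weeks)
earned_flag = np.zeros(weeks)
earned_flag[30:32] = 1.0  # two weeks of national coverage

# Simulated sales: baseline + paid effect + earned-media lift + noise.
sales = 50_000 + 1.5 * paid_spend + 40_000 * earned_flag + rng.normal(0, 3_000, weeks)

# Design matrix: intercept, paid spend, earned-media flag.
X = np.column_stack([np.ones(weeks), paid_spend, earned_flag])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # roughly [baseline, paid effect per dollar, earned lift]
```

Because the flag is in the model, the two-week lift is estimated separately and the paid coefficient stays close to its true value.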
A Real-World Example
Consider a national food brand that ran a small promotional stunt - roughly $10,000 in paid media to support it. The stunt got picked up by national television. For two weeks, the brand had coverage it couldn’t have bought.
If the modeling team doesn’t know about the TV coverage, they see a sales spike in that period and start looking for an explanation in the paid media data. Whatever channels were running that fortnight (Meta, Google Search, out-of-home) get assigned some of the credit for a spike that had nothing to do with them.
The next time the model makes a budget recommendation, those channels are carrying inflated ROI numbers. The recommendation over-allocates to them. The actual driver of that spike, the PR moment, is unmeasured and unoptimised.
Now imagine this happening repeatedly. Seasonal stunts. Product launches with organic pickup. Collaborations that generate press. Each unlogged event adds a small amount of noise to the model. Over two to three years, the noise compounds.
What Gets Distorted
The specific distortions depend on timing. If earned media tends to coincide with paid media bursts (which it often does, because brands usually do both at the same time), the paid channels running during those periods will appear more effective than they are.
If the earned media is more independent of paid activity, the distortion is more random. Either way, the ROI numbers for individual channels become less reliable.
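The distortion can be demonstrated with a small simulation. This is an illustrative sketch with made-up numbers, not output from a real model: earned coverage lands during a paid burst, and the same data is fitted with and without the earned-media flag. The naive fit inflates the paid coefficient, which is exactly the omitted-variable bias described above:

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = 104
paid_spend = rng.uniform(5_000, 20_000, weeks)

# Earned coverage lands during a paid burst, as it often does in practice.
paid_spend[50:52] = 25_000
earned_flag = np.zeros(weeks)
earned_flag[50:52] = 1.0

# True world: paid is worth $1.50 per dollar; the PR moment adds $60k/week.
sales = 50_000 + 1.5 * paid_spend + 60_000 * earned_flag + rng.normal(0, 3_000, weeks)

X_full = np.column_stack([np.ones(weeks), paid_spend, earned_flag])
X_naive = X_full[:, :2]  # same model, earned flag omitted

coef_full, *_ = np.linalg.lstsq(X_full, sales, rcond=None)
coef_naive, *_ = np.linalg.lstsq(X_naive, sales, rcond=None)

print(f"paid coefficient with flag:    {coef_full[1]:.2f}")
print(f"paid coefficient without flag: {coef_naive[1]:.2f}")  # inflated
```

With the flag omitted, the high-spend weeks that coincide with the PR lift pull the paid coefficient upward, and every budget recommendation built on that coefficient inherits the error.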
There’s also a directional problem. If you don’t measure earned media, you can’t optimise it. You don’t know which types of PR activity drive the most commercial impact. You don’t know whether the $10K stunt was worth it relative to putting that budget into a paid channel. The measurement gap leaves value untracked.
The Discovery Process
A thorough MMM data review will specifically ask about owned and earned media activity. Not because these channels have spend to model, but because the events need to be accounted for.
This means the marketing team needs to have this information ready. Key questions include:
- Were there any PR placements, media appearances, or earned coverage during the modeling period?
- Were there product launches or brand collaborations that generated organic attention?
- Were there any viral moments, whether planned or unplanned?
- Is there an editorial or events calendar that captures major brand activities?
For each of these, the measurement team needs a date range and a rough sense of scale. “National television coverage for two weeks in July” is enough information to build a useful control variable. Exact viewing figures are helpful but not essential.
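Turning that answer into model input is straightforward. A sketch of one way to do it, with hypothetical dates standing in for “national television coverage for two weeks in July”:

```python
import pandas as pd

# Weekly modeling index for the year (hypothetical dates).
weeks = pd.date_range("2024-01-01", periods=52, freq="W-MON")
controls = pd.DataFrame(index=weeks)

# "National television coverage for two weeks in July" becomes a dated flag.
coverage_start = pd.Timestamp("2024-07-08")
coverage_end = pd.Timestamp("2024-07-21")
controls["tv_coverage"] = (
    (controls.index >= coverage_start) & (controls.index <= coverage_end)
).astype(int)

print(controls["tv_coverage"].sum(), "flagged weeks")
```

If rough scale is known, the 0/1 flag can be upgraded to a graded intensity (e.g. regional vs national coverage), but the dated flag alone already prevents the misattribution.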
Why Most Brands Get This Wrong
The gap usually isn’t deliberate. Marketing teams don’t think to include earned media in MMM data requests because the model is perceived as a “paid media” tool. The measurement provider asks for spend data and impressions, not a list of PR events.
The result is that everyone assumes someone else is handling it. The measurement provider doesn’t know to ask. The brand team doesn’t know to tell. The PR team isn’t involved in the measurement process at all.
This is one of the reasons why the discovery phase of an MMM engagement matters so much. A good data review doesn’t just collect spend files. It actively tries to uncover everything that could have moved the needle during the modeling period, paid or otherwise.
Owned Media Has the Same Problem
The same logic applies to owned and organic channels. If a brand has a large and active email list, spikes in email engagement can drive sales independently of any paid activity. If organic social reach drives real traffic, untracked posting activity can create unexplained variance in the model.
None of this means every brand needs a comprehensive content tracking system before they can run an MMM. But the brands that get the most accurate results are the ones that can provide a full picture of their marketing activity, not just the channels that appear on an invoice.
The model works with what it’s given. Give it the full picture, and it can tell you what’s actually working.
The Takeaway
Before your next MMM engagement, audit your non-paid activity for the modeling period. List every significant earned media event, major PR moment, brand collaboration, viral piece of content, and organic campaign that could have moved sales.
This doesn’t need to be comprehensive. It needs to be honest. The model can handle imperfect data. What it can’t handle is missing data - events that happened, moved the needle, and were never logged.
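In practice this audit can be as simple as a small event log. The entries below are invented for illustration; the point is the shape of the record, not the content: a name, a date range, and a rough sense of scale per event:

```python
import pandas as pd

# Illustrative event log; every row here is hypothetical.
events = pd.DataFrame([
    {"event": "National TV pickup of stunt", "start": "2024-07-08",
     "end": "2024-07-21", "scale": "national"},
    {"event": "Product launch press coverage", "start": "2024-03-04",
     "end": "2024-03-10", "scale": "trade press"},
    {"event": "Viral organic social moment", "start": "2024-09-16",
     "end": "2024-09-22", "scale": "organic social"},
])
events["start"] = pd.to_datetime(events["start"])
events["end"] = pd.to_datetime(events["end"])

# Dates plus a rough sense of scale: enough for the measurement team
# to build one control variable per event.
print(len(events), "events logged")
```

Handing a table like this to the measurement team before modeling begins is all it takes to close the gap.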
Your PR team knows about these. Your brand team knows about these. Getting that information into the model is the last step most brands skip.