The Channel Your CMO Wrote Off Might Be Your Best Performer

The Problem with Channel Intuition

Every marketing team has a mental model of which channels work and which ones don’t. Radio feels outdated. Instagram feels like it’s for brand awareness, not conversions. Trade shows feel essential for B2B. Regional campaigns feel like they can’t compete with metro.

These intuitions are not random. They are built from years of experience, industry benchmarks, and platform-reported metrics. But when those intuitions meet actual econometric measurement, the results are consistently surprising.

Across multiple brands and industries, marketing mix modelling has revealed a recurring pattern: the channels that “feel” right and the channels that actually drive incremental revenue are often different. Sometimes dramatically so.

Surprise 1: Radio Still Working Weeks After It Airs

A multi-location franchise brand assumed radio was underperforming. The platform-reported metrics were uninspiring. There was no click-through rate to point to, no conversion pixel to validate. Radio felt like a legacy line item that existed because nobody had bothered to cut it.

The marketing mix model told a different story. Radio was generating revenue, but with a significant trailing effect. The impact of a radio spot did not peak during the week it aired. It continued to drive incremental transactions for weeks afterward, with a decay curve that stretched well beyond what most attribution windows would capture.

The standalone ROI was approximately 2x, which is modest compared to digital channels. But the trailing effect meant the total return was larger than it first appeared. The revenue attributed to radio was being spread across a longer window than anyone had assumed.

This does not mean radio was the brand’s best channel. It means the initial assumption, that radio was not working, was wrong. The channel was contributing. The question became whether that contribution justified its cost relative to alternatives, which is a very different question from “is it working at all?”

Why Trailing Effects Get Missed

Most marketing dashboards measure performance within a fixed window: 7, 14, or 28 days after exposure. Channels with longer impact curves, like radio, TV, and out-of-home, get systematically undervalued because their effects extend beyond the measurement window.

Marketing mix modelling does not have this limitation. It measures the full decay curve of each channel, however long that takes. For some channels, the curve is steep and short (paid search peaks and fades within days). For others, it is shallow and long (brand advertising can influence behaviour for weeks or months).
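The difference between a steep, short curve and a shallow, long one can be sketched with a simple geometric adstock, where each week carries forward a fraction of the previous week's effect. The decay rates below are illustrative assumptions, not measured values from any model.

```python
# Minimal geometric adstock sketch. Decay rates are illustrative only.

def adstock(weekly_spend, decay):
    """Carry each week's effect forward, decaying by `decay` per week."""
    carried = 0.0
    effect = []
    for spend in weekly_spend:
        carried = spend + decay * carried
        effect.append(carried)
    return effect

# One radio flight in week 1, then nothing.
flights = [100, 0, 0, 0, 0, 0]

search_effect = adstock(flights, decay=0.2)  # steep and short: fades in days
radio_effect = adstock(flights, decay=0.8)   # shallow and long: weeks of tail
```

By week four the low-decay channel still shows meaningful carryover while the high-decay one is near zero, which is exactly the revenue a 14- or 28-day attribution window never sees.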

The trailing effect is not unique to radio. But radio is the channel most likely to be cut based on short-window measurement, making it the most common source of surprise when a full model is run.

Surprise 2: Instagram Crushing Facebook Despite Lower Spend

A QSR brand allocated the majority of their social budget to Facebook. The logic was straightforward: Facebook had the larger audience, the more mature ad platform, and the longer track record of driving measurable results.

The marketing mix model showed that Instagram was delivering significantly higher ROI than Facebook, despite receiving a fraction of the spend. The difference was not marginal. Instagram’s incremental return per dollar was multiples higher.

The likely explanation was cultural audience alignment. The QSR brand’s core customer demographic skewed younger and was more engaged on Instagram than Facebook. The creative format, visual and short-form, was a better fit for the product. And because Instagram was receiving less spend, it was further from its saturation point, meaning each additional dollar was more productive.

The Spend Bias

This finding illustrates a common measurement bias: channels that receive more spend tend to look more important in platform dashboards simply because they generate more volume. But volume is not the same as efficiency.

A channel with $500,000 in spend and 3x ROI generates $1.5 million in revenue. A channel with $50,000 in spend and 8x ROI generates only $400,000 in revenue. The first channel looks like the star performer on a dashboard that shows total revenue. The second is actually the better investment on a per-dollar basis.
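The two channels above can be compared directly in code: ranking by total revenue and ranking by return per dollar pick different winners. The figures are the same hypothetical ones used in the paragraph.

```python
# Volume vs efficiency: same two hypothetical channels as in the text.
channels = {
    "high_spend": {"spend": 500_000, "roi": 3.0},
    "low_spend":  {"spend": 50_000,  "roi": 8.0},
}

for c in channels.values():
    c["revenue"] = c["spend"] * c["roi"]  # 1.5M vs 400K

# A dashboard sorted by total revenue crowns one channel...
by_revenue = max(channels, key=lambda k: channels[k]["revenue"])
# ...while a per-dollar ranking crowns the other.
by_efficiency = max(channels, key=lambda k: channels[k]["roi"])
```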

Marketing mix modelling measures efficiency, not just volume. That is why it consistently surfaces small-budget, high-ROI channels that get overlooked in favour of their higher-spend counterparts.

Surprise 3: Virtual Events Beating Trade Shows for Lead Gen

A B2B technology company had always treated trade shows as the backbone of their lead generation strategy. The events were expensive, requiring booth design, travel, staffing, and sponsorship fees. But the pipeline they generated was considered essential.

When the model measured the incremental contribution of trade shows versus virtual events (webinars, online workshops, and virtual conferences), the results challenged the conventional wisdom. Virtual events were generating more qualified leads per dollar than trade shows, with a shorter time-to-conversion and lower cost per lead.

Trade shows still contributed. They were particularly effective for enterprise deals where in-person relationships mattered. But for the mid-market segment that represented the majority of the company’s revenue, virtual events were the stronger performer.

Surprise 4: Regional Meta Outperforming Metro Meta

A multi-location retail brand assumed their metro campaigns were the growth engine. Metro areas had the largest audiences, the highest spend, and the most sophisticated targeting. Regional campaigns were treated as a secondary priority.

The model showed the opposite. Regional Meta campaigns were delivering roughly double the ROI of metro campaigns. The cost per incremental transaction was significantly lower in regional markets.

Several factors contributed to this:

  • Lower competition. Fewer advertisers were competing for the same regional audiences, which kept CPMs lower.
  • Higher relevance. In regional markets, the brand had stronger local presence and less direct competition, making the ads more relevant to the audience.
  • Less saturation. Metro campaigns were further up the saturation curve. Regional campaigns had more headroom before hitting diminishing returns.
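The saturation point in that last bullet can be made concrete with a toy response curve. The sketch below uses a Hill-type saturating function with made-up parameters (the cap and half-saturation spend are assumptions for illustration, not fitted values), and shows why a market low on the curve earns more per extra dollar than one high on it.

```python
# Toy saturation curve. cap and half_sat are illustrative, not fitted.

def response(spend, cap=1_000_000, half_sat=400_000):
    """Hill-type curve: revenue flattens as spend approaches the cap."""
    return cap * spend / (spend + half_sat)

def marginal_roi(spend, delta=1_000):
    """Revenue produced by the next `delta` dollars at this spend level."""
    return (response(spend + delta) - response(spend)) / delta

regional = marginal_roi(100_000)  # low on the curve: lots of headroom
metro = marginal_roi(600_000)     # high on the curve: diminishing returns
```

With these parameters the regional market returns roughly four times as much per incremental dollar as the metro market, even though total metro revenue is higher, which is the pattern the model surfaced.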

This finding reshaped the brand’s budget allocation. Rather than treating metro as the priority and regional as the afterthought, they shifted spend toward the markets where each dollar worked hardest.

The Consistent Pattern

These four examples come from different brands, different industries, and different channel mixes. But they share a consistent pattern:

Assumptions about channel performance are rarely accurate. Whether the assumption is “radio doesn’t work,” “Facebook is better than Instagram,” “trade shows are essential,” or “metro is where the growth is,” the model frequently contradicts the conventional wisdom.

The channels that look best in platform dashboards are not always the most efficient. Platform-reported metrics favour high-spend, short-attribution-window channels. Channels with lower spend, longer impact curves, or indirect effects get systematically undervalued.

Small-budget channels often have the highest marginal ROI. Because they are further from saturation, each additional dollar produces more return. This is the opposite of what volume-based measurement suggests.

How to Stress-Test Your Channel Assumptions

List your assumptions explicitly

Before running any measurement, write down what your team believes about each channel. Which ones are working? Which ones are not? Which ones are essential? This creates a baseline against which you can compare the model results.

Measure efficiency, not just volume

Total revenue generated is an important metric, but it does not tell you whether a channel is efficient. ROI, cost per acquisition, and incremental contribution per dollar are better measures for budget allocation decisions.

Look at the full decay curve

If your measurement framework only captures effects within a 14-day or 28-day window, you are likely undervaluing channels with longer impact curves. Marketing mix modelling captures the full decay, which often changes the ranking of channel performance.

Challenge the high-spend channels hardest

The channels that receive the most spend are the most likely to be past their saturation point. They deserve the most scrutiny, not the least. If a channel’s ROI has been declining as spend increases, that is a signal worth investigating.

The Takeaway

The channel your CMO wrote off might genuinely be underperforming. But it also might be generating revenue in ways that your current measurement tools cannot detect. The only way to know is to measure it properly, with a framework that captures trailing effects, cross-channel interactions, and efficiency at different spend levels.

The brands that find the most value in marketing mix modelling are not the ones that confirm what they already believe. They are the ones willing to be surprised.


Seeda’s marketing mix models measure the incremental contribution of every channel in your mix, including the ones you have written off. Find out which of your assumptions the data supports and which ones it contradicts.
