A High Marketing ROI Is a Warning Sign
When a retail brand saw their first marketing mix model results, their head of marketing asked a reasonable question: “Is $3.90 good?”
They had been spending around $3 million a year on marketing. The model showed that for every dollar spent, they were generating $3.90 in revenue. On the surface, that sounds excellent.
The honest answer is: it depends, and it is probably not as good as it sounds.
Why High ROI Can Signal Underinvestment
Marketing ROI is not like savings account interest. A higher return does not automatically mean you are in a better position.
At low spend levels, the first dollars in any channel tend to be highly efficient. You are reaching the most responsive audiences, running your best creative, capturing the most in-market demand. Those early dollars can generate extraordinary returns.
The problem is that this efficiency decays as you spend more. As you exhaust the most responsive audience segments, as you reach people with less intent, as frequency rises, the return on each additional dollar falls. This is the saturation effect, and it is the central reality of how marketing channels actually work.
A $3.90 average ROI on a $3 million budget might mean that the last dollars spent are already returning only $1.50. Or it might mean that significant budget could still be added before hitting diminishing returns. Without saturation curve analysis, you cannot tell which situation you are in.
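The gap between average and marginal ROI can be made concrete with a small sketch. The log-shaped response curve and its parameters below are purely illustrative (not fitted to any real data), tuned so that a $3 million budget produces the $3.90 average ROI from the example while the marginal dollar returns roughly $1.50:

```python
import numpy as np

# Hypothetical log-shaped saturation curve. ALPHA and BETA are illustrative
# constants chosen so that $3M of spend yields a ~3.90 average ROI.
ALPHA, BETA = 5.08e6, 3e-6

def revenue(spend):
    # Total revenue attributed to marketing at a given spend level.
    return ALPHA * np.log1p(BETA * spend)

def marginal_roi(spend):
    # Derivative of revenue with respect to spend: the return on the next dollar.
    return ALPHA * BETA / (1.0 + BETA * spend)

budget = 3_000_000
print(f"average ROI:  {revenue(budget) / budget:.2f}")   # ~3.90
print(f"marginal ROI: {marginal_roi(budget):.2f}")       # ~1.52
```

The same headline number hides two very different situations: with a flatter curve, the marginal dollar at $3 million could still be returning well above break-even, which is why the curve, not the average, is what matters.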
The Equilibrium You Are Actually Looking For
The goal of marketing budget optimisation is not to maximise ROI. It is to find the optimal scale: the point where you are spending as much as you can profitably deploy before marginal returns fall below your acceptable threshold.
A channel showing a 6x return on a small budget is not a win. It is a signal that you are leaving money on the table. If every incremental dollar at that scale is generating 6x back, you should be spending significantly more until returns compress to a level you are comfortable with.
Conversely, a channel showing a 2x return on a large budget might be performing exactly as expected if the saturation curve analysis shows you are near the optimal point.
The marketing head at the retail brand asked the sharper follow-up question: “Which channels are overcooked and which are undercooked?” That is the right framing. Not “is our overall ROI good?” but “where on the saturation curve is each channel sitting, and how should we rebalance?”
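Rebalancing across channels amounts to equalising marginal returns: keep moving dollars to whichever channel pays back the most on its next dollar. A minimal greedy sketch, assuming two hypothetical channels with made-up log-shaped response curves (the names and parameters are illustrative, not from the case):

```python
import numpy as np

# Illustrative (alpha, beta) pairs for revenue = alpha * log1p(beta * spend):
# a small channel with a steep early curve, and a large slow-saturating one.
CURVES = {
    "small_digital": (2.0e6, 8e-6),
    "tv":            (9.0e6, 1e-6),
}

def marginal(channel, spend):
    # Return on the next dollar in this channel at the current spend level.
    alpha, beta = CURVES[channel]
    return alpha * beta / (1.0 + beta * spend)

def allocate(total, step=10_000):
    # Greedy allocation: each increment of budget goes to the channel whose
    # next dollar returns the most, which equalises marginal ROI at the end.
    spend = {c: 0.0 for c in CURVES}
    for _ in range(int(total / step)):
        best = max(CURVES, key=lambda c: marginal(c, spend[c]))
        spend[best] += step
    return spend

plan = allocate(3_000_000)
for c, s in plan.items():
    print(f"{c}: ${s:,.0f}  (marginal ROI {marginal(c, s):.2f})")
```

Note what the optimum looks like: the small channel ends with far less budget than the large one even though its early dollars were more efficient, and both channels finish at roughly the same marginal ROI. That is the equilibrium the section describes.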
What the Numbers Actually Revealed
The model showed some channels performing well above the average: TV and radio were the biggest revenue drivers by a significant margin. Some smaller digital channels showed very high returns - but the explanation was exactly as described above. Low spend, early in the saturation curve. They looked efficient because the budget was small, not because the channel was inherently superior.
One channel - video-on-demand advertising - was generating roughly $1 in revenue per dollar spent. On a revenue basis, that breaks even. On a profit basis, it is almost certainly running at a loss. That is a channel to address.
The model could not cleanly separate two digital channels because they had been activated and deactivated at the same time across the measurement period. When two channels are perfectly correlated in your spending pattern, a model cannot distinguish their individual contributions. The result is that both appear in the same bucket with no visibility into which one is driving value.
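The identifiability failure is easy to reproduce. In this synthetic sketch (channel names and figures invented), two channels are always switched on and off together, so their spend columns are perfectly collinear and a regression cannot recover their individual contributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical channels activated and deactivated together: spend_b is
# always exactly half of spend_a, so the two columns are perfectly collinear.
on_weeks = np.arange(52) % 8 < 4                       # alternating on/off blocks
spend_a = rng.uniform(0, 100, size=52) * on_weeks
spend_b = 0.5 * spend_a

# True per-dollar contributions (3.0 and 2.0) are unknowable to the model.
sales = 3.0 * spend_a + 2.0 * spend_b + rng.normal(0, 1, size=52)

X = np.column_stack([spend_a, spend_b])
coef, _, rank, _ = np.linalg.lstsq(X, sales, rcond=None)

print("design matrix rank:", rank)      # 1, not 2: contributions not separable
print("recovered coefficients:", coef)  # an arbitrary split, not the true 3.0 / 2.0
```

The regression recovers the channels' combined effect almost exactly, but the split between them is arbitrary; only independent variation in the two spend series, such as a staggered testing protocol, breaks the tie.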
This is a common data structure problem in marketing measurement. The fix is not a better model. It is a testing protocol that ensures channels are varied independently, at least occasionally, so the model has something to work with.
The Spend Data Problem That Changes Everything
There was a further complication in this case: the spend data initially fed to the model showed TV spend of around $164,000 over two and a half years. The actual TV spend was closer to $800,000.
The discrepancy came from how the media agency reported its buys. TV and radio were booked together and often reported under a single combined line in invoices and data exports. When Seeda’s data team processed the inputs, a significant portion of the TV spend had been classified as radio, or was simply missing from the data extract.
This matters enormously for modelling. An ROI figure calculated on $164,000 in TV spend looks very different from one calculated on $800,000. More importantly, the model’s recommendations about how to reallocate budget between radio and TV are based on the data it received. If the TV numbers are wrong, the recommendations are wrong.
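The distortion is pure arithmetic. Holding the model-attributed TV revenue fixed at an assumed, illustrative figure (the $640,000 below is invented for the example; only the two spend numbers come from the case):

```python
# Hypothetical illustration of the same attributed revenue divided by
# reported vs actual spend. tv_revenue is an assumed figure, not from the case.
tv_revenue = 640_000
reported_spend = 164_000   # what the data extract showed
actual_spend = 800_000     # what was actually spent

print(f"ROI on reported spend: {tv_revenue / reported_spend:.2f}")  # ~3.90
print(f"ROI on actual spend:   {tv_revenue / actual_spend:.2f}")    # 0.80
```

A channel that looks like a strong performer on the reported figures may in fact be below break-even on the real ones, and every reallocation recommendation downstream inherits that error.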
The lesson is one of the most consistent findings in marketing measurement work: the accuracy of a model’s outputs is bounded by the accuracy of its inputs. No amount of modelling sophistication compensates for upstream data quality problems. The best models are the ones built on clean data, and getting data clean is almost always the hardest part of the work.
One More Reason ROI Benchmarks Are Misleading
The retail brand asked another natural question: “What is a normal marketing ROI for a business like ours?”
The answer is that there is no reliable benchmark. Marketing ROI figures vary dramatically by industry, category, stage of brand development, media mix, and measurement method. An ROI benchmark from a competitor in a different category, measured using a different methodology, tells you almost nothing useful about what you should expect.
More importantly, any benchmark-based evaluation misses the point. The relevant comparison is not your ROI versus a market average. It is your current ROI versus your optimal ROI at a different budget level and channel mix. That comparison can only be generated from your own data.
The Takeaway
When you see a marketing ROI figure, resist the instinct to evaluate it in isolation. Ask three questions instead.
First: what is the return on the marginal dollar, not the average? Average ROI can look good while the last dollars spent are already working poorly.
Second: is the spend data underlying this analysis accurate? Bundled agency reporting, inconsistent channel naming, and missing data sources are all common enough that they should be verified before drawing conclusions.
Third: what are the saturation curves telling us about where we should be spending more and where we should be spending less?
The number itself, without this context, is not very useful. The analysis behind it is everything.