Total Revenue Is Declining. That Doesn't Mean Your Marketing Is Failing.

The Wrong Number

A national franchise with hundreds of locations started a marketing mix model engagement. When the data came in, total system revenue was showing a clear downward trend over two years. On the surface, it looked like a business in trouble.

The head of marketing knew this wasn’t right. Individual stores were performing well. Customer transaction volumes were up. The average spend per visit was growing.

The issue wasn’t marketing performance. It was store closures. The network had contracted, and fewer total stores meant lower aggregate revenue, regardless of how well each remaining store was performing.

Running a marketing model on total system revenue would have attributed a structural business change to marketing effectiveness. The model would have been answering the wrong question.


Same-Store vs. Total System: Why It Matters

For any multi-location business - retail, restaurants, franchises, service businesses with multiple sites - total revenue is a blended number. It captures performance from existing locations, new openings, closures, and everything in between.

When a network is stable, total revenue and same-store revenue move roughly together. But when the network is changing, the two numbers tell completely different stories.

A chain opening 20 new stores this year can show healthy total revenue growth while every individual location is underperforming. A chain closing 10 locations can show declining total revenue while each remaining store is delivering record results.

Using total revenue to evaluate marketing effectiveness in either scenario produces misleading conclusions. The marketing team may look highly effective when the real driver is aggressive expansion. Or they may look like they’re failing when the business is actually performing better store by store.


What the Model Needed Instead

For the franchise in question, the modelling team made a simple but important decision: strip out any stores that had closed during the analysis period and model only on the stores that were open throughout.

This produced a like-for-like view of the business - true same-store sales data that removed the noise from network contraction. The downward trend disappeared. Same-store sales were actually growing, with a seasonal pattern the model could explain: school holiday uplift in July and a stronger end-of-year period driven by Christmas shopping centre traffic.
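The same-store filter described above can be sketched in a few lines of pandas. The table layout, store IDs, and figures here are invented for illustration; the point is the mechanic of keeping only stores that report in every period of the window.

```python
import pandas as pd

# Hypothetical monthly sales by store. Store "S3" closes partway
# through the window, so it must be excluded from like-for-like data.
sales = pd.DataFrame({
    "store_id": ["S1"] * 4 + ["S2"] * 4 + ["S3"] * 2,
    "month": pd.to_datetime(
        ["2023-01-01", "2023-02-01", "2023-03-01", "2023-04-01"] * 2
        + ["2023-01-01", "2023-02-01"]
    ),
    "revenue": [100, 110, 105, 115, 90, 95, 92, 98, 200, 210],
})

n_months = sales["month"].nunique()

# Same-store filter: keep only stores reporting in every month of the window.
months_open = sales.groupby("store_id")["month"].nunique()
same_stores = months_open[months_open == n_months].index
like_for_like = sales[sales["store_id"].isin(same_stores)]

# Total system revenue is contaminated by the closure; the
# same-store series shows the underlying trend.
total_trend = sales.groupby("month")["revenue"].sum()
same_store_trend = like_for_like.groupby("month")["revenue"].sum()
```

In this toy data the total series falls once S3 disappears, while the same-store series is growing - the same divergence the franchise saw at scale.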

With the right KPI in the model, the question changed from “why is revenue declining?” to “which marketing activities are driving same-store growth, and how can we do more of them?”

Those are very different briefs.


The Modelling Decision Behind the Metric

Choosing the right KPI to optimise is one of the most consequential decisions in any marketing model. Get it wrong, and the model produces accurate answers to the wrong question.

For this franchise, the team also chose to model on transaction volume rather than revenue. Revenue is influenced by pricing changes, promotional discounting, and product mix shifts - factors that can move independently of marketing. A price increase will inflate revenue without any change in customer behaviour. A product mix shift toward lower-ticket items will suppress revenue even if transaction volume is growing.

Transactions are a cleaner signal of what marketing is actually doing: bringing customers through the door (or onto the delivery platform). Revenue can be reported separately, but the model optimises for the metric marketing actually controls.
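A toy decomposition makes the revenue-versus-transactions point concrete. All figures below are invented: a pricing change lifts revenue with no change in customer behaviour, while a later mix shift suppresses revenue even as transaction volume peaks.

```python
# Four periods of hypothetical data: revenue = transactions x average ticket.
transactions = [1000, 1000, 1050, 1100]   # customers through the door
avg_ticket   = [10.0, 11.0, 10.5, 9.5]    # moved by pricing and product mix

revenue = [t * a for t, a in zip(transactions, avg_ticket)]

# Period 2: revenue rises 10% purely from a price increase;
# transactions are flat, so marketing changed nothing.
# Period 4: transactions hit their peak, yet revenue falls
# because the mix shifted toward lower-ticket items.
```

A model optimising on revenue would credit (or blame) marketing for both moves; a model on transactions sees only the genuine change in customer behaviour.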

The combination of same-store filtering and transaction-based modelling meant the model would measure genuine marketing contribution, not accounting artefacts.


The Broader Principle

This isn’t a problem unique to franchises or quick-service restaurants. Any business where the unit count changes over time faces the same measurement challenge.

A retail chain that has closed underperforming stores may show declining total revenue while its average store productivity has improved significantly. A SaaS business that has churned low-value customers while acquiring higher-value ones can show revenue growth that understates the underlying improvement in customer quality.

The principle is the same: aggregate metrics reflect the composition of the portfolio, not just the performance of each unit within it. When the portfolio is changing, you need a metric that holds the composition constant to measure performance accurately.

For multi-location brands running marketing models or reviewing marketing effectiveness, the first question should always be: are we measuring performance per unit, or aggregate performance across a changing portfolio? The answer determines whether the analysis is useful.


A Practical Checklist

Before building a marketing model - or evaluating marketing performance - for any multi-location brand:

  • Check for network changes. Have any locations opened or closed during the analysis period? If yes, total metrics are contaminated.
  • Define same-store criteria. Which locations have been open long enough to contribute clean data? A common rule: include only locations open for the full analysis period, or at least 12 months.
  • Exclude anomalies at the location level. A store that was closed for three months due to a natural disaster or renovation will distort like-for-like data. Flag these and apply appropriate exclusions.
  • Choose your KPI carefully. Revenue, transactions, customers, or conversion rate - each tells a different story. Model on the metric closest to the behaviour marketing is actually influencing.
  • Restate results in business terms. After modelling on transactions, translate back to revenue using average transaction values. Give stakeholders the number they care about, while keeping the model’s inputs clean.
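The last step of the checklist - restating transaction-based results in revenue terms - is a simple multiplication. The channel names and figures below are hypothetical; the pattern is to keep the model's inputs in transactions and convert only at the reporting stage.

```python
# Hypothetical model output: incremental transactions attributed per channel.
incremental_transactions = {"tv": 12000, "search": 8500, "social": 3200}

# Observed average spend per visit over the analysis window.
avg_transaction_value = 14.50

# Translate to the revenue figure stakeholders care about.
incremental_revenue = {
    channel: round(txns * avg_transaction_value, 2)
    for channel, txns in incremental_transactions.items()
}
```

The model stays clean because pricing and mix effects never enter its dependent variable; they only appear in this final, transparent conversion.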

The insight from this franchise was straightforward: total revenue was declining because the network was shrinking, not because marketing was failing. But without the same-store filter, the model would never have seen it.


Key Takeaways

  • Total system revenue for multi-location businesses reflects network size as much as performance - use same-store or like-for-like metrics for accurate analysis
  • Store closures, openings, and anomalies (weather, renovations) should be filtered before modelling to avoid attribution errors
  • Modelling on transactions rather than revenue removes noise from pricing changes and product mix shifts
  • The right KPI is the one closest to the behaviour marketing actually influences - not the one that’s easiest to report
  • For franchise and retail brands, the question is always performance per unit, not aggregate performance across a changing portfolio
