More Leads, Worse Pipeline: The B2B Marketing Model Blind Spot

The Recommendation That Worked
Stellar Cyber, a B2B cybersecurity company, ran a marketing mix model across their lead generation activity. The model identified that by reallocating their existing budget, they could increase weekly leads from 685 to 760 - an 11% improvement with no additional spend.
The primary recommendation: shift budget from display and search into Performance Max, which was generating more cost-efficient leads, and increase Reddit allocation.
The team acted on it. Leads went up.
Then the pipeline started degrading.
What the Model Didn’t Measure
The lead volume model was doing exactly what it was built to do: measuring which channels drove lead form fills and optimising the budget to generate more of them.
What it wasn’t measuring: the quality of those leads.
When PMax spend increased, bot traffic increased with it. Performance Max campaigns serve ads across Google’s entire network - search, display, Gmail, YouTube, shopping - optimised automatically by Google’s algorithm. In high-volume competitive B2B categories, PMax can attract significant non-human traffic. More impressions mean more bots completing form fills.
The leads were up. Qualified pipeline was not.
The marketing team had to pause the PMax ramp-up, clean up the bot traffic, and restart at a lower baseline before building back up with tighter controls on form quality. The 11% lead increase materialised, but not the sales qualified leads the revenue team needed.
The Single-KPI Optimisation Problem
This is a well-documented failure mode in B2B marketing. When a model or campaign is optimised for a single metric - lead volume, cost per lead, click-through rate - it maximises that metric. The metric goes up. Everything that’s correlated with quality but not in the objective function gets worse.
In B2B specifically, the funnel has stages that require fundamentally different things from marketing:
Leads require reach, relevance, and a compelling enough offer to trigger a form fill. High volume, broad targeting, and incentivised content all help.
Opportunities require qualification. The lead needs to be the right company, the right person, the right budget, the right timing. Broad targeting that fills the top of the funnel may actively select against these criteria by reaching companies that are curious but not buying.
Revenue requires the opportunity to convert. Even a qualified lead from the right company can fail to close if the sales process, pricing, or competitive situation isn’t right.
Optimising for leads without accounting for what happens downstream is like optimising for website traffic without caring whether any of those visitors become customers.
What They Built Instead
The team moved to a dual-model approach.
The first model - already built - measured lead volume. Which channels generate the most form fills at the lowest cost? Which should grow? Which should be cut?
The second model was built for opportunity conversion. Of all the leads that came through, which ones became qualified opportunities? Were the channels driving high lead volume also driving high opportunity conversion? Or were some channels generating cheap leads that never progressed?
The insight that emerged: different channels have different lead quality profiles. A channel that’s efficient at generating lead volume may be attracting lower-quality buyers who fit the targeting criteria loosely. A channel with a higher cost per lead may be reaching buyers who are further along in their consideration and more likely to convert.
When the team combined both models - minimising cost per qualified opportunity rather than cost per lead - the optimal budget allocation looked different.
Some channels that looked efficient on a cost-per-lead basis looked less attractive when lead-to-opportunity conversion rates were factored in. Some channels that looked expensive per lead looked far more efficient once quality was accounted for.
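The cost-per-lead versus cost-per-opportunity gap can be made concrete with a small sketch. All figures below are invented for illustration - they are not Stellar Cyber's numbers - but the shape of the result is the one the article describes: the channel that looks worst per lead can look best per opportunity.

```python
# Hypothetical per-channel figures: spend, form fills, and lead-to-opportunity
# conversion rates are illustrative assumptions, not data from the case study.
channels = {
    "pmax":    {"spend": 40_000, "leads": 800, "lead_to_opp_rate": 0.02},
    "display": {"spend": 20_000, "leads": 250, "lead_to_opp_rate": 0.04},
    "events":  {"spend": 30_000, "leads": 120, "lead_to_opp_rate": 0.15},
}

for name, c in channels.items():
    cpl = c["spend"] / c["leads"]              # cost per lead
    opps = c["leads"] * c["lead_to_opp_rate"]  # expected qualified opportunities
    cpo = c["spend"] / opps                    # cost per opportunity
    print(f"{name:8s} CPL {cpl:7.2f}   CPO {cpo:9.2f}")
```

With these numbers, events has the highest cost per lead of the three channels but the lowest cost per opportunity - exactly the inversion that only shows up once the second model is in place.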
The Practical Implication
For any B2B business running marketing models or optimising campaign budgets based on lead cost, there are two questions worth asking before acting on the data.
What are we actually optimising for? If the model objective is lead volume and your business objective is revenue, there’s a gap. Closing that gap requires either building a model that optimises further down the funnel, or applying a quality filter on top of the volume optimisation (for example, only counting leads from companies above a certain employee count, or from people with job titles that match your buyer persona).
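The quality-filter option above can be sketched in a few lines. The field names, title list, and employee-count threshold here are hypothetical placeholders - any real implementation would use whatever firmographic fields your CRM or enrichment provider supplies.

```python
# Minimal sketch of a quality filter applied before counting leads.
# BUYER_TITLES and MIN_EMPLOYEES are illustrative assumptions.
BUYER_TITLES = {"ciso", "security engineer", "head of it"}
MIN_EMPLOYEES = 200

def is_qualified(lead: dict) -> bool:
    """Count a lead only if company size and job title match the buyer persona."""
    return (lead.get("employee_count", 0) >= MIN_EMPLOYEES
            and lead.get("job_title", "").lower() in BUYER_TITLES)

leads = [
    {"employee_count": 500, "job_title": "CISO"},
    {"employee_count": 15,  "job_title": "Student"},
]
qualified = [lead for lead in leads if is_qualified(lead)]
print(len(qualified))  # → 1
```

Running the volume optimisation against the filtered count, rather than raw form fills, is the cheapest way to close part of the gap before a full opportunity model exists.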
Does the channel attract the right buyers? A channel can be efficient at generating form fills from audiences who are broadly interested in your category but not ready to buy. Events and intent-based channels (categories searched on LinkedIn, accounts visiting your site, people consuming competitive content) tend to have higher quality-to-volume ratios than broad awareness channels, even if the headline cost per lead is higher.
The Stellar Cyber example is instructive because the model was right. Shifting budget from display into PMax did increase lead volume. The model was measuring what it was supposed to measure. The problem was that lead volume was the wrong objective - or at least, an incomplete one.
Building the second model, the opportunity-quality model, changed both the insight and the recommendation. The channels that looked best on lead cost weren’t necessarily the same as the channels that looked best on pipeline quality.
In B2B, the lead is the beginning of the measurement problem, not the end.
Key Takeaways
- Optimising for lead volume can increase bot traffic and junk form fills, particularly with broad-match channels like Performance Max
- Single-KPI B2B models produce accurate answers to potentially wrong questions - build separate models for lead quantity and lead quality
- Different channels have different lead quality profiles: cost per lead and cost per opportunity are not the same metric
- Dual optimisation - a minimum lead volume floor combined with maximising opportunity conversion - produces a more accurate budget recommendation than either metric alone
- When implementing model recommendations, monitor quality signals (lead-to-opportunity rate, form fill source quality) before concluding the model was wrong
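The dual-optimisation takeaway - a lead floor plus maximum expected opportunities - can be illustrated with a toy two-channel allocation. All response numbers below are invented; a real model would estimate them from the two mix models described earlier.

```python
# Toy constrained allocation: maximise expected opportunities subject to a
# minimum weekly lead floor. Budget units and channel responses are invented.
BUDGET = 90        # total weekly budget, in arbitrary units
LEAD_FLOOR = 600   # minimum acceptable weekly leads

# leads generated per budget unit, and lead-to-opportunity conversion rate
channels = {
    "pmax":   {"leads_per_unit": 12.0, "opp_rate": 0.02},
    "events": {"leads_per_unit": 3.0,  "opp_rate": 0.15},
}

best_alloc, best_opps = None, -1.0
for pmax_units in range(BUDGET + 1):
    alloc = {"pmax": pmax_units, "events": BUDGET - pmax_units}
    leads = sum(channels[c]["leads_per_unit"] * units for c, units in alloc.items())
    opps = sum(channels[c]["leads_per_unit"] * units * channels[c]["opp_rate"]
               for c, units in alloc.items())
    if leads >= LEAD_FLOOR and opps > best_opps:
        best_alloc, best_opps = alloc, opps

print(best_alloc, round(best_opps, 2))
```

With these numbers the optimiser gives PMax only enough budget to clear the lead floor and routes the remainder to the high-conversion channel - the same qualitative shift the combined models produced for Stellar Cyber.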