From Tunnel Vision to Funnel Vision: How Ad Costs and ROAS Differ at the Tactical Level

Todd Martin, Vice President of Customer Success

Published 08/29/2023

One of the critical things you learn when measuring marketing effectiveness is that “performance” at the top of the funnel doesn't always translate to results at the bottom.

Ignoring that lesson and treating all engagement on an ad platform as equal is what gets marketers into trouble with wasted ad spend. Pouring money into a campaign with poor lower-funnel performance is like pouring water into a leaky bucket: there's nothing left when you get to the bottom.

The truth is that some campaigns perform well at the top of the funnel but fail to deliver a positive return on investment (ROI). Other campaigns may suffer from poor engagement but contribute significantly to the bottom line. Full-funnel measurement and optimization is how you protect yourself from making this mistake.

Let me share with you a real-life example.

Top of Funnel Performance

Take the following split test on the Meta ads platform:

|  | Campaign A | Campaign B |
| --- | --- | --- |
| Cost per mille (CPM) | $2.14 | $3.72 |
| Click-through rate (CTR) | 0.37% | 0.25% |
| Cost per click (CPC) | $0.58 | $1.48 |


You’d have to conclude that Campaign A is the clear winner on these metrics alone. It has a better CTR, which means its ads resonate better with the audience. Thanks to better quality ranking, Meta is rewarding it with a cheaper CPM. The combined result is a CPC that’s 60% cheaper! 
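
If you want to sanity-check how those numbers hang together, the arithmetic is simple: CPM is the cost per 1,000 impressions, and CPC is the cost per impression divided by CTR. Here's a quick sketch using the figures from the table; tiny discrepancies against the reported CPCs come from rounding in the reported rates.

```python
# How the top-of-funnel metrics relate: CPM is the cost per 1,000
# impressions, so cost per impression = CPM / 1000, and
# CPC = cost per impression / CTR. Figures come from the table above.

campaigns = {
    "A": {"cpm": 2.14, "ctr": 0.0037},
    "B": {"cpm": 3.72, "ctr": 0.0025},
}

for name, c in campaigns.items():
    cpc = c["cpm"] / 1000 / c["ctr"]  # cost per click
    print(f"Campaign {name}: CPC = ${cpc:.2f}")

# Campaign A: CPC = $0.58; Campaign B: CPC = $1.49 (table: $1.48,
# the difference being rounding in the reported CTR)
```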

Middle of Funnel Performance

The seasoned marketers among you will know that’s not the full story. How do they perform further down the funnel?

|  | Campaign A | Campaign B |
| --- | --- | --- |
| Add-to-cart rate (ATC) | 5.52% | 12.96% |
| Cost per add-to-cart (CPA) | $10.52 | $11.45 |


This is where it starts to get interesting. Despite being significantly worse at the top of the funnel, Campaign B is driving far more qualified traffic in the mid-funnel. It is 2.3x better at getting people to add to the cart than Campaign A, which almost makes up for its far higher cost of traffic.
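
The mid-funnel math chains directly off the tables above: cost per add-to-cart is just CPC divided by the add-to-cart rate. A quick sketch, with small rounding differences against the reported values:

```python
# Cost per add-to-cart = CPC / add-to-cart rate, using the table's figures.

cpc = {"A": 0.58, "B": 1.48}
atc_rate = {"A": 0.0552, "B": 0.1296}

for name in ("A", "B"):
    cost_per_atc = cpc[name] / atc_rate[name]
    print(f"Campaign {name}: cost per add-to-cart = ${cost_per_atc:.2f}")

# Campaign B's add-to-cart rate advantage: ~2.3x
print(f"ATC rate ratio (B/A): {atc_rate['B'] / atc_rate['A']:.1f}x")
```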

Bottom of Funnel Performance

However, we shouldn't stop there. Adding to the cart is a valuable indicator metric, but it doesn't make us money. What's the impact on the bottom line?

|  | Campaign A | Campaign B |
| --- | --- | --- |
| Conversion rate (CVR) | 25% | 32% |
| Cost per order (CPO) | $41.28 | $36.17 |


Campaign B has taken the lead and is our best-performing campaign on the metric that matters most to us: cost per order. More qualified traffic in the middle of the funnel also translated to a 28% better conversion rate for Campaign B after visitors added a product to the cart. Orders are where this business makes money; in comparison, all the other inflated numbers are vanity metrics.
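
The same chaining takes you to the bottom of the funnel: cost per order is the cost per add-to-cart divided by the conversion rate. The table's CPO figures were presumably computed from unrounded inputs, so the results here land within a couple of percent of them:

```python
# Cost per order = cost per add-to-cart / conversion rate.

cost_per_atc = {"A": 10.52, "B": 11.45}
cvr = {"A": 0.25, "B": 0.32}

for name in ("A", "B"):
    cpo = cost_per_atc[name] / cvr[name]
    print(f"Campaign {name}: cost per order = ${cpo:.2f}")

# Campaign B's conversion rate lift over Campaign A: 28%
print(f"CVR lift (B over A): {cvr['B'] / cvr['A'] - 1:.0%}")
```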

What Factors Affect Performance?

Many factors impact your campaign results, and almost every ad account I've reviewed has exhibited similar differences between the top and bottom of the funnel. In my experience, any number of things can explain the gap. Here are a few areas I've seen account for discrepancies of this magnitude:

  • Targeting: Some audiences like to window shop but are unlikely to buy.
  • Messaging: The ad might grab attention but fail to explain product benefits.
  • Conversion: Even small differences in user experience can cause drop-offs.
  • Format: Video ads tend to have a lagged impact on sales versus static images.

All of these factors and more can wildly affect your top- and bottom-of-funnel results. Much of the art and science of marketing lies in understanding that not all traffic or awareness is created equal, and in taking steps to protect your advertising budget from waste. Everything you change affects every other variable, so it's important not to rely on outdated assumptions about what's working (or not).

What Actually Improved Conversion Rate?

In this case, the only variable being split-tested was the campaign's optimization goal. Campaign A was optimized for cost per add-to-cart, where it held an 8.1% advantage. Campaign B was optimized for cost per order, the true goal of the business. Despite being worse on every other metric, Campaign B ultimately drove orders 12.4% cheaper, which is what we care about most as a business.
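
If you want to verify those two percentages, here's the arithmetic, assuming "X% cheaper" means the relative difference measured against the more expensive campaign's figure:

```python
# Relative cost advantages, computed from the tables above.

cost_per_atc = {"A": 10.52, "B": 11.45}
cpo = {"A": 41.28, "B": 36.17}

atc_advantage = (cost_per_atc["B"] - cost_per_atc["A"]) / cost_per_atc["B"]
cpo_advantage = (cpo["A"] - cpo["B"]) / cpo["A"]

print(f"Campaign A's cost per add-to-cart advantage: {atc_advantage:.1%}")  # 8.1%
print(f"Campaign B's cost per order advantage: {cpo_advantage:.1%}")        # 12.4%
```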

The lesson here isn't to rush out and switch to optimizing for cost per order (it may not perform as well if you aren't spending as much, around $2,000 per day in this case) but to internalize the fact that performance at the top and bottom of the funnel isn't always correlated. Sometimes, your best campaign at the top of the funnel will also win at the bottom, but it's important to prove that, and proving it requires measuring campaigns at the tactical level.

How Can You Improve Measurement?

Both campaigns were advertising the same product in this case, but if they weren’t, I’d advise going one step further and looking for a difference in average order value between the two. Additionally, Campaign A might have a higher or lower return rate than Campaign B or higher customer support costs. There may even be a difference in repeat purchase behavior or lifetime value that can be identified. The key is to run split tests wherever feasible and always monitor and investigate anomalies.

In this case, we didn't run an incrementality geo-experiment, but if we had, we might have found even greater differences in actual incremental revenue. For example, if Campaign A is optimizing higher up the marketing funnel toward people not yet ready to buy, it stands to reason they might come back later, outside the conversion window. That's something you can specifically test for with a well-designed experiment.
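
To make the idea concrete, here is a minimal, purely hypothetical sketch of the core geo-experiment arithmetic: revenue in the geos where the campaign ran is compared against a counterfactual predicted from matched holdout geos. Every figure below is invented for illustration; a real experiment involves careful geo matching and significance testing.

```python
# Hypothetical geo-experiment arithmetic. All numbers are made up.

treatment_revenue = 120_000  # revenue in test geos while the campaign ran
expected_revenue = 100_000   # counterfactual predicted from holdout geos
campaign_spend = 8_000       # what the campaign cost over the same period

incremental_revenue = treatment_revenue - expected_revenue
incremental_roas = incremental_revenue / campaign_spend

print(f"Incremental revenue: ${incremental_revenue:,}")
print(f"Incremental ROAS: {incremental_roas:.2f}")  # 2.50
```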

Tactical View of Measurement

We looked at two campaigns at every stage of the funnel. However, many of our brands run different campaigns for different funnel stages. For example, they may run a brand campaign for awareness, a prospecting campaign for consideration, and a remarketing campaign for conversion. Then, they look at channel-level incrementality to answer, “How are my campaigns performing?”

Only considering how a channel performs as a whole gives no insight into whether you are over-invested in one tactic versus another. Platforms using last-touch attribution will almost always over-credit lower-funnel tactics and under-credit upper-funnel ones. Because we measure incrementality at the tactical level, we get the real picture in both cases. Time and again, we've found that brands are overinvested in retargeting and can greatly increase their return by shifting money to prospecting or awareness.

Good Measurement Isn’t Easy

The inherent difficulty with optimizing performance further down the funnel is that it takes a lot of data to prove that one campaign performed better than another. For example, to reach statistical significance on the difference in conversion rate for this simple A/B test, a test duration calculator would tell you to wait at least 5 days. That’s only one test of one variable, with one test variation versus the control. In reality, you’d want to run many more tests than this.
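
To see where a figure like that comes from, here's a minimal sketch of the standard two-proportion power calculation a test-duration calculator runs under the hood, applied to the 25% vs. 32% conversion rates above. The 5% significance level and 80% power are assumptions on my part; the calculator behind the 5-day figure doesn't state its settings.

```python
# Normal-approximation sample size for detecting a 25% vs. 32% conversion
# rate difference at alpha = 0.05 (two-sided) with 80% power.
from scipy.stats import norm

p1, p2 = 0.25, 0.32        # conversion rates (from add-to-cart) to compare
alpha, power = 0.05, 0.80  # assumed settings
p_bar = (p1 + p2) / 2

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
      + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
     / (p2 - p1) ** 2)

print(f"Add-to-carts needed per variation: ~{n:.0f}")  # ~652
# Divide by each variation's daily add-to-cart volume to get a duration;
# roughly 130 add-to-carts per day per variation would yield ~5 days.
```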

Even for this simple test, you couldn't in good conscience draw a conclusion in less than a full week, or you'd risk bias from day-of-week trends; perhaps one variation performs better on weekends. Even that assumes your conversions mostly happen within a few days of the ad click; if they don't, you'll have to extend the experiment to account for the lag. And if one campaign potentially has more long-term impact than the other, that must be taken into account as well.

There are always tradeoffs and risks to consider, so you need to work with teammates and vendors who can help you navigate these difficult decisions. You'll never know for certain that one campaign outperforms another in all conditions, but you can eliminate a lot of uncertainty by designing the right experiments. Marketers who have seen many examples of which campaign types are more or less incremental can help you decide with confidence, which is why we open our benchmarking product to all of our clients: they can see which tactics are more or less likely to work for them, even before they run their first test.

Start measuring the right way. 

Measuring and optimizing channel performance at the tactical level is crucial to understanding which campaigns are really delivering conversions. Focusing only on top-of-funnel metrics like clicks, likes, or engagements may not translate to return on investment or return on ad spend.

Attributing all the lower-funnel conversions to retargeting campaigns based on correlation can also steer you wrong. Only incrementality can reveal the true contribution of campaigns to conversions at each level of the funnel. Schedule a demo with Measured today to learn more.