Are You Being Misled by Funnel Level Ad Metrics?

Nick Stoltz, Expert in Cross-Channel Measurement Strategy and Adoption

Published 01/31/2023

One of the critical things you learn in the business of measuring marketing effectiveness is that "performance" at the top of the funnel doesn't always translate to results at the bottom of the funnel.

In theory, any engagement from a potential customer is a good sign. In reality, engagements and clicks aren’t actually correlated with Return on Ad Spend (ROAS), as Meta’s own data confirms:

[Image from Meta: CTR does not correlate with ROAS]

Ignoring that lesson and treating all engagement on an ad platform as equal is what gets marketers into trouble with wasted ad spend. Pouring money into a campaign with poor lower funnel performance is like pouring water into a leaky bucket: there’s nothing left when you get to the bottom.

The truth is that some campaigns perform well at the top of the funnel, but fail to deliver a positive Return On Investment (ROI). Other campaigns may suffer from poor engagement, but contribute significantly to the bottom line. Full funnel measurement and optimization is how you protect yourself from making this mistake.

Let me share a real-life example:

Top of Funnel Performance

Take the following split test on the Meta ads platform:

[Table: top-of-funnel results for Campaign A vs. Campaign B (CTR, CPM, CPC)]

On these metrics alone, you’d have to conclude that Campaign A is the clear winner. It has a better CTR, which means its ads are resonating better with the audience. Thanks to better quality ranking, Meta is rewarding it with a cheaper CPM. The combined result is a CPC that’s 60% cheaper!
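To make the arithmetic concrete, here's a minimal sketch in Python. Since the split test table isn't reproduced here, the CTR and CPM figures below are assumed purely for illustration; the point is that CPC is fully determined by CPM and CTR.

```python
# Hypothetical top-of-funnel numbers (assumed for illustration).
# Cost per click follows directly from CPM and CTR: CPC = CPM / (1000 * CTR).

def cpc(cpm: float, ctr: float) -> float:
    """Cost per click, given cost per 1,000 impressions and click-through rate."""
    return cpm / (1000 * ctr)

cpc_a = cpc(cpm=10.00, ctr=0.020)  # stronger creative, cheaper inventory -> $0.50
cpc_b = cpc(cpm=12.50, ctr=0.010)  # weaker CTR, pricier CPM -> $1.25

print(f"Campaign A CPC: ${cpc_a:.2f}")
print(f"Campaign B CPC: ${cpc_b:.2f}")
print(f"Campaign A's clicks are {1 - cpc_a / cpc_b:.0%} cheaper")  # 60% cheaper
```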

Middle of Funnel Performance

The seasoned marketers amongst you will know that’s not the full story. How do they perform further down the funnel?

This is where it starts to get interesting. Despite performing significantly worse at the top of the funnel, Campaign B is driving far more qualified traffic in the mid-funnel. It is 2.3x better at getting people to add to cart than Campaign A, which almost makes up for its far higher cost of traffic.
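Continuing the same hypothetical numbers (the add-to-cart rates here are also assumed), a quick sketch shows how a 2.3x add-to-cart rate nearly cancels out clicks that cost 2.5x more:

```python
# Carrying forward the CPCs from the sketch above (assumed values).
cpc_a, cpc_b = 0.50, 1.25

atc_rate_a = 0.010   # add-to-cart rate per click, Campaign A (assumed)
atc_rate_b = 0.023   # 2.3x Campaign A's rate

cost_per_atc_a = cpc_a / atc_rate_a   # $50.00
cost_per_atc_b = cpc_b / atc_rate_b   # $54.35

print(f"Campaign A cost per add-to-cart: ${cost_per_atc_a:.2f}")
print(f"Campaign B cost per add-to-cart: ${cost_per_atc_b:.2f}")
# Campaign A still holds a roughly 8% edge at this stage, but the funnel isn't over yet.
```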

Bottom of Funnel Performance

However, we shouldn't stop there. Adding to cart is a valuable indicator metric, but it doesn’t make us money. What’s the impact on the bottom line?

Campaign B is the better performing campaign on the one metric that matters most to us: Cost per Order. Its more qualified mid-funnel traffic also translated into a 28% better conversion rate after visitors added a product to cart. Orders are where this business makes money, and all the other inflated numbers are vanity metrics in comparison.

What Actually Improved Conversion Rate?

In this case, the only difference was what each campaign was optimized for. Campaign A was optimized for Cost per Add-to-Cart, where it held an 8.1% advantage. Campaign B was optimized for Cost per Order, the true goal of the business. Despite being worse on every other metric, it ultimately drove orders 12.4% cheaper.
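Closing out the sketch from earlier with assumed cart-to-order rates (Campaign B converting about 28% better) makes the reversal visible. These illustrative inputs land near, though not exactly on, the published 12.4%, since the real test's percentages are rounded:

```python
# Carrying forward the cost-per-add-to-cart figures from the sketch above.
cost_per_atc_a, cost_per_atc_b = 50.00, 54.35

conv_a = 0.25   # cart-to-order conversion rate, Campaign A (assumed)
conv_b = 0.32   # ~1.28x Campaign A's rate

cpo_a = cost_per_atc_a / conv_a   # $200.00 per order
cpo_b = cost_per_atc_b / conv_b   # $169.84 per order

print(f"Campaign A cost per order: ${cpo_a:.2f}")
print(f"Campaign B cost per order: ${cpo_b:.2f}")
print(f"Campaign B wins by {1 - cpo_b / cpo_a:.1%} on the metric that matters")
```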

The lesson here isn't to rush out and switch all your campaigns to optimizing for Cost per Order; it's to internalize that performance at the top and bottom of the funnel aren't always correlated. Sometimes your best campaign at the top of the funnel will also win at the bottom, but it's important to prove that, which requires measuring campaigns at the tactical level with a common currency.

How Can You Improve Measurement?

As illustrated above, different metrics are typically used to measure performance at each stage of the funnel. If awareness is the goal, then a click indicates you've achieved it. If conversions are the goal, then a sale should be a great indicator. The problem with this approach is that some of the sales attributed to your bottom-of-funnel retargeting campaigns would have happened anyway. Maybe those customers were already going to buy after seeing your prospecting campaign. The only way to truly understand the contribution of each campaign is to measure for incrementality.

We didn't run an incrementality test in the case above, but if we had, we might have found even greater differences in actual incremental revenue. For example, if Campaign A is optimizing higher up the marketing funnel, toward people not yet ready to buy, it stands to reason they might come back to make a purchase later, outside the conversion tracking window. A well-designed incrementality experiment can reveal the true contribution of each campaign to business outcomes.
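To make "measure for incrementality" concrete, here is a minimal holdout-style sketch in Python. This illustrates the general idea only, not Measured's methodology, and every number in it is assumed:

```python
# A toy holdout experiment: suppress ads for a random control group and
# compare order rates. All figures below are assumed for illustration.

exposed_users  = 100_000   # users eligible to see the campaign's ads
holdout_users  = 100_000   # users deliberately held out of the campaign

exposed_orders = 1_200
holdout_orders = 1_000     # the baseline: orders that happen anyway

baseline_rate   = holdout_orders / holdout_users
expected_orders = baseline_rate * exposed_users        # 1,000 expected without ads
incremental     = exposed_orders - expected_orders     # 200 truly caused by ads

print(f"Orders a last-touch report would credit: {exposed_orders}")
print(f"Truly incremental orders:                {incremental:.0f}")
# The platform claims 1,200 conversions; the holdout shows only ~17% of them
# were actually caused by the campaign.
```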

Tactical View of Measurement

In this case we looked at two campaigns at every stage of the funnel to show how funnel-level metrics can be deceiving. In reality, most brands run different campaigns for different parts of the funnel: a brand campaign for awareness, a prospecting campaign for consideration, and a remarketing campaign for conversion, for example. All the more reason to measure the value of different campaigns with the same currency.

Measuring performance only at the channel level doesn't give you the critical insight into whether you are over-invested in one tactic over another. Platforms using last-touch attribution will almost always over-report lower-funnel campaigns and under-credit upper-funnel ones. Because we measure incrementality at the campaign (tactical) level, we get to the bottom of what's really happening. We've consistently found that brands are often heavily over-invested in retargeting and can greatly increase their return by shifting money to prospecting or awareness campaigns.

Good Measurement Isn’t Easy

The inherent difficulty with optimizing for performance further down the funnel is that it takes a lot of data to prove that one campaign performed better than another. For example, to reach statistical significance on the difference in conversion rate for a simple A/B test, a test duration calculator would tell you to wait at least 5 days. That's only one test, of one variable, with one test variation. In reality you'd want to run many more tests, with different variables, over longer periods of time (to account for the fact that many conversions happen more than 5 days after exposure to an ad).
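For the curious, here's roughly what such a duration calculator does under the hood: a standard two-proportion sample-size formula, sketched in Python. The baseline conversion rate, detectable lift, and daily traffic below are all assumed:

```python
# Sample size per arm for detecting a difference between two conversion
# rates with a two-proportion z-test. All inputs below are assumed.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variation at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

p_control, p_variant = 0.040, 0.044   # 4.0% baseline, 10% relative lift (assumed)
daily_visitors_per_arm = 8_000        # assumed traffic split

n = sample_size_per_arm(p_control, p_variant)
print(f"{n:,} visitors per arm -> about {ceil(n / daily_visitors_per_arm)} days")
# ~39,000 visitors per arm -> about 5 days
```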

While you will never know for sure whether one campaign truly outperforms another in all conditions, you can eliminate a lot of uncertainty with insights from carefully designed experiments. At Measured, we've run more than 25,000 of these experiments across hundreds of channels and tactics. Based on this massive collection of results, we've learned a lot about which types of campaigns, in which scenarios, are more or less incremental.

Through the Measured Incrementality Platform, brands can access the industry's only library of incrementality intelligence, apply their own performance data, and get reliable insights into the incrementality of their campaigns. Then they can compare different budget allocation scenarios across channels and tactics to optimize media for the best outcome, even before they run their first test.

Conclusion

In summary, measuring and optimizing channel performance at the tactical level is crucial to understanding which campaigns are really delivering conversions. Focusing only on top-of-funnel metrics like clicks, likes, or engagements may not translate to return on investment or return on ad spend. Attributing all lower-funnel conversions to retargeting campaigns based only on correlation can also steer you wrong. Only incrementality can reveal the true contribution of campaigns to conversions at any level of the funnel.

Want to learn more? See how measuring campaigns for incrementality revealed where these brands were wasting money and which campaigns should get more investment:

Premium Home Goods Brand Finds Paid Search Overvalued by 94%

Shinola Finds The True Value of Facebook Awareness

 
