Why Platform Lift Tests Are Not “Good Enough” for Measuring Incrementality

Nick Stoltz, Expert in Cross-Channel Measurement Strategy and Adoption

Published 03/02/2023

“Why do I need an incrementality solution when Meta offers incrementality testing through the platform?”

It’s not an unreasonable question.

It’s true that some platforms have built incrementality testing solutions that are available to advertisers. And we’ve run enough independent geo experiments to verify that, when set up and executed properly, platform lift tests can provide some useful insight. That said, before you bet all your ad spend on the results of a Facebook lift test, it’s important to understand the limitations and potential risks associated with platform-provided incrementality testing.
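To make concrete what a lift test actually measures: it compares conversions between an exposed (test) group and a randomized holdout (control) group, and attributes the difference to the ads. Here is a minimal sketch of that arithmetic — the group sizes and conversion counts are hypothetical, purely for illustration:

```python
# Minimal sketch of lift-test arithmetic: incremental conversions and
# relative lift from a test (exposed) vs. control (holdout) split.
# All numbers are hypothetical, for illustration only.

def lift_results(test_conversions, test_users, control_conversions, control_users):
    """Compare conversion rates between the exposed group and the holdout."""
    test_rate = test_conversions / test_users
    control_rate = control_conversions / control_users
    # Incremental conversions: what the test group delivered beyond the
    # baseline implied by the control group's conversion rate.
    incremental = test_conversions - control_rate * test_users
    lift = (test_rate - control_rate) / control_rate  # relative lift
    return incremental, lift

incremental, lift = lift_results(
    test_conversions=1_200, test_users=100_000,
    control_conversions=1_000, control_users=100_000,
)
print(f"incremental conversions: {incremental:.0f}, lift: {lift:.0%}")
# → incremental conversions: 200, lift: 20%
```

The same math underpins both platform lift tests and independent geo experiments; what differs is who controls the randomization and the data feeding it.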

Here are five reasons why platform lift tests are not good enough measurement for most ecommerce marketers:

  1. You’re trusting the platform to “grade their own homework”
  2. A difficult setup is easy to get wrong
  3. You can’t compare performance across channels
  4. Platform measurement systems are fragile
  5. They can’t be used to estimate diminishing returns
  • You’re trusting the platform to “grade their own homework”
    You don’t have to be a conspiracy theorist to know that you should have a way to independently assess whether a platform is delivering on your goals. Meta and Google have built world-class measurement practices, but they are not flawless. In my experience, both companies are staffed by thousands of talented people trying to do the right thing, but you shouldn’t abdicate your responsibility to make sure what they’re reporting is accurate.

    Every major ad platform has had issues with mismeasurement, there are several high-profile cases of questionable platform behavior, and there are countless anecdotes from marketers who turned off campaigns on certain channels with no discernible impact on revenue. The only way to verify the accuracy of platform reporting and get to the truth about ad performance is through independent incrementality testing.
  • A difficult setup is easy to get wrong
    It’s complicated work to get an incrementality test set up properly, even when it is provided and “automated” by the ad platform itself. It requires a lot of heavy lifting, and one small oversight in a long series of steps can throw off the entire experiment.

    For example, ~60% of the Meta CAPI (Conversions API) implementations I’ve seen have been wrong in some way, with errors both visible in the interface and hidden, waiting to be found. We also had a customer go through a painstaking process with Meta to set up a data clean room, only to find out after testing that it wasn’t set up correctly.

    Every platform is different. Different technology. Different policies. Different methodologies. Meta’s conversion lift methodology may use a different approach or require different inputs than setting up similar tests on Snap. It’s not an easily repeatable process if you are hoping to measure incrementality on multiple channels.

    At Measured, data scientists meticulously designed our incrementality experiments for the unique nuances of each individual channel. Then, we built a platform that makes it easy for marketers to deploy these tests for any or all of the channels in their media portfolio. As platform technologies and policies inevitably change, our expert product team ensures that Measured still delivers, so our brands are never left in the dark.

  • You can’t compare performance across channels
    Running a platform lift test within Meta only reveals incrementality on Meta properties. To measure and compare performance across multiple campaigns and channels, you would need to set up incrementality testing on each one. As we discussed above, it is not a simple task to properly set up and execute testing on one channel, let alone multiple.

    In addition, despite the fact that randomized controlled trials (RCTs) were used in marketing as far back as the 1940s, many ad platforms still don’t offer incrementality testing at all. You cannot compare results from multiple channels if they are not measuring and reporting in a common currency. For channels that don’t offer testing, marketers are stuck with flawed last-touch metrics that cannot be reliably compared with results from other channels.

    Measured incrementality testing is independent of platform reporting and can be run on any channel that can be targeted by geo. With Measured, you can make apples-to-apples performance comparisons between channels and tactics, regardless of what measurement options are offered by the platforms. And test results are confirmed by your own sales transaction data, so you don’t just have to take our word for it.
  • Platform measurement systems are fragile
    We all saw what happened when Apple rolled out iOS 14.5. It broke platform measurement. Facebook’s issues got the most attention, but every digital ad platform experienced the nightmare that ensues when measurement comes to a screeching halt. It took almost two years for some of them to recover and resume providing measurement options that didn’t come with an accuracy disclaimer.

    Platform operations are built on a web of critical interdependencies that are often out of their control. What happens when a new data privacy regulation is passed? Or when Apple or Google or some other big tech platform throws another grenade into attribution frameworks across the industry? Why find out the hard way (again)?
  • They can’t be used to estimate diminishing returns
    When a marketer learns that a particular campaign or channel is performing well, the question that always follows is, “How much more can I profitably spend in that channel?”

    If you continuously increase spend in a channel, eventually performance will begin to decline as you approach a saturation point and the law of diminishing returns kicks in. A platform incrementality test can only tell you how a campaign or channel performed at the spend level that was tested.

    To find out how much more they can scale into a channel or tactic, many brands will increase spend in small increments until results drop below the performance threshold they’ve set. This is inefficient. By gradually increasing spend, marketers risk wasting budget on oversaturation or missing out on conversions that would have happened if spend had already been set at the optimal amount.

    Measured enables brands to find the ideal spend amount on a channel or tactic without risking a lot of budget. With geo-matched market experiments we can simulate 2X, 5X, or even 10X increases in spend to pinpoint saturation points and identify what investment will produce maximum ROAS. Much more efficient than sinking money into a guessing game.
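The saturation behavior described above is often modeled with a concave response curve. The sketch below uses a Hill-type curve with made-up parameters (this is an illustration, not Measured’s actual model) to show how, once you can estimate the curve from experiments, finding the profit-maximizing spend level becomes a simple search rather than incremental trial and error:

```python
# Hypothetical illustration of diminishing returns: revenue response to
# spend modeled with a Hill-type saturation curve (not Measured's model).

def revenue(spend, max_revenue=500_000, half_saturation=100_000):
    """Concave response: revenue approaches max_revenue as spend grows."""
    return max_revenue * spend / (spend + half_saturation)

# Scan spend levels to find where profit (revenue - spend) peaks —
# the point past which each extra dollar returns less than a dollar.
best_spend = max(range(10_000, 500_000, 10_000), key=lambda s: revenue(s) - s)
roas_at_optimum = revenue(best_spend) / best_spend
print(best_spend, round(roas_at_optimum, 2))  # → 120000 2.27
```

With these illustrative parameters, profit peaks around $120K in spend; pushing well beyond it would still grow revenue, but each extra dollar would return less than a dollar.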

Incrementality testing should make your marketing smarter

For a large brand that spends the vast majority of its budget on Meta, and has the time and resources to properly set up and execute the experiments, there may be an argument that platform incrementality testing is worthwhile. But even in this scenario, the data should still be validated with independent testing once in a while to ensure the results being reported are indeed accurate.

For most marketers, free platform lift tests aren’t going to deliver the answers they need to make confident decisions about media investments. Ecommerce brands that use the Measured incrementality platform average a 20% increase in ROAS after just six months. Can platform reported insights drive those kinds of results? I wouldn’t bet my ad spend on it.

Want to see how marketers are deploying incrementality testing across their portfolios to find out what’s really happening with their digital ad campaigns? CONTACT US

Validate. Optimize. Maximize. Only with Measured.

Explore the world’s smartest marketing attribution solution for optimizing your ad spend to maximize ROAS and revenue.

Get a demo