Poor Test Results: Your Next Secret Weapon


Jim Greco, VP, Decision Science/Product Operations

Published 03/06/2024

For any advertiser who embraces causal measurement of media (like most of our clients), there always comes a moment when you get a result that reveals the cold, hard truth that your media isn’t working.  Many of you reading this have been there.

You are excited to see the result.

You are excited to proudly proclaim to your Marketing Director or CMO that your media is working harder than ever.  Maybe with that comes more budget … or a pat on the back … or maybe even a promotion?

But then you see it.  That $3.50 ROAS?  It’s really more like $1.15.  Your heart sinks.  At this point, a few things tend to happen.

You assume something must be wrong.  If your media agency is involved, they might feel threatened.  You start to question the measurement because the last thing you want to do is explain to that same Marketing Director or CMO that you’ve been wasting spend for the past several months (or longer).  You try to poke holes in the approach – looking for any reason to call the results into question – because, in effect, someone just told you your baby is ugly.

I’m here to tell you this is the wrong way to react.  

Disappointing Test Results? Relax.

Now, you’re probably thinking, “The guy who is writing this blog works for a measurement company … of course, he would say that.” And you wouldn’t be wrong.  

As a practitioner in the space for a couple of decades now, I (and Measured) accept full responsibility for making sure our measurement is robust, cleanly executed, and grounded in strong foundational approaches to causal inference and geo-testing.  We have a world-class data science team continuously improving the quality and resilience of our approaches to geo causal experimentation – because we know significant media spending can hinge on the result.

But our responsibility is not to provide emotionally affirming results – it is to deliver the truth to the best of our ability, even if it isn’t what you want to hear.  And this is a good thing.  Because our success is tied to the advertiser’s success, we make every effort (often successfully) to reveal WHY results did not meet expectations.  That often leads to course corrections – and then everyone can look good, because you improved performance without spending more money.

That is the lesson here, and I believe it warrants a different mindset.  Every CMO implicitly realizes there is likely wasted spend lurking in the budget in any given quarter.  Marketers who use geo testing (or any kind of media measurement capability) should take a more “business question-oriented” approach to testing.

Wasted Ad Spend? Voice Your Suspicions

Someone who has a hunch that Meta isn’t doing as well as it should is encouraged to voice their opinion – but they should be explicit about WHY they feel that way.  If they do, now you have a hypothesis that you can build a test around. 

It can also happen in the other direction – maybe your agency thinks there is real upside in TikTok that is going unexplored.  If the hypothesis is clearly stated, it can be tested using a small-scale geo test in 10% of the country.  That will give you a clear answer in 30-60 days – which clearly beats fiddling around with trivial spend levels for six months (which personally drives me nuts).

Time is money.  Get your answer, act on it, and move on.

I recognize that many of you reading this may be saying, “Yeah, easier said than done” … I get that.  But if you are committed to measurement via experimentation, you should accept that you will inevitably try things that don’t work.  Even long-standing flagship programs may be called into question.  It happens to all marketers.  You just need to accept this reality (even if you are a CMO reading this).

Viewed in this light, experimentation is really your secret weapon: it reveals underperforming investments and allows you to redeploy budget to higher-potential investments. Just think – if you didn’t have experimentation, you might never know what is truly working and what isn’t.

So when you get that next disappointing result?  You should be relieved – and grateful that it was discovered and addressed.  Then, you can move on to the next area of opportunity (or suspicion) and test it.

The next time you experience a disappointing test result, stop and remind yourself: I just saved myself from future lost revenue, thanks to my commitment to testing.

Not sure why your poor test results aren’t turning into actionable insights? Don’t be shy – reach out to see how Measured can help.