Always Be…Conducting Experiments


Nick Stoltz, Expert in Cross-Channel Measurement Strategy and Adoption

Published 03/07/2024

While we loudly, unapologetically extol the virtues of incrementality testing, the truth is that many marketers today need to rethink their entire approach to and philosophy around experimentation.

When it comes to understanding your media spend and where your ad dollars are making an impact, marketers today seem to have it backward: we see them trusting models such as attribution or marketing mix modeling (MMM) and then dipping their toes into the waters of testing only when those models produce suspicious results.

This script needs to be flipped immediately. 

The simple truth is that experimentation should be your go-to, your number one, your default, and the foundation of your measurement practice. Once you are running experiments to understand the impact of your ad spend, that’s when you supplement, by adding or integrating non-experimental methodologies such as MMM.

Good vs. Bad Testing

Another issue we’ve noticed is confusion and inconsistency around what constitutes a “good” test versus a “bad” test. The principal objective of marketing testing is to understand the causal impact of your media execution on the business KPIs that matter.

This means that a bad test is one that cannot reliably describe or quantify the presence or absence of a causal link. Test failure can have a variety of root causes, such as contamination of the control group, insufficient statistical power, or poor design. In these cases, the results aren’t “good” because they simply aren’t usable.
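To make the power point concrete, here is a minimal sketch of a pre-launch power check for a simple two-cell, user-level holdout test, using the statsmodels library; the baseline conversion rate and target lift are hypothetical placeholders.

```python
# Minimal sketch: pre-launch power check for a two-cell holdout test.
# All numbers are hypothetical placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_cvr = 0.020                      # assumed control-cell conversion rate
expected_lift = 0.10                      # 10% relative lift we want to detect
treated_cvr = baseline_cvr * (1 + expected_lift)

# Standardized effect size (Cohen's h) for the two conversion rates.
effect_size = proportion_effectsize(treated_cvr, baseline_cvr)

# Users needed per cell for 80% power at a 5% significance level.
n_per_cell = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"~{n_per_cell:,.0f} users per cell")
```

Running this kind of arithmetic before launch is the cheapest way to avoid an underpowered, and therefore unusable, test.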

However, we all too often see marketers labeling a test as “bad” because the results are counterintuitive or go against what they had hoped to prove. We cannot stress this enough: that is not a bad test! A test that gives you any meaningful results and insights, even seemingly unfavorable ones (e.g., no lift from media tactic X), is still informative and actionable and, thus, a “good” test.

The real value of testing comes from understanding why the results show you what they do and interpreting those results into actionable outcomes that help to better achieve your business objectives. 
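To illustrate that interpretive step, here is a minimal sketch, with hypothetical numbers, of how a “no lift” readout still carries information: the confidence interval around the lift estimate tells you how confidently you can rule lift out.

```python
# Minimal sketch: a "no lift" result is still informative if the confidence
# interval is narrow. Conversion counts below are hypothetical.
import math

conv_t, n_t = 420, 20_000   # conversions / users in the treated cell
conv_c, n_c = 400, 20_000   # conversions / users in the holdout cell

p_t, p_c = conv_t / n_t, conv_c / n_c
diff = p_t - p_c
se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
lo, hi = diff - 1.96 * se, diff + 1.96 * se   # 95% CI, normal approximation

print(f"absolute lift: {diff:.4f} (95% CI: {lo:.4f} to {hi:.4f})")
# A CI that straddles zero but stays narrow says the tactic is doing little,
# which is an actionable finding, not a failed test.
```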

Common Approaches to Testing 

If you’re looking for a way to get started with a more comprehensive and strategic testing program, we recommend you consider the following categories of tests: 

1. BAU Testing

For testing “business as usual” (BAU) campaigns, we recommend laying out a six- to twelve-month plan for testing the highest-spend channels and tactics in your marketing mix. A good benchmark for what constitutes such a channel or tactic would be one on which you’re spending at least 5% of your media budget, regardless of total budget size. 

Ideally, each of these channels should be tested two to four times a year for a thorough understanding of its incrementality and to account both for inherent seasonal fluctuations over the year and for changing marketplace dynamics.

We also recommend full media holdouts to test the aggregate incrementality and sales impact of your channel mix at least once per year.
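As a sketch of how the screening rule above might be operationalized, the snippet below flags any channel at or above the 5% spend threshold as a BAU test candidate; the channel names and spend figures are hypothetical.

```python
# Minimal sketch: flag channels with >= 5% of media spend as BAU test
# candidates. Channel names and spend figures are hypothetical.
annual_spend = {
    "paid_search": 4_000_000,
    "paid_social": 3_000_000,
    "ctv": 1_500_000,
    "affiliate": 900_000,
    "podcasts": 350_000,
}

total = sum(annual_spend.values())
THRESHOLD = 0.05  # 5% of media budget, regardless of total budget size

bau_candidates = {
    channel: spend / total
    for channel, spend in annual_spend.items()
    if spend / total >= THRESHOLD
}

for channel, share in sorted(bau_candidates.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {share:.1%} of spend -> test 2-4x per year")
```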

2. Shock-Based Testing

When your business environment changes, the impact of marketing incrementality can be affected. Changes, or “shocks,” can be endogenous or exogenous: they can be triggered internally at the behest of the marketer (e.g., a product update or price change) or externally (e.g., a new competitive product launch, interest rate changes, or a global pandemic that shuts down supply chains).

If there is a known upcoming shift, it’s best to plan a test around it. Of course, this isn’t always possible, so it’s important to understand the types of changes that may require a reassessment of marketing performance.

3. Strategic Experimentation

Strategic experimentation is what separates incredible marketing programs from the rest.  

These tests should be designed to proactively answer meaty strategic questions that can help evolve and reshape a business. These are meant to be calculated “bold bets” that can result in quantum leaps of growth and make meaningful progress toward long-term business objectives (for instance, profitability, top-line growth, etc.). 

In these instances, it is imperative that testing experts work with marketing executives to help translate strategic questions into in-market test and control experiments.

Avoiding the Rabbit Hole: Testing for the Sake of Testing

Now that we have laid the foundation for good experimentation approaches, it’s time for a few warnings! Getting too exuberant about experimentation can be almost as bad as not experimenting at all.

Every proposed test should answer a question that lines up with an initiative in one of the three categories outlined above. If not, you’re likely to fall down the dangerous rabbit hole of conducting tests that are overly granular in nature and don’t answer a salient business question or deliver actionable outcomes.

This issue typically arises when experimentation roadmaps are designed “bottom-up” versus “top-down.” For example, it’s very tempting for an overzealous marketer to immediately dive into a plethora of tests, asking questions like:

  • What messaging converts more users?
  • What bid/targeting strategy reduces our CPA?
  • What channel mix should I use?

While these aren’t necessarily “bad” questions to ask, when generated in isolation and tested outside of a strategic context, they may not ladder up to a bigger objective.

A better way to devise a test roadmap would be to start with a top-line objective, such as “We need to drive product X,” and then break that down into more granular hypotheses that ladder up to the main objective.

For example: “Product X resonates with XYZ users, so we need more of them” → “XYZ users are found on platform A, so let’s test overweighting spend on that platform” → “To resonate with XYZ users on A, we need Messaging 123” → “To optimize the reception of Messaging 123, we need to pick the right bid/targeting strategy,” and so on.

Planned out this way, we arrive at similar questions, but this time, they’re anchored in hypotheses we think will drive meaningful change in the business KPIs that matter. 
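One lightweight way to keep that anchoring explicit is to record the roadmap as a hypothesis tree, so every granular test traces back to the objective it serves. Here is a minimal sketch; the objective, hypotheses, and KPIs are hypothetical placeholders.

```python
# Minimal sketch: a top-down test roadmap as a hypothesis tree, so each
# granular test traces back to the objective it serves. All statements
# and KPIs are hypothetical placeholders.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    statement: str
    kpi: str
    children: list[Hypothesis] = field(default_factory=list)

roadmap = Hypothesis(
    "Drive product X",
    "incremental revenue",
    [
        Hypothesis(
            "Product X resonates with XYZ users, so we need more of them",
            "incremental conversions among XYZ users",
            [
                Hypothesis("Overweighting spend on platform A reaches more XYZ users",
                           "incremental conversions on platform A"),
                Hypothesis("Messaging 123 resonates with XYZ users on platform A",
                           "lift vs. current creative"),
            ],
        )
    ],
)

def print_tree(node: Hypothesis, depth: int = 0) -> None:
    """Print each hypothesis indented under the objective it supports."""
    print("  " * depth + f"- {node.statement} [KPI: {node.kpi}]")
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(roadmap)
```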

Don’t treat your testing methodology or your testing strategy the way you would treat A/B testing - the first mistake is outright wrong, and the second is dangerous. A/B tests are far more tactical by nature, and that mindset isn’t how you should run your incrementality tests. Even if you decide to get tactical, always make sure your growth program is anchored in your broader business strategy.

Building a Good Learning Agenda

A learning agenda is a structured series of tests, spanning the three categories above, that serves as a roadmap. Where is your business now, and where does it need to be? The learning agenda becomes a series of hypotheses and tests that chart the course from A to B.

There are three questions you need to ask yourself constantly when building out your learning agenda: 

  • What are the right questions to ask to support your business strategy?
  • When and how should you ask these questions?
  • How do you answer these questions to align with your strategy?

We can’t stress this enough - each question that builds out your learning agenda advances the journey to your desired business state, and each one must support your larger business strategy.

To help you picture what this roadmap should look like, here’s an example of a learning agenda template.

*You don’t literally need to pre-think all of the possible outcomes. Just follow the thought process described in the flow chart above: anticipate a variety of outcomes of the test and understand what subsequent actions to take given each one. This is an opportunity for a decision scientist or statistician to become involved in the test roadmap and avoid the pitfalls of overly sequential testing.
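A minimal sketch of that habit: before the test launches, map each plausible outcome to an agreed next action (the outcomes and actions below are hypothetical examples).

```python
# Minimal sketch: pre-register what you will do for each plausible test
# outcome before launch. Outcomes and actions are hypothetical examples.
next_steps = {
    "significant positive lift": "scale spend on the tactic; retest next quarter",
    "no detectable lift": "reallocate budget and test an alternate tactic",
    "significant negative lift": "pause the tactic; audit targeting and creative",
    "inconclusive (underpowered)": "redesign with larger cells or a longer flight",
}

observed = "no detectable lift"  # filled in after the test reads out
print(f"Outcome: {observed} -> Next step: {next_steps[observed]}")
```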

The Magic of a Measurement Partner

How do you design, execute, and interpret a growth program efficiently? We’ll be honest - it’s difficult and can easily become overwhelming if you aren’t careful.

There are a lot of challenges to correctly setting up the strategic component of a high-quality learning agenda. Understanding which questions to ask to get from your current business state to the desired business state can be a long road. 

However, there are EVEN MORE challenges in actually setting up and deploying the marketing tests required to answer these questions. These operational challenges are the reason marketers are often hesitant to start testing – or dilute a robust learning agenda to something far more rudimentary that’s easier to manage operationally. The resources required are no joke, but the value of testing cannot be overstated.

And that’s why we do what we do at Measured.

Executives shouldn’t be wasting their time tackling operational challenges; they should be focused on the strategic ones. Speak to an expert at Measured today to see how we can make the operational burden vanish and even partner with you on the strategic side of things.

We’ve been doing this for seven years, and frankly, that means we’ve made mistakes and learned from them, so you don’t have to. Let our experience guide your business strategy so you can focus your time on building the culture of experimentation necessary for your brand to thrive and for everyone in your company to understand just why testing is important.