Return to Austerity Calls for a Return to Test vs. Control Measurement
After a decade of low interest rates, a risk-free rate of return of 5% has brought about a paradigm shift in the corporate world. Out: revenue, at all costs. In: earnings.
If a business is going to raise capital today, or be valued at an attractively high multiple, it needs to make money today. And it needs to beat that 5% by a significant margin to warrant investors taking on risk.
Multi-touch attribution (MTA) and last-touch attribution (LTA) models were actually well suited to a low-rate environment where capital was sloshing around. These are models that over-credit advertising, sometimes significantly so. Think about it this way: even if an attribution model accurately tracked that, say, 8 advertising impressions were delivered across 8 different platforms/tactics before a customer converted on your website, do we really believe that every single one of those impressions deserves some credit? That they're all batting 1.000? MTA would lead you to believe that was the case.
The truth, of course, is no: they are not all batting 1.000, and many of them are likely nowhere close.
LTA/MTA worked well to pump up an advertising industrial complex of more ads for ads' sake, which in theory helped advertisers, agencies, and ad-tech command bigger budgets. While not at all precise, in the aggregate it generally drove revenue (and waste) up. More revenue was rewarded with more investment and more ad budget, and around we went, everyone winning in the process. But the rise in interest rates, combined with Apple's ATT, has sobered up the scene quickly.
With a new demand for profit and earnings, we need a new attribution model to match the environment: one that pinpoints exactly which channels/tactics/campaigns are yielding customers we wouldn't have seen without the investment, and weeds out channels/tactics/campaigns that only appear to perform and can be cut without impact to the top or bottom line.
Enter: incrementality. Just as randomized controlled trials in the pharmaceutical industry reveal precisely how effective a drug is at preventing a disease, so too do experiments in advertising tell us how much a channel/tactic/campaign actually contributes to sales, rather than merely tracking those campaigns without a rigorous examination of their true contribution.
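The arithmetic behind such a test vs. control experiment is simple. Below is a minimal sketch, with entirely hypothetical conversion counts: one group of customers is exposed to the ads (test), a randomized holdout group is not (control), and the difference in conversion rates is the advertising's incremental contribution. The function name and numbers are illustrative, not any particular vendor's methodology.

```python
import math

def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Estimate incremental conversion rate and relative lift from a holdout test.

    The test group saw the ads; the control (holdout) group did not.
    """
    rate_test = test_conversions / test_size
    rate_control = control_conversions / control_size

    # Conversions per user attributable to the advertising itself
    incremental_rate = rate_test - rate_control
    # Relative lift over the organic baseline
    lift = incremental_rate / rate_control if rate_control else float("inf")

    # Rough significance check: two-proportion z-test
    pooled = (test_conversions + control_conversions) / (test_size + control_size)
    se = math.sqrt(pooled * (1 - pooled) * (1 / test_size + 1 / control_size))
    z = incremental_rate / se if se else 0.0
    return incremental_rate, lift, z

# Hypothetical example: 2.4% conversion with ads vs. 2.0% in the holdout
inc, lift, z = incremental_lift(2400, 100_000, 2000, 100_000)
print(f"incremental rate: {inc:.4f}, relative lift: {lift:.1%}, z: {z:.2f}")
```

If the incremental rate is near zero, the channel's measured conversions were going to happen anyway, and the budget can be cut without hurting the top line, exactly the kind of finding in the examples below.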
And the proof is in the pudding. With incrementality measurement in place:
- Parachute Home’s team discovered that they could cut a 7-figure social media retargeting investment entirely, without any impact to their top-line. How many businesses could do with dropping a million dollars to the bottom line right now?
- Hammitt Handbags’ team discovered that they could cut their overall ad budget by 30%, while increasing sales.
- A home-goods retailer discovered they could reduce their brand search investment by 94% and SEO would capture all of that traffic.
- Or, an oldie-but-goodie: eBay's discovery that millions of dollars per month of search ad investment were yielding only 25 cents on the dollar.
That the eBay findings came out back in 2013, when the federal funds rate was below 1% all year, and were met with a resounding thud in the wider ad community perhaps speaks to the point. We're no longer in Oz, but safely back in Kansas. Time to get back to basics like profits and test vs. control experimentation.