Can I Measure Incrementality on Facebook?
Yes, the incremental impact of Facebook advertising on business outcomes like conversions, revenue and profit can be measured.
Four advanced marketing measurement techniques can be used:
- Design of Experiments (DoE): Carefully designed experiments that control for targeted audiences, audience overlap, Facebook campaign structures, optimization algorithms, etc., are the most transparent, granular, and accessible way to measure the impact of Facebook campaigns on business metrics like sales and revenue. Typically, in a Facebook DoE, a small portion of the targeted audience is held out as a control group, while the rest of the campaign audience receives the regular campaign creative and treatment. The difference in response rates between the two groups is then analyzed to calculate incrementality.
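The arithmetic behind a DoE read is straightforward: compare the response rate of the treated audience to the holdout's. A minimal sketch, using hypothetical audience sizes and conversion counts:

```python
# Minimal sketch: computing incrementality from a holdout experiment.
# All audience sizes and conversion counts below are hypothetical.

def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Compare the treated group's response rate to the holdout's."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    incremental_rate = test_rate - control_rate   # absolute incrementality
    lift = incremental_rate / control_rate        # relative lift vs. holdout
    return incremental_rate, lift

# Example: 90% of the audience treated, 10% held out as control.
inc_rate, lift = incremental_lift(
    test_conversions=2_700, test_size=900_000,
    control_conversions=250, control_size=100_000,
)
print(f"incremental rate: {inc_rate:.4%}, lift: {lift:.1%}")
```

In practice the comparison would also include a significance test; this only shows the point estimate that the response-rate difference yields.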
- Counterfactual studies: This approach measures incrementality through natural experimentation (vs. the designed experimentation of #1). It uses data generated by Facebook's auction systems to observe audiences who “match” the audiences targeted for a campaign but were not served ads because of various factors (e.g., budget, ad auction competitiveness), and treats them as though they were deliberately held out as a control group. The campaign-exposed audiences and the synthetically constructed hold-out audiences are then analyzed for their response rates to calculate incrementality.
- Marketing Mix Models (MMM): This approach rolls aggregate data up into a weekly or monthly time series, which is then fed to a regression model to estimate the impact of Facebook on business metrics. Because of the nature of the approach, results tend to be very macro, providing an average impact of Facebook investments over a quarter. MMM is not very useful for breaking down the impact estimate by campaign or tactic, so it is less appropriate for short-term tactical planning. In practice, these models also take a while to build and stabilize, which can mean a 6-12 week lag between the end of a quarter and results reporting.
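To make the MMM mechanism concrete, here is a deliberately stripped-down sketch: a simple regression of hypothetical weekly sales on hypothetical weekly Facebook spend. A real MMM would include many channels, seasonality, and adstock/saturation transforms; this shows only the bare regression step.

```python
# Minimal sketch of the MMM mechanism: regress a weekly sales time series
# on weekly Facebook spend. All numbers are hypothetical.

def ols_slope_intercept(x, y):
    """Closed-form simple linear regression: y ~ intercept + slope * x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var_x = sum((xi - mean_x) ** 2 for xi in x)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical weekly aggregates: Facebook spend ($k) and sales ($k).
weekly_spend = [10, 12, 8, 15, 11, 9, 14, 13]
weekly_sales = [120, 128, 110, 140, 122, 115, 135, 131]

slope, intercept = ols_slope_intercept(weekly_spend, weekly_sales)
# slope = estimated incremental sales per $1k of spend; intercept = baseline sales
print(f"baseline: {intercept:.1f}k, incremental sales per $1k spend: {slope:.2f}")
```

The slope is what an MMM reports as the channel's average contribution, which is why the output is macro by construction: one coefficient summarizes a whole quarter of weekly observations.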
- Multi-touch Attribution (MTA): This approach uses user-level data collected via pixels on all ad exposures to construct consumer journeys, which are then fed to a machine learning algorithm that decomposes the impact of each ad exposure in driving business results. The strength of this approach is the extreme granularity of the reporting and the insight into customer journeys. More recently, with the advent of privacy regulation and Facebook's prohibition of user-level third-party tracking, the collection of this kind of data has been nearly eliminated except in very special cases. Even when this data was being collected, the measurement was only correlational out of the box.
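As an illustration of the decomposition step, the sketch below applies simple equal-credit ("linear") attribution to hypothetical journey data. Production MTA systems use machine learning models (e.g., Shapley-value or Markov-chain approaches) rather than fixed rules; this only shows the general shape of splitting conversion credit across touchpoints.

```python
# Minimal sketch of the MTA idea: split credit for each conversion across
# the touchpoints in a user-level journey. Journey data is hypothetical.
from collections import defaultdict

def linear_attribution(journeys):
    """Each converting journey splits 1.0 conversion equally across its touches."""
    credit = defaultdict(float)
    for touches, converted in journeys:
        if converted and touches:
            share = 1.0 / len(touches)
            for channel in touches:
                credit[channel] += share
    return dict(credit)

# Hypothetical pixel-derived journeys: (ordered touchpoints, converted?)
journeys = [
    (["facebook", "search", "email"], True),
    (["facebook", "facebook"], True),
    (["search"], False),
    (["email", "facebook"], True),
]
print(linear_attribution(journeys))
```

Note that any rule-based or model-based split of this kind remains correlational: it redistributes observed conversions, but does not by itself establish that a touchpoint caused them.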
For tactical and timely measurement, therefore, DoEs and counterfactual studies are the primary approaches preferred by marketers, especially performance marketers. In many cases, marketers use both to get multiple reads and perspectives on the impact of a Facebook advertising investment on their business.
DoE – Pros & Cons:
DoE is typically executed either by the brand or by a 3rd party vendor like Measured. DoEs can be designed to be very granular and shaped to meet marketers' diverse learning objectives. They can be executed independently of Facebook's account teams, and hence offer the highest levels of control and transparency in running experiments that match marketers' learning objectives. Because all observations are captured through normal campaign reporting methods, marketers can make inferences about campaign performance without any opaqueness in how the data is collected. The approach's strengths therefore lie in full transparency and neutrality, while preserving granularity of measurement (e.g., ad set-level and audience-level measurements). However, to implement the control treatment, marketers have to serve a PSA (public service announcement) ad from their Facebook account, which can cost ~1% of their total Facebook budget. PSA costs can be minimized by narrowing the learning objectives and focusing on testing the most important audiences.
Counterfactual studies – Pros & Cons:
Counterfactual studies are typically conducted by the platform itself, in this case Facebook; the feature within Facebook is called Lift testing. Facebook's ad delivery systems implement a version of what's called the ghost ads framework to collect data about audiences who matched a campaign's targeting criteria but were not served an ad because of other constraints in the auction, such as budgets and competing bids. These audiences are then synthesized into a control audience whose performance is reported alongside the audiences who were exposed to campaign creatives. This allows marketers to read the lift of a campaign without actually selecting control audiences and executing a control treatment.
The primary advantage of counterfactual studies is that marketers don't have to spend any PSA dollars to implement the control treatment (since control audiences are synthesized by Facebook). However, the approach introduces blind spots into the audience selection and data reporting process, which can easily be swayed by oversampling or undersampling of audiences within an ad set or campaign, by time period, or by other factors.
The lift reporting within Facebook is also not granular. It does not allow marketers to break the reporting down by week or month, or by the constituent ad sets and audiences within a campaign (unless the test was deliberately set up to be read that way).
For many marketers, though, the primary objection to Facebook counterfactual studies is neutrality: Facebook is grading its own homework.