Why Google Analytics And Facebook Attribution Reports Will Never Line Up — And What You Can Do About It
The impending demise of user-level tracking across platforms and devices will make complex advertising measurement models like multi-touch attribution (MTA) not just difficult but impossible. As these increasing limitations push advertisers back to privacy-compliant first-party data for attribution, conflicting results reported by different platforms and analytics tools will be a challenge for marketers hoping to make informed decisions about media investments.
Advertisers are now challenged with unifying the data they can collect from all available disparate sources and extrapolating actionable insights, but who should they trust when Google Analytics (GA) and publisher reports inevitably show wildly different results? The propensity for each platform to give itself more credit than it deserves is what moved us all away from last-click metrics in the first place — and into a decade of chasing the elusive holy grail of measurement promised by MTA.
For example, I worked with an e-commerce business last year that invested $1.6 million in Facebook advertising over the course of a month. Facebook claimed its ads were responsible for 50% of the brand's total conversions for that month. GA, on the other hand, reported that Facebook drove less than 1% of total conversions. That is 70,000 conversions attributed to Facebook by its own reporting versus 140 by GA. The reports will always be different, and this drastic discrepancy is not uncommon.
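The scale of that gap is easy to quantify. Here is a minimal sketch using the figures from this example (the 140,000 total is inferred from Facebook's 50% claim equaling 70,000 conversions):

```python
# Figures from the example above; total_conversions is inferred
# from Facebook's 50% claim equaling 70,000 conversions.
total_conversions = 140_000   # total conversions for the month
fb_reported = 70_000          # conversions Facebook claimed (~50%)
ga_reported = 140             # conversions GA attributed to Facebook (~0.1%)

fb_share = fb_reported / total_conversions
ga_share = ga_reported / total_conversions
discrepancy = fb_reported / ga_reported

print(f"Facebook's claimed share: {fb_share:.1%}")  # 50.0%
print(f"GA's attributed share:    {ga_share:.1%}")  # 0.1%
print(f"Discrepancy factor:       {discrepancy:.0f}x")  # 500x
```

The two platforms disagree not by a few percentage points but by a factor of 500.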
Which Report Is Correct?
The answer is somewhere in between. We'll give the platforms the benefit of the doubt and say that neither is intentionally providing inaccurate results or deceptively inflating performance numbers. What they report is accurate, based on what they measure and how they measure it. Each report provides some level of insight marketers can use — but, on their own, neither can tell you exactly which media contributed to sales and by how much.
Channel-level reporting (e.g., Source-Medium reporting in GA or Channel Performance reporting in Adobe Analytics) offered by web analytics platforms is based on site-side tracking. Next to your own CRM system, these reports are the most accurate way to measure total conversions. GA is extremely effective for web analytics like understanding site performance and measuring total business impact, but site-side analytics are not ideal for understanding media performance at the channel level.
The breakdown in GA’s ability to accurately assign conversion credit comes down to the referrer URL: the link a user clicked on that sent them to your page before they made a purchase. GA uses these links to categorize conversions. But if GA is unable to trace the link back to Facebook for some reason, it will tag the conversion as organic search or something else that conveniently gives the credit to Google by default.
These miscategorizations wouldn't be a huge deal if they were rare occurrences, but this critical part of GA measurement fails often for a multitude of reasons. UTM codes, unique snippets of text added to the end of a URL to indicate the source, are complicated to write and track, and they break all the time. Even when the UTM is pristine, other factors can impact GA’s reporting. If a visitor lands on your page and immediately gets redirected or clicks away before the GA tag loads, you lose the credit. If someone comes to your page from Facebook, leaves and then comes back through search to make a purchase, Google gets the credit. The more channels you add to the mix, the less effective GA is at reporting at the channel level.
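To make the failure mode concrete, here is a simplified sketch of referrer- and UTM-based channel classification. This is an illustration of the general logic, not GA's actual algorithm; the function name and channel labels are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

def classify_channel(landing_url, referrer=None):
    """Simplified illustration of referrer/UTM channel classification.
    Not GA's actual algorithm."""
    params = parse_qs(urlparse(landing_url).query)
    # Explicit UTM tags win when present and intact.
    if "utm_source" in params and "utm_medium" in params:
        return f"{params['utm_source'][0]} / {params['utm_medium'][0]}"
    # Otherwise fall back to the referrer domain.
    if referrer:
        host = urlparse(referrer).netloc
        if "facebook." in host:
            return "facebook / referral"
        if "google." in host:
            return "google / organic"
        return f"{host} / referral"
    # No UTMs and no referrer: the visit looks direct.
    return "(direct) / (none)"

# A pristine UTM-tagged link is classified correctly...
print(classify_channel(
    "https://shop.example.com/?utm_source=facebook&utm_medium=paid-social"))
# ...but if the tags get stripped (app redirects, link shorteners) and no
# referrer survives, the same paid click collapses into direct traffic.
print(classify_channel("https://shop.example.com/"))
```

Any break in the chain — a stripped parameter, a lost referrer, a redirect that fires before the tag — silently shifts the credit to another channel.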
On the other side of the first-party coin, Facebook Ads Manager and other attribution tools provided by publishers rely on completely different datasets for measurement. Facebook can tell you how many people saw your ad on its platform and how many of them resulted in conversions. This data is useful for determining whether your targeting is working or your creative is landing. What it cannot tell you is how many of those people who bought something would have made the purchase anyway even if they had not seen your ad — so Facebook claims credit for all of them.
How To Manage Conflicting Reports
To reconcile the reporting discrepancies and determine the true contribution of different media channels and tactics to conversions, advertisers can compare the results of incrementality testing run on a platform like Facebook with total conversion data from GA.
Incrementality measurement uses experiments to measure how many people on Facebook would have converted regardless of whether they encountered your ad. By withholding the ad from a statistically significant portion of your target audience and calculating what percentage of them still made a purchase, you can determine what percentage of total conversions to credit to the ad. Hold that up against the overall business impact reported on GA, and you can tie your Facebook investments directly to revenue.
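The holdout arithmetic behind that comparison can be sketched as follows. This is a simplified illustration with hypothetical numbers; a real experiment also requires significance testing and careful audience randomization:

```python
def incremental_lift(test_conversions, test_size,
                     holdout_conversions, holdout_size):
    """Estimate what share of exposed-group conversions were truly
    incremental, using a randomized holdout group that never saw the ad.
    Simplified sketch; real experiments also need significance testing."""
    test_rate = test_conversions / test_size
    # Holdout rate approximates the would-have-converted-anyway baseline.
    baseline_rate = holdout_conversions / holdout_size
    incremental_rate = test_rate - baseline_rate
    # Fraction of exposed-group conversions the ads actually caused.
    return incremental_rate / test_rate if test_rate else 0.0

# Hypothetical numbers: 2.0% of the exposed group converted, but 1.5%
# of the holdout converted without ever seeing an ad.
share = incremental_lift(2_000, 100_000, 1_500, 100_000)
print(f"Incremental share of conversions: {share:.0%}")  # 25%
```

In this hypothetical, only a quarter of the conversions the platform claims were actually caused by the ads — that incremental share, applied against total conversions from GA, is the number to hold against your media spend.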
While I used Facebook and Google for the example above, incrementality testing can be applied to just about any marketing channel or tactic. Because each platform operates on a slightly different framework and provides access to different types of datasets, experiments need to be customized and nuanced for each specific environment. It can be a grueling process, but once you have ongoing experimentation in place on your top platforms, incrementality results from different sources can provide cross-channel insights for investment decisions and more.
It can be disconcerting to receive conflicting performance reports from various sources, but I am here to tell you that your site-side analytics and platform metrics will never match up. What's important to understand is what each platform actually measures and how you should and should not use the data. Layering on incrementality testing can then tease out additional actionable insights for making the most informed decisions about your media investments.