5 Ways to Make the Facebook Algorithm Work For You
For marketers, maximizing return on ad spend (ROAS) is all about maximizing efficiency. It’s what makes media measurement such a critical element of any growth marketing strategy. By measuring media outcomes and tweaking budgets and tactics based on the results data, marketers can allocate their budgets more efficiently and get the most out of every dollar.
For DTC brands spending a significant share of their media budgets on Facebook ads, even small improvements in efficiency can add up to notable gains in performance. In addition to ongoing testing and continuous optimization, there are several adjustments within the campaign setup that can improve Facebook efficiency before making any changes to budget allocation.
Below are five best practices marketers can implement right now to help the Facebook algorithm run more efficiently.
Problem: Machine learning overload
Facebook has created a very powerful tool in Ads Manager, but it relies heavily on machine learning to optimize campaigns. When a $1,000 daily budget is spent on a campaign with five ad sets and six ad creatives under each ad set, Facebook has 30 different ad combinations to evaluate. With too many possibilities, a few ads end up receiving almost all of the budget while the rest never get enough spend to be meaningfully tested. If marketers find that one or two ads are receiving 90% of the spend, chances are the campaign is overloaded.
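The math above is worth making explicit: combinations multiply fast, and the daily budget gets spread thin across them. A quick back-of-the-envelope sketch, using the same figures as the example (even spend is an assumption; in practice the algorithm concentrates spend on a few ads):

```python
# Back-of-the-envelope math for campaign structure (illustrative only).
daily_budget = 1000          # $1,000/day, as in the example above
ad_sets = 5
creatives_per_ad_set = 6

combinations = ad_sets * creatives_per_ad_set
budget_per_ad = daily_budget / combinations  # if spend were spread evenly

print(f"{combinations} ad combinations to evaluate")
print(f"${budget_per_ad:.2f}/day per ad if spend were spread evenly")
```

At roughly $33/day per ad, most combinations never accumulate enough results to be judged, which is why the algorithm concentrates spend on a few early leaders instead.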
Fix: Limit number of ad sets and ads
Accounts that limit their campaigns to two to four ad sets and two to four creatives see better overall performance, regardless of budget. Fewer options means less for Facebook to figure out. To make the ad account run more efficiently, keep only the best-performing audiences and creative.
Problem: Audience overlap
Understandably, brands often end up creating multiple audience segments that include the same people. Facebook has a safeguard in place that ensures a brand doesn't bid against itself in an auction. This is helpful because you don't need to worry about targeting the same audience in multiple ad sets and driving up your own CPM, but the catch is that Facebook will only ever send one ad to each auction, and it usually chooses the ad with a performance history that suggests it is most likely to get that conversion. This hurts performance because Facebook will always work to spend the budget you give it, but the lowest-hanging fruit may already be taken by an ad set that has been live longer and has exited the learning phase.
Fix: Use exclusion rules to avoid audience overlap wherever possible
You can see the audience overlap for a given ad set by clicking "Inspect" at the ad set level. This feature is only available for non-dynamic ad sets. If you find an ad set with audience overlap above 20%, either turn off the ad sets that aren't performing or add exclusions to the overlapping ad sets.
- Without exclusions:
- Ad set 1 = Lookalike 1% purchasers
- Ad set 2 = Lookalike 5% purchasers
- With exclusions:
- Ad set 1 = Lookalike 1% purchasers
- Ad set 2 = Include Lookalike 5% purchasers AND exclude Lookalike 1% purchasers
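The logic behind the exclusion can be sketched with plain set operations. This is a simplified illustration with made-up users, not Facebook's exact auction-overlap metric (which is what the Inspect tool reports):

```python
# Simplified audience-overlap illustration (hypothetical users, not
# Facebook's exact auction-overlap calculation).
audience_1pct = {"user_a", "user_b", "user_c", "user_d"}    # Lookalike 1%
audience_5pct = {"user_b", "user_c", "user_d", "user_e",
                 "user_f", "user_g", "user_h", "user_i",
                 "user_j", "user_k"}                        # Lookalike 5%

shared = audience_1pct & audience_5pct
overlap_rate = len(shared) / len(audience_5pct)  # share of ad set 2's audience
print(f"Overlap: {overlap_rate:.0%}")

# Applying the exclusion removes the shared users from ad set 2,
# so the two ad sets no longer compete for the same people:
audience_5pct_excl = audience_5pct - audience_1pct
assert not (audience_1pct & audience_5pct_excl)  # overlap is now zero
```

With the exclusion applied, each ad set targets a distinct slice of the audience, so both can spend their budgets without the safeguard suppressing one of them.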
Problem: Campaign objective is hurting conversion rate
Occasionally marketers set the campaign objective to a mid-funnel event, such as add to cart or view content, in hopes of achieving a blended result: driving high-intent potential customers to the site while also getting conversions. These campaigns tend to see a lower conversion rate than those with the objective of purchase.
Fix: Compare and choose the objective with the best cost per purchase rate
If the goal of the campaign is to get conversions, marketers should go into Facebook Business Manager and compare the cost per purchase for campaigns with the objective of “add to cart” or “view content” to the cost per purchase of those with the objective of “website purchases.” This gives an idea of whether the campaign objective is hurting the conversion rate.
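The comparison itself is simple arithmetic on the numbers pulled from reporting. A sketch with hypothetical spend and purchase figures (the numbers are made up for illustration):

```python
# Hypothetical results by campaign objective, as pulled from reporting.
# All figures below are invented for illustration.
campaigns = {
    "view content":      {"spend": 5000, "purchases": 80},
    "add to cart":       {"spend": 5000, "purchases": 100},
    "website purchases": {"spend": 5000, "purchases": 145},
}

for objective, stats in campaigns.items():
    cpp = stats["spend"] / stats["purchases"]
    print(f"{objective}: ${cpp:.2f} cost per purchase")

best = min(campaigns,
           key=lambda o: campaigns[o]["spend"] / campaigns[o]["purchases"])
print(f"Lowest cost per purchase: {best}")
```

If the purchase-objective campaign consistently wins this comparison, the mid-funnel objectives are likely costing you conversions.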
For brands that use Measured Incrementality to measure Facebook outcomes, they can look at campaigns in the Measured dashboard to see if the conversion rate would have been higher if the objective was set to “website purchases”.
Problem: Audience is seeing too many ads
Frequency is the average number of times your ad has been shown to each person in your audience. A frequency of three could mean some people saw the ad once, while others were served it five or six times. Facebook only lets brands set frequency caps on reach campaigns. For any other campaign type, frequency has to be carefully watched by the account manager and managed through the budget. High frequencies are most often seen in retargeting campaigns where the audience is too small and the budget is too high.
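Because frequency is just impressions divided by reach, the reported average can hide a very uneven distribution of exposures. A sketch with hypothetical numbers:

```python
# Frequency is average impressions per person: impressions / reach.
# The figures below are hypothetical, chosen to show how an average hides skew.
impressions = 30000
reach = 10000
frequency = impressions / reach
print(f"Average frequency: {frequency:.1f}")

# The same average can come from very uneven exposure:
# half the audience saw the ad once, the other half saw it five times.
exposures = [1] * 5000 + [5] * 5000
assert sum(exposures) / len(exposures) == frequency
```

This is why the average alone isn't enough: a "healthy" frequency of three can still mean part of your retargeting audience is being saturated.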
Facebook generally suggests a frequency of 1-2x a week, but that is a generic guideline; the ideal frequency varies by brand and by tactic. Brands that use our Facebook testing can get a better grasp of the ideal frequency for each audience type by comparing conversion rates between the control and test groups. Without always-on Facebook testing from Measured, a brand has to monitor the conversion rate at different frequencies for each audience.
Problem: Campaign is not set to optimal attribution windows
Facebook offers several attribution windows (for example, 1-day click, or 7-day click / 1-day view), which leaves marketers wondering which window will help ads perform best. It's important to remember that while the company goal might be last-click attributed sales, using a click-only or 1-day-click attribution window limits the conversions Facebook can learn from.
Fix: Set to 7-day click / 1-day view, then test
Measured has tested thousands of ad sets at each part of the funnel and has seen that 7-day click / 1-day view attribution windows almost always have the highest conversion rates. This is especially true since the release of iOS 14.5, because shorter attribution windows and iPhone users opting out of tracking mean Facebook has much less data to optimize toward. Measured Facebook Experiment users can test their ad account to see if it is an outlier and calibrate campaigns to the optimal attribution window.
Measured Incrementality can reveal ongoing insights for advertisers to continuously improve media outcomes, but setting up campaigns properly to also optimize efficiency is important for getting the most out of Facebook investments. Marketers should confirm that campaigns are fully optimized in these five areas, then implement in-market experiments to reveal what adjustments to spend allocation and tactics will further improve efficiency and increase ROAS.
If you aren’t already using Measured Incrementality to calculate media impact on business outcomes, learn more here or contact us today for a demo.