The famous saying by John Wanamaker goes, "Half the money I spend on advertising is wasted; the trouble is, I don't know which half." Wanamaker was most likely referring to the challenge of accurately attributing credit to advertising. Before the online advertising age, measuring the causal effect of billboard or television ads was tedious. Today, advertisers are in a far better position to measure the true impact their ads have on the audiences they target.
Conversion Lift Tests (or lift tests or lift studies) help you measure how many of your conversions are caused by your ads. While it is next to impossible to find the cause of any individual conversion (did you buy a new phone because you saw an ad, or because a friend recommended it?), we can measure the total effect quite accurately with a randomized controlled trial (RCT). At a high level, lift tests work like this:

1. The target audience is randomly split into a test group and a control group.
2. Ads are delivered to the test group, while the control group is held out and sees none.
3. Conversions are tracked for both groups over the test period.
4. The difference in conversions between the groups, after scaling for group sizes, is the lift caused by your ads.
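As a sketch of that arithmetic (the group sizes and conversion counts below are purely illustrative assumptions, not results from any real study):

```python
# Minimal sketch of the lift arithmetic behind an RCT-based
# conversion lift test. All numbers are illustrative.

test_users = 1_000_000      # randomly assigned, eligible to see ads
control_users = 1_000_000   # randomly assigned, held out from ads

test_conversions = 12_000     # all conversions observed in the test group
control_conversions = 10_000  # all conversions observed in the control group

# Scale the control group's conversions to the size of the test group
# so the two are comparable even when the split is not 50/50.
scaled_control = control_conversions * (test_users / control_users)

incremental_conversions = test_conversions - scaled_control
lift = incremental_conversions / scaled_control

print(f"Incremental conversions: {incremental_conversions:,.0f}")  # 2,000
print(f"Conversion lift: {lift:.1%}")                              # 20.0%
```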
With an ever-growing roster of social platforms, advertisers have more social media placements to showcase their products and get their message in front of millions of users. As a result, advertisers target ads to the same or partly overlapping audiences across social channels, making it increasingly difficult to attribute conversions accurately.
Attribution models are notoriously difficult to build, and advertisers often struggle to choose the right one. The rule of thumb is to strive for an attribution model that mimics incrementality as closely as possible: a model that attributes only the conversions actually caused by the ad in question. Marketers who rely solely on last-click and other rule-based attribution models, whether provided by the advertising platform or third-party vendors, are bound to credit some conversions mistakenly.
So how does the difficulty of attribution relate to incrementality? The key is that measuring incremental conversions does not depend on an attribution model at all. A Lift Test does not report conversions against a particular click or view-through attribution window; technically, that would be impossible for the control group, since its users see no ads to which conversions could be attributed. Instead, a Lift Test reports all conversions by users who were eligible to see ads under the targeting specs, broken down by their assigned group in the controlled trial.
Hence, incrementality is the single most important metric to follow and to use when comparing performance across social channels, audience segments, and funnel steps.
While Cost per Action metrics attributed to clicks and views may suggest that retargeting your most loyal customers is highly efficient, the story can flip when you compare results in terms of lift in purchases instead of cost per purchase. You can test differences in incrementality across, for example:

- Social channels
- Audience segments
- Funnel steps (e.g., prospecting vs. retargeting)
When marketers have accurate and systematically measured (e.g., once per quarter) incrementality data, they are better equipped to justify budget allocation across user segments and funnel steps. Instead of comparing Cost per Action metrics, they look at Cost per Incremental Conversion (iCPA) to decide where to spend their advertising dollars.
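As a sketch, iCPA divides ad spend by the incremental conversions measured in the lift test rather than by attributed conversions; the spend and conversion figures below are assumptions for illustration only:

```python
# Illustrative sketch of Cost per Incremental Conversion (iCPA)
# versus a click/view-attributed CPA. All numbers are assumptions.

ad_spend = 50_000.0              # total spend during the test, in dollars
attributed_conversions = 5_000   # what a rule-based attribution model reports
incremental_conversions = 2_000  # test minus scaled control, from the lift test

cpa = ad_spend / attributed_conversions
icpa = ad_spend / incremental_conversions

print(f"Attributed CPA: ${cpa:.2f}")   # $10.00
print(f"iCPA:           ${icpa:.2f}")  # $25.00
```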
Funnel Step Lift Tests - example results:

- Lift Test A: Retargeting Campaign
- Lift Test B: Prospecting Campaign
Judging by traditional attributed CPAs, Retargeting could seem to outperform Prospecting. Looking at iCPA, however, the Prospecting campaign turns out to be the more efficient source of incremental conversions.
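To make that reversal concrete, here is a hedged sketch: every number below is invented for illustration and is not taken from the example tests above. Only the pattern matters, since retargeting often gets credit for conversions that would have happened anyway.

```python
# Hypothetical comparison of two campaigns. All figures are invented
# to illustrate how a campaign can win on attributed CPA yet lose
# on iCPA once incrementality is accounted for.

campaigns = {
    # name: (spend, attributed conversions, incremental conversions)
    "Retargeting (Lift Test A)": (20_000.0, 4_000, 400),
    "Prospecting (Lift Test B)": (20_000.0, 1_000, 500),
}

for name, (spend, attributed, incremental) in campaigns.items():
    cpa = spend / attributed
    icpa = spend / incremental
    print(f"{name}: CPA=${cpa:.2f}, iCPA=${icpa:.2f}")

# Retargeting (Lift Test A): CPA=$5.00, iCPA=$50.00
# Prospecting (Lift Test B): CPA=$20.00, iCPA=$40.00
```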
If you are new to Lift Tests and to measuring your advertising incrementality, a recommended first step is to understand how your defined funnel steps compare in terms of incrementality. In practice, you should run separate Lift Tests for each funnel step (e.g., prospecting and retargeting) to get an overall read on how efficiently you are currently spending your advertising dollars.
Smartly.io has the tools you need to plan, execute, and analyze incrementality on Facebook and Instagram with our Lift Test tool.
Plan
Execute
Analyze