What Is Conversion Lift and Why Does It Matter?

Lina Hagström
February 11, 2021

The famous saying by John Wanamaker goes, "Half the money I spend on advertising is wasted; the trouble is, I don't know which half." Wanamaker was most likely referring to the challenge of accurately attributing credit to advertising. Before the online advertising age, measuring the causal effect of billboard or television ads was tedious. Today, we are in a much better position to measure the true impact your ads have on the audience you are trying to convert.

What Is Incrementality and How Is It Measured?

Conversion Lift Tests (also called lift tests or lift studies) help you measure how many of your conversions are caused by your ads. While it is next to impossible to find the cause of any individual conversion (why did you buy a new phone: because you saw an ad, or because a friend recommended it?), we can measure the total effect quite accurately with a randomized controlled trial (RCT). At a high level, lift tests work like this:

  • The audience is split into a treatment group with normal targeting, and a control group that doesn’t see any ads.
  • The advertising platform measures the total number of conversions in both groups.
  • Your ads cause incremental conversions in the treatment group.
Lift Test results (simplified)
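
To make the mechanics concrete, here is a minimal Python simulation of a randomized lift test. The audience size, baseline conversion rate, and ad uplift are illustrative assumptions, not figures from any real campaign.

```python
import random

random.seed(42)

# Illustrative assumptions: a baseline conversion rate and the extra
# conversion probability the ads add on top of it.
BASELINE_RATE = 0.035
AD_UPLIFT = 0.005
AUDIENCE_SIZE = 100_000

treatment_conversions = control_conversions = 0
treatment_size = control_size = 0

for _ in range(AUDIENCE_SIZE):
    # Randomized controlled trial: each user is assigned to a group at random.
    in_treatment = random.random() < 0.5
    rate = BASELINE_RATE + (AD_UPLIFT if in_treatment else 0.0)
    converted = random.random() < rate
    if in_treatment:
        treatment_size += 1
        treatment_conversions += converted
    else:
        control_size += 1
        control_conversions += converted

# Scale the control group's conversions to the treatment group's size
# before subtracting, in case the random split is not exactly 50/50.
expected_baseline = control_conversions * treatment_size / control_size
incremental = treatment_conversions - expected_baseline
print(f"Treatment conversions: {treatment_conversions}")
print(f"Scaled control:        {expected_baseline:.0f}")
print(f"Estimated incremental: {incremental:.0f}")
```

Because the only systematic difference between the two groups is ad exposure, the gap between treatment and scaled control conversions estimates the conversions your ads actually caused.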

Why Is It Important to Measure Incrementality?

With an ever-growing network of social platforms, advertisers have more social media placements to showcase their products and get their message across to millions of users. Consequently, advertisers target ads to the same or partly overlapping audiences across social channels, making it increasingly difficult to attribute conversions accurately.

Attribution models are notoriously difficult to build, and advertisers often face challenges in choosing the correct model. The rule of thumb is to strive for an attribution model that mimics incrementality as closely as possible: a model that attributes only the conversions actually caused by the ad in question. Marketers who rely solely on last-click and other rule-based attribution models provided by advertising platforms or third-party providers inevitably credit some conversions to the wrong source.

So how does the difficulty of attribution relate to incrementality? The key is to understand that measuring incremental conversions is independent of attribution models. In a Lift Test, we are not reporting conversions within a particular click or view-through attribution window; technically, that would be impossible for the control group, since they see no ads to which conversions could be attributed. Instead, a Lift Test reports all conversions from users who were eligible to see ads according to the targeting specs, broken down by the group they were assigned to in the controlled trial.

Hence, incrementality is the single most important metric to follow, and it can be used to compare performance between social channels, audience segments, and funnel steps.

How to Decide What to Test?

While Cost per Action metrics attributed to clicks and views may show that retargeting your most loyal customers is highly efficient, the story may be the opposite if you compare results in terms of lift in purchases instead of cost per purchase. You can test differences in incrementality across, for example:

  • Social channels (e.g. Facebook, Instagram, Snapchat, Twitter, Pinterest, Google Search)
  • Funnel steps (e.g. prospecting, mid-funnel, and high intent retargeting)
  • Audience segments and demographics (e.g. custom audiences, age groups, geolocations)
  • Creatives (e.g. high-production quality video, click-attractive link ad, top selling products carousel ad)
  • Optimization goals (e.g. LTV data, intent-to-buy, purchases)
  • Product sets (e.g. top sellers, new products)

When marketers have accurate and systematically measured (e.g. once per quarter) incrementality data, they are better equipped to justify budget allocation between user segments and funnel steps. Instead of comparing Cost per Action metrics, marketers look at Cost per Incremental Conversion (iCPA) to decide where to spend their advertising dollars.

Funnel Step Lift Tests - example results:

Lift Test A: Retargeting Campaign

  • Spend: $2,000
  • (All) Conversions in treatment: 400
  • Conversions in control: 350
  • Incremental conversions: 50
  • iCPA: $2000 / 50 = $40.00
  • Attributed CPA: $2000 / 374 = $5.35
  • Conversions in reached treatment group: 380
  • Lift %: 50 / (380-50) = 15.2%

Lift Test B: Prospecting Campaign

  • Spend: $2,000
  • (All) Conversions in treatment: 350
  • Conversions in control: 280
  • Incremental conversions: 70
  • iCPA: $2000 / 70 = $28.57
  • Attributed CPA: $2000 / 291 = $6.87
  • Conversions in reached treatment group: 300
  • Lift %: 70 / (300-70) = 30.4%

Lift Test comparison

By looking at traditional attributed CPAs, it could seem that Retargeting is outperforming Prospecting. However, looking at iCPA, the Prospecting campaign turns out to be more efficient at driving incremental conversions.
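
The arithmetic behind these figures is simple to reproduce. The helper below is an illustrative sketch (not part of any ad platform's API) that recomputes the incremental conversions, iCPA, and lift % for both example tests.

```python
def lift_metrics(spend, treatment_conversions, control_conversions,
                 reached_treatment_conversions):
    """Incremental conversions, iCPA, and lift % for one lift test cell."""
    incremental = treatment_conversions - control_conversions
    icpa = spend / incremental
    # Lift % compares incremental conversions to the estimated baseline
    # among users who were actually reached by the ads.
    baseline_in_reached = reached_treatment_conversions - incremental
    lift_pct = incremental / baseline_in_reached * 100
    return incremental, icpa, lift_pct

print(lift_metrics(2000, 400, 350, 380))  # Lift Test A: (50, 40.0, ~15.2)
print(lift_metrics(2000, 350, 280, 300))  # Lift Test B: (70, ~28.57, ~30.4)
```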

If you are new to Lift Tests and measuring your advertising incrementality, a recommended first step is to understand how your defined funnel steps compare in terms of incrementality. In practice, you should conduct separate Lift Tests for your funnel steps (e.g., prospecting and retargeting) to gather an overall read on how efficiently you are currently spending your advertising dollars.

Plan, Execute, and Analyze Incrementality, Smartly

Smartly.io has the tools you need to plan, execute, and analyze incrementality on Facebook and Instagram with our Lift Test tool.

Plan

  • Our power analysis tool helps you plan the lift study: how many conversions you should plan to collect and, correspondingly, how long you should run the test
  • Our platform makes it simple to create single and multi-cell studies, even across ad accounts!
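
As a rough illustration of what a power analysis like this involves (a generic sketch using statsmodels, not Smartly.io's actual tool), you can estimate how many users each group needs in order to detect an assumed lift at a given significance level and statistical power:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative planning assumptions.
baseline_rate = 0.030            # expected conversion rate without ads
expected_relative_lift = 0.15    # the smallest lift worth detecting (15%)
treatment_rate = baseline_rate * (1 + expected_relative_lift)

# Translate the two conversion rates into a standardized effect size.
effect_size = proportion_effectsize(treatment_rate, baseline_rate)

# Required users per group for 80% power at a 5% significance level.
users_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.8,
    ratio=1.0,
    alternative="two-sided",
)
print(f"About {users_per_group:,.0f} users needed per group")
```

Multiplying the required group size by the expected conversion rate gives roughly how many conversions you should plan to collect; dividing that by your typical daily conversion volume gives a rough test duration.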

Execute

  • Our UI gives you constant feedback on whether a given objective has accumulated enough information, so you know whether you can already stop the test or should keep it running to reach significant results
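
A simplified stand-in for such a check is a two-proportion z-test on the conversions accumulated so far. The sketch below uses generic statsmodels calls and made-up interim numbers; it is not Smartly.io's internal stopping logic.

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up interim numbers from a running test.
treatment_conversions, treatment_users = 1_240, 40_000
control_conversions, control_users = 1_100, 40_000

stat, p_value = proportions_ztest(
    count=[treatment_conversions, control_conversions],
    nobs=[treatment_users, control_users],
    alternative="larger",  # is the treatment conversion rate higher?
)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Lift is significant at the 5% level.")
else:
    print("Not yet significant - keep the test running.")
```

Note that repeatedly re-running a naive check like this during a test inflates the false-positive rate, which is one reason to rely on tooling designed for monitoring results over time.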

Analyze

  • Our UI makes it easy to evaluate each hypothesis you might have, whether that is evaluating lift against a control group, comparing lifts between cells of a multi-cell test, or comparing differences in lift between audience demographics
  • You can also go back in time to check what the results were during the test. If something unexpected happens during the test, such as running out of inventory, you can still recover results from the time before it happened
  • For the most advanced use cases, we offer result exports for any objectives or breakdowns.
