Breaking down incrementality: What mobile marketers need to know

Wednesday, June 26, 2019
By: Jeremy Levitan & Caitlin McGovern

Each quarter, MoPub’s Marketer Program hosts an exclusive Marketer Thought Exchange event that brings together mobile app marketers to share and discuss insights, challenges, and learnings with peers. At our June event in San Francisco, Jeremy Levitan, Director of Ad Innovations at Twitter, joined us to share his perspective on the hot topic of incrementality. Jeremy is a true expert in the field, with a background including running performance marketing agency Acquisition Labs, a CEO role at Click Harmonics, and a PhD and two post-docs from MIT — and we’re excited to share takeaways from his talk below. (Interested in attending a future Marketer Thought Exchange? Get in touch with our team!)

Put enough mobile app marketers in one room, and eventually the conversation is bound to turn to one of adtech’s latest buzzwords: incrementality. Everyone’s talking about it. No one’s really mastered it. So what is it, why is it important, and how can mobile marketers use it to their advantage?

What is incrementality testing?

At its simplest level, incrementality testing is an approach to measuring the impact of advertising by looking at a treatment group (those who saw the ad campaign) vs. a control or holdback group (those who did not) in order to determine relative lift, or incrementality. To achieve this, app marketers typically take a user or device ID pool and split it into treatment and control groups, with the goal of measuring a specific outcome: did the campaign result in any sort of measurable lift? This could be lift in conversion rates, specific in-app actions, revenue, return on ad spend (ROAS), retention, or another measurable metric. 
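To make that concrete, here is a minimal sketch (in Python, with made-up numbers) of how relative lift can be computed once you have conversion counts for each group:

```python
def relative_lift(treat_conv, treat_n, control_conv, control_n):
    """Relative lift in conversion rate: (treatment - control) / control."""
    treat_rate = treat_conv / treat_n
    control_rate = control_conv / control_n
    return (treat_rate - control_rate) / control_rate

# e.g. a 2.3% conversion rate in treatment vs. 2.0% in control
print(relative_lift(2_300, 100_000, 2_000, 100_000))  # ~0.15, i.e. +15% lift
```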


Why has incrementality become a hot topic?

Acquisition paths have become increasingly complicated. Fifteen years ago, there may have been a single click or touchpoint on the path to conversion. Today, marketers often interact with a user through more than 10 touchpoints, across multiple channels and platforms, before the eventual conversion.

We know that neither a “first touch” nor a “last touch” model is the answer to attribution accuracy; we also know that not all campaigns or advertising channels are effective in driving positive ROAS. Fractional attribution, which weights different interactions differently (for example, 70% first touch, 30% last touch), is a step in the right direction. But how do we measure the value of a single touchpoint on the path to purchase? That’s incrementality.
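For context, here is a small Python sketch of what the 70/30 fractional-attribution weighting mentioned above might look like; the helper and channel names are hypothetical:

```python
def fractional_credit(touchpoints, first_weight=0.7, last_weight=0.3):
    """Assign fractional credit along a conversion path (hypothetical 70/30 split)."""
    credit = [0.0] * len(touchpoints)
    credit[0] += first_weight    # first touch
    credit[-1] += last_weight    # last touch
    return list(zip(touchpoints, credit))

print(fractional_credit(["paid_social", "video_ad", "search_ad"]))
# [('paid_social', 0.7), ('video_ad', 0.0), ('search_ad', 0.3)]
```

A weighting like this redistributes credit across touchpoints, but it still can't tell you whether any single touchpoint caused conversions that wouldn't have happened anyway.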

Let’s get into the details: Designing your incrementality test.

Marketers looking to do incrementality testing first need to think about four key experimental design considerations. 

  1. Is my baseline conversion rate high enough to measure lift, given the media weight in market? If your sample size or budget is too small, the test may not produce a reliable read (a rough sizing sketch follows this list).

  2. Is there a dark period prior to the test? If you’re running campaigns up until the day before your testing period starts, these may have a lingering impact on users that taints your results.

  3. What’s the homogeneity of experiences for the treatment vs. the control groups? If the control group isn’t seeing the ad for your campaign, are you tracking the impressions that are serving to them instead? Are frequency and media weight the same for both groups, even though they’re seeing different things?

  4. Will you measure pre-auction or post-auction? Pre-auction holdbacks are easier to implement, but post-auction holdbacks can give you a better measure of true incrementality, though they take more coordination with your platforms.
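On the first consideration, a rough power calculation can tell you whether your audience size and media weight are even in the right ballpark. Here is a minimal Python sketch using the standard two-proportion sample-size formula (1.96 and 0.84 correspond to a two-sided 5% significance level and 80% power); the inputs are hypothetical:

```python
import math

def sample_size_per_group(baseline_rate, expected_relative_lift,
                          z_alpha=1.96, z_beta=0.84):
    """Rough per-group sample size needed to detect a relative lift
    in conversion rate (two-sided alpha = 0.05, 80% power by default)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# e.g. a 2% baseline conversion rate and a hoped-for 10% relative lift
print(sample_size_per_group(0.02, 0.10))  # roughly 80,600 users per group
```

If the required sample is far larger than the audience your budget can actually reach, the test is unlikely to produce a readable result.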

Measuring pre-auction: easier to implement.

Split the user or device ID pool into control and treatment groups before the auction. The control group sees no ads; the treatment group sees ads via the bidder.
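A common way to implement this split is to hash each user or device ID into a bucket before any bid request goes out. A minimal Python sketch, assuming a hypothetical 10% holdback:

```python
import hashlib

HOLDBACK_PCT = 10  # hypothetical: hold back 10% of devices as the control group

def assign_group(device_id: str) -> str:
    """Deterministically bucket a device ID before the auction."""
    bucket = int(hashlib.sha256(device_id.encode("utf-8")).hexdigest(), 16) % 100
    return "control" if bucket < HOLDBACK_PCT else "treatment"

# Control devices are simply excluded from bidding; treatment devices proceed as usual.
print(assign_group("38400000-8cf0-11bd-b23e-10b96e40000d"))
```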


Measuring post-auction: tougher, but more accurate.

In this case, your user pool isn’t split into treatment and control groups until after the auction. Both groups go through the bidder; the control group is shown a different ad (not for the campaign that you’re testing). It’s generally best for this control group to receive a “benign” ad, like a PSA.
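Sketched in the same style, and reusing the assign_group() helper from the pre-auction example, the decision now happens only after the bidder has already won the impression. The serving and logging helpers here are hypothetical stand-ins for whatever your ad-serving stack provides:

```python
def log_impression(device_id: str, group: str, creative: str) -> None:
    # Placeholder: in practice this feeds your measurement pipeline / MMP.
    print(f"{device_id},{group},{creative}")

def serve_after_auction_win(device_id: str, campaign_creative: str, psa_creative: str) -> str:
    """After the bidder wins the impression, decide which creative to show."""
    group = assign_group(device_id)             # same deterministic split as above
    creative = psa_creative if group == "control" else campaign_creative
    log_impression(device_id, group, creative)  # impressions are tracked for BOTH groups
    return creative
```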


Because the experience for both groups is more homogeneous, post-auction holdbacks can give you a more accurate assessment of whether or not you’re truly getting an incremental outcome from every new dollar spent in market. 
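Whichever design you choose, you'll also want to check that the lift you measure is distinguishable from noise. One simple (and simplified) way to do that is a two-proportion z-test; a minimal Python sketch with hypothetical numbers:

```python
import math

def lift_z_test(treat_conv, treat_n, control_conv, control_n):
    """Two-proportion z-test on conversion rates (treatment vs. control)."""
    p_t, p_c = treat_conv / treat_n, control_conv / control_n
    p_pool = (treat_conv + control_conv) / (treat_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / control_n))
    z = (p_t - p_c) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = lift_z_test(2_300, 100_000, 2_000, 100_000)
print(f"z = {z:.2f}, p-value = {p:.2g}")  # a small p-value suggests the lift is real
```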

The goal of incrementality testing

When done right, incrementality testing doesn’t just tell you that a certain campaign or channel is providing X% lift for your KPIs. It maps out a response curve of lift against media weight that can help guide your media spend.

At the low end of the curve, below a certain amount of media weight, there’s no lift for your KPIs. This could mean the number of impressions per user is too low, you’re not driving the right message, there’s no brand affinity, and so on.

At some critical amount of media weight, lift starts to appear. The hyperlinear region is where marketers want to be: in this sweet spot, every additional dollar you put into market drives an outsized gain in lift.

Eventually, though, your campaign saturates the market and moves into the sublinear region, where lift starts to decay. At this stage you may be inundating users, which can lead to negative brand sentiment.
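To make the curve concrete, here is a hypothetical Python sketch that fits an S-shaped (Hill-style) response curve to lift measured at several media-weight levels; the data points are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(w, max_lift, k, n):
    """S-shaped lift response: slow start, hyperlinear middle, saturation once w >> k."""
    return max_lift * w**n / (k**n + w**n)

# Hypothetical lift measurements at increasing levels of media weight
media_weight  = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
observed_lift = np.array([0.004, 0.012, 0.045, 0.11, 0.21, 0.27, 0.29])

(max_lift, k, n), _ = curve_fit(hill, media_weight, observed_lift, p0=(0.3, 10.0, 2.0))
print(f"estimated max lift ~ {max_lift:.2f}, half-saturation media weight ~ {k:.1f}")
```

Spending well past the half-saturation point buys less and less incremental lift per dollar, which is exactly the saturation described above.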

A common question: if you’re seeing zero lift as you start testing, how do you know whether it’s due to insufficient media weight or whether the channel simply isn’t incremental? Unfortunately, there’s no easy answer; you have to spend into the channel and vary media weight in market to find out, sampling at least a couple of points along the curve. Yes, there are times when it turns out the channel or campaign just isn’t incremental and the curve stays flat. That, too, is exactly the kind of knowledge marketers need to make smart ad spend decisions (which in some cases means turning off a channel or partner).

Takeaways

Incrementality testing is complicated. The methodology isn’t easy, and it requires a commitment of both time and ad spend to reach a sufficient sample size and run an accurate test. In the end, though, marketers need to be able to measure lift on their ad campaigns and channels; otherwise, they’re spending blindly. Take the time to investigate incrementality testing for your campaigns; the learnings can be invaluable to your business.

Are you a mobile marketer looking for insights from programmatic experts and opportunities to connect with your peers in the marketer community? Get in touch with MoPub’s Marketer Program team today to join us at our next Thought Exchange. 
