
Incrementality Testing Guide: Measure True Marketing Impact

The CFO's question made my stomach drop.

"If we turned off all marketing tomorrow, how many of these users would still come to us organically?"

I had a beautiful attribution dashboard. Last-click, multi-touch, view-through: I could show her any model she wanted. What I couldn't tell her was whether any of our marketing was actually working, or if we were just spending money to claim credit for conversions that would have happened anyway.

"I don't know," I admitted. "The attribution tells me who touched what. It doesn't tell me what actually mattered."

That conversation led me down a rabbit hole that fundamentally changed how I think about marketing measurement. It turns out, the difference between what we attribute and what we actually cause is enormous, and most companies never bother to find out.

The Uncomfortable Truth About Attribution

Incrementality measures the causal impact of your marketingโ€”the conversions that would NOT have happened without your advertising. It separates true lift from conversions that would have occurred organically whether you advertised or not.

Why Attribution Was Lying to Us

After we started running incrementality tests, we discovered our attribution was systematically misleading us:

📊 The Number That Shocked Us

Studies show that 20-60% of attributed conversions would have happened anyway without the ad exposure. When we ran our first incrementality test on our "best performing" retargeting campaign, we found our true incremental rate was 23%. We were paying for 77% of conversions we would have gotten for free.

The Testing Methods That Work

Randomized Controlled Tests: The Gold Standard

The most rigorous method: randomly assign users to test groups (see ads) or control groups (don't see ads), then measure the difference.

How We Run Them

  1. Define your test hypothesis and success metrics before you start
  2. Calculate required sample size for statistical significance (usually thousands)
  3. Randomly split audience into test and control; true randomization is critical
  4. Run ads to test group only, while control sees nothing
  5. Measure conversion difference between groups
  6. Calculate lift and confirm statistical significance
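The arithmetic behind steps 5 and 6 can be sketched in a few lines. The conversion counts below are hypothetical, and the significance check is a standard two-proportion z-test built from the Python standard library:

```python
import math

def incrementality_lift(test_conv, test_n, ctrl_conv, ctrl_n):
    """Relative lift, incremental conversions, and a two-proportion
    z-test p-value. Assumes a clean randomized split with no
    contamination between groups; all numbers are illustrative."""
    p_test = test_conv / test_n
    p_ctrl = ctrl_conv / ctrl_n
    lift = (p_test - p_ctrl) / p_ctrl          # relative lift over control
    incremental = (p_test - p_ctrl) * test_n   # conversions the ads caused

    # Pooled two-proportion z-test for statistical significance
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    z = (p_test - p_ctrl) / se
    # Two-sided p-value via the normal CDF (math.erf is stdlib)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, incremental, p_value

# 1.2% conversion in test vs. 1.0% in control, 100k users per group
lift, inc, p = incrementality_lift(1200, 100_000, 1000, 100_000)
print(f"lift={lift:.1%} incremental={inc:.0f} p={p:.5f}")
```

At these (hypothetical) volumes, a 20% relative lift is comfortably significant; halve the sample and the same lift may not be.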

Ghost Bids: The Clever Workaround

For auction-based environments where you can't easily withhold ads, you can participate in auctions but not actually show ads to the control group.

This eliminates selection bias because both groups were "eligible" for the adโ€”only one actually saw it.
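A ghost-bid setup can be sketched like this: every user is bid on identically, but a winning bid for a control-group user is logged as a "ghost impression" instead of being served. The function name, the clearing price, and the event labels here are all hypothetical stand-ins for the ad platform's real auction:

```python
import random

def run_auction(user_id, group, bid, clearing_price=1.50):
    """Hypothetical auction: both groups submit identical bids, but a
    winning bid for the control group is logged rather than served."""
    if bid <= clearing_price:
        return None  # lost the auction: neither group would have seen the ad
    if group == "control":
        # Would have won and shown the ad; record a ghost impression instead
        return {"user": user_id, "event": "ghost_impression"}
    return {"user": user_id, "event": "impression"}

# Randomize users, bid identically for both groups, keep the win logs.
# Later, compare conversion rates of "impression" users vs. "ghost_impression"
# users: both sets won the auction, so the comparison has no selection bias.
logs = []
for uid in range(1000):
    group = "test" if random.random() < 0.5 else "control"
    result = run_auction(uid, group, bid=2.00)
    if result is not None:
        logs.append(result)
```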

Geo Experiments: When User-Level Doesn't Work

Sometimes you can't randomize at the user level. Geographic experiments use regions as test and control groups.
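A simple way to read a geo test is difference-in-differences: compare each group's change from the pre-period, so regional baselines cancel out. The figures below are hypothetical and assume the matched regions would have trended alike without the campaign:

```python
def geo_lift(test_geo, ctrl_geo):
    """Difference-in-differences on region-level conversion totals.
    Inputs are hypothetical before/during counts for matched regions."""
    test_delta = test_geo["during"] - test_geo["before"]  # change where ads ran
    ctrl_delta = ctrl_geo["during"] - ctrl_geo["before"]  # organic change
    return test_delta - ctrl_delta                        # incremental conversions

incremental = geo_lift(
    {"before": 1000, "during": 1300},  # ads on in these regions
    {"before": 980, "during": 1080},   # ads withheld here
)
print(incremental)  # 300 - 100 = 200
```

The whole design rests on the matched-trends assumption, which is why picking comparable regions matters more than the arithmetic.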

The Key Requirements

"The best incrementality test is one you can actually act on. A statistically perfect result that's too expensive to implement helps nobody. Design for actionable insights, not academic purity."

How to Design Tests That Actually Work

Start With a Real Hypothesis

Not "does marketing work?" but something specific and testable, for example: "Pausing retargeting for a randomized 10% holdout will reduce that group's 30-day conversion rate by a measurable amount."

The Sample Size Problem

Most tests fail because they don't run long enough to achieve statistical significance.
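To make the sample-size requirement concrete, here is the standard two-proportion power calculation. The 2% baseline conversion rate and 10% target lift are hypothetical inputs, not figures from our tests:

```python
import math

def sample_size_per_group(base_rate, min_lift, ):
    """Approximate users needed per group to detect a relative lift,
    using the standard two-proportion formula. Z-values are hard-coded
    for the common case: alpha = 0.05 two-sided, 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)  # rate we hope to see in the test group
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a 10% relative lift on a 2% baseline needs ~80k users per group
print(sample_size_per_group(0.02, 0.10))
```

Small expected lifts on small baseline rates drive the denominator toward zero, which is why honest incrementality tests need far more traffic than most teams budget for.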

Duration Matters

Reading Results Without Lying to Yourself

The Metrics That Matter

The Significance Trap

The Pitfalls That Ruined Our First Tests

Contamination

When your control group gets exposed to the treatment anyway, your test is worthless. A holdout user who sees the ad on a second device, or through an overlapping retargeting audience, quietly contaminates the comparison.

Selection Bias

When your test and control groups aren't truly comparable from the start. The classic version is comparing users who happened to see an ad against users who didn't: the auction chose whom to reach, so the groups were never a random split.


What We Did When the Results Hurt

Our first real incrementality test showed that our "best" channel had a 23% incremental rate. The attribution dashboard said ROAS was 400%. True incremental ROAS was 92%. We were losing money on every dollar spent.
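The numbers in that test reduce to one line of arithmetic: true incremental ROAS is attributed ROAS scaled by the incremental rate.

```python
def incremental_roas(attributed_roas, incremental_rate):
    """True return per ad dollar: attributed ROAS times the share of
    attributed conversions the ads actually caused."""
    return attributed_roas * incremental_rate

# 400% attributed ROAS at a 23% incremental rate: about $0.92 back per $1 spent
true_roas = incremental_roas(4.00, 0.23)
```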

When Lift Is Lower Than Expected

Building an Incrementality Culture

One test isn't enough. You need ongoing measurement.

Incrementality testing requires investment. It requires patience. It often delivers uncomfortable truths. But it's the only way to know whether your marketing is actually working, or whether you're just paying to take credit for conversions that were going to happen anyway.

The CFO never asked me that question again. Now I have real answers.