Marketing
February 27, 2025

Picking Your First Incrementality Test

Greg Dale
Before Eppo, Greg was the CEO of Tech for Campaigns, where he led large consumer advertising campaigns and experimentation programs.

Your first marketing incrementality test can feel like stepping into a void. Big budgets are at stake, there's politics to navigate, opportunity costs to weigh, and a lot of unfamiliar jargon. It might be tempting to gravitate toward low-stakes channels, but that approach often wastes precious testing bandwidth and doesn't deliver the impact that can get a robust testing program off the ground. The smarter play is prioritizing tests that answer critical business questions, maximize learning opportunities, and sidestep common traps that could dilute your findings.

Let's break it down into the key factors that should guide how you prioritize your first incrementality test:

First, Look at Your Big Decisions

What’s the next major marketing decision on your plate? Maybe you’re asking yourself:

  • Should we double our TikTok budget?
  • Are these TV ads actually worth the spend?
  • Can we justify the amount we're putting into branded search?

Start here. A test that directly informs your next million-dollar decision will always be more valuable than one that merely satisfies curiosity. If your test results have the potential to shape tangible actions, that’s a sign you’re on the right track.

Then, Match Channel Impact to Statistical Power

When designing your test, it’s essential to account for statistical power, which refers to the likelihood of detecting a true effect if one exists. Tests with more pronounced expected effects are more likely to yield detectable results. Testing a small budget on a rarely-used channel might feel "safe," but you risk inconclusive results that could start your program off on the wrong foot. Aim for tests where you’re confident the data will support actionable conclusions.
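To make this concrete, here's a minimal power-calculation sketch in Python using statsmodels; the baseline conversion rate, expected lift, and thresholds are assumed placeholders you'd swap for your own channel's numbers:

```python
# Minimal power check for a two-cell incrementality test (illustrative only).
# The baseline rate and expected lift below are assumptions, not benchmarks.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.030    # assumed conversion rate without the channel
expected_lift = 0.10     # assumed 10% relative lift if the channel works
treated_rate = baseline_rate * (1 + expected_lift)

# Convert the two conversion rates into a standardized effect size
effect_size = proportion_effectsize(treated_rate, baseline_rate)

# Solve for the sample size per group needed to detect that lift
# with 80% power at a 5% significance level (two-sided)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    ratio=1.0,
    alternative="two-sided",
)
print(f"Users needed per group: {n_per_group:,.0f}")
```

If the required sample per group is more than the channel can realistically reach during your test window, that's a sign the test is underpowered and you should pick a bigger expected effect, a bigger audience, or a different channel.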

Look at Where You’re Spending Big Money

Big budgets need the most attention. But here's the thing: it's not just about how much you're spending, but how well each extra dollar works.

Think of it like watering a plant. The first bit of water helps a lot, but at some point more water stops helping and starts flowing over the side of the pot. When a new Meta campaign hits escape velocity on optimization, you want to keep it going. Still, Meta's auction shows your ad first to the most qualified users, so each additional dollar reaches progressively more speculative audiences. Eventually, you hit a point where spending more doesn't help much.

Rather than raising budgets and finding out whether it was worth it only after significant funds have been spent, you can run a scale-up or saturation-curve test to evaluate whether higher spend levels are worth it before committing.
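As a rough illustration of what a saturation-curve readout could look like, here's a sketch that fits a simple diminishing-returns curve to spend levels and measured incremental conversions; the data points and the Hill-style functional form are assumptions for illustration, not a prescribed methodology:

```python
# Illustrative saturation-curve fit for a scale-up test.
# Spend levels and incremental conversions below are made-up example data.
import numpy as np
from scipy.optimize import curve_fit

spend = np.array([10, 20, 40, 80, 160], dtype=float)  # weekly spend, $k
incremental_conversions = np.array([120, 210, 330, 410, 450], dtype=float)

def saturation(x, cap, half_sat):
    """Hill-type curve: response flattens as spend approaches the ceiling."""
    return cap * x / (half_sat + x)

(cap, half_sat), _ = curve_fit(saturation, spend, incremental_conversions, p0=[500, 30])

def marginal(x, delta=10.0):
    """Approximate incremental conversions per extra $1k around spend level x."""
    return (saturation(x + delta, cap, half_sat) - saturation(x, cap, half_sat)) / delta

print(f"Estimated ceiling: {cap:.0f} conversions, half-saturation at ${half_sat:.0f}k")
print(f"Conversions per extra $1k at $160k spend: {marginal(160):.2f}")
print(f"Conversions per extra $1k at $320k spend: {marginal(320):.2f}")
```

If the marginal return at the proposed higher budget is already close to zero, the curve is telling you the extra spend is unlikely to be worth it.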

Account for Test Duration

Some tests are inherently quicker than others. Testing a performance marketing tactic like branded search might yield results within a week, but an offline brand awareness campaign may take considerably longer to show its impact. When selecting your test, don’t just think about the resources it will consume; factor in the test's required time. Will you have actionable results before your next budgeting cycle? For high-priority decisions, you may need to prioritize faster-moving tests that deliver insights within your timeline.
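As a back-of-the-envelope duration check, you can translate the sample size from a power analysis into calendar weeks given the channel's weekly reach; the numbers below are placeholders, not guidance:

```python
# Rough duration estimate: how many weeks until both cells hit their quota?
# n_per_group and weekly reach are assumed placeholder values.
import math

n_per_group = 27_000            # e.g., output of a power calculation like the one above
weekly_eligible_users = 15_000  # assumed users the channel can reach per week

weeks_needed = math.ceil(2 * n_per_group / weekly_eligible_users)
print(f"Roughly {weeks_needed} weeks to fill both test and control cells")
```

If that figure lands after your next budgeting cycle, the test may answer the question too late to matter.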

Check When You Last Tested

If you haven't tested a channel in six months (or ever), it probably needs a fresh look. Competitors are always trying new things, consumers get ad fatigue, and time spent shifts from platform to platform.

Seasonality is another important factor to manage. If your last test ran during a peak period, you might want a non-peak test, or vice versa. If your seasonality is gradual, you may want to run a quarterly test for your primary channels.

Evaluate Potential ROI

Not all tests are created equal in terms of the value of their findings. Ask yourself: What’s the potential return on investment for this test? A test that uncovers inefficiencies in a major channel or validates a planned spend increase is far more impactful than one that optimizes a small, low-budget campaign. Prioritize tests that hold the potential to deliver insights with significant financial or strategic payoffs.

Look for When Your Data Disagrees With Itself

Red flags pop up when:

  • Different tracking systems show different results
  • Meta says it’s crushing it, but sales aren't up
  • Your brand surveys or ‘How Did You Hear About Us?’ responses tell a different story than click attribution
  • Your MMM is uncertain about a channel’s contribution

When your data tells different stories, that's usually a sign you need to test.

Be Mindful of Cross-Channel Effects

Incrementality tests rarely happen in a vacuum. Some channels, notably within Google, influence others. If you run YouTube or Search campaigns alongside Performance Max or Shopping and you change the YouTube or Search budgets, the “black box” Performance Max or Shopping campaigns might start buying that same inventory. Even though you think you've turned a channel off, it may still effectively be running, which will contaminate the test.

Don’t Forget to Align with Business Objectives

A great incrementality test doesn’t just give you interesting data—it should directly align with your company’s broader goals. Whether you’re trying to improve customer acquisition efficiency, validate a new advertising channel, or justify continued investment in an existing one, tying your test results to business objectives strengthens stakeholder buy-in. It ensures the outcomes lead to meaningful decisions.

Start Simple

If this is your very first incrementality test, resist the urge to chase perfection. Start with a simpler design that’s easier to execute and interpret. Even basic tests can yield significant insights while building organizational confidence in your testing processes. Mastering the basics first will open the door to more complex and nuanced tests down the road.

Consider Seasonality and External Factors

You’ve already accounted for seasonality, but it’s also worth considering broader external factors. Weather, competitor activities, or even global events (think supply chain issues, a viral TikTok trend, or economic shifts) can skew test performance. Accounting for these external variables increases the reliability of your conclusions.

Channel Particulars

A test that looks perfect on paper isn't perfect if you can't run it. There are channel-level considerations:

  • Channel targeting: Some channels do not offer self-serve geographic targeting at the preferred DMA level in the US and may offer only state or country-level targeting.
  • Offline media takes planning: Channels like TV and Out of Home require market-level reach planning, have longer buying cycles, and are more set in stone once launched.
  • Availability of outcome data: If your point of sale system doesn’t match the regional targeting of your channel, it’ll be tough to tie them together.

Putting It All Together

Here’s a real-world example. Imagine your Meta prospecting campaigns:

  • Consume 40% of your marketing budget.
  • Haven’t been tested in 8 months.
  • Show conflicting data across multiple metrics.
  • Require a decision on increasing budgets next quarter.

Meanwhile, your Google Shopping ads are new, high-performing, and relatively low-stakes. Even though both are "testable," it makes more sense to start with Meta. The stakes are higher, the discrepancies more pronounced, and the outcomes more urgent. This should rise to the top of your list.

What to Do Next

When choosing your first test, consider these factors (a rough scoring sketch follows the list):

  • What major decisions it supports.
  • How much money it involves.
  • Statistical power and sample size.
  • How fresh its data is.
  • Whether your data disagrees with itself.
  • Whether you can run a clean, interpretable experiment.
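If it helps to make the trade-offs explicit, here's a purely illustrative scoring sketch that turns this checklist into a rough ranking; the factor names, equal weights, and 1-5 scores below are assumptions that mirror the Meta vs. Google Shopping example above, not a formal framework:

```python
# Rough prioritization scoring for candidate incrementality tests (illustrative).
# All scores are subjective 1-5 ratings you would assign yourself.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    decision_impact: int     # supports a major upcoming decision?
    budget_share: int        # how much spend it represents
    statistical_power: int   # expected ability to detect the effect
    data_staleness: int      # how long since it was last tested
    data_conflict: int       # how much your data sources disagree
    feasibility: int         # can you run a clean, interpretable experiment?

    def score(self) -> float:
        # Equal weights here; reweight to match your own priorities.
        factors = [self.decision_impact, self.budget_share, self.statistical_power,
                   self.data_staleness, self.data_conflict, self.feasibility]
        return sum(factors) / len(factors)

candidates = [
    Candidate("Meta prospecting", 5, 5, 4, 4, 5, 4),
    Candidate("Google Shopping", 2, 2, 3, 1, 1, 5),
]

for c in sorted(candidates, key=lambda c: c.score(), reverse=True):
    print(f"{c.name}: {c.score():.1f}")
```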

Remember, you don’t need a perfect test. You just need one that helps you spend smarter and improve confidence in your investments. Start simple, stay focused on what matters, and you’ll be well-positioned to make better marketing decisions from the get-go.

If you’d like guidance picking the right test or setting it up, we’ve helped countless companies refine their approach. Reach out, and we’ll help you get started.
