Marine Morales

Popularizing Data. Empowering Analysts. Elevating Insights.

Menu
  • Home
  • Data Analytics
  • Data Storytelling
  • Fostering Success
  • Creative Corner
  • About

Anchoring Incremental Experiments in the Marketing Calendar

Posted on June 18, 2025 (updated December 1, 2025) by Marine Morales

As a marketing analyst, I spend half my week answering the same three questions: “What’s working?”, “Should we put more budget into it?”, and “Can you prove it?”. Last month, I explained that to really answer digital marketers’ performance questions, Return on Advertising Spend (ROAS) isn’t the right guide: instead of reporting on attributed revenue, I should be reporting on incremental revenue. But I also know that one-off incrementality tests don’t change how budgets are set. If incrementality isn’t wired into the marketing calendar, it becomes a side project: ad hoc lift tests or half-baked holdouts with no real bearing on next quarter’s spend. The real unlock is not just choosing the right method; it’s knowing exactly when in the year to trigger which method, so that experiments actually drive planning, channel strategy and executive decisions.

Think of your year as a rhythm: annual portfolio planning [1], quarterly strategy [2], pre-launch setup [3], launch [4], in-flight optimisation [5], then business-as-usual [6]. You should not wait to be asked for a test; you should attach the right causal design to each of those moments by default. This way, CMOs can take your causal evidence into planning and into the boardroom. If you don’t anchor incrementality in this marketing calendar, you end up with sporadic tests, underpowered designs, and “nice decks” that never move the budget. Hence, the goal is simple: when the calendar moves, an incrementality method fires.
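To make that rule concrete, here is a minimal sketch of the calendar-to-method mapping as a Python lookup table. The moment keys and default methods simply encode the six moments discussed in this post; they are illustrative, not exhaustive.

# Minimal sketch: each calendar moment fires a default set of causal designs.
DEFAULT_METHODS = {
    "annual_portfolio_planning": ["MMM refresh", "consolidation of past incrementality tests"],
    "quarterly_strategy":        ["test backlog: lift test, A/B, holdout, geo, synthetic control"],
    "pre_launch_setup":          ["randomisation, holdout exclusions, geo assignment"],
    "launch":                    ["smoke test: balance and exclusion checks only"],
    "in_flight_optimisation":    ["lift / A/B reads, holdout divergence, geo trends, DiD"],
    "business_as_usual":         ["standing holdouts, periodic lift re-tests, CRO, uplift models, DiD"],
}

def methods_for(moment: str) -> list[str]:
    """Return the default causal designs to trigger at a given calendar moment."""
    return DEFAULT_METHODS.get(moment, [])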

1. Retro and Portfolio Planning – Annually

At year-end, you are not just closing the books, you are deciding what lives and dies next year. You consolidate learnings and feed them back into planning in order to split next year’s budget and set high-level targets for revenue, CAC and profit across your channels: search, shopping, PMax, App, YouTube, Meta, TikTok, affiliates, podcasts, CRM, offline …

This is where you close the annual testing loop. Use Marketing Mix Modeling (MMM) to set the baseline ROI by channel at a macro level. You review how each channel and program contributed over the past one to two years, including seasonality, pricing, promotions and macro shocks. But MMM is not the oracle; it is one input. You also need the historical results from every incrementality test to challenge your MMM and refine expectations.

Your resulting evidence can now be summarised in a playbook of “things that are empirically incremental” vs “things that just look good in-platform”. Also provide a minimal but sharp pack to leadership: one view of incremental ROAS per channel based on experiments, one view of MMM response curves, and a simple “stop-fix-scale” recommendation per major line of spend. These insights then shape the media mix and feed the forecast and budget allocation for the next cycle.
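As an illustration, here is a rough sketch of that stop-fix-scale view in Python, assuming you already have a channel-level MMM ROI and an experiment-based incremental ROAS per channel. The channel names, numbers and thresholds are made up for the example.

# Rough sketch: combine MMM baselines with experiment-based incremental ROAS
# into a per-channel "stop-fix-scale" view. All values are illustrative.
import pandas as pd

channels = pd.DataFrame({
    "channel":          ["search", "pmax", "meta", "retargeting", "podcasts"],
    "spend":            [1_200_000, 900_000, 750_000, 400_000, 150_000],
    "mmm_roi":          [2.1, 1.6, 1.4, 1.1, 0.9],      # baseline ROI from the annual MMM refresh
    "incremental_roas": [1.8, 1.5, 1.2, 0.4, None],     # from lift tests / holdouts; None = never tested
})

def recommend(row, scale_above=1.5, stop_below=0.8):
    """Turn the experimental read into a planning action; untested lines go back to the backlog."""
    iroas = row["incremental_roas"]
    if pd.isna(iroas):
        return "test next year"     # no causal evidence yet, MMM alone is not enough
    if iroas >= scale_above:
        return "scale"
    if iroas < stop_below:
        return "stop"
    return "fix / redesign"

channels["recommendation"] = channels.apply(recommend, axis=1)
print(channels)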

2. Strategic Planning – Quarterly

Each quarter, the marketing team has to turn the annual portfolio into concrete bets. They agree on objectives by channel and by initiative: new or improved PMax structures, new Meta funnels, app growth pushes, podcast bursts, affiliate changes, new CRM journeys.

This is the moment to lock the measurement strategy before anything launches. In the quarterly roadmap, you decide which materially funded initiatives will run as standard campaigns with basic optimisation, and which will be set up as structured learning assets requiring a causal read, and, for those, which causal method you will use. For each major new tactic, you choose between a platform conversion lift test and a manual A/B test with a true no-ads control. For each always-on channel, you decide whether to maintain or introduce a persistent holdout as a governance tool. For big market-level bets, like a major brand repositioning in a core market or a heavy media investment in a single channel, you design a geo experiment or, in the rare case where only one or two markets will be treated, you plan for a synthetic control later.

The mindset is simple: if an initiative is important enough to present to the CMO or the board, it is important enough to have a clean causal measurement plan attached. Conversely, if you cannot articulate how a channel initiative can be measured causally, it should be parked, downsized or re-scoped until it becomes testable; you do not let channels hide behind “we’ll look at ROAS later”. Once these decisions are made, you formalise them into a test backlog that sits alongside the quarterly plan. Every major line item on the roadmap gets an experiment ID, a chosen causal method, a clear definition of success and failure in incremental terms, and an explicit decision rule written in business language.
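A test backlog entry can be as simple as one record per major roadmap line item. Here is a minimal sketch in Python; the field names and example values are illustrative, not a prescribed schema.

# Minimal sketch of a test backlog entry: one record per major roadmap line item.
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    experiment_id: str     # e.g. "2025-Q3-META-APP-01" (naming convention is illustrative)
    initiative: str        # the roadmap line item the test is attached to
    method: str            # conversion lift, A/B with no-ads control, holdout, geo, synthetic control
    primary_metric: str    # the incremental outcome the decision hinges on
    success_rule: str      # decision rule written in business language
    decision_owner: str    # who acts on the read

backlog = [
    ExperimentPlan(
        experiment_id="2025-Q3-META-APP-01",
        initiative="New Meta app-install funnel",
        method="platform conversion lift test",
        primary_metric="incremental installs and incremental CPA",
        success_rule="scale if incremental CPA is at most 1.2x blended CPA, else redesign",
        decision_owner="Paid Social lead",
    ),
]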

3. Pre-Launch Setup – One to Two Weeks before Kick-Off

One to two weeks before kick-off, campaigns, audiences, creatives and tracking are being configured.

At this point, the experiments are not running yet, but you need to wire them into the actual execution layer and stack. For user-level methods, you configure conversion lift and A/B tests directly in the platforms’ experiment UIs: set test/control splits, verify eligibility, make sure the control arm is truly “BAU” or “no-ads” for the element you’re testing. For holdouts, you finalise the random assignment of users or accounts into the holdout cohort, and then hard-wire the exclusions in CRM, CDP and media platforms so that these users never receive the channel. For geo or other cluster experiments, you finalise the list of markets or segments, cluster them into comparable groups, randomly assign test vs control, and lock those assignments with the channel and local teams.
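For the holdout piece, a deterministic, salted hash of a stable customer ID is one simple way to make the assignment reproducible across systems. A minimal sketch, assuming a 10% holdout share and a per-experiment salt (both illustrative):

# Sketch: deterministic holdout assignment via a salted hash of the customer ID,
# so CRM, CDP and media platforms can all reproduce the same exclusion list.
import hashlib

HOLDOUT_SHARE = 0.10           # 10% of users never receive the channel
SALT = "crm-holdout-2025-q3"   # one salt per experiment so holdouts don't overlap

def is_holdout(customer_id: str, share: float = HOLDOUT_SHARE, salt: str = SALT) -> bool:
    """Deterministically place a customer in the holdout based on a salted hash."""
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # maps the hash to a value in [0, 1]
    return bucket < share

# Export the exclusion list to hard-wire into CRM, CDP and media platforms.
customers = ["C001", "C002", "C003", "C004", "C005"]
exclusion_list = [c for c in customers if is_holdout(c)]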

Be careful at this stage: this is where most experiments are quietly sabotaged by sloppy execution, last-minute scope creep and tiny changes that destroy randomisation. As a result, you need to methodically work through a pre-launch checklist: correct splits, correct exclusions, consistent bidding strategies across arms, tracking in place on test traffic, no hidden overlap between test, control and holdout. You want to start the campaign knowing the design is intact, because if you get this stage right, the rest of the process becomes about interpreting clean signals instead of reverse-engineering noise.

4. Launch and Smoke Test – First Days after Kick-Off

When the campaigns go live, the instinct of most organisations is to obsess over day-two ROAS and volume. That is precisely what you must suppress if you care about incrementality. The first days are for smoke testing, not decision-making.

In this window, your focus is simple: confirm that conversion lift tests and A/B experiments are correctly balancing traffic and that randomisation holds; confirm that the holdout population is actually excluded everywhere and that no other process or campaign is touching it; confirm that geo and cluster experiments are respecting the assigned treatment patterns, with no accidental spillovers or overrides. You are checking the plumbing, not interpreting results. The discipline here is more political than technical. As an analyst, you make it very clear: for the first 48 to 72 hours, you will only answer questions about design health and quality control, not about uplift. That one behavioural choice protects you from killing good tests based on meaningless early noise. At most, let the marketers have a quick look at the spend and a couple of basic KPIs to reassure them that the system is indeed tracking something.
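One plumbing check you can automate in those first days is a sample ratio mismatch test: does the observed test/control split match the planned allocation? A small sketch using scipy, with illustrative counts and split:

# Sketch of a sample ratio mismatch (SRM) check on the observed split.
from scipy.stats import chisquare

planned_split = (0.5, 0.5)       # intended test/control allocation
observed = (51_420, 48_580)      # users actually bucketed so far

total = sum(observed)
expected = [share * total for share in planned_split]
stat, p_value = chisquare(observed, f_exp=expected)

if p_value < 0.001:              # very strict threshold: SRM is a bug, not a finding
    print(f"Sample ratio mismatch suspected (p={p_value:.2g}): stop and audit the setup")
else:
    print(f"Split looks healthy (p={p_value:.2g}): keep hands off the results for now")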

5. Learn and Optimize – Weeks Two to Six after Kick-Off

After a couple of weeks, you start to accumulate enough data to say something meaningful, and Marketing wants to know if they should start reallocating budget. This is where incrementality becomes operational: you’re recommending what to scale, what to redesign, and what to kill.

For campaign-level lift and A/B tests, you take a directional read once basic power conditions are met, clearly labelling it as provisional, then you take a final read at the end of the planned window. The only numbers that matter here are incremental metrics: additional conversions, additional revenue, incremental ROAS, incremental CPA. For holdouts, you track the divergence between exposed and holdout cohorts at channel level on outcomes that matter: business KPIs such as revenue per user, repeat orders, app usage, churn, time to second purchase … For geo experiments, you look at the divergence in trends between test and control markets, relative to the pre-period. For difference-in-differences (DiD), you start evaluating the impact of product, UX, pricing, or ops changes that were rolled out only to a subset of units.
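For the lift and A/B reads, the core arithmetic is simple. Here is a minimal sketch with made-up numbers, assuming test and control groups of comparable size (the control side is rescaled otherwise):

# Sketch of the incremental read for a lift or A/B test. All numbers are illustrative.
test    = {"users": 200_000, "conversions": 4_600, "revenue": 368_000, "spend": 120_000}
control = {"users": 200_000, "conversions": 4_100, "revenue": 328_000}

scale = test["users"] / control["users"]                      # rescale control to the test group size
incr_conversions = test["conversions"] - control["conversions"] * scale
incr_revenue     = test["revenue"] - control["revenue"] * scale

iroas = incr_revenue / test["spend"]                          # incremental ROAS
icpa  = test["spend"] / incr_conversions if incr_conversions > 0 else float("inf")
lift  = incr_conversions / (control["conversions"] * scale)   # relative lift vs control

print(f"Incremental conversions: {incr_conversions:,.0f} ({lift:.1%} lift)")
print(f"Incremental revenue: {incr_revenue:,.0f} | iROAS: {iroas:.2f} | iCPA: {icpa:,.0f}")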

Always pair your numbers with a business decision: “this tactic is incrementally strong and should be scaled”, “this one is incremental but weak and should be redesigned and re-tested”, or “this one is indistinguishable from zero and should be stopped because it is just noise”. Try to force every tactic into one of those three buckets.

6. Business as Usual – Weeks Six and Beyond after Kick-Off, Ongoing for Always-On

Once campaigns have matured and channels are firmly in always-on mode, incrementality becomes governance rather than just testing. At this stage, you should already know which channels are incremental at baseline. Now the question shifts from “does this work?” to “how do we keep this honest and make it better?”, so that the lift stays real and you allocate effort where it actually compounds.

Here, holdouts become part of the standing operating model. For CRM, retargeting, brand search and similar programs, you maintain stable holdout cohorts and systematically report incremental revenue and other business KPIs versus those cohorts in your regular dashboards and reviews. Then, you treat lift as a recurrent health check: for large evergreen setups such as the core PMax engine, Meta funnels or always-on app acquisition, you schedule periodic conversion lift tests every few months or after major structural changes to re-baseline their incremental contribution.

On top of that, you run continuous A/B and multivariate (A/B/n) tests inside proven incremental channels on creatives, formats, landing pages, funnels, UX flows, email templates. This is classic conversion rate optimisation (CRO), anchored in winning channels. Once you have enough history of randomised controlled trials (RCTs) and holdouts in high-volume channels and programs, like CRM or retargeting, you can start piloting uplift models. They will help you decide who to target and how often, with the explicit goal of cutting spend on low or negative uplift users.
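As an example of such a pilot, a simple two-model (“T-learner”) approach built on past randomised or holdout data is often a good starting point. The sketch below uses scikit-learn; the feature names and the 1% uplift cut-off are assumptions for illustration.

# Rough sketch of an uplift-model pilot on historical randomised / holdout data,
# using a simple two-model ("T-learner") approach. The data frame is assumed to
# carry a 0/1 "treated" flag from past random assignment and a 0/1 "converted" outcome.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["recency_days", "orders_12m", "avg_basket", "app_sessions_30d"]  # illustrative

def fit_t_learner(df: pd.DataFrame):
    """Fit one response model on treated users and one on untreated users."""
    treated, untreated = df[df["treated"] == 1], df[df["treated"] == 0]
    model_t = GradientBoostingClassifier().fit(treated[FEATURES], treated["converted"])
    model_c = GradientBoostingClassifier().fit(untreated[FEATURES], untreated["converted"])
    return model_t, model_c

def uplift_scores(df: pd.DataFrame, model_t, model_c) -> pd.Series:
    """Estimated uplift = P(convert | treated) - P(convert | not treated)."""
    p_t = model_t.predict_proba(df[FEATURES])[:, 1]
    p_c = model_c.predict_proba(df[FEATURES])[:, 1]
    return pd.Series(p_t - p_c, index=df.index, name="uplift")

# Usage idea: suppress spend on users with low or negative estimated uplift, e.g.
# target = df[uplift_scores(df, *fit_t_learner(df)) > 0.01]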

Meanwhile, you stand ready to run DiD tests every time product or operations rolls out a non-randomised change to a subset of users, stores or markets, like a new fraud rule, a modified checkout, or a change in eligibility. This way you will be able to give your stakeholders a causal read instead of a vague before/after story.
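For reference, a standard DiD read reduces to one regression with a treated × post interaction. A minimal sketch with statsmodels, assuming a tidy frame with hypothetical columns outcome, treated_unit, post and unit_id:

# Minimal DiD sketch: one OLS regression with a treated x post interaction,
# clustered by unit. Column names are assumptions about how the rollout data is tidied.
import pandas as pd
import statsmodels.formula.api as smf

def did_estimate(did_df: pd.DataFrame):
    """Return the DiD effect estimate (interaction coefficient) and its 95% CI."""
    model = smf.ols("outcome ~ treated_unit + post + treated_unit:post", data=did_df).fit(
        cov_type="cluster", cov_kwds={"groups": did_df["unit_id"]}
    )
    effect = model.params["treated_unit:post"]
    ci_low, ci_high = model.conf_int().loc["treated_unit:post"]
    return effect, (ci_low, ci_high)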

At this stage you maintain a living map of where the causal evidence is strong, where it is weak, and where it is missing. The CMO will use that map to drive regular performance reviews, budget renewals, headcount justification and roadmap prioritisation, already preparing the ground for the next quarter.

Explore more

Have a read of last month’s post introducing the different incrementality tests: Stop Trusting ROAS and Start Measuring Incrementality


Recent Posts

  • Stop Trusting ROAS and Start Measuring Incrementality – May 29, 2025
  • Why Should we Ticket our Analytics Jobs? – April 25, 2025
  • Three Frameworks That Make Your Analysis Aim, Hit, and Trigger Action – March 16, 2025
  • My Essential Project Planning Shopping Cart – February 22, 2025
  • Crack your Case Like an FBI Analyst: Secure the Win and Lock it Down – January 13, 2025
FOLLOW ME
  • GitHub
  • LinkedIn
  • Twitter

ABOUT ME

Welcome to my little corner of the internet where we explore the wonderful world of Data Science and uncover hidden insights together. My name is Marine and I am a Data and Business Intelligence Analyst specialized in optimizing Marketing and Sales performance.

Topics

  • Creative Corner (1)
  • Data Analytics (16)
  • Data Storytelling (6)
  • Fostering Success (16)
©2023 Marine Morales