Category: Analytics 101

How to use your data to answer questions and look for patterns. How to read charts. And WTF is a pivot table anyway?

  • Lifecycle Marketing KPIs: How To Prove Impact

    Are you demonstrating impact with your marketing KPIs, or are your metrics just proving that something happened?

    Opens, clicks, and even conversions tell you activity exists, but they don’t tell you if your work is doing anything for the business as a whole. If you want to prove impact, if you want to prove your worth as a marketer, everything you do has to ladder up to two things: retention and lifetime value.

    That’s it. That’s the scoreboard your boss and their boss are looking at. And because of this, you need to know how you’re scoring on these two metrics and be able to talk about it. (And don’t forget to draw the connections for your leadership team.)

    But that doesn’t mean you ignore stage-level metrics. You will need them, but don’t confuse them with outcomes.


    First, fix your lifecycle stages (seriously)

    If your lifecycle stages don’t match how your business actually works, your metrics won’t either.

    Rename them. Break them. Combine them. You’re an adult, and no one is grading you on textbook definitions. You’re trying to understand behavior, not pass a certification exam.


    What I actually measure (by stage)

    Acquisition / Abandoned Signup

    The only question that matters here: Did they start?

    • Trial starts
    • Account creation
    • Step-by-step drop-off rates

    Investigate; don’t just guess. Pull the funnel apart to understand where and why users hesitate.

    • Too many questions?
    • Asking for info they don’t have yet?
    • Credit card friction?

    Different drop-off points = different problems. Treat them differently.
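
    Here’s the drop-off math as a quick Python sketch. The step names and counts are made up; plug in your own funnel export:

      # Hypothetical signup funnel: users remaining at each step.
      # Step names and counts are made up for illustration.
      funnel = [
          ("Landing page", 10_000),
          ("Email entered", 4_200),
          ("Profile details", 2_600),
          ("Credit card", 1_100),
          ("Trial started", 950),
      ]

      for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
          drop_off = 1 - next_users / users
          print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")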


    Onboarding / Activation

    Activation is where lifecycle proves its value.

    • Time to value
    • Activation rate
    • Trial to paid conversion

    If users don’t experience value quickly, nothing downstream matters. Do everything you can to get your customers to meaningful action faster.
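
    A rough sketch of both numbers, assuming you can pull each user’s signup time and first meaningful action (the timestamps and the activation definition here are invented):

      from datetime import datetime
      from statistics import median

      # Hypothetical signups: (signup time, time of first meaningful action, or None).
      # What counts as a "meaningful action" is your definition, not a standard.
      users = [
          (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 40)),
          (datetime(2024, 5, 1, 11, 0), datetime(2024, 5, 3, 8, 15)),
          (datetime(2024, 5, 2, 14, 0), None),  # never activated
      ]

      activated = [(s, a) for s, a in users if a is not None]
      activation_rate = len(activated) / len(users)
      hours_to_value = [(a - s).total_seconds() / 3600 for s, a in activated]

      print(f"Activation rate: {activation_rate:.0%}")
      print(f"Median time to value: {median(hours_to_value):.1f} hours")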


    Engagement

    Now, you’re building habits. Engagement is where lifecycle stops supporting LTV and starts directly influencing it. The dotted lines you were connecting in onboarding are a lot shorter now.

    • Session frequency
    • Feature adoption
    • Expansion revenue
    • Renewals
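
    If it helps, feature adoption is just a share of your active users. A tiny sketch with made-up numbers:

      # Hypothetical: which of 1,200 weekly-active users touched a given feature.
      weekly_active = 1_200
      used_feature = {"saved_views": 420, "integrations": 180}  # made-up counts

      for feature, users in used_feature.items():
          print(f"{feature}: {users / weekly_active:.0%} adoption among weekly actives")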

    Retention / Churn Prevention

    If you’re only reacting after someone cancels, you’ve already lost.

    • Retention rate
    • Cohort behavior
    • Churn signals
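
    A minimal sketch of reading one cohort, again with made-up numbers:

      # Hypothetical January cohort: 500 signups, and how many were
      # still active in each month after signup (made-up counts).
      cohort_size = 500
      active_by_month = {1: 380, 2: 310, 3: 285}

      for month, active in active_by_month.items():
          print(f"Month {month}: {active / cohort_size:.0%} of the cohort retained")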

    Winback / Reactivation

    A reactivation isn’t a win unless it sticks.

    • Reactivation rate
    • Repeat purchases
    • Downstream retention

    Anyone can drive a one-time comeback. The real question is: Did you bring back a valuable user?
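
    A quick sketch of “did it stick,” assuming you track a reactivation date and a last-active date per user (the 30-day window is an arbitrary choice, not a benchmark):

      from datetime import date, timedelta

      # Hypothetical reactivated users: (user, reactivation date, last active date).
      reactivated = [
          ("a", date(2024, 3, 1), date(2024, 4, 20)),
          ("b", date(2024, 3, 5), date(2024, 3, 6)),  # came back once, then gone
          ("c", date(2024, 3, 9), date(2024, 5, 2)),
      ]

      WINDOW = timedelta(days=30)  # your definition of "it stuck" may differ
      stuck = [u for u, came_back, last_seen in reactivated
               if last_seen - came_back >= WINDOW]
      print(f"Reactivations that stuck (30+ days): {len(stuck)} of {len(reactivated)}")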


    Don’t forget to prove you’re the one who moved the needle.

    If you really want to answer “Did lifecycle marketing actually drive this?”, you need a control. (Yeah, I said it again.) Without one, you’re reporting performance, not proving causation.


    Low Hanging Fruit 🍓

    Here’s what you need to remember about lifecycle marketing KPIs:

    1. Track stage metrics to understand behavior.
    2. Use controls to prove causation.
    3. Use retention and LTV to prove value.
    4. And connect everything for leadership.

  • UTMs Don’t Prove Lift. Control Groups Do.

    While interviewing candidates to backfill a lifecycle role on our LatAm team, I posed a simple scenario:

    You’ve optimized a campaign and you’re seeing a nice lift in conversions. How do you know whether those conversions came from your changes, from seasonality, or from cannibalizing another channel?

    Almost everyone gave the same answer: check the UTM codes.

    A few went a step further and suggested comparing campaign performance to overall conversion trends to rule out seasonality. But fewer than half mentioned the thing that actually answers the question of causality: a control group.

    Why UTMs (and last-click attribution) can’t prove causality

    UTMs are great at showing correlation. They help you understand where traffic or conversions were attributed—but not why they happened.

    For example:
    You send an email to 100 people. Thirty of them click and make a purchase. On paper, that looks like a win.

    But what if those same people also saw a YouTube ad earlier that day? Or a paid social ad? Or searched your brand directly and then went hunting for a promo email before buying?

    Which interaction actually caused the conversion?

    Last-click attribution (and UTMs by extension) can’t answer that. They only tell you which touchpoint happened to be last.

    Now, let’s add a control.

    Say you withheld the email from 100 otherwise eligible users. Thirty people from that group also made a purchase. The “lift” from your email doesn’t look so convincing anymore.

    If none of the control group converted, that’s a very different story. Now you’re much closer to proving impact.

    (For the sake of simplicity, we’ll ignore statistical significance here. That’s a separate post.)
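
    Here’s that arithmetic spelled out, still ignoring significance:

      def lift(treated_conv, treated_n, control_conv, control_n):
          """Absolute lift: treatment conversion rate minus control conversion rate."""
          return treated_conv / treated_n - control_conv / control_n

      # The scenario above: 30/100 converted with the email, 30/100 without it.
      print(f"Lift: {lift(30, 100, 30, 100):.0%}")  # 0% -- the email proved nothing

      # If none of the 100 held-out users had converted instead:
      print(f"Lift: {lift(30, 100, 0, 100):.0%}")   # 30% -- now you're proving impact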

    The control group concept (eligible users, no message)

    I first learned about control groups in science class. I first saw them used properly while working agency-side.

    The concept is simple: you intentionally withhold messaging from a portion of the eligible audience and compare outcomes.

    Agencies, in particular, use controls aggressively. We ran:

    • Global controls that received no messaging from an entire campaign
    • Message-level controls to measure the impact of individual emails or pushes
    • Time-based controls where a group was excluded for a week or a month to account for seasonality

    Controls let you answer the question stakeholders actually care about: Did this campaign change behavior, or would it have happened anyway? That’s why they’re an agency’s secret weapon.

    What to watch out for

    Controls are powerful, but only if you set them up correctly.

    Sample size
    You don’t need a 50/50 split. For most small-to-mid-sized sends, 10–20% is plenty. If your audience is large (20k+), even 5% can be enough.
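
    One common way to carve out a holdout (not the only way) is hashing a stable user ID, so the same user always lands in the same bucket. A sketch:

      import hashlib

      def in_control(user_id: str, holdout_pct: float = 0.10) -> bool:
          """Deterministically assign roughly holdout_pct of users to the control group."""
          digest = hashlib.sha256(user_id.encode()).hexdigest()
          bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
          return bucket < holdout_pct

      audience = [f"user_{i}" for i in range(20_000)]
      control = [u for u in audience if in_control(u)]
      print(f"Holdout: {len(control)} of {len(audience)} "
            f"({len(control) / len(audience):.1%})")

    Deterministic bucketing also helps with the next two pitfalls: a user who’s held out stays held out anywhere you apply the same check.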

    Contamination
    If your control group can still see the message through another channel (like a sitewide banner, in-app message, or paid ad), you’ve contaminated the test. And once that happens, your results are no longer clean.

    Overlapping journeys
    If users can enter another lifecycle or promotional flow while they’re supposed to be held out, you’re no longer testing what you think you’re testing.

    Controls require discipline. But when done well, they turn “this performed well” into “this caused lift.” And that’s the difference between reporting and strategy.