A marketer ultimately manages three pillars of growth: engagement, efficiency, and profitability. Here, we focus on the first: engagement. If you want to drive real traction, you need a KPI stack that connects user behaviour to business outcomes. The framework below is a structured measurement system in four tiers. Use it as an operating system for engagement: anchor on business outcomes [1] to define what winning means for your business; stress-test the growth engine [2] to check whether you are acquiring and converting the right traffic, and where the funnel is leaking; close the loop with experience quality [3] to understand how customer experience explains behaviour and results; and finally, execute with scoring and models [4] to decide who to engage, when, and how to lift engagement KPIs with minimal waste. This stack turns engagement from a vanity narrative into a measurable, causal growth engine you can manage, optimise, and scale. Across your engagement initiatives, design for incrementality via geo tests, holdouts, A/B tests, Difference-in-Differences (DiD), and lift studies: these clarify the signals. The metrics in this post are presented in order of business importance, and each tier is navigable across key cuts (geo, solution, segment, cohort, channel, campaign) so you can drill down to where performance is won or lost.
Tier 1: Business Outcomes
Tier 1 is your scoreboard. It tells you whether engagement, adoption, and loyalty are translating into market traction and revenue. For navigation, the typical drill-down order is geo/market > solution > customer segment > cohort > channel.
Brand & Market Positioning
These KPIs capture how visible, competitive, and “mentally available” your brand is in its category.
SoM – Share of Market. It measures the real market share, in volume or in value, captured by the brand within its category. This KPI is for competitive positioning and business performance.
SoM = Brand sales / Total category sales
MSG – Market Share Growth. It reflects the relative growth of market share (volume or value) over a given period. It validates the effectiveness of the Go-To-Market strategy and the competitiveness of the offer. Always analyze MSG alongside the product penetration rate (AR), the overall category dynamics and the evolution of competitors’ shares (direct benchmark).
MSG = (Market share in period N – Market share in period N-1) / Market share in period N-1
SoV – Share of Voice. SoV captures the brand’s share of visibility (paid/owned/earned) versus competitors. A rule of thumb to analyse it: “SoV ≥ SoM → growth potential” vs “SoV < SoM → likely decline over time”.
SoV = Brand impressions or GRPs / Total category impressions or GRPs
GRP in the formula refers to Gross Rating Point. It measures the total volume of impressions, weighted by the size of the target. It represents your total advertising pressure on a given audience. It’s computed by multiplying Reach (%) x Frequency. While reach is the percentage of the target exposed at least once, the frequency is the average number of exposures per exposed individual.
SoS – Share of Search. It’s a proxy for the brand’s mental market share in its category. It measures how often people think of you when they search.
SoS = Brand query volume / Total category query volume
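The four brand positioning ratios above can be sketched as simple functions. A minimal sketch with illustrative numbers; the sales, impression, and query figures are made up for the example:

```python
# Brand & Market Positioning KPIs as plain ratios.
# All input figures below are hypothetical examples.

def som(brand_sales: float, category_sales: float) -> float:
    """Share of Market: brand sales over total category sales."""
    return brand_sales / category_sales

def msg(share_now: float, share_prev: float) -> float:
    """Market Share Growth: relative change in market share between periods."""
    return (share_now - share_prev) / share_prev

def sov(brand_impressions: float, category_impressions: float) -> float:
    """Share of Voice: brand impressions over total category impressions."""
    return brand_impressions / category_impressions

def sos(brand_queries: float, category_queries: float) -> float:
    """Share of Search: brand query volume over total category query volume."""
    return brand_queries / category_queries

share = som(120_000, 600_000)        # 0.20 -> 20% market share
growth = msg(0.20, 0.16)             # 0.25 -> +25% relative share growth
voice = sov(3_000_000, 10_000_000)   # 0.30 -> SoV (30%) > SoM (20%): growth potential
```

Note how the SoV vs. SoM rule of thumb drops straight out of the last two lines: with 30% of voice against 20% of market, the brand is over-investing in visibility relative to its size, which the heuristic reads as growth potential.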
Incrementality for Brand & Market Positioning. Brand effects are hard to read from observational trends alone, so lean on geo-split tests and brand lift studies to estimate the incremental impact of media pressure on SoV and SoS before attributing market share movements to it.
Adoption & Early Engagement
Once people know you exist, the next question is: do they actually adopt and engage with your product?
AR – Adoption Rate. It measures the real penetration of the product within the addressable target (sign-ups, trials, first purchases, or active users, depending on the model). This KPI is often referred to as the “penetration rate”. Benchmark: >20% indicates a healthy adoption baseline.
AR = Active eligible product users / Exposed target audience
Compared with core adoption, the following KPIs are early-adopter engagement metrics. They go deeper into how new users progress and retain over time.
OCR – Onboarding Completion Rate. The OCR is a variant of AR applied to onboarding flows. It measures the proportion of users who complete the onboarding sequence.
FAR – Feature Adoption Rate. The FAR is also a variant of the AR at feature level. It measures the share of exposed users who adopt a given feature.
RET_Dx – Retention D1 / D7 / D30. This group of metrics measures the product’s ability to keep new users over time. It is standard for apps and SaaS and is used as the “cohort survivability” metric. Benchmarks: D1 retention of 25–40% is good for apps and SaaS, D7 retention of 10–20% is good, and D30 retention of 2–10% is good for products with regular usage.
RET_Dx = Active users at day x / Users in the install or sign-up cohort
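The RET_Dx formula can be computed directly from a cohort’s activity log. A minimal sketch, assuming a hypothetical structure where each user maps to the set of day offsets (days since sign-up) on which they were active:

```python
# Cohort retention sketch: `active_days` maps each user in one sign-up
# cohort to the day offsets on which they were active (0 = sign-up day).
# The data structure and user IDs are illustrative assumptions.

def retention_dx(active_days: dict[str, set[int]], day: int) -> float:
    """Share of the cohort still active exactly `day` days after sign-up."""
    cohort_size = len(active_days)
    retained = sum(1 for days in active_days.values() if day in days)
    return retained / cohort_size

cohort = {
    "u1": {0, 1, 7, 30},
    "u2": {0, 1},
    "u3": {0, 7},
    "u4": {0},
}

d1 = retention_dx(cohort, 1)    # 2/4 = 0.50
d7 = retention_dx(cohort, 7)    # 2/4 = 0.50
d30 = retention_dx(cohort, 30)  # 1/4 = 0.25
```

In practice you would compute this per weekly or monthly cohort and plot the curves side by side, since the shape of the decay matters as much as any single D30 number.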
EVC – Event Conversion at user-level. It measures the share of users who reach a defined critical action – an activation or “first value” milestone. For example, a user publishes and activates their first Smart Campaign in a new Marketing Automation Platform (MAP). Benchmark: 30–60% is good.
EVC = Users who performed the key event / Active users
ST – Stickiness. It assesses the average usage frequency and dependency on the product over a month. For a daily-use product, the benchmark is >40%, while for a weekly-use product, the benchmark is >20%.
ST = DAU / MAU = Daily Active Users / Monthly Active Users
UF – Usage Frequency. This one is quite straightforward: it gauges the intensity of use of a product or solution.
UF = Number of uses in a given period / Users
IE – Innovation Effectiveness. This metric quantifies, within the addressed target, the share of users truly engaged in the core behavior and the associated incremental business impact. In practice, it quantifies the gap between the initial innovation intent and how the market actually perceives it. In other words, it is the “incremental value creation” of that innovation.
IE = AR × ST × causal incremental lift
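Since IE composes three ratios already defined above, it is straightforward to compute once AR, ST, and an experimentally measured lift are available. A minimal sketch with hypothetical figures; the incremental lift is assumed to come from a causal test such as a holdout:

```python
# Adoption, stickiness, and their composition into Innovation Effectiveness.
# All figures are illustrative; the 8% lift is assumed to come from an
# actual experiment (holdout or geo test), not from observational data.

def adoption_rate(active_users: int, exposed_target: int) -> float:
    """AR: active eligible product users over the exposed target audience."""
    return active_users / exposed_target

def stickiness(dau: float, mau: float) -> float:
    """ST: Daily Active Users over Monthly Active Users."""
    return dau / mau

def innovation_effectiveness(ar: float, st: float, lift: float) -> float:
    """IE = AR x ST x causal incremental lift."""
    return ar * st * lift

ar = adoption_rate(25_000, 100_000)  # 0.25 -> above the 20% adoption baseline
st = stickiness(12_000, 40_000)      # 0.30 -> below the 40% daily-use benchmark
ie = innovation_effectiveness(ar, st, 0.08)  # 0.25 * 0.30 * 0.08 = 0.006
```

The point of the composition is that a strong AR cannot hide a weak ST, and neither matters if the causal lift is near zero: all three factors must be healthy for the innovation to create incremental value.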
Incrementality for Adoption & Early Engagement. To attribute outcomes to changes, use the following experimental designs rather than relying on observational signals. First, perform product A/B tests on onboarding and guided tours to optimize EVC and RET_D30. Then, release a new capability (such as advanced reporting or a collaborative workspace) to a subset of users to measure the incremental lift on UF or ST. Finally, deploy a feature or a new type of support only to a subset of accounts and compare the causal impact across top-account cohorts or verticals with Difference-in-Differences (DiD).
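The DiD estimate mentioned above has a simple closed form: the change in the treated group minus the change in the control group. A minimal sketch with hypothetical weekly-session figures for treated and control accounts:

```python
# Difference-in-Differences sketch: treated accounts received the new
# feature, control accounts did not. All session figures are illustrative.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD = (treated post - treated pre) - (control post - control pre)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

treated_pre  = [4.0, 5.0, 6.0]   # weekly sessions before rollout
treated_post = [6.0, 7.0, 8.0]   # after rollout: +2.0 on average
control_pre  = [4.0, 5.0, 6.0]
control_post = [4.5, 5.5, 6.5]   # secular trend only: +0.5

lift = did_estimate(treated_pre, treated_post, control_pre, control_post)  # 1.5
```

The control group’s +0.5 drift is subtracted out, so the +1.5 left over is the effect attributable to the feature, under the usual parallel-trends assumption (both groups would have drifted identically without the rollout).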
Loyalty & Advocacy
Engagement becomes defensible traction when users come back, buy again, and advocate for you. The two following metrics are specific to e-commerce and retail.
RPR – Repeat Purchase Rate. It measures the proportion of customers who return and buy again. Benchmark: >25% in retail and >50% for recurring products.
RPR = Customers with ≥2 purchases / Total customers in the period
TBP – Time Between Purchases. It represents the real repurchase frequency and customer return dynamics.
TBP = Average time between two successive purchases per customer
Now, back to Loyalty & Advocacy KPIs that apply across all industries.
NPS – Net Promoter Score. This metric captures the likelihood to recommend and emotional loyalty.
It is also a good proxy for viral potential. Benchmarks: >30% is good, and >50% is excellent.
NPS = % Promoters – % Detractors
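The NPS formula can be computed directly from raw 0–10 survey responses. A minimal sketch, using the standard cutoffs (9–10 promoters, 0–6 detractors) on a made-up list of ratings:

```python
# NPS sketch: promoters rate 9-10, detractors 0-6, passives 7-8.
# The ratings list is an illustrative example.

def nps(ratings: list[int]) -> float:
    """NPS = % promoters - % detractors, on a -100..100 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

scores = [10, 9, 9, 8, 7, 7, 6, 3]  # 3 promoters, 2 passives... 2 detractors
score = nps(scores)                 # 100 * (3 - 2) / 8 = 12.5
```

Note that passives (7–8) dilute the score without appearing in either term, which is why NPS can stay flat even as satisfaction shifts between passives and promoters.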
RS – Review Score. It gauges the public perception across app stores, review platforms, and social. This is effectively your “sentiment” KPI. Always combine it with the number of reviews to assess reliability. Benchmark: a minimum score of 4 out of 5 (or 8 out of 10).
RS = Average customer rating (on 5 or 10)
UGC – User Generated Content. This one measures the organic advocacy: posts, videos, mentions …
UGC = Number of user contents / period
Incrementality for Loyalty & Advocacy. This time, you can use CRM holdouts on lifecycle programs to measure the net effect of lifecycle communications on RPR or TBP.
Tier 2: Growth Engine & Funnel
Tier 2 is your growth engine dashboard. It covers acquisition, onsite behavior, and CRM performance. For navigation, the typical drill-down order is channel or traffic source > campaign or program > solution > geo > customer segment.
Acquisition & Media
You can’t drive engagement without reach, but reach must be efficient and high-quality.
REcov – Reach Coverage. It measures the share of the addressable market actually exposed to the brand. At the product market fit (PMF) stage, >10–20% of TAM is considered sufficient, while in the scale phase, >30% of TAM is the target benchmark.
REcov = RE / TAM
RE in the formula refers once again to Reach. It measures the breadth of coverage on the target, i.e. the number of unique exposed users, and it needs to be compared to the Total Addressable Market (TAM).
TF – Traffic Volume. This is the volume of visits generated by media and CRM actions across all sources combined or by source: Paid Media (SEA, Social Ads, Display, DSP, Video), Organic Search (SEO), Direct (typed URL), Referral (partners, press, blogs, comparison sites, affiliates), Organic Social, Email / CRM / MAP (newsletters, nurtures, push), and Owned internal flows (website & landing pages, mobile app, portals & customer spaces).
TF = Number of sessions or users
FRQ – Frequency. It estimates the average pressure per exposed person. Benchmarks: 1.5–3 is optimal for branding, while >5 indicates a saturation risk.
FRQ = IMP / RE
We already know what RE stands for: Reach. As for IMP, it stands for Impressions and represents the raw volume of exposure as the total number of ads or creative displays. You should distinguish display formats by source: paid (SEA, Social Ads, Display, DSP, Video), owned (website & landing pages, mobile app, emails & newsletters, push notifications, blog & customer space, SEO), and earned (organic social shares, press mentions, reviews & UGC, indirect SEO due to branding effect, reposts & word-of-mouth).
VW – Viewability Rate. It reflects the real quality of exposure. Standard: At least 50% of the creative must be visible for at least 1 second (or 2 seconds for video), in line with IAB norms.
VW = Viewable impressions / Served impressions
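FRQ, GRP (defined back in Tier 1 as Reach × Frequency), and VW all derive from the same impression and reach counts. A minimal sketch with hypothetical campaign figures:

```python
# Media pressure and exposure quality from raw delivery counts.
# All campaign figures are illustrative.

def frequency(impressions: int, reach: int) -> float:
    """FRQ = IMP / RE: average exposures per reached person."""
    return impressions / reach

def grp(reach_pct: float, freq: float) -> float:
    """GRP = Reach (%) x Frequency: total pressure on the target."""
    return reach_pct * freq

def viewability(viewable: int, served: int) -> float:
    """VW: viewable impressions over served impressions."""
    return viewable / served

frq = frequency(2_400_000, 1_000_000)  # 2.4 -> inside the 1.5-3 branding band
pressure = grp(40.0, frq)              # 40% reach x 2.4 = 96 GRPs
vw = viewability(650_000, 1_000_000)   # 0.65 viewable share
```

Reading the three together matters: 96 GRPs at 2.4 frequency is healthy pressure, but if only 65% of those impressions were viewable, the effective pressure is materially lower than the planned one.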
CTR_Search – Click-Through Rate (Search). This CTR metric measures the effectiveness of intent-based targeting and the creative relevance (ads + extensions). Try to track against internal benchmarks and historical performance. Benchmark: 2–6%.
CTR_Search = Clicks / Search Ad impressions
CTR_Social – Click-Through Rate (Social). This CTR metric assesses the performance of native creatives, audience targeting, and social formats (Facebook, Instagram, TikTok, LinkedIn …). Benchmark: 0.3–1.5%.
CTR_Social = Clicks / Social Ad impressions
CTR_Display – Click-Through Rate (Display). This CTR assesses the performance of banners, native ads, and programmatic placements. Benchmark: 0.05–0.2%.
CTR_Display = Clicks / Display Ad impressions
CTR_Video – Click-Through Rate (Video). Finally, the last CTR in media reflects the effectiveness of sponsored videos in driving clicks. Benchmark: 0.05–0.3%.
CTR_Video = Clicks / Video Ad impressions
VTR – View-Through Rate. It quantifies the attention quality on video formats. Benchmark: 25–50% is standard, >50% is good, and >70% is excellent.
VTR = Views ≥X% of video / Video impressions
SER – Social Engagement Rate. This metric captures the depth of interaction with social content. Benchmark: 1–5% is standard, and >5% is very good.
SER = (Likes + Comments + Shares + Clicks) / IMP or RE
Incrementality for Acquisition & Media. To refine acquisition, run user-level lift tests on platforms such as Meta, TikTok, and Google where available. If not, set up your own controlled tests. For example, compare comparable accounts exposed to ABM campaigns versus a no-ads control group. You can also create holdouts across site visitors, trial users, and CRM contacts to measure incremental impact from 1st-party retargeting. Last but not least, for channels like YouTube or PMax, use geo-splits to isolate incremental effects on both brand and performance outcomes.
Audience Quality & Onsite UX
Volume without quality is noise. These KPIs show whether traffic converts and how users behave onsite.
CVtofu – Conversion Rate Top of the Funnel. This simple metric highlights the overall effectiveness of the top-of-funnel on the site. Benchmarks: 2–5% is good in B2C, while 5–15% is good in optimized B2B funnels.
CVtofu = Leads / Visits
BR – Bounce Rate. It measures the traffic quality and landing page relevance by tracking the percentage of sessions that land on a page and leave without meaningful engagement. Benchmarks: <70% is the minimum objective, and <40–50% is good.
BR = 1-page sessions / Total sessions
KECs – Key Event Conversion at session-level. This metric mirrors the Event Conversion at user-level (EVC) metric we’ve seen earlier. Instead of measuring the share of users who reach a critical action, this one measures the share of visits triggering a key event: in other words, a non-monetary micro-conversion (Visit → Key event) like the start of an online quote, a completed simulation, or a qualified contact request.
KECs = Sessions with key event / Total sessions
CIR – Content Interaction Rate. It captures the intensity of engagement with content beyond a simple page view. It can be measured as the share of sessions reaching a defined scroll depth, or the share of sessions with more than one content interaction (click, play, expand, download …). This reflects real consumption like reading, exploring, or taking action. Benchmarks: >20–30% is good, and >40% is excellent for editorial content.
CIR = Sessions with scroll ≥X% or with content interactions ≥1 / Total sessions
TS – Time Spent on Site. The session duration represents the depth of site or app exploration. Always interpret this metric with funnel data: Long TS + no conversion (CV) means friction.
TS = Average session duration
PPS – Pages per Session. This metric captures the richness of navigation. It follows the same logic as time spent on site (TS): long navigation (high PPS) + no conversion = friction. Benchmarks: 2–4 pages per session is typical for a simple funnel.
PPS = Page views / Session
NUR – New User Ratio. It represents the audience structure, i.e. the mix between new visitors and returning users over a given period.
NUR = New users / Total users
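Most of the onsite KPIs in this section fall out of a single pass over a session log. A minimal sketch over a hypothetical log; the field names (`pages`, `key_event`, `new_user`) are illustrative assumptions, and NUR is approximated here at session level rather than the user level the formula uses:

```python
# Onsite KPIs from a tiny hypothetical session log.
# Field names and values are illustrative assumptions.

sessions = [
    {"pages": 1, "key_event": False, "new_user": True},
    {"pages": 4, "key_event": True,  "new_user": True},
    {"pages": 3, "key_event": False, "new_user": False},
    {"pages": 1, "key_event": False, "new_user": False},
    {"pages": 5, "key_event": True,  "new_user": True},
]

total = len(sessions)
bounce_rate = sum(1 for s in sessions if s["pages"] == 1) / total  # BR   = 2/5 = 0.40
kecs        = sum(1 for s in sessions if s["key_event"]) / total   # KECs = 2/5 = 0.40
pps         = sum(s["pages"] for s in sessions) / total            # PPS  = 14/5 = 2.8
new_share   = sum(1 for s in sessions if s["new_user"]) / total    # ~NUR = 3/5 = 0.60
```

Reading these jointly is the point of the section: a 40% bounce rate with a 40% key-event rate means the non-bouncing traffic converts very well, so the fix is landing page relevance, not the funnel itself.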
Incrementality for Audience Quality & Onsite UX. The fastest way to improve conversion (CRO) is to run A/B/n tests on key funnels such as demo-request landing pages, “Talk to Sales” pages, pricing pages and forms. For larger launches like a new UX feature, new pricing, paywall changes, or a “self-serve + sales assist” motion, use Difference-in-Differences (DiD) to isolate the impact. You can also use holdouts to measure uplift from onsite personalized experiences, such as industry- and role-based personalization.
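For the A/B/n tests above, the standard significance check on a conversion rate difference is a two-proportion z-test. A minimal sketch with hypothetical traffic and conversion counts for a control page and a variant:

```python
import math

# Two-proportion z-test sketch for an A/B test on CVtofu.
# Conversion counts and sample sizes are illustrative.

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts 400/10,000 (4.0%); the variant converts 460/10,000 (4.6%).
z = two_proportion_ztest(400, 10_000, 460, 10_000)
significant = abs(z) > 1.96  # ~95% two-sided confidence threshold
```

With these illustrative numbers the z-statistic is about 2.09, so a 0.6-point lift on 10,000 sessions per arm just clears the 95% bar; smaller lifts or smaller samples would not, which is why funnel tests need traffic planning up front.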
CRM and Marketing Automation Platforms
Customer Relationship Management (CRM) and Marketing Automation Platforms (MAP) are your leverage points for lifecycle engagement and revenue expansion.
DR – Delivery Rate. It measures basic list hygiene and deliverability quality; the benchmark should be over 98%.
DR = Delivered emails / Sent emails
CTR_email – Click-Through Rate (Email). The last CTR on our list, this one assesses the overall effectiveness of the message (subject + content). Benchmark: 2–6%.
CTR_email = Unique clicks / Delivered emails = OR × CTOR
OR – Open Rate. The first component of the email CTR formula calculates the subject line performance and targeting relevance. Benchmark: 20–30%.
OR = Unique opens / Delivered emails
CTOR – Click-to-Open Rate. The second component of the email CTR formula gauges the content quality once the email is opened. Benchmark: 10–20%.
CTOR = Unique clicks / Unique opens
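The identity CTR_email = OR × CTOR holds by construction, which a few lines of code make obvious. A minimal sketch with hypothetical send-level counts:

```python
# Email funnel sketch: DR, OR, CTOR, and the CTR_email = OR x CTOR identity.
# All counts are illustrative.

def email_funnel(sent: int, delivered: int, opens: int, clicks: int):
    """Return (DR, OR, CTOR, CTR_email) from raw unique counts."""
    dr = delivered / sent      # Delivery Rate
    or_ = opens / delivered    # Open Rate
    ctor = clicks / opens      # Click-to-Open Rate
    ctr = clicks / delivered   # CTR_email
    # The decomposition holds algebraically: clicks/delivered =
    # (opens/delivered) * (clicks/opens).
    return dr, or_, ctor, ctr

dr, or_, ctor, ctr = email_funnel(10_000, 9_900, 2_475, 396)
# DR = 0.99, OR = 0.25, CTOR = 0.16, CTR_email = 0.04
```

The decomposition is diagnostic: a weak CTR with a healthy CTOR points at the subject line and targeting (OR), while a weak CTOR with a healthy OR points at the content itself.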
Unsub – Unsubscribe Rate. This metric captures the perceived relevance and marketing pressure. Benchmark should be below <0.2–0.5%.
Unsub = Unsubscribes / Delivered emails
Spam – Spam Complaint Rate. It indicates a reputation and deliverability risk, so best practice is to keep it below 0.1%.
Spam = Spam complaints / Delivered emails
Reac – Reactivation Rate. It quantifies the effectiveness of win-back reactivation campaigns. Benchmark: >2% is standard and >5% is good.
Reac = Previously inactive contacts who became active again / Targeted inactive base
Incrementality for CRM & MAP. Holdouts are the most accessible causal technique for CRM and MAP measurement because they can be consistently and permanently controlled. Setting up holdouts for email and push/SMS by program across trial onboarding, mid-funnel nurtures, expansion, and renewal reminders helps quantify incremental revenue and incremental engagement per program. You can also apply uplift modeling to dormant account reactivation programs.
Tier 3: Customer Experience & Service Quality
Tier 3 closes the loop by measuring how customers actually experience your journeys in practice. It’s not about brand image; it’s about the concrete quality of each interaction—real customer experience (CX), not perception by proxy. For navigation, the typical drill-down order is interaction > solution > customer segment > channel > geo.
CSAT – Customer Satisfaction Score. This score measures the immediate satisfaction after a specific interaction (ticket, purchase, feature use). Benchmark: the score should be at least 4 out of 5.
CSAT = Σ satisfaction ratings / Number of responses
CES – Customer Effort Score. This other score gauges the perceived ease of completing a task like onboarding, support, or a purchase. Lower effort generally correlates with higher loyalty.
CES = Σ effort ratings / Number of responses
Incrementality for Customer Experience & Service Quality. Lastly, you can apply causal inference techniques in this section by starting with A/B tests comparing self-service vs. human support on low-complexity tickets to measure impact on CSAT, CES, and operational levers such as resolution time and cost-to-serve (core drivers of the Efficiency pillar). You can then run Difference-in-Differences (DiD) on product or support-process changes when randomization isn’t feasible, and use holdout groups on nurturing and onboarding emails to isolate the net effect of CX communications on satisfaction, effort, and downstream engagement.
Tier 4: Engagement Scoring & Models
The first three tiers tell you what is happening: who adopts, who engages, and how your journeys perform across media, onsite, CRM, and support. The next question is executional: who should we engage, when, and how, to lift those engagement KPIs with minimal waste. This is where engagement-focused scoring and models come in. They are not KPIs in themselves; they are decision layers that route attention, budget, and messaging to the users and accounts where engagement is most likely to move.
First, behavioral engagement scoring translates raw interaction signals into a prioritization framework: Intention Score, Readiness Score, and Qualification Score. The goal is to assign each user or account a probability or propensity to engage now or soon, then use that signal to orchestrate journeys and manage channel pressure (Next Best Journey model).
Intention Score – Are they buying now? This score estimates short-term purchase or action intent based on hot signals such as a recent visit to high-intent pages (pricing, demo, product detail, comparison pages), online quote starts, trial sign-ups, “Talk to Sales” clicks, but also high-intent search queries on branded or category terms. The score takes into account the recency and frequency of these signals and can be calibrated on a 0–100 scale. It is typically trained to predict a near-term action such as a demo request, opportunity creation, or first key event (EVC). Use cases here are plural: prioritising Search, remarketing, and sales follow-up on high-intent leads; triggering short, high-pressure journeys for users with strong current interest; and de-prioritising or pausing spend on low-intent segments where incremental lift is low.
Readiness Score – Are they mature enough to convert? This other score measures the medium-term maturity of a profile: how close it is to the profiles that typically adopt and engage deeply. Inputs commonly include the lookalike similarity to high-value and high-engagement customers, the product fit and solution coherence, long-term engagement signals such as content consumed (CIR), events attended, and feature usage (FAR & UF), the lifecycle stage, and finally any prior response to marketing and sales. The score is again expressed on a 0–100 scale and is usually trained to predict mid-term conversion, expansion, or sustained usage (RET_D30, EVC, or downstream opportunities). Use cases for the readiness score are more mid- to long-term and strategic than the intention score’s. They involve deciding who to nurture vs. who to push harder toward sales, personalising messaging (education vs. comparison vs. pricing), and aligning channel efforts around the most promising accounts.
Qualification Score – Is this lead worth the sales effort? This score combines several other scores: Intention, Readiness, and structural value signals such as the Ideal Customer Profile (ICP) fit and the Customer Lifetime Value (CLV), into a single commercial quality indicator. Its typical structure is as follows:
Qualification Score = f(Intention, Readiness, ICP Score, CLV segment)
It is not a pure engagement KPI but a decision score: which leads or accounts go to sales, which stay in nurture, and which are deprioritised because they are unlikely to engage or are structurally low value. The Qualification Score comes with three practical development use cases. First, lead routing and SLAs: which leads go to Sales Development Representatives (SDRs) and Account Executives (AEs), and within what response time. Second, capacity planning: aligning sales capacity with lead quality and volume. Finally, a trial-and-error feedback loop to continuously improve how well Qualification correlates with EVC, CVtofu, pipeline, and win-rate.
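The f(...) structure above is commonly implemented as a weighted blend of the sub-scores plus a routing threshold. A minimal sketch; the weights, thresholds, and routing labels are illustrative assumptions to be calibrated against pipeline and win-rate data, not a prescribed formula:

```python
# Qualification Score sketch: weighted blend of 0-100 sub-scores.
# Weights, thresholds, and routing labels are illustrative assumptions.

def qualification_score(intention: float, readiness: float,
                        icp_fit: float, clv_segment: float) -> float:
    """Blend behavioral and structural 0-100 scores into one indicator."""
    weights = {"intention": 0.35, "readiness": 0.25, "icp": 0.25, "clv": 0.15}
    return (weights["intention"] * intention
            + weights["readiness"] * readiness
            + weights["icp"] * icp_fit
            + weights["clv"] * clv_segment)

def route(score: float) -> str:
    """Decision layer: sales hand-off, nurture, or deprioritise."""
    if score >= 70:
        return "sales"
    if score >= 40:
        return "nurture"
    return "deprioritise"

lead = qualification_score(80, 70, 90, 60)  # 28 + 17.5 + 22.5 + 9 = 77.0
decision = route(lead)                      # "sales"
```

The closing feedback loop mentioned above is exactly about these parameters: as you observe which routed leads actually produce pipeline and wins, you re-fit the weights and move the thresholds.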
NBJ – Next Best Journey Model. Once scoring is in place, you need a decision layer to operationalise it. This model answers the following question: given this user/account’s engagement scores, history, and context, what is the next best interaction we should trigger, if any? To answer it, inputs typically include engagement KPIs (EVC, KECs, CIR, CVtofu, DR, CTR_email) and the previous behavioral scores (Intention, Readiness, Qualification), and combine them with channel eligibility (email, push, in-product, paid media, sales touch) and guardrails (frequency caps, compliance, do-not-contact lists). By leveraging these inputs, the model can recommend journey assignment (heavy onboarding, mid-funnel nurture, sales assist, reactivation, …) with a channel and timing choice (in-app prompt now, email in 2 days, call task this week, …), and conversely recommend clear suppression decisions like no additional touch for low-intent, low-readiness accounts. The NBJ layer directly influences Tier 2 and Tier 3 engagement KPIs and, through them, indirectly improves Tier 1 outcomes such as NPS, loyalty, and long-term market performance.
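An NBJ layer often starts as an explicit policy before any model is trained. A minimal rule-based sketch; the journey names, score thresholds, and guardrail fields are illustrative assumptions, and a production system would learn the branch conditions from experiments:

```python
# Rule-based Next Best Journey sketch. Journey names, thresholds, and
# guardrail fields are illustrative assumptions.

def next_best_journey(intention: float, readiness: float,
                      touches_this_week: int, do_not_contact: bool) -> str:
    """Return the next journey to trigger for one user/account, if any."""
    # Guardrails first: compliance and frequency caps override everything.
    if do_not_contact or touches_this_week >= 3:
        return "suppress"
    if intention >= 70:
        return "sales_assist"        # hot signals -> short, high-pressure journey
    if readiness >= 60:
        return "mid_funnel_nurture"  # mature profile, not yet in-market
    if intention < 30 and readiness < 30:
        return "suppress"            # low expected lift: no additional touch
    return "onboarding"              # default for everyone in between

hot_lead = next_best_journey(85, 40, 1, False)   # "sales_assist"
cold_lead = next_best_journey(20, 20, 0, False)  # "suppress"
```

Putting guardrails at the top of the function is the design point: no score, however high, should be able to override a do-not-contact flag or a frequency cap.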
Incrementality for Engagement Models & Scoring. As with channels and journeys, engagement scores and models must be held to a causal standard. The goal is to demonstrate that applying these scores and NBJ policies lifts engagement KPIs versus a business-as-usual baseline. The priority would be to perform score-on vs. score-off experiments to estimate incremental lift on EVC, RET_Dx, CVtofu, DR/CTR_email, or RE. Then, vary score thresholds (top 10% vs. top 30% for instance) while holding the NBJ logic constant to find the sweet spot between incremental engagement and the downsides of high-pressure journeys (unsubscribe, fatigue, CX degradation). Next, test the NBJ model itself: within a defined segment (like high Intention or medium Readiness), run Journey A vs. Journey B and compare both conversion and retention. Finally, once you’ve accumulated repeated experiments, train uplift models on engagement programs to refine who should receive which journey—maximizing incremental engagement while minimizing pressure.