“All models are wrong, but some are useful.” Because no metric or attribution model is perfect, we need to decide which models to trust over others. Each way of measuring the success of our marketing efforts paints a different picture. To prioritize our metrics for actionable insights, we will use the BEATS framework:
Parameter | Business Metrics | Experiments | Analysis | Tracking | Surveys |
---|---|---|---|---|---|
Measurement Method | Marketing Efficiency | Incrementality Testing | Media Mix Modeling (MMM) | Multi-Touch Attribution (MTA) | Self-Reported Attribution |
Description | Measures the overall efficiency of marketing efforts to drive revenue. | Measures the lift or impact of a specific marketing activity by comparing it against a control group that was not exposed to the marketing intervention. | A statistical analysis that evaluates the impact of different marketing channels on overall performance, considering external factors like seasonality, economic trends, etc. | Tracks and assigns credit to multiple marketing touchpoints along the user journey. | A method of capturing the channel that was the most influential (in the customer’s mind) in driving them to purchase. |
Granularity | Company level | User- or aggregate-level / campaign or channel | Aggregate-level / channel | User-level / single conversion | Channel level |
Historical Data Required | Some / depends on length of sales cycle | Typically ~13 months to account for seasonality | Multiple years | Some / depends on length of sales cycle | Some |
Seasonality Considered | No | Not inherently, but seasonality effects can be isolated through the experimental design. | Yes, seasonality is a key factor considered in modeling. | No | No |
Requires User Level Data (Customer 360) | No | Yes (for user-level experiments), No (for aggregate experiments) | No | Yes | Yes |
Requires Experiments | No, it uses company level data such as revenue and marketing costs | Yes, controlled experiments (e.g., A/B tests, holdouts) are crucial for measuring true incremental impact. | No, it relies on statistical modeling rather than controlled experiments. | No, it uses observed data from multiple channels | No, it uses self-reported data captured on forms. |
Reporting Frequency | Typically monthly | Varies based on the experiment duration, often after each campaign or test | Typically quarterly or annually (depends on modeling efforts) | Near real-time or as soon as data is available | Near real-time or as soon as data is available |
Pros | • Provides insights on how effective current marketing efforts are<br>• Directly tied to business/financial metrics<br>• Provides an overall scorecard for marketing | • Directly measures the causal impact of marketing campaigns<br>• Provides clear insights into the actual value of a marketing tactic<br>• Can isolate the lift effect and avoid wasted spend | • Considers broader factors like seasonality, economic changes, and external trends<br>• Does not require user-level data, making it easier to implement across multiple channels | • Can look at both channel- and campaign-level data<br>• Quick feedback loops<br>• Granular insights into user journeys | • Can include offline channels<br>• Allows users to provide direct input on what influenced them the most<br>• Relatively easy to capture and report |
Cons | • Can show if marketing is effective, but doesn’t show what to change if it isn’t<br>• It is a lagging measure that shows performance from the past few months/year | • Requires controlled experiments, which can be expensive and time-consuming<br>• Only measures lift for individual campaigns, not holistic performance<br>• Experimental setups can be complex and prone to errors<br>• May lack accuracy in non-digital channels | • Less granular; insights are more strategic than tactical<br>• Results take longer to produce and depend on significant historical data<br>• May not provide timely feedback for campaign adjustments<br>• Subject to bias from model assumptions | • Doesn’t show causation<br>• Easy to misinterpret<br>• Difficult to implement across channels that don’t have user-level data<br>• Can lead to all channels/campaigns appearing to work | • Relies on users answering accurately<br>• Only allows one value (doesn’t take multiple channels into account)<br>• Biased by timing (users lean toward recent touchpoints)<br>• Doesn’t show data on specific campaigns |
How to Best Use | Evaluating marketing effectiveness at a high-level. Use as a scorecard to know if major changes should be made vs incremental improvements or optimizations. | Determining whether a specific campaign or tactic is driving incremental growth or just cannibalizing existing demand. Often used when there are many touchpoints which makes it difficult to determine true causation. | Long-term strategic decisions like budget allocation across channels and assessing the impact of external factors (e.g., economy, competitor actions). | Optimizing digital marketing campaigns and budgets based on user journey insights. Works well in digital-only environments (e.g., paid search, display ads). | Works well with leads passed to sales. Use self-reported attribution in conjunction with MTA to see how much they agree. |
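To make the incrementality testing described in the Experiments column concrete, here is a minimal sketch of a holdout-test lift calculation. The group sizes and conversion counts are hypothetical, purely for illustration:

```python
# Illustrative sketch of an incrementality (holdout) test.
# All figures below are made up; a real test also needs proper
# randomization and up-front sample-size planning.
from math import sqrt

def incremental_lift(treat_conv, treat_n, control_conv, control_n):
    """Return (relative lift, z-score) for a two-proportion holdout test."""
    p_t = treat_conv / treat_n        # treatment (exposed) conversion rate
    p_c = control_conv / control_n    # control (holdout) conversion rate
    lift = (p_t - p_c) / p_c          # relative incremental lift
    p_pool = (treat_conv + control_conv) / (treat_n + control_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / control_n))
    z = (p_t - p_c) / se              # standardized difference in rates
    return lift, z

lift, z = incremental_lift(treat_conv=260, treat_n=10_000,
                           control_conv=200, control_n=10_000)
print(f"lift = {lift:.0%}, z = {z:.2f}")
```

A z-score above roughly 1.96 suggests the lift is statistically significant at the 95% level; below that, the observed difference may just be noise.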
Type of Metric | Question | KPI(s) | Data Source(s) | Changes if KPI Missed |
---|---|---|---|---|
Business Metrics | Is our marketing efficient? | • Marketing Efficiency = Revenue / CAC<br>◦ CAC includes marketing/advertising spend, marketing/sales salaries, martech, etc. (anything associated with acquiring customers)<br>• Ideally we would use CAC IRR as our marketing efficiency metric, but it is harder to calculate.¹ | Financial data modeled in Power BI | Evaluate channels and campaigns using the other metrics below to reallocate spend from low-performing to high-performing |
Experiments | What is the incremental impact of this campaign/channel/marketing effort? | Depends on the experiment. The following metrics could be evaluated by comparing the experiment and control groups:<br>• Conversion rate<br>• Revenue<br>• Engagement rate<br>• Pipeline generated<br>• Incremental customer acquisition cost<br>• Incremental ROI | Depends on the experiment.<br>• Adobe Analytics<br>• Salesforce<br>• Financial data<br>• Purchase data | Reallocate resources depending on the results of the experiment |
Analysis | Which marketing channels are most effective in driving sales, and how much should be invested in each channel to maximize return on investment? | • ROI per channel (revenue per dollar spent)<br>• Incremental revenue | Media Mix Modeling analysis using the following data:<br>• Business outcomes: data on sales, revenue, or conversions<br>• Marketing activities: data on spend, impressions, or clicks | Reallocate resources to better-performing channels |
Tracking | Where do our qualified leads come from? (First Touch) | • Number of qualified leads grouped by Lead Source | • SF Leads report grouped by Lead Source | Reallocate resources to better-performing channels |
Tracking | Where do our qualified leads come from? (Last Touch) | • Number of qualified leads grouped by Lead Source Latest | • SF Leads report grouped by Lead Source Latest | Reallocate resources to better-performing channels |
Tracking | How well do our leads from each source convert into customers? (First Touch) | • Number of Leads / Number of Closed Won Opportunities<br>◦ Grouped by Lead Source | • SF custom report | • Change the journey for low-performing sources<br>◦ Change which forms auto-MQL<br>◦ Change messaging<br>◦ Change marketing/sales touchpoints<br>• Reallocate resources to better-performing channels |
Tracking | How well do our leads from each source convert into customers? (Last Touch) | • Number of Leads / Number of Closed Won Opportunities<br>◦ Grouped by Lead Source Latest | • SF custom report | • Change the journey for low-performing sources<br>◦ Change which forms auto-MQL<br>◦ Change messaging<br>◦ Change marketing/sales touchpoints<br>• Reallocate resources to better-performing channels |
Tracking | Where does our pipeline come from? (First Touch) | • Pipeline grouped by Opportunity Source | • SF Opportunity report grouped by Opportunity Source | Reallocate resources to better-performing channels |
Tracking | Where does our pipeline come from? (Last Touch) | • Pipeline grouped by Opportunity Source Latest | • SF Opportunity report grouped by Opportunity Source Latest | Reallocate resources to better-performing channels |
Tracking | From this lead gen campaign, how much pipeline did we generate, and how efficiently did we generate it? | • Pipeline Generated with Primary Campaign Source (or Campaign Influence)<br>◦ How is Pipeline defined? Opportunities? Certain opportunity stages?<br>• Campaign Cost / Pipeline Generated | • SF Opportunities / Campaigns reports | Change messaging/creative, budget, or channels |
Tracking | How much revenue can we tie to this campaign? | • Offline<br>◦ Revenue attributed to SF Campaign<br>• Online<br>◦ Revenue attributed to UTM_Campaign | • Primary Campaign Source on an opportunity; for multi-touch, we will need to use custom Campaign Influence<br>• Revenue attribution in web analytics (need to choose an attribution model) | Change messaging/creative, budget, or channels |
Tracking | How effective are our campaigns at generating revenue? | • Offline<br>◦ Revenue attributed to SF Campaign / Campaign Cost<br>• Online<br>◦ Revenue attributed to UTM_Campaign / Campaign Cost | • Primary Campaign Source on an opportunity; for multi-touch, we will need to use custom Campaign Influence<br>• Revenue attribution in web analytics (need to choose an attribution model) | Change messaging/creative, budget, or channels |
Tracking | How much revenue can we tie to each channel? | • Offline<br>◦ Revenue attributed to SF Campaign (grouped by Channel/Source)<br>• Online<br>◦ Revenue attributed to UTM_Source | • Primary Campaign Source on an opportunity; for multi-touch, we will need to use custom Campaign Influence<br>◦ How will campaigns be grouped?<br>• Revenue attribution in web analytics (need to choose an attribution model) | • Change messaging/creative, budget, or channels<br>• Reallocate resources to better-performing channels |
Tracking | How effective are our channels at generating revenue? | • Offline<br>◦ Revenue attributed to SF Campaign / Channel Cost (grouped by Channel/Source)<br>• Online<br>◦ Revenue attributed to UTM_Source / Channel Cost | • Primary Campaign Source on an opportunity; for multi-touch, we will need to use custom Campaign Influence<br>• Revenue attribution in web analytics (need to choose an attribution model) | • Change messaging/creative, budget, or channels<br>• Reallocate resources to better-performing channels |
Surveys | Which channels influence our customers the most to make a purchase? | Revenue attributed to the “How did you hear about us?” question on Closed Won opportunities | “How did you hear about us?” field on opportunities | Reallocate resources to better-performing channels |
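The Business Metrics row defines Marketing Efficiency as Revenue / CAC, where CAC rolls up everything associated with acquiring customers. A minimal sketch of that arithmetic, using hypothetical cost categories and figures (the real inputs come from the financial data modeled in Power BI):

```python
# Illustrative sketch of the Marketing Efficiency metric (Revenue / CAC).
# Cost categories and amounts below are hypothetical examples.

def marketing_efficiency(revenue, acquisition_costs):
    """Revenue generated per dollar of total customer-acquisition cost."""
    total_cac = sum(acquisition_costs.values())
    return revenue / total_cac

costs = {
    "ad_spend": 400_000,   # marketing/advertising spend
    "salaries": 350_000,   # marketing and sales salaries
    "martech": 50_000,     # marketing technology / tooling
}
print(marketing_efficiency(revenue=2_000_000, acquisition_costs=costs))  # → 2.5
```

A ratio of 2.5 means every dollar spent acquiring customers returned $2.50 in revenue; tracked monthly, the trend matters more than any single value.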
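The Tracking rows above lean on first-touch and last-touch attribution (Lead Source vs. Lead Source Latest). A toy sketch of how those models, plus a linear multi-touch variant, split credit across a user journey; the channel names and revenue figure are hypothetical:

```python
# Toy multi-touch attribution over an ordered list of touchpoints.
# Channel names and the revenue amount are hypothetical examples.
from collections import defaultdict

def attribute(journey, revenue, model="linear"):
    """Split revenue across touchpoints under a simple attribution model."""
    credit = defaultdict(float)
    if model == "first_touch":
        credit[journey[0]] += revenue       # all credit to the first touch
    elif model == "last_touch":
        credit[journey[-1]] += revenue      # all credit to the last touch
    elif model == "linear":
        for channel in journey:             # equal credit to every touchpoint
            credit[channel] += revenue / len(journey)
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(credit)

journey = ["paid_search", "email", "webinar", "email"]
print(attribute(journey, 1000, "first_touch"))  # all credit to paid_search
print(attribute(journey, 1000, "linear"))       # 250 per touch; email appears twice
```

Running all three models on the same journeys is a quick way to see how sensitive channel-level conclusions are to the attribution choice, which is one reason the framework pairs MTA with self-reported attribution.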