Reference · Glossary
Attribution glossary.
Plain-language definitions of the vocabulary we use on audits, rebuilds, and strategy calls. Bookmark it. Send it to the team member who keeps asking what EMQ is.
Attribution
The practice of assigning credit for a conversion to one or more marketing touchpoints.
Attribution covers the methods, models, and infrastructure used to decide which channel, campaign, ad, or interaction caused a customer to convert. Different attribution models distribute the credit differently. The honest framing is that attribution is always an estimate, not a measurement.
See also: Multi-Touch Attribution · Marketing Mix Modelling · Last-click attribution
Multi-Touch Attribution (MTA)
The family of attribution models that distribute conversion credit across multiple touchpoints on the buyer journey.
MTA depends on stitching the same user across sessions, devices, and platforms. It worked well in the third-party cookie era and degraded sharply after iOS 14.5. In-platform MTA inside Meta or Google Ads is still useful; cross-platform MTA in third-party tools rarely is in 2026.
See also: Attribution · iOS 14.5 · Path stitching
Marketing Mix Modelling (MMM)
Statistical modelling that estimates each channel's contribution to revenue using aggregate spend and outcome data.
MMM does not require user-level tracking. It infers contribution from spend variance, time series, and external controls. It is the right tool for strategic spend allocation at scale, the wrong tool for tactical decisions, and rarely justified below ~$250K monthly ad spend.
See also: Attribution · Robyn · Meridian · Saturation curve
Conversions API (CAPI)
Meta's server-to-server endpoint for sending conversion events that bypass browser-side blocking.
CAPI receives events directly from your server, deduplicates against the browser pixel, and lets you attach hashed first-party data. Properly configured, it lifts Meta's Event Match Quality from the 3.0–5.0 band into the 7.0–8.5 band, recovering attributable revenue lost to ad blockers and Safari ITP.
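A minimal sketch of the event shape, assuming Meta's documented CAPI schema (SHA-256-hashed `em` field, a shared `event_id` used for deduplication against the browser pixel). The function and order ID are illustrative, not a production sender:

```python
import hashlib
import time

def sha256_norm(value: str) -> str:
    # Meta expects identifiers trimmed and lower-cased before SHA-256 hashing
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

def build_capi_event(order_id: str, email: str, value: float, currency: str = "USD") -> dict:
    # event_id doubles as the deduplication key shared with the browser pixel
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "event_id": order_id,
        "user_data": {"em": [sha256_norm(email)]},
        "custom_data": {"value": value, "currency": currency},
    }

event = build_capi_event("order-1042", " Jane@Example.com ", 89.00)
```

Note the normalisation step: hashing " Jane@Example.com " and "jane@example.com" must produce the same digest, or the platform cannot match the event to a user.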
See also: Server-side GTM · Event Match Quality · Deduplication
Server-side GTM
A Google Tag Manager container that runs on a server you control, parsing and forwarding events to ad platforms over server-to-server APIs.
Server-side GTM moves event traffic onto a subdomain you control (typically tag.yourdomain.com). The container deduplicates against order or lead IDs, attaches first-party data, applies consent rules, and forwards to Meta CAPI, Google Ads enhanced conversions, GA4, and other endpoints. It is the canonical fix for browser-side signal loss.
See also: Conversions API · First-party data · Consent management
Return on Ad Spend (ROAS)
Revenue attributed to ad spend, divided by that ad spend, expressed as a multiple.
ROAS is the most common ad-platform headline metric. It does not account for cost of goods, fulfilment, or overhead. A 3x ROAS is profitable for a 50 percent margin business and a loss-maker for a 20 percent margin business. The number that matters more is break-even ROAS.
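The margin point can be made concrete. A toy calculation (hypothetical figures) showing the same 3x ROAS producing opposite outcomes at different gross margins:

```python
def contribution_after_ads(revenue: float, spend: float, gross_margin: float) -> float:
    # gross profit on the attributed revenue, minus the ad spend that drove it
    return revenue * gross_margin - spend

# identical 3x ROAS ($3,000 revenue on $1,000 spend), very different outcomes
round(contribution_after_ads(3000, 1000, 0.50))  # 500: profitable
round(contribution_after_ads(3000, 1000, 0.20))  # -400: loss-making
```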
See also: Break-even ROAS · CPA · Contribution margin
Break-even ROAS
The lowest ROAS that covers gross margin and fixed overhead. Below it, every order loses money.
Calculated as 1 divided by (gross margin minus fixed overhead, both as fractions of revenue). A business with 50 percent margin and 15 percent overhead has a break-even ROAS of roughly 2.86x. Most agency dashboards do not show this number; without it, ROAS targets are arbitrary.
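The formula above as a one-liner, using the worked example from the definition:

```python
def break_even_roas(gross_margin: float, fixed_overhead_pct: float) -> float:
    # both inputs are fractions of revenue; spend below this ROAS loses money
    return 1 / (gross_margin - fixed_overhead_pct)

round(break_even_roas(0.50, 0.15), 2)  # 2.86
```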
See also: ROAS · Contribution margin · Ad Spend Calculator
Cost Per Acquisition (CPA)
The cost of acquiring one new customer or completed conversion. Ad spend divided by conversions.
CPA is the simplest ad efficiency check. For early-stage funnels with delayed conversion, CPA should be calculated against a 28-day or 90-day cohort, not the same calendar month; otherwise the number lags the spend.
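A sketch of cohort-based CPA under that approach, with hypothetical dates and spend. Only conversions inside the cohort's window count against the cohort's spend:

```python
from datetime import date, timedelta

def cohort_cpa(cohort_start: date, spend: float,
               conversions: list[date], window_days: int = 28) -> float:
    # count only conversions landing within the cohort's attribution window,
    # rather than whatever fell inside the same calendar month
    cutoff = cohort_start + timedelta(days=window_days)
    in_window = sum(1 for d in conversions if cohort_start <= d < cutoff)
    return spend / in_window if in_window else float("inf")

# $5,000 of January-cohort spend; the February conversion falls outside the window
cohort_cpa(date(2026, 1, 1), 5000.0,
           [date(2026, 1, 10), date(2026, 1, 25), date(2026, 2, 20)])  # 2500.0
```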
See also: ROAS · Customer Lifetime Value
iOS 14.5
The Apple iOS update in April 2021 that introduced App Tracking Transparency, breaking the cross-app identifiers Meta and most ad platforms relied on for attribution.
ATT requires apps to ask users for permission to track them across other apps and websites. Most users decline. The downstream effect was a sharp drop in deterministic attribution accuracy on Meta and similar platforms, partial replacement by modelled conversions, and a permanent shift toward server-side measurement.
See also: Multi-Touch Attribution · Modelled conversions · Conversions API
Intelligent Tracking Prevention (ITP)
Apple's Safari feature that limits cross-site tracking by capping cookie lifetimes and blocking known trackers.
ITP shortens client-side cookie lifetimes (often to 7 days) and blocks third-party cookies entirely. Without server-side measurement, Safari traffic looks anonymous to most attribution stacks within a week of the first visit.
See also: First-party data · Server-side GTM
Event Match Quality (EMQ)
Meta's score (0–10) for how well your conversion events can be matched back to a Meta user.
EMQ rewards passing more identifying parameters with each event: hashed email, hashed phone, FBP cookie, FBC click ID, IP address, user agent. Higher EMQ means better attribution and better algorithmic optimisation. Most untouched setups score in the 3–5 range; a clean server-side rebuild typically lifts to 7–8.5.
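Meta's actual EMQ scoring is proprietary, so the following is only an illustration of parameter coverage: it counts which of the match parameters listed above an event carries. All values here are invented placeholders; a browser-only event typically carries far fewer than a server-enriched one:

```python
# match parameters Meta documents for CAPI user_data (names from the schema)
MATCH_PARAMS = ["em", "ph", "fbp", "fbc", "client_ip_address", "client_user_agent"]

def match_params_present(user_data: dict) -> list[str]:
    # return which match parameters this event actually carries a value for
    return [p for p in MATCH_PARAMS if user_data.get(p)]

browser_only = {
    "fbp": "fb.1.1700000000.123",
    "client_user_agent": "Mozilla/5.0",
}
server_side = {
    **browser_only,
    "em": "hashed-email-placeholder",
    "ph": "hashed-phone-placeholder",
    "fbc": "fb.1.1700000000.AbCd",
    "client_ip_address": "203.0.113.7",
}

len(match_params_present(browser_only))  # 2 of 6 parameters
len(match_params_present(server_side))   # 6 of 6 parameters
```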
See also: Conversions API · First-party data
First-party data
Data the customer has provided to you directly: email, phone, account info, purchase history.
First-party data is the durable signal in a privacy-first measurement stack. Hashed and passed to ad platforms via server-side endpoints, it powers attribution, audience-building, and lookalike modelling without depending on third-party cookies.
See also: Conversions API · Customer Match
Deduplication
The process of matching a single conversion event sent through multiple channels (browser pixel, server-side CAPI, offline upload) so the platform counts it once.
Deduplication usually keys on a stable event ID generated server-side and passed to both the browser pixel and the CAPI call. Without it, a single purchase can be counted twice, inflating reported revenue and ROAS.
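A minimal sketch of event-ID-keyed deduplication, with hypothetical events. The browser pixel and CAPI copies of the same purchase share an ID, so exactly one survives:

```python
def deduplicate(events: list[dict]) -> list[dict]:
    # keep the first event seen for each event_id, drop later duplicates
    seen, unique = set(), []
    for e in events:
        if e["event_id"] not in seen:
            seen.add(e["event_id"])
            unique.append(e)
    return unique

events = [
    {"event_id": "order-1042", "source": "browser_pixel", "value": 89.0},
    {"event_id": "order-1042", "source": "capi",          "value": 89.0},
    {"event_id": "order-1043", "source": "capi",          "value": 40.0},
]
deduplicate(events)  # 2 events survive; reported revenue 129.0, not 218.0
```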
See also: Conversions API · Server-side GTM
View-through conversion
A conversion attributed to an ad the user saw but did not click within a defined window.
View-through windows are short on most platforms (Meta defaults to one day). They are useful for upper-funnel campaigns and easy to abuse for justifying spend. Always check view-through revenue separately from click-through revenue when evaluating channel performance.
See also: Attribution windows · Attribution
Modelled conversions
Conversions that the ad platform estimates statistically because the user-level signal was missing or insufficient.
Both Meta and Google now report a mix of observed and modelled conversions in their dashboards. The modelled portion is opaque; reconciling against your CRM is the only way to verify whether the platform's estimate is in the right ballpark.
See also: iOS 14.5 · Reconciliation
Saturation curve
The diminishing-returns curve that describes how additional ad spend translates into incremental revenue per channel.
Most channels follow a concave response curve. Below the inflection point, additional spend produces near-linear lift; above it, lift flattens and eventually turns negative as audience saturation drives up costs. MMM and incrementality testing both attempt to estimate the saturation point.
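The concave shape can be sketched with a Hill-type response curve, a common functional form for saturation in MMM tools (Robyn among them). The parameters here are invented for illustration:

```python
def hill_response(spend: float, max_revenue: float,
                  half_sat: float, shape: float) -> float:
    # Hill-type saturation: revenue approaches max_revenue as spend grows;
    # half_sat is the spend level that produces half the maximum response
    return max_revenue * spend**shape / (half_sat**shape + spend**shape)

# marginal return flattens as spend rises past the half-saturation point
[round(hill_response(s, 100_000, 50_000, 1.0)) for s in (10_000, 50_000, 200_000)]
# → [16667, 50000, 80000]: quadrupling spend from 50K only adds 60% more revenue
```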
See also: Marketing Mix Modelling · Incrementality
Incrementality
The lift in conversions caused by an ad, above what would have happened without it.
Incrementality is the gold standard for measurement. It is established through holdout tests (geo-based or audience-based), where one group sees the campaign and a matched group does not. The difference is the incremental contribution. It is more reliable than any modelled attribution number.
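The holdout arithmetic is simple. A sketch with hypothetical geo-test numbers: compare conversion rates between the exposed group and a matched holdout, then scale the difference to the exposed population:

```python
def incremental_lift(test_conv: int, test_pop: int,
                     holdout_conv: int, holdout_pop: int) -> float:
    # difference in conversion rate between exposed and matched holdout groups,
    # scaled to the exposed population
    test_rate = test_conv / test_pop
    holdout_rate = holdout_conv / holdout_pop
    return (test_rate - holdout_rate) * test_pop

# 1.2% conversion in exposed geos vs 1.0% in the holdout
incremental_lift(1200, 100_000, 500, 50_000)  # ≈ 200 incremental conversions
```

Everything above the holdout baseline is incremental; the other 1,000 conversions in the exposed group would likely have happened anyway.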
See also: Geo holdout · Marketing Mix Modelling
Path stitching
The process of recognising the same user across multiple sessions, devices, and platforms to assemble their conversion path.
Path stitching depends on identifiers that persist across contexts: third-party cookies, device IDs, deterministic logins. Almost all of these have been progressively cut off since 2017. Without reliable stitching, MTA models cannot operate at the precision their dashboards imply.
See also: Multi-Touch Attribution · Identity resolution
Robyn
Meta's open-source Marketing Mix Modelling library, designed to be operated in-house with one capable analyst.
Robyn fits Bayesian regression models to historical channel spend and revenue, returning contribution estimates and saturation curves per channel. Free to use, requires roughly 104 weekly observations to converge well, and produces inspectable results that hold up to finance scrutiny.
See also: Marketing Mix Modelling · Meridian
Meridian
Google's open-source Marketing Mix Modelling library, comparable to Meta's Robyn.
Meridian is Google's answer to Robyn. Same general approach, same data requirements, same in-house operating model. Both are credible and either is preferable to a black-box vendor MMM in the same price range.
See also: Marketing Mix Modelling · Robyn