If you ran paid acquisition on Meta in 2021, you remember the week the numbers stopped making sense. Conversion volume on the dashboard fell off a cliff. Cost per acquisition spiked. The bidding algorithm started doing strange things, recommending budgets it had been talking you out of two months earlier. None of it matched what your CRM was telling you, and the support article from Meta gave you a checklist that did not, in any meaningful sense, fix the problem.
Five years on, we still audit Australian businesses where the attribution layer has never fully recovered. The dashboard looks plausible in isolation. The CFO has stopped trusting it. Marketing knows there's a gap but cannot tell you how big it is. This piece is for those operators. It covers what actually broke in 2021 at the data layer, what didn't get fixed by following the official guidance, and what we do about it on engagements today.
What changed in iOS 14.5
The headline change is well known: from April 2021, Apple's App Tracking Transparency prompt required iOS apps to ask permission before tracking a user across other apps and websites. The user could say no, and a meaningful number of users did. In the first six months, opt-in rates settled somewhere between 20% and 30% globally; in Australia they sat closer to the lower end.
The less well-known consequences sit at the data layer:
- The IDFA went dark for the majority of iOS users. Meta's targeting and attribution had relied heavily on this device-level identifier. Without it, the deterministic match between an ad impression and a conversion event from the same device fell from near-perfect to a coin flip.
- Browser-side cookies were already degrading. Safari's Intelligent Tracking Prevention had been restricting cross-site tracking cookies since 2017, and from 2019 it capped JavaScript-set first-party cookies, the kind Meta's pixel writes, at seven days. With the IDFA also going dark on the in-app side, Meta lost the ability to bridge the in-app and web journeys for the same person.
- Meta's Aggregated Event Measurement protocol replaced the deterministic event stream for opted-out iOS traffic. AEM caps you at eight conversion events per domain, prioritised in a strict order. Anything past the eighth event, or anything outside that priority order, gets dropped at the platform level. Most accounts we see still have AEM configured the way the agency set it up four years ago.
- Modelled conversions filled the gap, with caveats. When Meta cannot deterministically match a conversion to an ad, it estimates one statistically. Modelled conversions are useful, but they are also a black box, and if your event setup is wrong upstream, the model is being trained on bad signals.
Why the official advice fell short
Meta's official guidance in 2021 was, broadly: configure AEM, prioritise your eight events, install the Conversions API, and the platform will recover. Some of this is correct. Most accounts we audit have done the first two. Very few have done the third properly.
Three failure modes show up repeatedly:
1. CAPI installed, but client-only data passed through
The most common pattern. CAPI is installed (often via a Shopify or WooCommerce plugin), and conversion events are firing server-side. But the data being passed is browser-only: the FBP cookie, the FBC click ID, maybe a hashed email if the customer logged in. There is no first-party data layer being assembled before the call goes out, so Meta's Event Match Quality score sits in the 3 to 5 range when it should be 7 or higher.
The fix is to assemble a first-party data record at the moment of conversion, with at least: hashed email, hashed phone, FBP, FBC, IP, user agent, and a stable external ID like an order number. Then pass that to CAPI. Match quality recovers within two to four weeks of clean data flowing.
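To make that concrete, here is a minimal sketch in Python of the record we assemble before the server-side call goes out. The user_data field names follow Meta's Conversions API parameters; the order and request objects, helper names, and the AUD currency are illustrative assumptions, not your stack.

```python
import hashlib
import time

def sha256(value: str) -> str:
    # Meta expects PII normalised (trimmed, lowercased) and SHA-256 hashed.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_capi_event(order: dict, request_meta: dict) -> dict:
    """Assemble a Purchase event carrying the full first-party record."""
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "event_id": order["order_number"],          # same ID the browser pixel sends, for dedup
        "action_source": "website",
        "user_data": {
            "em": [sha256(order["email"])],          # hashed email
            "ph": [sha256(order["phone"])],          # hashed phone
            "external_id": [sha256(order["order_number"])],
            "fbp": request_meta.get("fbp_cookie"),   # _fbp browser cookie, sent unhashed
            "fbc": request_meta.get("fbc_cookie"),   # _fbc click ID, sent unhashed
            "client_ip_address": request_meta.get("ip"),
            "client_user_agent": request_meta.get("user_agent"),
        },
        "custom_data": {"currency": "AUD", "value": order["total"]},
    }
```

The more of those fields arrive on every event, the higher match quality climbs; the browser-only setups described above are sending the last four at best.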
2. AEM configured for vanity events, not financial outcomes
Those eight event slots are precious. We routinely see them allocated to PageView, ViewContent, AddToCart, InitiateCheckout, and three flavours of micro-engagement, with Purchase last in the priority order. For an opted-out user, AEM reports only the highest-priority event that fired. If that user adds to cart and then buys, Meta only learns about the cart-add, because cart-add sits above purchase in the configuration.
The fix is mechanical: put financial outcomes at the top of the priority order. Purchase first, Lead second, everything else after. Then trim the bottom of the list to the events you actually use.
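To see why the order matters, here is a toy sketch of the truncation rule described above. This is not Meta's code, just the reporting logic, with hypothetical priority lists.

```python
# Hypothetical priority orders; index 0 is the highest-priority slot.
VANITY_CONFIG = ["PageView", "ViewContent", "AddToCart", "InitiateCheckout", "Purchase"]
FIXED_CONFIG = ["Purchase", "Lead", "AddToCart"]

def reported_event(session_events, priority):
    # For an opted-out user, AEM reports only the highest-priority
    # configured event that actually fired in the session.
    for event in priority:
        if event in session_events:
            return event
    return None

session = ["AddToCart", "Purchase"]             # user adds to cart, then buys
print(reported_event(session, VANITY_CONFIG))   # AddToCart: the purchase is lost
print(reported_event(session, FIXED_CONFIG))    # Purchase
```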
3. Consent banner blocking server-side calls
This one is sneaky. The team installs a consent management platform (OneTrust, Cookiebot, Iubenda) and configures it to block tracking until the user accepts. The CMP categorises CAPI calls under "analytics" rather than "marketing", or under a category the user has not granted. Server-side calls fail silently. The team sees match quality drop further, blames Meta, and goes back to spending more.
The fix is to audit the CMP configuration, classify CAPI correctly under marketing consent, and rewrite the banner copy to lift opt-in rates. The CMP audit takes a day. The opt-in rewrite takes a week of testing. We've seen consent rates lift from 58% to 71% on banner copy changes alone.
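If you want to check whether this is happening to you, the gate looks roughly like the sketch below. It assumes your CMP's consent state is passed through to the endpoint that builds the server-side event; the field names and the send_to_meta transport are illustrative, since every CMP exposes this differently.

```python
def should_fire_capi(consent_state: dict) -> bool:
    # CAPI is advertising, not analytics: gate it on marketing consent.
    # consent_state is whatever your CMP forwards with the request,
    # e.g. {"necessary": True, "analytics": True, "marketing": False}.
    return bool(consent_state.get("marketing"))

def handle_conversion(order, request_meta, consent_state):
    if not should_fire_capi(consent_state):
        # Log the suppression instead of failing silently, so the gap in
        # reported conversions can be explained rather than blamed on Meta.
        print(f"CAPI suppressed by consent for order {order['order_number']}")
        return
    event = build_capi_event(order, request_meta)  # from the sketch in failure mode one
    send_to_meta(event)                            # your CAPI transport; illustrative
```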
What an actually-recovered attribution layer looks like
If you have done the work, here are the markers we'd expect to see on an audit:
- Event Match Quality score of 7.0 or higher across Purchase events. (Anything below 5 is meaningfully impaired.)
- Server-side and client-side events deduplicated against a stable order or lead ID. Reconciliation gap with Shopify or your CRM under 5% (see the sketch below).
- AEM configuration with Purchase first in the priority order, no more than five active events.
- First-party data layer being assembled at conversion: hashed email, hashed phone, FBP, FBC, IP, user agent, external ID.
- Consent banner with opt-in rate above 65% on Australian traffic.
- Modelled conversions making up no more than 20% of total reported conversions on a 30-day window.
If you fail any one of those, the attribution layer is leaking signal and the platform is making decisions on bad data.
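For the dedup and reconciliation markers in that list, the weekly check is simple enough to sketch. It assumes you can export Meta-reported Purchase events and Shopify (or CRM) orders keyed on the same order number; the field names are illustrative.

```python
def reconciliation_gap(meta_purchases, shopify_orders):
    # Percentage of Shopify/CRM orders that never arrived in Meta as a
    # Purchase. Both sides key on the stable order number, which is also
    # the event_id that deduplicates the browser and server copies of
    # the same conversion.
    meta_ids = {p["event_id"] for p in meta_purchases}
    shopify_ids = {o["order_number"] for o in shopify_orders}
    if not shopify_ids:
        return 0.0
    missing = shopify_ids - meta_ids
    return len(missing) / len(shopify_ids) * 100

# Example: 412 Shopify orders in the window and 391 matched in Meta's
# export is a gap of roughly 5.1%, right on the threshold above.
```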
What we do on a typical engagement
The rebuild is structured. We've done it enough times that the order of operations is consistent across clients.
Week one is a read-only audit. We document the existing tag setup, the AEM configuration, the consent flow, and the reconciliation gap between Meta and the CRM. The output is a written report with a prioritised fix list.
Weeks two to four are the rebuild, in staging. Server-side GTM container deployed. CAPI configured to consume a first-party data layer. AEM priority order rewritten. Consent banner audited and reconfigured. None of this is live yet.
Week five is cutover. We run the old and new systems side-by-side for at least seven days, comparing event volume, match quality, and reconciliation against the CRM. If the new system is within 5% of the CRM (it usually is), we cut the old system off.
Weeks six to ten are stabilisation. We monitor match quality, consent rates, and modelled conversion percentages weekly. We tune AEM as the data settles. We document the operating procedures so the in-house team can run the system after we hand over.
By week ten, on a typical engagement, the client has Event Match Quality back above 7, reconciliation gap under 5%, and a Looker Studio dashboard that the CFO is willing to read out loud in meetings. The recovered attributable revenue varies by spend, but on the engagements we ran in 2024 the average was about 18% of trailing twelve-month ad spend, recoverable in year one without changing media budgets.
When the rebuild is not worth doing
For completeness, here is when we would tell you not to bother:
- Monthly ad spend below $20,000 AUD. The rebuild costs more than you'd save in attributable revenue. Get a Tracking Audit and action the fix list yourselves.
- Offer or funnel is not converting. Attribution is a measurement layer. If the underlying funnel is broken, fixing measurement makes the breakage more visible; it does not fix it. Sequence the work.
- Imminent re-platform. If you're moving from Shopify to Magento (or vice versa) in the next quarter, do the rebuild after the re-platform, not before. Otherwise you do the work twice.
The bigger point
iOS 14.5 was the first big crack in browser-and-app-based tracking. It was not the last one. Google's enforcement of similar consent rules in the EU, Apple's continued tightening of Safari ITP, and Chrome's third-party cookie deprecation (whenever it actually lands) all push in the same direction: deterministic attribution from clicks to conversions is decaying, and the only way to compensate is to assemble first-party data at the moment of conversion and feed it back into the platforms over server-side connections.
The clients we work with have stopped treating attribution as a one-off project and started treating it as an operating discipline. The measurement layer needs maintenance the same way the production database does. If yours has been left to drift since 2021, you are operating on signal that is somewhere between noisy and broken, and the bidding algorithms downstream are spending your budget accordingly.
If that's the situation, the next step is a 30-minute call. Bring two of your dashboards. We'll show you, on the call, where the numbers are diverging.
Free download · No newsletter
Want this on your own numbers?
Get the Ad Spend Calculator (the spreadsheet) emailed straight to you. Same model we run inside engagements: CPA, ROAS, contribution after overheads, scaling-headroom worksheet, CRM reconciliation tab. No newsletter, no follow-up sequence.
Written by
Andy McMaster
Founder · Profit Geeks
Andy McMaster founded Profit Geeks in 2019 after a decade running paid acquisition for Australian e-commerce and B2B operators. Specialty: server-side attribution, profit-first scaling.
More about Andy
Next step
Want this kind of work in your business?
We take two clients per quarter through the PROFIT framework. If your attribution is leaking and your reports have stopped making sense, the next step is a 30-minute call.