Case study no. 01 · Sydney · Home services

From $4.8M to $9.1M without raising the ad budget.

Twelve weeks of attribution work and offer restructuring before any media spend changed. Result: cost per booked job down 63% and an extra $4.3M in trailing twelve-month revenue.

Industry
Home services (residential)
Location
Greater Sydney, NSW
Revenue at start
$4.8M annual revenue
Engagement
12 weeks · January to March 2025

Headline result

+312%

Blended ROAS, week 12 vs. week 0

Calculated from booked-job revenue (CRM) divided by total ad spend (platform invoices). Independent of platform-reported ROAS.
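
A minimal sketch of that calculation, with illustrative revenue figures chosen to reproduce the uplift (only the $52K monthly spend is from the case study):

```python
# Blended ROAS as defined above: booked-job revenue from the CRM divided by
# total ad spend from platform invoices. Revenue figures are illustrative.

def blended_roas(crm_revenue: float, ad_spend: float) -> float:
    return crm_revenue / ad_spend

week0 = blended_roas(crm_revenue=78_000, ad_spend=52_000)    # 1.50x
week12 = blended_roas(crm_revenue=321_360, ad_spend=52_000)  # 6.18x
uplift_pct = (week12 / week0 - 1) * 100                      # ≈ +312%
```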

$4.3M
Incremental revenue, trailing twelve months

Booked-job revenue, post-engagement twelve months minus pre-engagement twelve months.

$0
Additional ad budget used

Total monthly spend held at $52K throughout.

−63%
Cost per booked job

Pre-engagement: $387 average. Post-engagement: $142 average. Booking system data.
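
The −63% follows directly from the two averages; as a quick check:

```python
# Percentage change in cost per booked job, using the averages quoted above.

def pct_change(pre: float, post: float) -> float:
    return (post - pre) / pre * 100

change = pct_change(pre=387, post=142)  # ≈ -63.3
```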

The brief

The problem.

The owner spent $52K a month across Meta, Google, and a niche tradie marketplace. Three platforms, three reports, three contradictory numbers. Sales were flat, ad spend was creeping, and no one could tell which channel was carrying the load.

The constraint.

No additional budget. The brief was explicit: get more out of the same $52K, or prove it's not possible. Decision deadline was the end of Q1.

What was broken

  1. Meta and the CRM disagreed by a factor of nineteen

    Meta's pixel counted 412 conversions in the prior 30 days. The CRM had recorded 22 booked jobs in the same window. The gap came from misconfigured events, not from Meta inflating its numbers: the pixel was counting form views, page hits, and partial submissions as conversions because the events had been set up in a hurry by an agency two years earlier.

  2. Google Ads was bidding on the wrong outcome

    Smart Bidding was optimising for 'lead' as defined by a clicked phone number, not a confirmed booking. Half the spend was chasing accidental taps.

  3. The marketplace platform looked profitable. It wasn't.

    Reported leads were 4x cheaper than on the other channels, but they closed at 6% versus 22% for Google leads, and margin per booking was 71% lower.

What changed and why

  1. Server-side rebuild of the measurement stack

    Server-side GTM. Meta CAPI with offline conversion uploads from the booking system. Google Ads enhanced conversions tied to confirmed jobs, not phone clicks. Two weeks of work, then four weeks of data collection before any optimisation decision.

  2. Offer restructured around margin tiers

    The flat 'free quote' offer was replaced with a tiered intake that filtered low-margin enquiries upstream. Bookable jobs went up 38%. Time spent on dud quotes dropped from 14 hours a week to 3.

  3. Marketplace channel cut, budget reallocated

    Once the CAC numbers were honest, the marketplace platform's true cost per booked job was $387, not the $94 it had been reporting. We killed the channel and moved the spend to Google search at $128 per booked job.
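
The offline-conversion uploads in the measurement rebuild above can be sketched roughly as follows. The pixel ID, token handling, and helper names are hypothetical; the payload shape follows Meta's documented `/{pixel_id}/events` format, and the actual HTTP POST is omitted:

```python
# Sketch: a CRM booking shaped as a Meta Conversions API event, so the
# platform optimises toward confirmed jobs rather than form views.
import hashlib
import time

def sha256_normalised(value: str) -> str:
    """Meta expects identifiers hashed with SHA-256 after trimming and lowercasing."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def booked_job_event(email: str, job_value: float, booked_at: int) -> dict:
    """One CRM booking as a Conversions API event (field names per Meta's spec)."""
    return {
        "event_name": "Purchase",             # a confirmed booking, not a form view
        "event_time": booked_at,
        "action_source": "system_generated",  # pushed from the booking system
        "user_data": {"em": [sha256_normalised(email)]},
        "custom_data": {"value": job_value, "currency": "AUD"},
    }

# "owner@example.com" and the job value are placeholders for CRM fields.
payload = {"data": [booked_job_event("owner@example.com", 2_450.0, int(time.time()))]}
# POST payload to https://graph.facebook.com/v19.0/{PIXEL_ID}/events with an
# access token — deliberately left out of this sketch.
```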

Timeline

  1. Week 1

    Audit and stack inventory. Read-only. No changes.

  2. Week 2

    Server-side GTM, CAPI, and offline conversion uploads deployed.

  3. Weeks 3 to 6

    Data collection. No spend changes. Reports stabilised.

  4. Week 7

    Offer restructure live. Margin-tiered intake replaces 'free quote'.

  5. Weeks 8 to 11

    Channel reallocation, weekly reviews, scaling rules tied to contribution margin.

  6. Week 12

    Engagement closes. Handover to internal team with documented playbook.

Anonymised at client request. Numbers are accurate. Identifying details have been removed.

Next step

Same problem? We do this twice a quarter.

If your ads are spending and your reports are arguing, the diagnostic takes about a week. The fix takes longer. Both start with a 30-minute call.