Your Station's Fundraising Attribution Model Is Lying to You
Last week, our contact at a major-market client station reached out with an awkward concern. After reviewing their Spring Drive results, they’d identified that two gifts originally attributed to digital — one for $5,000, one for $7,500 — were actually from leadership-level donors fulfilling existing pledges. Both donors had googled the station to find the donation page, clicked a search ad on the way in, and landed in the campaign report as digital conversions anyway.
They weren't wrong to move them. A donor fulfilling a pledge already committed to a major gifts officer isn't a digital conversion in any meaningful sense, and our client knew that. But in correcting one distortion, the model introduced another: those gifts vanished from digital's ledger entirely, along with any acknowledgment that a 13-cent ad impression had been some part of the path to a five-figure gift. What they said next, though, is worth sitting with:
“The general feeling I have about last-click attribution for branded search is that it’s not something I'm going to be putting a ton of meaning into. I'm assuming it's mostly capturing donors with very high existing intent to donate — not a ton of net-new donors.”
That's analytically honest — and it gets at something the numbers themselves complicate:
The same campaign recorded $131,000 in annualized revenue from $4,300 in spend.
39% of the gifts were from net-new sustaining members who weren't in the donor file before those ads ran.
Last-click correctly identified them as new. What it couldn't tell our contact — what it never can — is how many touchpoints those donors had before they converted, or what role months of sustained digital presence played in making that final click possible.
The question isn't whether digital produced those results. It's how much of what actually happened will make it into the budget conversation — and right now, for most stations, the answer is less than it should be.
What last-click attribution actually measures
Last-click attribution assigns full credit for a gift to the final touchpoint before the transaction is recorded. It's not a flawed methodology — it answers one specific question accurately: what did the donor interact with immediately before converting?
The problem is that stations are using it to answer a different question: is digital fundraising working? Those are not the same question, and treating them as equivalent produces numbers that are simultaneously defensible and incomplete.
What last-click can't see is everything that happened before that final click. The donor who encountered a mission-focused video ad six times over the previous two months, opened three drive emails, and then searched the station's name to donate isn't a search conversion. She's the cumulative product of sustained digital work that the model never recorded — because the model only looks backwards from the moment of transaction, not across the relationship that made it possible.
The two crucial jobs digital is actually doing
This is where the attribution model breaks down most consequentially, because digital fundraising is doing two distinct things at once — and last-click measures one of them imperfectly and the other not at all.
Pathway establishment: Reaching net-new prospective donors — people who wouldn't have found their way to your station without a digital prompt. A prospective listener who discovers your journalism through a promoted post, follows your page, and donates eight months later during a drive is a digital acquisition. Last-click calls it an email conversion. The pathway that made the email work is invisible.
Touchpoint accumulation: Increasing the number of meaningful contacts with people already in your donor pipeline — by an order of magnitude. Think about what a consistent digital presence actually does between drives: it keeps your mission in front of lapsed donors, warms up new subscribers before the ask arrives, and maintains presence with the engaged non-donors who are your best conversion candidates when the drive opens. None of those impressions show up in your attribution report. But they show up in your drive performance.
Running consistent digital campaigns between drives doesn't just fill a calendar — it builds the audience depth that makes the next campaign's numbers higher than the last one's. The stations that outperform during drives are almost universally the stations that were running intentional digital outreach in the months before them. Last-click can't see that relationship. It just sees the drive.
Digital has an attribution advantage most stations aren't using
It's worth noting something that often gets lost in this conversation: digital is, by leaps and bounds, the most measurable fundraising channel most stations run. Compared to on-air pledge drives, direct mail, or event-based giving, digital campaigns can track click paths, engagement sequences, and conversion events in ways that no other medium comes close to matching.
The problem isn't digital's measurability. It's that most stations aren't fully using it.
The practical floor is UTM tagging — appending tracking parameters to every single link in every digital campaign, email, and social post. It sounds basic because it is, but a surprising number of stations are feeding their own direct traffic black hole by sending people to untagged URLs. Every untagged click that results in a donation lands in direct, invisible to the model. Tag everything, and direct traffic shrinks while attributed channels grow — not because anything changed in your fundraising, but because you're finally seeing what was already happening.
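The mechanics of "tag everything" are simple enough to sketch. The function below appends `utm_source`, `utm_medium`, and `utm_campaign` parameters to a link while preserving any query string already on it; the URL and parameter values are illustrative, not your station's actual configuration.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_utm(url, source, medium, campaign):
    """Append UTM parameters to a URL, preserving any existing query string."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,      # e.g. "newsletter", "facebook"
        "utm_medium": medium,      # e.g. "email", "paid_social"
        "utm_campaign": campaign,  # e.g. "spring_drive"
    })
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), parts.fragment))

# Hypothetical donation URL, for illustration only:
tagged = add_utm("https://example.org/donate", "newsletter", "email", "spring_drive")
print(tagged)
# → https://example.org/donate?utm_source=newsletter&utm_medium=email&utm_campaign=spring_drive
```

Every link built this way lands in an attributed channel instead of the direct-traffic black hole; the same tagging can be templated into email platforms and ad managers rather than done by hand.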
The more meaningful upgrade is integrating revenue data into GA4 directly from your donation platform, which opens the door to multi-touch attribution — seeing not just the last click before a gift, but the first one too. Even that single addition, first-plus-last-click instead of last-click alone, doubles the number of touchpoints in your model. It's not a perfect picture, but it's a meaningfully less incomplete one, and it can be implemented without rebuilding anything substantial. The stations doing this are having very different budget conversations than the ones still reading a last-click report and making it mean more than it does.
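To make the first-plus-last idea concrete, here is a minimal sketch of one common variant: split each gift's credit evenly between the first and last touchpoint in the donor's path. The touchpoint list is assumed data — in practice it would come from your GA4 export — and the 50/50 split is one convention among several, not a standard your platform necessarily uses.

```python
def first_plus_last_credit(touchpoints, gift_amount):
    """Split a gift's credit evenly between first and last touchpoint.

    touchpoints: ordered list of channel names in the donor's path.
    Returns a dict of channel -> credited revenue.
    """
    if not touchpoints:
        return {}
    first, last = touchpoints[0], touchpoints[-1]
    if first == last:
        # Single-touch path: all credit to the one channel.
        return {first: gift_amount}
    return {first: gift_amount / 2, last: gift_amount / 2}

# A donor who saw a video ad months before converting on branded search:
path = ["video_ad", "drive_email", "branded_search"]
print(first_plus_last_credit(path, 120.0))
# → {'video_ad': 60.0, 'branded_search': 60.0}
```

Under last-click alone, that $120 gift would be 100% branded search; the first-touch half is exactly the pathway-establishment work the article argues is currently invisible.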
What better measurement actually looks like in practice
This isn't an argument for rebuilding your data infrastructure before next fiscal year. It's an argument for asking different questions of the data you already have.
Look at annualized sustainer value, not first-gift value. A $10 monthly gift attributed to a digital campaign isn't worth $10 — it's worth $120 in year one alone, and more if the donor upgrades or stays for several years. When you evaluate campaign ROI against first-gift revenue, you're systematically undervaluing sustainer acquisition. The contact who flagged the major gifts situation above also added an annualized column to their reporting. That column reframes the entire campaign: not as a last-click revenue number to defend, but as a sustained-value argument for why the investment was worth making.
Examine the composition of your engagement audiences over time. Your retargeting pools — video viewers, page engagers, social followers — are the audiences your digital campaigns are continuously building. What percentage of that pool is existing donors versus genuinely new contacts? The answer tells you how much pathway establishment work your campaigns are actually doing, independent of what last-click reports.
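One hedged way to quantify that composition, assuming you can match on email address (ad platforms often only accept hashed lists, so treat this as the shape of the analysis rather than a drop-in tool):

```python
def pool_composition(pool_emails, donor_file_emails):
    """Share of an engagement pool that is net-new vs. already in the donor file.

    Matching on lowercased email is an assumption for illustration;
    real retargeting pools may require hashed-list comparison instead.
    """
    pool = {e.lower() for e in pool_emails}
    if not pool:
        return {}
    donors = {e.lower() for e in donor_file_emails}
    return {
        "existing_donors": len(pool & donors) / len(pool),
        "net_new": len(pool - donors) / len(pool),
    }

# Hypothetical lists for illustration:
pool = ["a@example.org", "b@example.org", "c@example.org", "d@example.org"]
donors = ["a@example.org", "z@example.org"]
print(pool_composition(pool, donors))
# → {'existing_donors': 0.25, 'net_new': 0.75}
```

A pool that is 75% net-new contacts is direct evidence of pathway establishment, whatever the last-click report says.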
Track direct traffic trend lines during and after campaign periods. If direct traffic rises meaningfully during and after active digital campaigns, you're seeing attribution displacement in real time — digital influence converting into visits your model is calling direct. That relationship won't be precise, but it's visible, and it's something you can bring into a budget conversation.
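A simple lift ratio makes that trend line concrete: average daily direct sessions during campaign windows divided by the average outside them. The daily counts below are invented for illustration; real numbers would come from your analytics export, and a ratio above 1 is suggestive of displacement, not proof of causation.

```python
def direct_traffic_lift(daily_direct_sessions, campaign_days):
    """Ratio of average daily direct traffic inside vs. outside campaign windows.

    daily_direct_sessions: dict of date string -> direct session count.
    campaign_days: set of date strings when digital campaigns were active.
    """
    in_window = [n for day, n in daily_direct_sessions.items() if day in campaign_days]
    baseline = [n for day, n in daily_direct_sessions.items() if day not in campaign_days]
    if not in_window or not baseline:
        return None  # can't compute a lift without both periods
    return (sum(in_window) / len(in_window)) / (sum(baseline) / len(baseline))

# Illustrative daily counts, not real station data:
sessions = {"2025-03-01": 100, "2025-03-02": 110, "2025-03-10": 150, "2025-03-11": 160}
lift = direct_traffic_lift(sessions, campaign_days={"2025-03-10", "2025-03-11"})
print(round(lift, 2))
# → 1.48 : direct traffic ran ~48% above baseline during the campaign
```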
The budget conversation your attribution model is already having without you
FY27 investment decisions are being made right now, in rooms where the primary evidence is last-click attribution data. Some stations are cutting digital budgets without realizing that the model they're relying on structurally undercounts what the spend is producing. Others are concentrating everything in drive windows and measuring the result against a baseline that was built, invisibly, by the always-on presence they ran in the months before.
The model isn't lying to you maliciously. It's telling you part of the story and letting you assume it's the whole thing. Our client understood exactly what their model could and couldn't see — and said so clearly. The question is whether that clarity is informing how your station values and funds digital fundraising going into next year.
Consider what this campaign actually documented: a donor who clicked a 13-cent ad on the way to fulfilling a five-figure pledge. The model initially gave digital full credit, so our client correctly overrode it. But in removing that credit entirely, the model offered no way to account for whatever role digital played in that donor's relationship with the station over time — a year of video ads, a retargeting impression during the last drive, a search result that reinforced a decision already forming. Single-touch attribution can't hold that complexity. It assigns all the credit or none of it, and either answer misrepresents how donor relationships actually develop.
The stations that get this right won't necessarily have the most sophisticated attribution models. They'll just have the clearest understanding of what their model is missing — and fund the work accordingly.