The 2025 AI Equity Project survey found that 70% of nonprofit professionals are concerned about data privacy when using AI. That number is striking on its own, but it is more striking next to its companion figure: 82% of those same professionals report using AI tools informally and ad hoc at work. The concern is real, and convenience is overriding it.
This post is about the gap between the two. Specifically: what your team is actually putting into AI tools when nobody is governing the use, what the realistic risk of that is, and three immediate fixes that don't require a security overhaul.
The four hidden costs
The line item your finance director sees is the ChatGPT subscription. That is the smallest cost. The actual costs are operational, reputational, and legal — and they are larger by an order of magnitude.
1. Donor PII in consumer AI tools
The most common pattern, and the most dangerous: a development associate pastes a list of major donors with giving history into ChatGPT to "help me write personalized appeals for these people."
If that ChatGPT account is the consumer free or Plus tier, the conversation can be used to train future models: OpenAI's data-use policy allows consumer-tier content to be used for model improvement unless the user opts out, and by default they haven't. The donor's name, their giving amount, the program they support, and any notes pasted alongside have all left your organization's perimeter and become candidate training input for a model that will be deployed to millions of users.
The probability of any individual donor's data resurfacing in someone else's chat output is low. The probability of a privacy compliance audit asking whether donor PII has been shared with unauthorized third parties is rapidly approaching 100%, and the answer is yes.
2. Board materials shared with vendors who train on inputs
Board memos, financial pre-reads, executive session summaries. The pattern: a chief of staff uses AI to "make this two-page summary into talking points." The full document leaves the building.
This is worse than the donor PII case in two ways. Board materials are often more sensitive than individual donor records. And they are usually about decisions — strategic shifts, leadership changes, financial issues — that the organization specifically hasn't disclosed publicly yet. Confidential pre-decisional information leaving the perimeter is a governance issue that lands directly on the executive director's desk when it's discovered.
3. Confidential financial data
Quarterly financials, restricted-fund balances, audit responses, IRS correspondence. The same pattern: a finance staffer asks AI to summarize, format, or analyze. The data goes out.
Most nonprofits have specific obligations around financial data. Major-gift donors with multi-year pledges sometimes have contractual confidentiality clauses about how their gift is reported. Government grants frequently include data-handling requirements. None of those requirements are met when the data is sitting in a consumer AI tool's training pipeline.
4. Inconsistent quality from inconsistent prompts
This one isn't legal — it's operational. Without governance, every staffer develops their own prompts, their own workflows, and their own sense of when AI output is "good enough." One person produces excellent first drafts; another produces mediocre ones that need extensive rewriting. The same AI tool generates wildly different value depending on who uses it.
The cost shows up as inconsistent external communications, inconsistent program documentation, and inconsistent internal artifacts that other people then have to clean up. It is the largest hidden cost in dollar terms because it recurs: every week the team spends fighting bad output is a week of staff time that was budgeted for something else.
Real-world incidents
Specific data-exposure incidents at nonprofits are usually under-reported because there's no public registry the way there is for healthcare or financial services. The ones that have been reported share a common pattern: a single staffer uses a consumer AI tool with sensitive data, the data ends up somewhere it shouldn't, and the discovery happens months later when the organization can't reconstruct what was shared.
The most public 2025 case involved a mid-sized health nonprofit whose grants team had been pasting donor stewardship memos into ChatGPT for nine months before a privacy audit caught it. The remediation cost — legal review, donor disclosures, policy buildout, audit response — ran into six figures. The subscription cost they were trying to save by using consumer-tier ChatGPT was about $20 a month per seat.
That is the actual ratio of hidden cost to visible cost. It is not subtle.
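To make the ratio concrete, here is a back-of-envelope sketch in Python. Only the "six figures" remediation and the $20 per seat per month come from the reported case; the $150,000 figure and the 10-seat team are assumptions for illustration.

```python
# Back-of-envelope: hidden cost vs. visible cost.
# Assumed: $150,000 remediation (the case was reported only as "six
# figures") and a 10-seat team on consumer-tier ChatGPT Plus.
remediation_cost = 150_000   # legal review, disclosures, policy buildout, audit response
seats = 10                   # assumed team size
plus_per_seat_month = 20     # consumer Plus price from the reported case

annual_subscription = seats * plus_per_seat_month * 12  # visible line item: $2,400/year
ratio = remediation_cost / annual_subscription

print(f"Visible cost: ${annual_subscription:,}/year")
print(f"Hidden cost:  ${remediation_cost:,} from one incident")
print(f"Ratio:        {ratio:.0f}x")                    # ~62x on these assumptions
```

Halve the remediation estimate or triple the seat count and the conclusion doesn't move: the visible line item is noise next to one incident.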
Three immediate fixes
You don't need a transformation initiative. You need three changes, this quarter.
1. Switch to enterprise tier on at least one tool
ChatGPT Team, ChatGPT Enterprise, Claude's Team plan, and the AI embedded in HubSpot or Salesforce all share one critical property: by default, they do not train on your inputs. That single switch eliminates the largest category of risk. The cost difference is real (Team plans run roughly $25–30 per seat per month versus $20 for Plus) but is dwarfed by what you save in compliance exposure.
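The same arithmetic works for the budget conversation about upgrading. A minimal sketch, assuming a 10-seat team and the midpoint of the Team-plan range above; both numbers are illustrative:

```python
# Annual cost of moving every seat from consumer Plus to a Team plan.
seats = 10              # assumed team size
plus_monthly = 20       # consumer Plus, per seat
team_monthly = 27.50    # assumed midpoint of the $25-30 range

upgrade_delta = seats * (team_monthly - plus_monthly) * 12
print(f"Upgrade cost: ${upgrade_delta:,.0f}/year")  # $900/year on these assumptions
```

Nine hundred dollars a year against six-figure remediation exposure is the whole case, and it is the number worth writing on the whiteboard.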
Pick one tool. Make it the only approved one. Move everyone to it.
2. Document one AI policy
Use the framework from our governance post. Six sections, two pages, board-approved within a quarter. The policy is not the destination; the act of writing it is the audit. You will discover, in the process, what your team is actually doing with AI today, and most of the discovery will involve quietly closing several tabs.
3. Name an AI lead
One person is responsible for AI governance. Their job is not to be the technical expert. Their job is to keep the policy current, run quarterly check-ins with team leads, and be the single point of contact when a funder, a regulator, or a donor asks "how does your organization use AI?" If no one owns that, no one is going to do it, and the policy becomes a Google Doc that ages out within twelve months.
The closing math
The cost of doing nothing in 2026 is much higher than the cost of basic governance. A consumer-tier ChatGPT subscription with no policy is cheap on the line item and expensive everywhere else. An enterprise-tier subscription with a written policy and a named owner is roughly the same price, an order of magnitude safer, and answers the funder due-diligence questionnaires that are about to start arriving.
The shift from "we use AI" to "we govern AI" is a quarter of work. It is the most overdue line item on most nonprofit operations roadmaps right now.
Source: AI Equity Project survey, 2025.