The number that should bother every nonprofit board in 2026: 47% of nonprofits have no formal AI governance policy. That figure is from The 2026 Nonprofit AI Adoption Report by Virtuous and Fundraising.AI, published in February. It means nearly half the sector is using AI tools at scale with no written rules about what data goes in, what comes out, who is accountable, and what to disclose.
Through 2024 and most of 2025, that was tolerable. AI use was experimental, the volumes were small, and the regulatory environment hadn't caught up. None of that is true now.
Why this matters in 2026
Three things have changed in the past nine months.
Funders are asking. The Gates Foundation, the MacArthur Foundation, and a growing number of community foundations now include AI policy questions in grant applications. "Does your organization have a documented policy governing the use of AI tools with donor and beneficiary data?" is a yes/no question. There is no good way to answer it without a policy.
Donor data risk is real. Most consumer AI tools have data-use policies that, by default, allow the vendor to train on user inputs. Pasting a board memo, a major donor pipeline, or a beneficiary case file into ChatGPT's free tier sends that data to OpenAI for training unless someone has explicitly turned that setting off. Most staff don't know that.
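Paid business and API tiers generally exclude customer inputs from training by default (OpenAI states this for its API; verify the current terms for any vendor you use). For teams that script any of their AI work, here is a minimal sketch of routing a task through an API tier instead of the consumer web UI, assuming the `openai` Python client and an organization-managed key; the model name and function are illustrative:

```python
# Minimal sketch: send a summarization task through an API tier
# rather than a staffer's personal consumer login. Assumes the
# `openai` Python client with OPENAI_API_KEY set in the environment.
# Per OpenAI's stated policy, API inputs are not used for model
# training by default; confirm against the current data-use terms.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_internal_memo(memo_text: str) -> str:
    """Summarize an internal memo via the org's API tier."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Summarize this internal memo in five bullet points."},
            {"role": "user", "content": memo_text},
        ],
    )
    return response.choices[0].message.content
```

The code is incidental; the point is that the tier your organization pays for, not the staffer's personal login, is what the data-use contract attaches to.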
Staff turnover is normal. When the staffer who was the AI champion leaves, what happens to the prompts, the workflows, and the institutional knowledge? Without a policy, all of it walks out the door with them, and the next hire starts over. A policy is a continuity document as much as a compliance one.
The six sections every nonprofit AI policy needs
Having helped a dozen nonprofits draft theirs, we keep landing on the same structure: two pages, board-approvable, staff-readable. It looks like this.
1. Acceptable use
A list of approved tools and the kinds of work each is approved for. Not "ChatGPT" but "ChatGPT Team for drafting external communications, summarizing meeting notes, and generating first-draft grant narratives." Specificity matters because it forces a decision about which tier of which tool you are paying for, and the tier, not the tool, is what determines whether your data leaves your perimeter.
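Most teams keep this register as a spreadsheet, which is fine. If your ops lead prefers to keep it alongside other scripts, a minimal sketch of the same idea in Python, with every tool name, tier, and task purely illustrative:

```python
# Minimal sketch of an approved-tools register. All tool names,
# tiers, and approved tasks below are illustrative examples,
# not recommendations.
APPROVED_TOOLS = {
    "ChatGPT Team": {
        "tier": "Team (workspace inputs excluded from training)",
        "approved_for": [
            "drafting external communications",
            "summarizing meeting notes",
            "first-draft grant narratives",
        ],
    },
    "Otter.ai Business": {
        "tier": "Business",
        "approved_for": ["internal meeting transcription"],
    },
}

def is_approved(tool: str, task: str) -> bool:
    """Check whether a tool/task pairing appears in the register."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and task in entry["approved_for"]
```

Whatever the format, the register only works if the tool-and-tier pairing is explicit enough that a staffer can check their own use against it.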
2. Prohibited use
The shorter list, but the more important one. Common entries:
- Donor PII, including names paired with giving history
- Beneficiary case files, intake notes, or medical/legal information
- Confidential financial materials before board approval
- HR records, performance reviews, or personnel actions
- Anything covered by attorney-client privilege
Prohibited use is enforced by training, not by tooling. People will paste things into ChatGPT until they know not to.
3. Disclosure
When is AI use disclosed to external audiences? Most policies we draft land on:
- Always disclosed: AI-written content published under a person's name.
- Sometimes disclosed: AI-assisted analysis in fundraising appeals or reports.
- Never required: AI-assisted productivity work (note summaries, calendar scheduling, internal docs).
The principle: if the audience would feel misled to learn AI was involved, disclose. If they wouldn't, don't bury them in disclosures.
4. Quality control
Every workflow with AI in it needs a human checkpoint. The policy should specify what kind of review is required for what kind of output. A grant narrative gets line-edited by a development director before it leaves the building. An internal Slack summary doesn't. The policy doesn't have to enumerate every workflow — it has to say "every external-facing artifact gets human review before publication" and let the team operationalize that.
5. Data privacy and storage
Two questions: where does data go when it enters an AI tool, and where does it live after the workflow completes? The first is answered by the tool's enterprise tier (or the lack of one). The second is answered by your existing document retention policy. AI doesn't usually need a separate retention rule — it needs a paragraph saying "AI-generated artifacts are subject to the same retention rules as their human-generated equivalents."
6. Training and accountability
One named person owns AI governance. Their job is not to be the AI expert. Their job is to keep the policy current, run quarterly check-ins with team leads, and be the person funders' due-diligence questionnaires get routed to. In a small org, this is usually the operations director. In a larger one, it can be the CFO or a dedicated chief of staff.
What this isn't
This is not a substitute for a security review, a privacy attorney, or a board resolution. It is the operational document that lives underneath those things. If your organization handles regulated data — HIPAA, FERPA, GDPR-scoped EU donors — you need a privacy attorney to review the final document before it goes to the board.
But you can write the first draft this week. We have a free template; if you want a copy for your team, book a call and we'll send it over.
Source: The 2026 Nonprofit AI Adoption Report, Virtuous and Fundraising.AI, February 2026.