Operations

From Ad-Hoc to Operational: Moving Your Team Beyond ChatGPT-One-Off-Use

82% of nonprofit professionals use AI informally. Here's what shifting to operational use looks like.


The 2025 AI Equity Project survey reported that 82% of nonprofit professionals use AI tools informally or on an ad-hoc basis. Twelve months later, that number has barely moved. Most of the sector knows AI exists, has used it, and could not tell you what their organization's AI strategy is — because there isn't one.

This post is about the practical mechanics of moving from ad-hoc use to operational use, without writing a 50-page strategy document or buying a platform you don't need.

Three modes of AI use

Use the same vocabulary every time you talk about this internally. It saves a lot of confusion.

Ad-hoc: A staffer opens ChatGPT, types a question, copies the answer somewhere. No prompt is reused. No output is reviewed. No one else on the team can replicate what just happened.

Operational: A documented sequence of steps that produces a consistent output. Someone other than the original creator can run it. There is a checkpoint for review. The output goes into a system or process the team already uses.

Embedded: AI is no longer a tool the team consciously reaches for; it is built into how the team's goals get met. Donor segmentation runs against the CRM nightly. Grant deadlines auto-summarize when they enter the pipeline. The team doesn't think about it as "using AI" anymore — it's just how the work happens.

Most nonprofits live entirely in ad-hoc. The 7% the Virtuous report calls "high impact" mostly live in operational with a handful of embedded workflows. Embedded is the long-term destination, but operational is the realistic next step for a 2026 fiscal year.

The progression

Operational doesn't appear by accident. The path looks like this:

  1. Individual experimentation. One staffer, often unprompted, starts using ChatGPT for a specific task — usually grant writing, content drafting, or meeting summaries. They get good at it.
  2. Team patterns. Two or three other staff notice and start asking that person for prompts. The first wave of "share what works" happens in Slack threads or hallway conversations.
  3. Operational workflows. Someone — usually an operations lead — gets tired of the duplication and writes the sequence down. Now there is a shared template, a shared review process, and a shared place the output goes.
  4. Embedded into goals. The workflow gets tied to a metric the team already tracks. AI becomes a contributing factor to a number on someone's quarterly review.

Every organization that reaches operational use moves through these stages in order. The 7% move through them in 18 months. The other 82% are still in stage 1 four years later because nobody made the move from "this works for me" to "this works for us."

Five questions before scaling any AI workflow

Operational use is hard not because AI is hard, but because turning one person's working habits into a template others can follow is hard. Before you take a workflow that one person has nailed and ask the team to use it, walk through these five questions.

1. Can someone other than the original creator run it?

If the workflow exists entirely in one person's head — their tone, their judgment, their feel for when the output is "good enough" — it is not yet operational. The test: have a different team member run the workflow with the same inputs. If the output is materially different, write down what's missing.

2. Is the output quality consistent or wildly variable?

AI output is probabilistic. The same prompt can produce a great result on Tuesday and a mediocre one on Thursday. Operational workflows account for this with explicit quality criteria — a checklist, a review rubric, a "good enough" definition — not with hope.
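For teams comfortable with a small script, those explicit quality criteria can be encoded directly as pass/fail checks. A minimal sketch in Python — every criterion here (the required sections, the length band, the placeholder markers) is a hypothetical stand-in for your own rubric, not a recommendation:

```python
# Hypothetical quality checklist for an AI-drafted grant narrative.
# Every criterion below (required sections, length band, placeholder
# markers) is a stand-in -- replace it with your team's own rubric.

REQUIRED_SECTIONS = ["need statement", "program description", "budget narrative"]

def passes_checklist(draft: str) -> tuple[bool, list[str]]:
    """Check a draft against explicit criteria; return (ok, failures)."""
    failures = []
    text = draft.lower()
    for section in REQUIRED_SECTIONS:
        if section not in text:
            failures.append(f"missing section: {section}")
    words = len(draft.split())
    if not 500 <= words <= 2000:  # hypothetical length band
        failures.append(f"word count {words} outside 500-2000")
    if "[insert" in text or "todo" in text:  # unfilled AI placeholders
        failures.append("draft contains unfilled placeholders")
    return (not failures, failures)
```

The point is not automation for its own sake. Writing the criteria as explicit checks forces the "good enough" definition out of one person's head and into something the whole team can apply on a Tuesday or a Thursday.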

3. Is there a checkpoint for human review?

Every external-facing AI artifact has a human reviewer. No exceptions. Not because the AI will hallucinate (though it might), but because the brand voice and the relationship with the audience are still human responsibilities. The review can be five minutes; it cannot be zero.

4. Are the time savings actually being captured?

This is the question almost no one asks. A workflow that saves four hours per week only matters if those four hours go into something. If they get absorbed into "more meetings" or "answering more email," you have spent a quarter implementing AI and gotten nothing out of it. Define where the saved time goes before you scale the workflow.

5. Does it integrate with the systems people already use?

A grant-writing workflow that lives in a separate ChatGPT window, requires copying outputs into Google Docs, then again into a CRM, then again into a project tracker, will not survive contact with a busy week. Operational workflows have to live where the work already lives.

A composite example

Riverbend Family Services (a sample nonprofit) ran their first AI experiment in early 2025. Their development director used ChatGPT to draft grant narratives. By summer, three other team members were using it the same way, but each with a different tone, a different structure, and a different sense of when to trust the output.

In Q4 2025, their operations lead spent two days documenting what the development director was actually doing: which prompts she used, which sections she always rewrote, which research sources she pulled into the prompt before generating, and which review steps she ran before sending. That document became a Claude project the full grants team now uses, with a templated input form and a required review by the development director before any narrative leaves.

The team's grant submission rate went from 14 per quarter to 22, with no added headcount. The development director got eight hours per week of her time back, which she has redirected into major-donor cultivation. None of this required a new platform. It required someone willing to write down what was already working.

That is the move from ad-hoc to operational. It is unglamorous. It is mostly clerical. And it is almost entirely what separates the organizations getting impact from the ones still talking about pilots.

Ready to close the 7% gap?

Book a 30-minute AI assessment. No commitment, no charge.