Our guest post, written by Robin Warren, Founder of Hinchilla, explores how unauthorised AI use has spread across the economy and the charity sector.
More than seven in ten UK employees (71%) have used unapproved artificial intelligence (AI) tools at work. That finding, from Microsoft and Censuswide research in October 2025, should give every charity CEO pause. Not because AI is inherently dangerous, but because what we don’t know about can hurt us.
The charity sector has embraced AI with remarkable speed. According to the Charity Digital Skills Report 2025, 76% of UK charities are now using AI in some form. Yet, as of mid-2024, only 6% had developed an AI policy to govern that use, though encouragingly, by 2025 around 50% of charities were developing policies.
Still, the statistics demonstrate an uncomfortable reality: much of AI adoption in charities has happened without oversight, without guidelines, and often without anyone in leadership knowing it’s happening at all.
Staff across the sector are quietly integrating AI into their daily work. They’re using it to draft emails, summarise reports, analyse data, and yes, write funding applications. Most of them aren’t telling anyone.
This silence isn’t surprising when you look at the numbers. Research from Microsoft and LinkedIn’s Work Trend Index found that 52% of people who use AI at work are reluctant to admit it for important tasks, fearing it makes them look replaceable or incompetent.
More striking still, research from The Access Group found that 35% admit using AI covertly for tasks they were “supposed to do themselves”. The Joseph Rowntree Foundation found that 73% of non-profits lack any AI guidelines whatsoever.
In this vacuum, staff make their own rules. Some assume AI is fine because no one has said otherwise. Others suspect it might be frowned upon and keep quiet. A few actively hide their use, worried about the consequences of disclosure.
This isn’t a story about rogue employees or ethical failures. It’s a story about organisational design. When charities don’t provide clarity, staff fill the gap with silence.
Every organisation faces AI governance challenges, but charities operate in uniquely sensitive territory. The data charities handle often falls into GDPR’s "special categories": information about health conditions, ethnic origin, and religious beliefs. Safeguarding records frequently contain such sensitive data. When a caseworker pastes client notes into an AI tool to help draft a report, they may be sharing protected data with a third-party processor without realising the implications.
The regulatory environment is tightening. In December 2025, the UK Fundraising Regulator updated its guidance on AI use in fundraising to make clear that trustees bear accountability for how their organisations use AI in this area. This is no longer a theoretical concern; it’s a governance obligation that sits with your board.
But in practice, AI use is running ahead of governance. One grant writing consultant we spoke to described receiving a client’s application and immediately recognising what had happened: "It just reeks of ChatGPT, it’s just so obvious you just literally told it to write it and sent it to me...you’re going to send this through now to a panel of people and if I can see it, there’s a good chance they can see it as well." When staff are using AI without disclosure, and external reviewers can spot it immediately, there’s an accountability gap that trustees are now explicitly responsible for closing.
For charities, reputation is operational infrastructure. The sector runs on trust: trust from donors, beneficiaries, funders, and the public. A data breach or misuse incident involving AI could do lasting damage.
Major funders are beginning to form positions on AI use in applications and programme delivery. Their approaches vary, and charities need to pay attention.
BBC Children in Need has indicated it may ask about AI use during the assessment process for grants. This isn’t necessarily negative; the funder wants to understand how organisations are working, but it means charities need to be ready to answer honestly.
UK Research and Innovation expects transparency about AI use in funding applications. For charities involved in research partnerships, this sets a clear standard: disclose your methodology, including your use of AI tools.
Esmée Fairbairn Foundation has taken a different approach, stating no preference about AI use but emphasising the importance of honesty about how applications were developed.
The key message is that funder positions are diverging. Some are curious, some are cautious, and all want candour. Charities that have been using AI without tracking or disclosing it may find themselves in awkward conversations. Better to get ahead of this now.
The good news is that addressing hidden AI use doesn’t require wholesale organisational change. It requires honest conversation and practical action.
Start by auditing your current AI use. You can’t govern what you don’t understand. Run an anonymous staff survey to find out what’s actually happening across your organisation. Ask not just whether people are using AI, but which tools, for which tasks, and what concerns they have.
For smaller charities, this might simply be a 15-minute team conversation rather than a formal survey; the point is to surface what’s happening, not to create bureaucracy.
Get AI on the trustee agenda. This is a governance issue, not just an operational one. Boards need to understand the AI landscape in their organisation and take ownership of the associated risks and opportunities. A single agenda item to open the conversation is a reasonable starting point.
Develop a proportionate AI policy. The goal isn’t to ban AI entirely or to create bureaucratic hurdles. It’s to provide staff with clarity about what’s acceptable, what requires approval, and what’s off-limits.
A good policy removes the guesswork that drives secrecy. For an overview of how different AI platforms handle sensitive data, and how to secure each one, see Hinchilla’s article on the subject.
Create safe channels for staff to ask questions. If people fear judgement or consequences, they’ll stay quiet. Make it easy and safe to seek guidance. Consider designating an AI lead or establishing a simple process for staff to check before using new tools.
None of this is about catching people out or rolling back the clock on technology adoption. AI is already part of how charities work; the question is whether that work happens transparently, safely, and in line with your values.
Staff who’ve been using AI quietly aren’t villains. They’re often high performers finding ways to work more effectively in under-resourced organisations. The problem isn’t their behaviour; it’s the silence around it.
Charities have an opportunity to lead here. To show that responsible AI use and effective AI use can be the same thing. To create cultures where innovation happens in the open, where risks are managed rather than ignored, and where staff feel confident asking for guidance.
The question isn’t whether your staff are using AI. It’s whether you know how.