The ubiquity of artificial intelligence (AI) is secured by inflated marketing hype and very real, very impressive capabilities. AI is the tech everyone talks about, apparently all the time, and the charity sector has been talking, too. Our uptake of AI appears significant, particularly in the realms of fundraising and marketing. But it still lags behind in one crucial area: service delivery.
In this long article, we look in depth at how the charity sector has employed AI, how to cut through AI hype, how to mitigate AI risk, and how to effectively incorporate AI to enhance your services.
Definitions are always helpful, especially around AI, because people tend to use the term as a catch-all, covering everything and anything. So here are the main types of AI you should know about – the types that charities might use for service delivery.
Agentic AI refers to AI systems that act autonomously to achieve specific goals. These systems can make decisions for us, plan actions, and adapt to their environment without the need for constant human input.
Predictive AI refers to systems built on large data models and huge training sets, which use statistical analysis to identify patterns, anticipate behaviours, and forecast future events.
Generative AI refers to models that can generate new content. These models learn to capture the probability distribution of their input data, so they can produce data similar to their training data.
Generative AI is the latest phenomenon, the one most people talk about most of the time. Big-name platforms – ChatGPT, Gemini, Copilot – are all examples of generative AI built on large language models (LLMs).
The Charity Digital Skills Report 2025 paints a fascinating picture of the sector’s use of AI. It shows that more than three quarters (76%) of charities use AI in their work, a massive increase from 61% in 2024. The growth likely stems from increased awareness, especially around simple uses of AI, which broadly mimics what we’ve seen in other sectors.
But the uptake feels a little uneven. The report found that slightly more than a fifth (23%) of charities are actively using AI, up from 11% in 2024. So, while uptake is huge and massively improved, the majority of charities are still not actively using the tech.
The way charities use AI remains consistent, year-on-year, which points towards an inability to apply it in unique ways. That, too, is pretty consistent with other sectors. For all the talk of AI capabilities, the tech is used in 2025 in much the same way it was used in 2022, albeit by far more people. Charities, according to the report, use AI for admin and project management (48%), grant funding (36%), and comms (34%). That mirrors how the tech has been used in the past.
A Charity Job report echoes much of the above, but shows less uptake. The report highlights that, across the economy, two-fifths (40%) of workers directly use AI in some form, a number that rises to 43% in the charity sector. Interestingly, the Charity Job report shows a general appetite for learning more, with the vast majority of people interested in improving their AI skills.
That brings us to our recent Reimagining Services Report, which showed that charities rate their service delivery at more than 7 out of 10 but rate their digital service delivery as just 5.8. That’s part of an ongoing trend we’ve noticed over many years. In terms of AI, only 12% of charities said they were using AI in service delivery. A massive 58% said they were not using it for service delivery and had no intention to use AI in the future. That shows a huge divide.
Charities are using AI, but passively and in the same ways they were using AI in 2022. They are failing to utilise its capabilities in novel ways, particularly in terms of service delivery. Why is that?
First, it’s important for us to understand the capabilities and risks of AI. And that means coming back down to earth, trying to sift through the exhausting hype that surrounds the tech.
“Artificial intelligence” is a marketing term. Consider the meaning of AI, as provided by ChatGPT: “AI is the development of computer systems that perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, and understanding language.”
The concept of intelligence raises myriad questions – and surely the humble calculator fits that definition, as does the spellchecker or predictive text on a Nokia 3310.
“Artificial intelligence” allows for broad categorisation and companies abuse that broadness, claiming tools are driven by AI in some form, some way, even if the link proves tenuous. This is called AI washing: making inflated claims about how your product or service uses AI.
AI has become a symbol of status. It’s a signifier, conjuring up images of automated futures, hoverboards, cities in the sky. But the LLMs that drive many AI systems simply predict the most likely token in a sequence, based on a massive amount of training data. LLMs are “stochastic parrots”, according to a now-famous paper on the subject, simply regurgitating the most predictable data based on a set of rules.
They’re still impressive. Very impressive. But LLMs are not going to take over the world, at least not in their current form. They do not think for themselves. They are trained to be predictable and impressive – and they seem to achieve precisely that.
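To make the “predicting the most likely token” idea concrete, here is a toy sketch: a tiny bigram model that counts which word follows which in its training text, then always emits the most frequent continuation. The training sentence and function names are invented for illustration – real LLMs use vast neural networks, but the underlying principle of pattern-based prediction is the same.

```python
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration
training_text = (
    "charities use ai to improve services and "
    "charities use data to improve outcomes"
)

# For each word, count how often each following word appears after it
follows = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("charities"))  # -> use
print(most_likely_next("to"))         # -> improve
```

The model never “understands” the words: it only regurgitates the statistically most likely continuation it has seen, which is the stochastic parrot critique in miniature.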
The power of AI is often overstated, while the risks are understated. The risks are far more grounded in reality: they are with us now, inflicting real-world impact at this moment.
Charities are broadly aware of the risks. According to the Charity Digital Skills Report 2025, 36% of charities are actively avoiding AI in potentially harmful areas. That speaks to the lower uptake of AI in service delivery, an area of charity work that often serves vulnerable communities.
Below we explore the main risks of AI. You should always consider the risks when using AI, but you should practise particular caution when using AI for service delivery.
AI is capable of processing huge amounts of information. But that information is neither neutral nor objective. AI is created by humans, who decide on input, and humans, as we know, are riddled with conscious and unconscious bias. AI systems learn and even amplify that bias through internal machine learning processes.
There are several potential copyright and data issues. The first, and perhaps most pressing, revolves around copyright infringement. It is an ongoing debate – not likely to be solved any time soon – whether the content used to inform AI systems violates existing copyright legislation.
But you should always aim to review any AI-generated content and give credit where possible to original authors. One way to achieve that is simply by prompting AI platforms to provide sources.
The journal Joule said that, if every search on Google used ChatGPT, it would burn through roughly as much electricity annually as the country of Ireland. That is every lightbulb in Ireland, every device, every television, everything. It’s getting better – but not quickly enough. And the suggestion that it’s getting better for the environment usually comes from the AI companies themselves, often without sufficient evidence.
Many AI companies do not reveal the environmental costs of training and using their models – GPT-5 being an obvious example. There is not enough discussion on the matter at present: it’s an afterthought, at best, and often not even that.
Generative AI models often present inaccurate information as though it were correct, often caused by limited information in the system, biases in training data, and issues with the algorithm. These are commonly called ‘hallucinations’ and they present a huge problem.
The spread of misinformation is bad in the first instance. But a future of widespread AI use could create large systems of misinformation, with new AI models built on existing misinformation.
The risks are clear, and charities have to pay attention to them. But, if used correctly, AI can provide serious benefits in terms of the efficiency and effectiveness of service delivery.
Charities need to take a realistic approach, finding the best way to make AI work for their service users. That means picking the right type of AI – generative, agentic, predictive, and so on – and marrying it with the right service. It also means understanding and mitigating the risks.
Using AI without understanding its impact not only presents risks, but creates additional work. A study found that 77% of employees experienced increased workloads because of AI, largely through having to check AI outputs and through increased expectations. And AI is not broadly applicable to every sector, every industry, every cause. Reports show a disparity between different sectors, in terms of automation potential and exposure to AI.
In charities, research by the Joseph Rowntree Foundation and We and AI found that success with AI is less likely among those who are smaller in size, possess a lower level of digital maturity, and identify that AI would not be in line with their communities’ values.
AI is not a level playing field, nor a fix-all solution. Indeed, for some, AI may have very little value, and investing in it may actually create unnecessary additional costs in time, money, reputation, and ultimately impact.
These are the caveats. We don’t want to push people to use AI for AI’s sake. But, if you think you can gain something from AI, here are some practical steps to find out whether it is right for your services and how to implement it in the best way.
To use AI in service delivery requires several steps. First, you need to identify areas of improvement and ask service users how to improve such areas. Then you need to marry those findings with the best AI options available. Let’s start by looking at auditing existing services.
Audits help you to find strengths and weaknesses in your services. But you cannot audit without a reckoning with your goals and objectives, without a concrete understanding of what your services aim to achieve. So first you need to establish goals that will serve as a benchmark against which you can judge the success of your services.
You will want to gain a quantitative understanding of your services. That means collecting internal data – membership data, financial records, service user information, customer relationship management (CRM) systems, any previous interactions with service users, and so on – and combining it with external data – such as quantitative feedback from service users, social media analytics, information from relevant market trends, and so on.
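As a minimal sketch of what combining internal and external data can look like in practice, the snippet below joins hypothetical service records with hypothetical survey feedback by a shared user ID. Every field name and threshold here is invented for illustration – adapt them to whatever your CRM and survey tools actually export.

```python
# Invented internal records (e.g. from a CRM export)
internal_records = [
    {"user_id": 1, "sessions_attended": 12, "last_contact": "2025-03-01"},
    {"user_id": 2, "sessions_attended": 3,  "last_contact": "2024-11-20"},
]

# Invented external data (e.g. from a satisfaction survey)
survey_feedback = [
    {"user_id": 1, "satisfaction": 9},
    {"user_id": 2, "satisfaction": 4},
]

# Index feedback by user_id, then attach it to each internal record
feedback_by_id = {row["user_id"]: row for row in survey_feedback}
combined = [
    {**record,
     "satisfaction": feedback_by_id.get(record["user_id"], {}).get("satisfaction")}
    for record in internal_records
]

# Flag possible service gaps: low engagement paired with low satisfaction
at_risk = [
    r["user_id"] for r in combined
    if r["sessions_attended"] < 5
    and r["satisfaction"] is not None and r["satisfaction"] < 5
]
print(at_risk)  # -> [2]
```

Even a simple join like this starts to surface the story in the data: which users are disengaging, and whether their feedback points to the reason.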
Data always tells a story. Figure out the story it tells. You should be able to identify key gaps, inefficiencies, and areas of strength in your services. You may wish to compare outcomes to pre-existing or new targets and objectives, assessing whether resources have been used as once intended.
Examine any clear problems and figure out the source of the issue. Charity professionals are often too close to the data. Too often, we interpret the story that we’d like to hear rather than the true story the data is trying to tell us. That’s why it’s always helpful to rely on an objective perspective. If you have someone from outside your charity – whether a paid evaluator or a trustee – then ask them to read the data, ask them what story the data tells them.
Charities should run regular audits. But it’s vital that quantitative data is married with the qualitative – and that means going right to the source and speaking with your service users.
Research estimates that fewer than 5% of nonprofits have a feedback system that incorporates people’s views into decisions. Likewise, the Charity Digital Skills report shows that fewer than two in five charities (37%) are co-designing their services with users.
But if charities do not seek the views of beneficiaries, they risk falling into familiar paternalistic patterns, offering solutions to service users that are unwelcome or not fit-for-purpose.
That’s why beneficiary feedback mechanisms (BFMs) are invaluable. BFMs help charities to collect, manage, and respond to feedback on their services from their service users.
BFMs don’t have to be all-singing and all-dancing. They could take the form of a suggestion box, a hotline, a focus group, or a survey. Perhaps the easiest route is simply through Microsoft or Google Forms, sending a short and succinct survey to your users to find out the main issues.
Digital technology can improve the reach of a BFM, but charities also need to consider the needs of service users who are not online. Indeed, the only essential requirement of a BFM is that all service users are able to give feedback equally – whether via SMS, email, or in person – respecting their varying needs.
A combination of all options is recommended to ensure charities have the full picture of their service delivery. Feedback from a diverse range of sources makes it more likely that any issues or barriers will be picked up and later addressed.
On top of BFMs, charities should explore more indirect ways of getting feedback.
Parkinson’s UK used AI to better understand user needs. Reacting to the immediate challenges brought about by the pandemic, the charity used AI to track the topics and themes that communities living with Parkinson’s were discussing online, using comparative linguistics. It revealed key concerns among people with Parkinson’s, one of which was keeping fit.
The charity started to produce fitness sessions via its YouTube and other digital channels, led by physiotherapists. They went out to get feedback, indirectly, and tailored a solution in response.
At this point, you know the strengths and weaknesses of your service delivery, and you’ve heard from service users about how they would like services to improve. Now you need to match that information with the best possible AI solution.
That means returning to the definitions above, considering which might best suit your new or improved services. If, for example, you want to create a chatbot that signposts resources to meet service user needs all day, every day, generative AI will be the best option.
But if you want to help people with disabilities to navigate the online world by anticipating and resolving accessibility issues, perhaps agentic AI might prove the more useful option.
You need to explore available AI tools and providers.
Remember to cut through the hype – do not believe everything that you are sold. The key here is to find AI systems that directly tally with your needs and values and then test, test, test.
And consider the risks of each platform: Is the provider transparent? How might you mitigate risks on the platform? How does the provider itself approach risk and ethics? The final question is particularly important, as any AI platform that dismisses risk should be avoided. AI, particularly generative AI, is a risky tool, and acknowledging risk is the first step to mitigation.
Finally, once you’ve performed all of the above, consider running a pilot on a small scale.
That will allow you to note any issues at the very beginning and provide visibility around potential risk. You can refine the tool, perhaps improving efficiency and user experience. You might even want to trial it with a few service users and invite feedback in real-time.
Once you and the service users are happy with the trial, once you’ve refined the tech, then you can launch the service, ideally with a roadmap for implementation.
Ensure continuous mechanisms to track performance, receive and implement feedback, and update the AI tool as appropriate.
Another great piece of advice: look at other charities to see how they’re using AI. Friends across the sector are the best way to learn how to use the tech, as we often face similar challenges under similar pressures. And, with that in mind, let’s look at some case studies from the sector.
Chatbots are one of the most popular use cases of AI in service delivery. And for good reason.
They provide service users with information twenty-four-seven, which proves particularly effective in low-risk scenarios, fulfilling a massive need for service users.
A chatbot is simply the simulation of human conversation. First invented in 1966, chatbots traditionally used rigid, decision-tree-style menu navigation. But today, AI chatbots use a variety of AI technologies: natural language processing to accurately interpret user questions, deep learning to become more accurate over time, and machine learning to optimise responses.
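To illustrate what that traditional decision-tree style looks like, here is a minimal sketch: the user navigates a fixed menu of numbered choices, with no language understanding involved. The menu text and the phone number are invented placeholders; modern AI chatbots replace this rigid structure with natural language processing.

```python
# A hand-built decision tree: each node has a prompt and optional numbered options
MENU_TREE = {
    "prompt": "What do you need help with? (1) Opening hours (2) Services",
    "options": {
        "1": {"prompt": "We are open Monday to Friday, 9am to 5pm."},
        "2": {
            "prompt": "Which service? (1) Advice line (2) Support groups",
            "options": {
                "1": {"prompt": "Call our advice line on 0800 000 0000."},
                "2": {"prompt": "Support groups meet every Tuesday."},
            },
        },
    },
}

def navigate(node, choices):
    """Follow a sequence of menu choices down the tree; return the final reply."""
    for choice in choices:
        node = node.get("options", {}).get(choice, node)
    return node["prompt"]

print(navigate(MENU_TREE, ["2", "1"]))  # -> Call our advice line on 0800 000 0000.
```

The limitation is obvious: the bot can only answer questions its designers anticipated, and users must phrase their needs as menu numbers rather than natural language.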
The West of England Centre for Inclusive Living (WECIL), based in Bristol, has been using chatbots effectively. An award-winning, user-led organisation, WECIL supports people with disabilities by challenging the barriers to independent living.
WECIL used AI to create a chatbot called “Cecil from WECIL” that functions like an Easy Read document. Easy Read documents make written information easier to understand, usually combining short, jargon-free sentences with clear images to explain the content.
When you open the chatbot, you’re presented with a set of options, including visual ones, to help you find the right information. This in turn makes WECIL’s website more navigable for people who are learning disabled or who are neurodivergent.
For some charities, developments in the predictive capabilities of AI have improved their ability to tackle the problems their service users face. Predictive AI can make life a lot easier.
Housing association charities can predict damp and mould risks by analysing repair data and other property information, enabling them to take preventative measures.
Charities have used AI to predict homelessness, enabling them to put preventative measures in place before an individual or family becomes unhoused.
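As a toy sketch of how predictive scoring works, the snippet below assigns each property a damp-and-mould risk score from a few features. The features, weights, and threshold are all invented for the example; a real predictive model would learn them from historical repair data rather than having them hand-coded.

```python
def mould_risk_score(prior_damp_repairs, property_age_years, has_ventilation_issue):
    """Hypothetical hand-weighted risk score; a real model learns these weights."""
    score = 0.0
    score += 0.3 * min(prior_damp_repairs, 5)   # repeated repairs raise risk
    score += 0.01 * property_age_years          # older stock tends to be riskier
    score += 0.5 if has_ventilation_issue else 0.0
    return score

# Invented example properties
properties = {
    "flat_a": mould_risk_score(3, 60, True),    # history of damp, poor ventilation
    "flat_b": mould_risk_score(0, 10, False),   # newer, no warning signs
}

# Properties over a chosen threshold get a preventative inspection first
needs_inspection = [name for name, score in properties.items() if score > 1.0]
print(needs_inspection)  # -> ['flat_a']
```

The value of the approach is in the ordering, not the exact numbers: it lets a charity direct scarce inspection and repair resources to the properties most likely to develop a problem, before residents are harmed.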
Another example is in medical treatment. Prostate Cancer UK and Movember partnered to tackle a gap in funding for prostate cancer medicine. Though one in eight men will get prostate cancer, the disease was lagging behind other cancers in terms of precision medicine, a situation that caused avoidable death and harm.
The two charities put £1.2 million towards groundbreaking research led by UCL, which found that AI could be used to precisely predict the best treatment combinations for specific patients.
The researchers used a new multimodal AI tool from ArteraAI. Multimodal AI is a type of machine learning that combines and analyses multiple different forms of data.
Using multiple forms of data helps achieve a more comprehensive understanding and make more robust predictions than working with just one form of data.
In 2025, Prostate Cancer UK is highlighting that being able to predict which type of treatment is best for each patient using AI could enable the NHS to move away from a “one size fits all” approach, improving access to the right treatment through greater precision.
Speed is a much-touted benefit of AI. Whether summarising dense regulatory guidance or gleaning insights from vast bodies of data, AI can give you swift answers using deep learning.
When speed is balanced with quality and ethics, AI can help charities respond quickly to the urgent crises of our time, like climate change and conservation.
WWF used AI when responding to bushfires in Australia. An estimated three billion animals were affected by the bushfires, and the charity wanted to know how.
AI helped with the use of camera traps. Camera traps are digital cameras that take a picture when they sense movement, providing data on species location, population sizes, and how species are interacting.
But in many projects, up to 90% of the images captured are “false triggers” – images with no animals in the frame. For one person to go through these, two seconds at a time, it would take around four years.
So the charity worked with other conservation organisations and Google to develop a better way to manage camera trap data.
Wildlife Insights was the result: an online platform that uses AI to filter out the blank images, find the ones that contain wildlife, identify the species in the photo, and share the photo with the conservation community. In addition, anyone who uses camera traps can use the platform.
This helped a team of ten scientists study more than 28 million acres, identifying more than 150 species among the seven million images collected. The findings demonstrated how drastically fires can shrink biodiversity.
Margaret Kinnaird, WWF Wildlife Practice Leader, says: “Big data and artificial intelligence have the power to transform the fate of many endangered species. Wildlife Insights promises to bring millions of unseen images to life and apply them to critical conservation decisions.”
But charities of all sizes, at all stages of digital maturity, can use AI on a smaller scale.
For some, AI automation streamlines the flow of data. For example, Great Ormond Street Hospital (GOSH) uses AI to make better use of routinely collected data, enabling new treatments to be implemented more quickly for patients.
The charity Full Fact uses AI software to challenge false claims, collecting data from online news sites, podcasts, and social media pages. The software uses a large language model to identify claims and see if they match with those that have been previously fact checked. Then Full Fact’s team of journalists take the claim offline to check it thoroughly.
The use of AI in areas like fundraising, finance, and governance indirectly helps charity services. Where it frees up time spent on manual and administrative processes, for example, charities can use that time to work more directly with service users.
The use cases are many. And often the best use cases are novel. Taking steps to understand AI and the way it works, and tallying that with the main challenges facing your charity, is a simple way to incorporate AI into service delivery.
And, if used even at the smallest scale, even if used for data entry or to summarise documents, AI is nothing if not a time-saving tool – and the time saved can be reinvested into service delivery.
To conclude, do not believe the hype: AI is only impressive if used the right way. It allows charities to make serious gains in terms of efficiency and effectiveness of service delivery. But successful implementation of AI means acting strategically, understanding and mitigating the risks, and making the most of AI for the charity’s specific needs, adapting over time.
AI for AI’s sake is a recipe for disaster, particularly in the realm of service delivery. It may not be right for everyone, and when it is right, it must still always be used in the right way.
It is a powerful tool that should be used to empower.
Our 2025 Reimagining Service Delivery Summit unlocked new perspectives on service delivery and how charities can maximise value to service users. Click here to watch the session recordings for free.
Our Reimagining Services Hub features regular articles, podcasts, and webinars to support charities in delivering services. Click here to learn more.