Insights
Training
On-demand
We examine the benefits of artificial intelligence and ask how it’s developed since ChatGPT’s arrival in 2022
The concept of artificial intelligence (AI) has grown quickly in momentum since the arrival of generative AI tool ChatGPT in late 2022. A year later, AI was named “Word of the Year” by Collins Dictionary, in recognition of how quickly it had become “as ubiquitous and embedded in our lives as email”. Meanwhile, ChatGPT itself reached 100 million users just two months after launch, making it “the fastest-growing consumer application in history”, and in February 2023, the big tech names followed suit, with Microsoft and Google releasing their own AI tools, Copilot and Gemini respectively. The trajectory of AI seemed clear: it was the next big thing.
However, in 2025, developments in generative AI have slowed down. The release of Chinese AI company DeepSeek’s latest model, which purports to perform the same as ChatGPT but at a much lower cost, shook the tech industry – but little else – in late January. In March, OpenAI (ChatGPT’s owner) revealed an AI tool that had made advances with creative writing – something generative AI tools historically have been poor at...and still are.
But these revelations are yet to have much of an impact beyond the tech world. And there are clear risks involved in the technology that are yet to be fully addressed. Generative AI tools creating content, for example, are continuously dancing with issues around plagiarism and copyright, given the wealth of data it learns from to put out even the most basic writing. There are also privacy concerns around DeepSeek, with many openly available AI platforms carrying similar risks.
Adding these to the myriad other ethical issues surrounding AI content must lead us to ask ourselves: is generative AI really worth it?
While much has been made of ChatGPT’s growth, the number of users suggests that it’s not yet a ubiquitous tool people can’t live without (despite Collins’ assertion in 2023). ChatGPT says it has 300 million monthly users as of March 2025, and is the ninth most visited website in the world – which sounds impressive but is notably below Bing, Microsoft’s search engine (and considerably below Google). For further comparison, Facebook has 3 billion monthly users.
Ed Zitron, CEO of EZPR and host of the Better Offline podcast, also points out that the number of monthly users does not tell us much about how people are using AI: “It doesn’t delineate between daily users, and those who occasionally (and shallowly) flirt with an app or a website. It doesn’t say how essential a product is for that person.”
Indeed, while it would be churlish to suggest AI has no use cases at all, suggestions that the technology will “revolutionise the way we work” feels like an exaggeration, at least at present. In fact, while more than nine in ten C-suite executives expect AI to increase their organisation’s overall productivity levels, 47% of workers have no idea how to achieve these gains. The focus on AI adoption has increased workloads and pressure on employees, rather than relieving it. And the most common usages of AI remain vague. In an article about DeepSeek, the BBC lists AI’s uses for “everyday tasks like writing emails, summarising text, and answering questions - and others even use them to help with basic coding and studying.” None of this is particularly transformative and that is reflected in how charities are currently using AI.
According to the 2024 Charity Digital Skills report, the most popular usages of AI are developing online content, administrative tasks like summarising meeting notes, and drafting documents and reports. More than a third of charities are not using AI tools on a day-to-day basis. This could suggest that the charity sector is in the early stages of AI adoption generally or that perhaps the transformative power of AI just isn’t that applicable yet.
As Zitron asks: “If generative AI disappeared tomorrow — assuming you are not somebody who actively builds [AI]— would your life materially change?”
With all this in mind, we sought to answer that question below, asking what AI can do and at what cost. We explore the use cases, the risks, and how charities can mitigate them to discover if, in 2025, generative AI is really worth the hype.
It is worth noting that the world of AI moves fast. New tools seem to come out of nowhere and what the capabilities are at the time of writing this article could very well change over the next few weeks, let alone years.
However, the AI news cycle moves faster than the technology. Reports on DeepSeek when it launched in January 2025 talked about how it had shaken the markets, knocked money off share prices, and was downloaded by millions. While its growth certainly was notable in January, there have been signs of decline since then. “Many users, driven by curiosity, have tried the platform but either found it lacking compared to more established competitors or simply didn’t find enough reason to continue using it regularly,” notes TechRadar.
Similar trajectories have also been experienced by DeepSeek’s more established competitors, including ChatGPT. As of May 2024, a third of people in the UK said they had used Generative AI – which isn’t enough to demonstrate that is becoming an essential part of our lives. Of those who use it, only one in ten use it daily while two in five use it less than monthly.
But that doesn’t necessarily mean generative AI isn’t useful. It could be that we’re simply not using AI efficiently at the moment. The 2024 Charity Digital Skills report revealed that three in five charities want to develop a general understanding of AI and how other charities are using it, suggesting that there is a lack of clarity around how it can practically help. More than half (58%) of charities want practical knowledge of how to use AI tools responsibly, while 48% want to know about use cases in their services.
So, what are the use cases of generative AI as it stands? Here, we look at three of the ways charities can use AI and how they can benefit the sector.
Generating content was the most popular reason for using AI in the charity sector, according to the 2024 Charity Digital Skills report. Large language models (LLMs), which power the likes of ChatGPT, Gemini, and Claude, are able to generate swathes of text incredibly quickly. With the right prompts, charities can create email subject lines, social media posts, and even entire blogs – though the latter is not advisable due to risks involved.
Charities are trying to connect with a wide range of stakeholders – donors, supporters, beneficiaries, volunteers, and trustees. Almost three quarters of UK adults are concerned about the prevalence of AI generated content – overly relying on generative AI can lower trust and undermine a charity’s authority when it comes to communicating about their cause. Using generative AI to create images throws up the same challenges.
When using generative AI for content, it is important to apply human oversight. Human oversight mitigates the risk of publishing inaccurate information produced by AI (also known as “hallucinations”) and improves the quality of the text, which can be stolid and uncreative.
There are also signs that AI perpetuates bias if the information it is fed is biased itself. There have been many reports of generative AI tools reproducing gender and racial stereotypes. One study found that images generated by AI to show people in high-paying jobs were more likely to feature subjects with lighter skin tones, while people in low-paying roles were more likely to have darker skin tones.
While we might account for such bias in humans and take steps to tackle it, we need to recognise its influence on content created by AI. Applying human oversight allows us to catch potential issues before they become a wider problem.
There are more disadvantages to using AI to create content – including ethical implications around plagiarism – but we’ll cover those in more detail later.
One of the most common beliefs about AI is that it will give us time back, automating laborious, time-consuming background tasks to focus on the actions that are most impactful.
There are many administrative tasks that AI is able to help with, from summarising meetings to composing emails. Microsoft users can get a lot of support from its AI tool Copilot, which combines the natural language processing technology behind ChatGPT with data connected from your calendar, emails, Teams chats, and more.
With Copilot for 365, for example, charity professionals can create presentations from Word documents in PowerPoint, create graphs on Excel, and summarise email threads in Outlook. Generative AI can also create a polite response for you, based on the information in those emails. Even if the results are not quite the finished product and need a bit of tweaking, AI has the potential to perform simple tasks more quickly. How quickly is another matter.
Looking up information was cited by two in five people as a reason to use generative AI in the workplace, according to Deloitte.
This is understandable given the way generative AI can present a summary of information to any question in a personable, easy-to-understand way. It gives exact answers to bespoke questions, as opposed to delivering general information. If you have a question about ways an animal charity based in Surrey with three employees can fundraise, it will give you answers specifically for that. Far from trawling different web pages to find the information you need, generative AI tools present it cleanly, consolidating them in one place.
Prompt engineering can help AI tools refine their answers, making it more likely to give you the exact information you’re looking for. You can ask tools to provide their sources (note that Copilot for Bing, Microsoft’s search engine, already does) so you can verify information and gauge its reliability.
Again, we should consider our golden rule of using AI, especially for research: apply human oversight. AI tools have been known to share inaccurate information, or “hallucinations”, and there’s not always lots of transparency around the sources it draws from. More on that later.
So AI has shown itself to be useful for some tasks. But the question remains: do the benefits currently outweigh the costs?
Below, we explore the downsides of AI (and share a little about if and how charities can avoid them).
Regardless of the varying quality of the results, AI requires huge amounts of data in order to spit them out at all. And this data has to be housed somewhere – namely in thousands of data centres all over the world.
Data centres do not solely exist to power AI – all cloud-based software relies on them. But the rise of AI is putting pressure on data centre capacity, with research from McKinley predicting that demand for AI-ready data centre capacity could rise by an average of 33% year-on-year before 2030 if current trends continue.
The number of hyperscale data centres – which are designed for housing and processing larger data sets and are most used by the tech giants such as Amazon, Google, and Microsoft – have already doubled since 2020. They are expected to do so again in the next five years, largely estimated to be a result of AI. Goldman Sachs estimates that AI will drive an 160% increase in data centre power demand by 2030, with queries from ChatGPT needing nearly 10 times as much electricity to process as a Google search.
The issue with all this growth, then, driven by the data demands of AI, is that it comes at a significant environmental cost. Data centres require lots of energy. In 2022, data centres accounted for a fifth of Ireland’s energy consumption. The Chief Executive of the National Grid in the UK has suggested that the energy needed to power data centres could increase six-fold in the next ten years. Banking giant Morgan Stanley projects that data centre carbon emissions will accumulate to 2.5 billion metric tons by 2030.
Data centres also use up a lot of water. Data centres generate enormous amounts of heat and need cooling to stay operational. While there are lots of different approaches being tested to rectify this (the UK government, for example, is looking at recycling the waste heat to homes in the local community), traditional cooling methods might use 26 million litres of water per year even in a relatively small data centre. Concerns have already been raised over the impact of the UK’s AI usage on its supply of fresh, drinkable water, with the BBC reporting that Thames Water had been in contact with the government “about the challenge of water demand in relation to data centres and how it can be mitigated”.
Digital technology has always had a relatively hidden environmental cost – despite having a carbon footprint comparable to the aviation industry. AI and its increased pressure on data centres looks set to worsen, not improve, that footprint, and we must ask ourselves: for what? The most common use of AI is responding to texts or emails, according to research from Forbes – and we surely don’t really need it for that.
Bernard Marr, contributor at Forbes, says: “AI’s rapidly growing energy footprint represents one of the industry’s most significant challenges as it is putting technological advancement at odds with environmental goals unless major breakthroughs in computing efficiency or energy generation occur.”
Charities must remember the environmental impact of generative AI when using it and weigh up whether its benefits are worth the costs.
Remember, lots of text may appear as if by magic, but it’s not: behind every AI output is the relentless thirst of a whirring, droning data centre.
There’s a real chance that AI might not make us more efficient after all, at least in its current form. More than three quarters of workers in Australia, Canada, the US, and the UK say that AI tools have decreased their productivity and added to their workload. Almost half (47%) of those using the technology say they have no idea how to achieve the productivity gains expected of them as a result.
In research, for example, generative AI has become a search engine replacement, providing users with direct, clear answers to their questions at speed. However, it yields mixed results. Google’s AI tool Gemini, which creates a summary of information that appears automatically at the top of search results, is often inaccurate – even in its own adverts.
Meanwhile, answers to questions asked on ChatGPT have been found to be vague, out-of-date, and hallucinatory too. It can take time to adjust prompts to get the right answers, asking for specific sources to back up the information shared. Without being able to gauge how reputable the sources are, it’s difficult to verify facts in the ways we’re used to when using search engines.
So even when we’ve finally got that neat little summary of information we asked for, after prompt engineering and refining our question, can we really say it has saved much time?
The advent of generative AI has thrown up significant questions around intellectual property and copyright. Generative AI tools are trained on existing data, a lot of which comes from copyrighted material and content written by humans. The outputs subsequently generated by AI tools may then be reproducing this content, perhaps verbatim, leading to legal and ethical questions around plagiarism, ownership, and respect for content creators.
This is still new ground for AI and answers have not been forthcoming. It is expected to be a regulatory sticking point for governments and the courts for quite some time, and cases are already being raised to challenge AI for unlawful use of existing works. Meta has recently come under fire once it was revealed it had used a pirated books site to train its AI tool Llama 3. OpenAI has faced similar issues, arguing that their use of copyrighted material falls under “fair use” since it is used to create new content, not reproduced wholesale.
Whatever the decision is on this matter, if one is eventually arrived at, for now we must proceed with caution. Charities, who face higher levels of scrutiny than many other organisations, may find their content – and their authority – is undermined if found to be entirely produced by AI.
There are real concerns around the security of AI, not just of the platforms themselves and how they protect the data we feed into them, but also around how the technology is being used to support fraudulent activity.
A 2023 study from PwC found that there is “consensus that AI will drive an increase in the volume and sophistication of fraud and scams” – though there was little evidence of this happening at scale at the time of the report. A 2024 study from The Centre for Emerging Technology and Security also noted that, while “the near-term impact of AI-generated code is limited”, generative AI “does have the potential to profoundly disrupt the cybersecurity landscape over a longer time horizon, exacerbating existing risks with respect to the speed and scale of reconnaissance, social engineering, and spear-phishing.”
So our picture of how generative AI will affect the cybersecurity landscape is not yet fully formed. But there are still some factors to consider.
The National Cyber Security Centre (NCSC) highlights some weaknesses within generative AI models which cyber criminals can exploit to manipulate outputs.
Prompt injection attacks, for example, occur “when an attacker creates an input designed to make the model behave in an unintended way”, such as revealing confidential information. Generative AI is also vulnerable to data poisoning attacks, which happen when someone tampers with the data used to train an AI model. This can create skewed, biased, or harmful outputs, or cause the AI model to fail.
The NCSC has published guidelines for developing AI systems to mitigate these risks and building security into their design, but also advises caution to software developers around downloading and executing code from LLMs.
We should also add that some cyber security experts expect AI to help with detect fraudulent activity and other threats and it is already being employed to do so. As cyber security software providers Avast point out, “Machine learning algorithms can be trained to identify patterns and anomalies in network traffic, allowing them to detect and respond to threats in real-time.”
So it may be the case that AI is both a problem and a solution in the world of cyber security. But, in a world where much of the AI coverage leans towards how it can help us, it is important to remain aware of the risks.
“Quality is the most important element of content,” writes Ioan Marc Jones, Charity Digital’s Head of Content. Yet, he notes, it is “remarkable, in fact, that so little of the discourse around ChatGPT and generative AI has focussed on the absence of quality.”
Generative AI’s ability to create large swathes of text instantly is impressive. But what really sets it apart – and makes it easier to spot – is its lack of quality. We’re not just talking about accuracy, but about how it uses language.
While LLMs follow formal grammar rules, and has an excellent vocabulary, it shows little to no flair. It is vague, dull, predictable, and impersonable (to be expected since it is, in fact, a machine). Its inability to break formal grammar rules, as a seasoned human writer might, prevents it from creating compelling, exciting content and produces stale, predictable writing instead. For charities, who use content to capture the attention of supporters and motivate them behind a cause, such writing can be a significant barrier towards achieving those goals.
There’s also the matter of AI drift. AI drift happens when LLMs run out of human-generated content to train from and trains increasingly on AI-generated content, leading to less optimal outputs over time. There have been reports of LLMs “hitting a wall” in terms of capabilities, with growth slowing as they run out of human data to train on. So it is not inevitable that content quality will improve as AI tools develop in the immediate future.
Furthermore, there’s a risk that an overreliance on AI means losing the skills we’ve developed to verify reputable sources and write quality content that communicates clearly and moves people to support our cause.
In an article about the decline of handwriting, author Christine Rosen highlighted a study about laptop use for taking notes in class, in which researchers reported that students using laptops processed information more shallowly than their handwriting peers. They were less likely to reframe and reword information in order to take it down and therefore less likely to have taken it in. What they gained in speed, they lost in understanding.
In a similar vein, we must ask ourselves what skills we are losing by adopting AI even for our most simple tasks, as well as those we are gaining. Is speed worth it, at the cost of everything else?
The short answer is not right now – or perhaps never. Resolving these issues will take time, regulation, investment, transparency, and co-operation between tech companies, governments, and the rest of us. It’s bigger than the charity sector.
But there are still measures we can put in place to ensure we are using AI responsibly, if we do intend to use it.
For charities, there are three simple and accessible ways to build confidence and experience around AI: crediting your content, creating an AI policy, and investing in training. Here we explore each method in more detail.
The sector should aim to be fair and mindful if using AI content, to mitigate the risk of even accidental plagiarism. Do not copy and paste content from AI. Instead, edit. Ask for and credit your sources. Create an AI policy internally to formalise these rules and ensure everyone is using AI the same way i.e. not publishing it unedited. If in doubt, always edit and credit.
Charities should also consider following the lead of social impact charity CAST and signposting when content is created by generative AI with an AI transparency statement, originated by author Kester Brewin. You can see the transparency statement in action at the bottom of this article.
As Brewin states: “Until we have some mechanism by which we can test for AI – and that will be extraordinarily difficult – we at least need a means by which writers build trust in their work by being transparent about the tools they have used.”
Creating an AI policy becomes increasingly important when you consider half of knowledge workers (defined as those primarily working at a desk or computer) use personal AI tools at work. Considering all the factors we’ve outlined above, employers must be sure their teams are using AI responsibly, mitigating against the risks of bias, plagiarism, and data breaches (to name just three).
An AI policy helps organisations put measures in place to guide and prioritise AI responsibility in the workplace. An AI policy holds everyone accountable and creates a culture of transparency around AI usage that benefits everyone, from trustees to beneficiaries alike. It sets out clear parameters for where AI should and shouldn’t be used, highlights the risks and how to circumvent them (e.g. using anonymised data with third-party platforms).
Furthermore, by including use cases within your AI policy, it can help charities use the technology more strategically, in aid of achieving their overall mission goals.
Friends of the Earth, in its thoroughly-researched guide to the responsible use of AI, also recommends making distinctions between predictive and generative AI use in an AI policy, since the two branches differ in terms of impact. When it came to defining use cases for AI in terms of climate justice and human rights, the charity found it easier to point to areas where predictive AI was having a positive impact, as opposed to generative AI, whose harms were more obvious.
Charities can use this checklist, compiled by PIR.org, to outline the risks and opportunities of AI and form the basis of their AI policy. They can find out more about what to include in an AI policy in our podcast, in partnership with Qlic IT, AI for Charities 101.
Skills gaps are a significant challenge for charities. The 2024 Charity Digital Skills report found that lack of skills and expertise was cited as a barrier to moving forward with AI by half of respondents. As a result, there is a clear appetite among charities for AI-specific training. As noted earlier, almost three in five (57%) charities were looking to take part in external training, support, guidance, or informal opportunities to engage further with AI in 2024.
Training empowers employees to use AI with a complete understanding of the implications, from how it works with data to how it sources content. It’s clear that AI usage will seep into the workplace even without formal training in place. Investing in training now enables employees to make informed decisions about when to use AI and weigh up the considerations we’ve highlighted within this article.
“While the most popular uses tend to be generating ideas and looking up information, these may not be optimal applications of GenAI, given known issues such as hallucination,” says Paul Lee, partner and Head of Technology, Media, and Telecommunications Research at Deloitte. “Employers need to step up and invest in tools and governance to better support their staff in using this technology.”
There is an argument to say the sector has bought into AI too soon. Its uses are limited and weighed up against the cost, arguably not worth it in its current form. But we don’t know what’s next for AI. For the sake of understanding how it works even in its relative infancy, and being able to spot misinformation alone, now is a good time to start educating ourselves – not just because of the potential opportunities, but because of its risks.
In 2025, generative AI should all be about balance. Knowing the environmental impact of generative AI should make you question whether using it simply to reply to emails is worth it. It should lead us to use the technology more strategically, applying it to the areas where it is most needed, not just the areas where it is easy.
Similarly, we must consider the ethics of using AI. Content does not appear by magic, but rather content created by humans is fed into these LLMs for them to regurgitate. Therefore AI should not take the credit.
Given that it is trained on content created by humans, we must also recognise its fallibility. AI can perpetuate bias. If the information being fed into LLMs is biased, then the outputs will be too and can cause real harm.
All this to say that AI should not be relied upon as a miracle solution, but rather used as a tool to support and augment. Applying human oversight can mitigate the risks and ensure that AI is being used responsibly in the limbo before it is legally regulated.
Follow-up questions for CAI
How can charities effectively apply human oversight to AI-generated content?What strategies reduce environmental impact of AI data centers in nonprofits?How can AI training improve responsible usage among charity sector employees?In what ways can AI enhance administrative efficiency for charitable organizations?How can AI tools be integrated to support ethical content creation practices?Our courses aim, in just three hours, to enhance soft skills and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options by clicking here.