We explore why many charities are hesitant to get involved with artificial intelligence (AI) and share the best ways to use it strategically and responsibly
Artificial intelligence (AI) is both known and unknown to us in 2024. While much of the underlying technology, such as machine learning and predictive tools, has been around for years, AI has become synonymous in recent years with its newer generative capabilities.
ChatGPT, one of the most well-known generative AI tools, was first released at the end of 2022, and similar AI technologies have grown rapidly over the last two years. A 2024 survey from Charity Excellence found that up to 90% of charity professionals are using AI on an individual level, while three in five charities are using it organisationally.
The gap between individual and organisational use, however, speaks volumes about how new the technology is and the caution around early adoption. The 2024 Charity Digital Skills report found that only 31% of charities saw using AI as a priority for their organisation in the next year.
As with all digital tools, those tried and tested and those emerging, the difference between success and failure lies in strategy. Artificial intelligence has the potential to change the ways we work for the better, making it easier to take informed, data-based decisions and giving more time back to focus on driving impact for our causes. But we must understand the challenges we’re facing and how, precisely, AI can solve them – adopting AI for its own sake won’t make much difference.
It is worth noting, too, that the ethical implications of AI need to be taken into account. Given that AI tools need data to be effective, charities need to be sure of the security of the different AI platforms when working with sensitive data, such as donor details.
With generative AI, it is also important that charities are aware of the risk of hallucinations and misinformation. Charities have a uniquely trusted voice and must protect it with information that is accurate, clear, and authentic to their cause.
For charities thinking about how they can balance the problem-solving capabilities of AI with other concerns, digital consultancy Reason Digital hosted an online session on AI ethics at Charity Digital’s AI Summit to help elucidate matters. They also have a Skills Hub that can help charities navigate AI and other important digital areas.
Below, we explore these considerations further to help charities use AI responsibly and to help them discover a pathway for using it effectively within their organisation.
When we think of AI in 2024, we tend to think of generative AI – the likes of Microsoft Copilot, ChatGPT, and Gemini creating quick text in response to a prompt. According to the 2024 Charity Digital Skills report, generative uses of AI are the most popular among charities, helping with developing online content, administrative tasks, and drafting documents and reports.
It is important to be aware, however, of some considerations when using generative AI. Firstly, misinformation. Generative AI pulls information from across a variety of sources, which can sometimes result in hallucinations, where the outputs are inaccurate or just plain wrong. It can also repeat biases or limitations found in the source material it pulls from, so charities should apply human oversight to ensure the accuracy and validity of the content before using any of it.
Secondly, in a similar vein, charities should be aware of issues around plagiarism. Copying content directly from generative AI could lead to copyright infringement or plagiarism even by accident. Charities should avoid using generative AI content wholesale to avoid copying other people’s work that has fed into the AI algorithm.
Thirdly, generative AI causes concerns around the quality of content. Written content created by generative AI, in its current form, is pretty easy to spot because of its plain, and often dull, responses. Charities are a trusted voice – around three quarters of people say they trust the information they hear from a charity through its website – so relying on artificially generated content without human oversight can undermine your authority and authenticity.
And finally, there are environmental implications for using generative AI too. Research from Goldman Sachs revealed that a query on ChatGPT needs almost ten times as much electricity to process as a Google search, placing more demand on data centres than other cloud-based technology. As the sector endeavours to reduce its environmental impact, charities must be aware of the carbon footprint of AI and only use it when relevant to specific objectives. This means building a strategy that spells out when generative AI may be useful and appropriate.
Charities should consider creating a formalised AI policy to mitigate these concerns. With an AI policy, charities can control when and how AI is used internally. For example, generative AI may be used for ideation, rather than content creation. It ensures that entire teams are aware of the challenges of using generative AI and encourages them to apply that vital human oversight to overcome them.
Predictive AI is one of the most useful applications for the charity sector, enabling organisations to spot trends and gain insights from their data with quicker analysis. However, only 8% of charities are currently using AI tools for data analysis, per the Charity Digital Skills report, which is a missed opportunity for the sector.
Some of the hesitancy around predictive AI may lie in how it processes data and fear of its complexity. Yet much of what we know of predictive AI is familiar to charities in some form. Many charities use tools to determine the best times of day to post social media content, the success of different email subject lines, the best times of year for donations, and much more. This is what AI is good for – taking reams of information and analysing it at pace to discover paths forward.
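To make that kind of analysis concrete, here is a minimal sketch of spotting seasonal giving patterns from historical donation data. It assumes a hypothetical CSV file with "date" and "amount" columns; the file name, column names, and the choice of pandas are illustrative assumptions rather than a prescribed approach.

```python
# A minimal sketch of the trend analysis described above.
# Assumes a hypothetical CSV of past donations with "date" and "amount"
# columns; the file name and column names are illustrative only.
import pandas as pd

donations = pd.read_csv("donations.csv", parse_dates=["date"])

# Group historical donations by calendar month to surface seasonal patterns,
# e.g. a spike around year-end giving.
monthly = (
    donations
    .assign(month=donations["date"].dt.month)
    .groupby("month")["amount"]
    .agg(["count", "sum", "mean"])
    .rename(columns={"count": "gifts", "sum": "total", "mean": "average_gift"})
)

# Months with the highest total giving appear first.
print(monthly.sort_values("total", ascending=False))
```

Even a simple summary like this can inform when to schedule appeals; predictive AI tools essentially extend the same idea to far larger datasets and more variables.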
When it comes to protecting data, charities should set out data-handling rules within their AI policy. For example, ensuring all data is anonymised before being fed into AI tools helps keep your information secure while it is processed, as the sketch below illustrates.
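Below is a minimal sketch of one possible anonymisation step before records leave your systems. The donor record fields and the salted-hash approach are illustrative assumptions, not a prescribed method, and any real implementation should follow your own data protection guidance.

```python
# A minimal sketch of anonymising donor records before they are passed to an
# AI tool. The record fields and the salted-hash approach are illustrative
# assumptions, not a prescribed method.
import hashlib

# Keep the salt secret and outside the dataset so hashes cannot be reversed
# by anyone who only sees the anonymised output.
SALT = "replace-with-a-secret-value-held-outside-the-dataset"

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]

def anonymise_donor(record: dict) -> dict:
    """Strip direct identifiers, keeping only the fields needed for analysis."""
    return {
        "donor_id": pseudonymise(record["email"]),  # stable but not reversible
        "amount": record["amount"],
        "month": record["date"][:7],  # keep year and month, drop the exact day
    }

donors = [
    {"name": "A. Donor", "email": "a.donor@example.org",
     "amount": 25.00, "date": "2024-03-14"},
]

print([anonymise_donor(d) for d in donors])
```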
Similarly, charities should be aware of the security of any third-party platforms they use and how those platforms use data afterwards – is it used to train their models, for instance? Microsoft states that Copilot for Microsoft 365 does not use your Microsoft Graph data to train its AI models, and that the information contained within your prompts (the data input and the outputs) remains within the Microsoft 365 service boundary already in place.
Charities may want to consider creating their own AI tools to control how their data is used, both in generative and predictive applications. Creating your own tool removes the reliance on third-party security systems to protect data and allows a better understanding of how that data is processed. Lack of transparency around AI algorithms can mean that users can’t be sure of how it reaches its conclusions, meaning it could be repeating biases found in data or delivering inaccurate results.
Building bespoke AI tools can be complex. Charities must first work out what they want to achieve, balance their need with the ethical implications, and ensure that their proposed use of AI meets their goals and objectives. Again, it all comes back to strategy.