We explore how the Charity Digital Code of Practice encourages exploration of AI and how your charity can take meaningful steps towards responsible AI adoption
Artificial intelligence (AI) is all the rage. It’s talked about in every organisation, every boardroom, and every conference. Many platforms now champion the use of AI in their systems. There is plenty of discussion and a lot of hype surrounding AI, but the potential remains very real. Charities should take a practical and responsible approach to AI, sifting through all of the fanfare and making the tech work for them.
The Charity Digital Code of Practice 2025 (the Code) supports that objective. The updated version of the Code specifically addresses the risks and opportunities around AI. It shows charity leaders how they can effectively utilise AI and build AI literacy across their charities. In this article, we explore the key tenets of AI in the Code and establish exploration as a key part of AI literacy.
As specified in the Code introduction, AI refers to systems that mimic human intelligence to solve complex tasks. Generative AI has become particularly popular in recent years, but other forms of AI, such as deep learning and agentic AI, can also provide significant benefits to charities.
AI has already begun to revolutionise the very nature of work, providing huge opportunities across the UK and beyond. By 2030, according to PwC, AI could add up to $16 trillion (approximately £12.5 trillion) to the global economy through productivity enhancement, stimulated demand, and small-scale automation. That’s why building AI literacy in the charity sector is so important.
The first step towards this literacy is to improve our understanding of AI and to be able to meaningfully define its different forms. AI is a term that is thrown around, often imprecisely, so we decided that the Code should define the core types of AI charities might use. Here are the most relevant definitions, as found in the glossary.
Agentic AI: Refers to AI systems that act autonomously to achieve specific goals. It can make decisions, plan actions, and adapt to its environment without constant human input. This type of AI is used in areas like robotics, virtual assistants, and autonomous vehicles.
Deep learning: A subset of machine learning involving neural networks. Deep learning is integral to AI models, especially in processing complex inputs like images and natural language.
Generative AI: An AI model that can generate new content. These models learn to capture the probability distribution of the input data so they can produce data similar to their training data.
Large language model (LLM): A type of AI algorithm that uses deep learning techniques and massive data sets to understand, manipulate, summarise, and generate human language.
We then sought to integrate AI across all of the Code’s principles. This means that, for every principle, whether Culture, User Led, or Adaptability, you’ll learn the best decisions to make with AI in relation to that principle. Consider, for example, the way AI has been integrated into the Data principle.
We explain that charities can use generative AI tools to perform data analysis tasks, but we also warn against the clear risks specified in the Code.
With AI, as Arturo Dell of Azeus Convene said in a recent webinar, charities need to balance efficiency with the need to manage risks surrounding accuracy, security, and the impact on critical services. That is largely the objective of the Code: boost efficiency, minimise risk.
The best way to mitigate risks and boost efficiency is through better AI literacy, and the route to better AI literacy, as specified in the Code, comes from a spirit of learning and exploration.
The spirit of exploration is vital to building AI literacy. Under the Strategy principle, for example, charities are directly encouraged to “explore how digital tools” will increase impact, with direct reference to using AI platforms. That exploration, as mentioned above, should not be confined to the generative form of AI.
The Code particularly encourages charities to explore “extractive and predictive AI” as a way to use data to support decision-making. It contains advice on automation and how that frees up time otherwise spent on repetitive tasks.
Effective exploration depends on caution. Charities should learn about AI, the Code says, and support staff on their learning journeys, prioritising the safer and more transparent platforms. “Focus on developing skills in the short-term,” the Code says, “with employees and volunteers developing AI skills in relation to the goals of their work, such as understanding how AI models work, how to conduct prompt engineering, and how to personalise and tailor outputs for use.”
Responsibility is key, as emphasised in the Risk principle. Charities need to actively identify risks prior to exploration and ensure that the AI aligns with key values, such as integrity and inclusivity, fairness and openness. “This includes using AI responsibly and understanding how adopting AI will impact marginalised communities.”
That means experimenting with solutions on risk-free tasks prior to adoption, and piloting on a small scale prior to wider rollout. It also means monitoring progress, especially around risks, making incremental changes, and talking to other charities to discover what works for them and to build literacy as a community. Charities should be ready to learn and explore, but always keep responsibility at the front of it all.
Put simply, the spirit of exploration builds your AI literacy. But the Code has plenty of advice to help you go further. It promotes the responsible, practical, and realistic use of AI. And, of course, it comes with a corresponding AI chatbot that can offer tailored advice that directly meets your needs. Try CAI out now and see how our AI can help you learn how to use AI responsibly.
Follow-up questions for CAI
How can charities effectively build AI literacy using the Code of Practice?
What are the key AI risks charities must manage according to the Code?
How does the Code recommend charities explore different types of AI?
In what ways can generative AI improve data analysis for charities?
How should charities balance AI efficiency with ethical responsibility?

Our courses aim, in just three hours, to enhance soft skills and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options.