Insights
We explore how charities can prioritise the responsible use of artificial intelligence (AI) in their organisation, with a checklist from Public Interest Registry (PIR)
Artificial intelligence (AI) is poised to be a defining technological force of this decade, driving transformative changes across society and industries. With each breakthrough, capabilities and benefits increase, but so do risks and the potential for unintended consequences. In 2024, there are increasing calls to establish frameworks for responsible use, so we can ensure AI benefits society while safeguarding against its risks.
For charities, it is important that the sector contributes to that conversation. AI has the capacity to revolutionise the way we work, freeing up time that is desperately needed to help our communities and service users. But there are also very real concerns. Research from the Charities Aid Foundation shows that 34% of the public believe the risks and opportunities of AI are equal, while more than a fifth (22%) say the risks outweigh or far outweigh the opportunities. Among their top concerns are subsequent reductions in the workforce, the risks of a data breach, and poor data leading to biased decisions.
But the fact remains that AI is an important opportunity for the sector. It can help improve productivity, increase efficiency, and find new solutions for difficult challenges. What’s more, many in the charity sector are already embracing that opportunity. The 2024 Charity Digital Skills report revealed that 61% of charities are using AI in their day-to-day operations. Of these, 45% are using AI tools informally, testing them out to understand how they might be used to drive further impact for their charity.
Given the extent to which many software products are incorporating AI into their core functionality, for many charities it is not a matter of choosing whether to use AI, but of shaping how we use it and ensuring it aligns with our strategy, purpose, and ethics.
As part of that conversation, PIR.org, the organisation that powers the .ORG domain name, has created a helpful checklist for charities to use to balance AI’s usefulness with its risks. Outlined by PIR’s Chief Technology Officer, Rick Wilhelm, the checklist has three key areas for charities to consider: risks, opportunities, and guidance.
Below, we share the key questions charities can ask themselves when assessing each area. Their answers to the checklist can then help charities formulate their internal AI policy, codifying what responsible use looks like for all stakeholders.
Confidentiality – Consider the confidentiality of the data you’ll be providing to your AI tools. Where is the data going? Who else is seeing it? After the tool provides the results you are seeking, will your data be used to train the AI tools or is it kept within your security boundary? In many cases, these answers are available in the Terms and Conditions of the tool.
Countermeasure: Review your organisation’s policies related to agreeing to any End User License Agreement (EULA) that involves organisational data; in many cases, these will require review by an attorney.
Skills – Will using AI build skills internally? Could a human perform the task you’re applying AI to better (i.e. faster, with higher quality)? When using AI instead of a person to perform a high-frequency or high-importance task, an organisation risks forming a dependency similar to having a single team member who can do or understand a task: that is, no one else on the team can either do the work or check the work.
Countermeasure: Consider how using AI is contributing, positively or negatively, to your (or your organisation’s) long-term skills plan.
Quality – Assess the quality of AI’s output. Are the answers correct? Do they match your expertise as a charity professional working in your organisation? One doesn’t have to look hard for articles about AI generating hallucinations and nonsensical answers. An important thing to remember about AI is that its answers are rarely fully deterministic and can be highly sensitive to small changes in inputs. This stands in stark contrast to users’ typical expectations of computer systems, which, when queried for the same data over time, will (typically) give the same answer.
Countermeasure: Always check the output of an AI before using it.
Efficiency – AI can help with laborious tasks. List the tasks that could be made more efficient with AI and the tools that can make that possible.
For each task, is the effort required to use AI for these efficiency gains worth the time saved (and the time/money/complexity cost of the tool)?
Capability – Are there skills missing in your charity that AI can give you?
Is this skills assessment (in the context of AI) pointing out a training or capability gap in the current team?
Quality – AI can improve the quality of the work you’re doing. For example, it can help you analyse data more quickly and improve how you fundraise or deliver services. Work out where its outputs can really have value.
Are there other, perhaps simpler, ways in which you can improve the quality of your work? The possible use of AI might be the trigger for analysing operations for possible improvement, but there are many possible solutions beyond AI.
Read the fine print – Once your organisation’s data is submitted to an AI tool, it’s hard (probably impossible) to delete completely, so it is vital both to read the fine print and to pick your partners carefully. Understand how each tool works before using it, particularly when confidential data or personally identifiable information (PII) is involved.
Purpose – Just because you or your team uses AI doesn’t mean your work product is better. And just because a product has AI doesn’t make it better. Ask yourself whether AI is really making a difference right now.
Test – While there are plenty of reasons to go slow on AI, rest assured, it is not going away. This is another foundational technology that every business needs to understand and be ready to incorporate in some way. In the grand tradition of the telephone, copier, fax, PC, LAN, internet, ecommerce, and cloud: AI is here to stay. So charities can, and should, start testing and getting familiar with the technology, probably starting with low-stakes pilots.
Relax – The market is evolving quickly right now. Charities can afford to take a breath and take their time working out their needs, testing tools and letting the market evolve. These are “early days” in the AI revolution. At some point, we will look back at the launch of ChatGPT with the same nostalgia as the launch of the Netscape 0.9 web browser, a little more than 30 years ago.
Collaborating with other charities in the sector and seeking out expertise from others can also help inform our understanding of AI. PIR’s .ORG Learning Center, Charity Digital’s AI Hub, and CAST are all good places to start when considering AI and its impacts on the way charities operate.
Now is the time to lay the foundations for how we use AI – only then can we use it to build and grow on our shared mission to help others.
Click above to find more learning resources for charities, delivered by PIR.org.
Our courses aim, in just three hours, to enhance soft skills and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options by clicking here.