A trustees’ guide to AI ethics

We explore what charity leaders need to know about the ethics of artificial intelligence and the right questions to ask when mapping its risks 

Artificial intelligence (AI), like all digital tools, carries its fair share of risks. Data bias, insecure data handling, plagiarism, misinformation, and a heavy environmental footprint are all well-documented and should be taken into account when evaluating the technology.  

 

Trustees are responsible for setting the charity’s risk appetite and putting controls in place to mitigate threats, so it is essential that they understand the ethics of AI and are able to ask the right questions as the technology develops.  

 

Leadership around responsible use of AI is welcome as the sector tentatively adopts the technology – three in five charities want leaders and trustees to have more training to help them and the sector move forward with AI. 

 

However, while AI develops at pace, organisational governance lags behind. Just one in five charities are undertaking regular assessments of AI risks, according to the 2025 Charity Digital Skills report, while a similar proportion (22%) are reviewing their governance to give trustees better oversight of AI. As the report points out, “The fact that these numbers are low is a concern.” 

 

To help charity leaders develop their understanding of responsible AI, in this article we share some of the common concerns around AI use and explain how trustees can learn more about the technology by exploring it safely in their own work.  

 

Trustees can find out more about using AI for governance in OnBoard’s helpful guide below. 

 

 

What are the ethical risks of AI? 

 

Data security  

 

One of the key concerns around AI use is how it keeps sensitive data secure. Some large language models (LLMs) use the data they are fed to train how they respond, meaning that data can inform later outputs for other users – not just those involved in the initial request.  

 

It is crucial, therefore, that charity leaders understand how their data is used with third-party AI tools. Steps such as anonymising data should be taken and outlined in an AI or data policy to ensure responsible use across the organisation.  

 

Likewise, only trusted tools should be used to process data. Charities should consider starting with existing digital tools which keep data safe within their ecosystem, with their AI tools subject to the same security controls. For example, OnBoard’s board management software operates in a “closed-loop system” – no data leaves your board’s system and no data is used to train external models.  

 

 

Data bias  

 

There are many recorded cases of AI perpetuating harmful stereotypes or amplifying data bias due to flaws in the information it is fed. Just as we take steps to address bias in humans, we also need to recognise its potential within AI systems. If groups are under-represented in data when it is given to AI, the outcomes it produces will likely exclude them.  

 

Charity leaders need to be aware of these flaws in AI and always apply human oversight to its outputs. Creating a data or AI policy can help ensure that only clean, accurate (anonymised!) data is used with AI systems, leading to better analysis and more relevant results.  

 

Bias within generative AI – including AI images – can also perpetuate damaging stereotypes, but this can be mitigated using guidelines created by ethical filmmakers The Saltways. The guidelines explore how to prompt responsibly and sort uses of generative AI into high- and low-risk categories. 

 

 

Misinformation 

 

AI has been known to produce hallucinations, inventing information and offering it as fact. There have been high-profile examples of non-existent articles being attributed to journalists and legal cases that never happened cited in courtrooms.  

 

Charity leaders making informed decisions for the future of their organisation must be especially aware of the risks of misinformation when using AI as a research tool, and should always double-check their sources.  

 

 

Environmental footprint 

 

Environmental issues are becoming increasingly important to charity leaders as a matter of good governance. The 2025 updates to the Charities SORP mean that larger charities (those with an income of more than £15 million) are now required to report on environmental, social, and governance (ESG) matters, while smaller organisations are also encouraged to do so where it meets the needs of their stakeholders.  

 

The environmental impact of AI is huge and potentially underestimated, which puts more emphasis on using the technology strategically. Charities with a commitment to lowering their environmental footprint must particularly consider this when evaluating use cases: the benefits must outweigh the risks.  

 

 

What can trustees do to prioritise AI ethics? 

 

Charity leaders can start developing their understanding of AI by committing to training and exploring the tools firsthand. Common barriers to AI proficiency include technical fear and a lack of shared language among board members. But sharing the learning curve can give the board a practical understanding of how AI actually works, allowing them to govern more effectively and make more informed decisions.  

 

A safer way to start is with their own digital tools. OnBoard’s software comes with a suite of board-focused AI tools that can help leaders engage with AI without compromising confidentiality.  

 

With OnBoard AI, board members can generate structured agendas, summarise board materials, and transcribe meeting minutes, while keeping data within the board management system. It also makes it possible to list action items and follow up on incomplete ones, while its Insights AI function analyses talk time and agenda alignment to show which discussions typically take up the most time.  

 

“Great governance doesn’t just react – it anticipates,” explains OnBoard’s AI guide. “Insights AI identifies emerging issues, compliance red flags, and under-the-surface dynamics before they escalate.” It helps trustees prepare and guide the organisation, knowing what topics need to be explored and what actions need to be taken to move forward.  

 

AI is just a tool. How charities decide to use it needs to be balanced against practical use cases and whether it aligns with organisational values. But whatever the outcome, it’s clear that, in a particularly time-pressured sector, AI use needs to be properly explored. And the best way to do that is by testing it in a secure environment, with tools charities are already familiar with.  

 

To find out more about how AI can support better governance and board management, download OnBoard’s full guide below. 

Laura Stanley
