Charities are being urged not to forget potential legal issues that can arise from using artificial intelligence in their work
There are huge benefits for charities in investing in artificial intelligence (AI). The technology can improve how an organisation runs and generally make its work easier and more impactful, even just by taking on day-to-day administrative tasks so that staff time can be focused on frontline support.
Likewise, through chatbots, more people may be helped faster, and support can also be improved through AI’s role in analysing data.
However, when using AI in their work, charities are being urged to also consider the legal implications. This includes adhering to data collection laws, ensuring there is no copyright infringement when using the technology to generate content, and being aware of discrimination and bias in AI-produced work.
Speaking at a discussion session in 2025 on AI run by think tank New Philanthropy Capital, Kieran John, Managing Associate at law firm Mishcon de Reya, advised charities not to view AI “in a vacuum”, adding that the starting point in using the technology “has to be your legal duties”.
Developing a clear policy around AI use can help charities better identify and pre-empt risks, as well as set standards for the technology, according to Blackbaud’s Status of UK Fundraising 2025 report. The report revealed that the proportion of charities using AI had increased from just over half in 2024 to more than three in four a year later. However, it also found that “AI policy development is still in its early stages despite increased use”.
Here, we look at some of the key legal implications around AI for charities to consider and explore practical steps they can take to mitigate risk.
The Information Commissioner’s Office points out that whenever an organisation is processing personal data to train a new AI system or make predictions using an existing set up “you must have an appropriate lawful basis to do so”.
In its guidance, the UK’s information watchdog also notes that it is each organisation’s responsibility to use AI within the law and that any decisions around this should be documented.
Among the laws to be aware of are the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR), both of which protect sensitive personal data. Guidance from Mishcon de Reya points out that the use of personal data with AI technology would necessitate a data protection impact assessment being carried out under UK GDPR.
Intellectual property (IP) infringement takes place when someone’s work is used, copied, or exploited without their permission. Mishcon de Reya’s guidance says that “charities should be mindful of the intellectual property complexities” in using AI-generated content and ensure they put in place “policies that take account of any potential infringement of third-party IP rights or breaches of terms and conditions of use of third-party content”.
In December 2024, the UK government launched a consultation on the issue of copyright and AI to give greater clarity. Currently, if a generative AI model includes a “substantial part of a copyright work” without a licence then “this may infringe copyright”, it points out.
However, if a work is generated without any “human authorship”, that may be protected as a “computer-generated work” in UK copyright law.
Options being considered by the government include strengthening copyright by only allowing AI models to be trained on copyright works where a licence is in place. Another option is to introduce a “broad data mining exception” with few restrictions on the use of copyright material. But the UK government acknowledges that few countries provide this, with Singapore the only example given.
Charities also need to be aware of potential breaches of the 2010 Equality Act through bias in AI systems that discriminates against people.
Already there is considerable concern around AI’s potential for bias. For example, a study by Carnegie Mellon University in Pittsburgh, USA, found that algorithms used in Google’s online advertising displayed high-income jobs to men more than women. Similarly, Amazon scrapped an AI recruiting tool that discriminated against women for technical roles.
Mishcon de Reya also warns that biased AI systems “could also give rise to reputational damage and may result in regulatory scrutiny”.
The law firm’s guidance also raises the risk of “hallucinations”, where AI generates false or misleading information. Generative AI systems may create “plausible” information that is “actually inaccurate”, as the technology may not be able to discern nuance or broader context.
Practical legal steps charities can take to mitigate the above risks include: developing a clear policy on AI use; identifying an appropriate lawful basis and carrying out a data protection impact assessment before processing personal data; documenting decisions around AI use; checking AI-generated content for potential third-party IP infringement; and building in human review of AI outputs to catch bias and inaccuracies.