We discuss the best practices for using AI at work, from how to use it strategically to protecting against its risks.
There’s been a lot written about AI, the efficiencies it brings and how it will transform the workplace. In some cases, new tech has already been implemented – three quarters of charities are using AI, according to the 2025 Charity Digital Skills report.
Despite the heralded advances, it’s not a free-for-all. In this article, we share some simple rules and best practices to steer your use of AI and highlight the risks to guard against.
If you’re going to use AI at work, it’s important to understand its use cases, scope, and limitations. From an organisational perspective, drafting an AI governance policy sets out the standards that staff should abide by.
Some concepts to include in an AI policy document are ethical guidelines, examples of when to use AI and when not to, and who is responsible for owning the technology within the charity.
There are many considerations around how AI algorithms select appropriate reference datapoints. Data could be gathered from around the internet or from a subset of select sources. Understanding where the information comes from may uncover unintended biases.
For example, AI applicant tracking systems may determine that candidates from certain backgrounds have higher hiring success rates, and conclude that it makes logical sense to favour candidates from those backgrounds. Bias is inherent here – that’s not a fair basis for selection. To mitigate such risks, IBM recommends using AI fairness tools to evaluate models for bias.
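To make the idea concrete, here is a minimal sketch of one common fairness check – the “four-fifths rule” (disparate impact ratio) – applied to hypothetical shortlisting outcomes. The group names and numbers are illustrative only, not real data; dedicated toolkits such as IBM’s AI Fairness 360 implement this metric and many others more rigorously.

```python
# Illustrative fairness check: disparate impact ratio on hypothetical
# hiring data. All figures below are made up for demonstration.

def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were shortlisted."""
    return selected / applicants

# Hypothetical shortlisting counts per applicant group
group_a = selection_rate(selected=40, applicants=100)  # 0.40
group_b = selection_rate(selected=18, applicants=100)  # 0.18

# Disparate impact ratio: lower selection rate / higher selection rate
ratio = min(group_a, group_b) / max(group_a, group_b)

# The widely used four-fifths rule flags ratios below 0.8 for review
if ratio < 0.8:
    print(f"Potential bias: selection ratio {ratio:.2f} is below 0.8")
```

A check like this won’t prove a system is fair, but it gives charities a simple, auditable starting point before trusting an AI tool with recruitment decisions.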
There are efficiencies to be gained from AI in the workplace. In finance, the technology could assist directors in automating mundane reconciliation tasks and discerning trends. Meanwhile, in the fundraising team, AI could help speed up the time to produce a grant application.
AI could also be used to engage audiences via chatbots and other service delivery automations. Audiences may want to, or have the right to, know when they are dealing with AI. It’s up to charity leaders to decide how and in what format audiences are told.
Before deploying an AI assistant, charities should be cautious. Due diligence should cover what the assistant is meant to do (i.e. answer questions or something more?); who it speaks to; the tone of language it uses; and how it is implemented.
Charities should proceed only when the AI is tightly controlled and tested. Leaders will want to ensure that AI bots have a firmly circumscribed role so that there’s no added moral or ethical liability.
Another valid concern around implementing AI is how to ensure that its output makes sense and is correct. Microsoft recommends a simple rule to follow: “Always double-check AI-generated results before using them in your work. AI can make mistakes.” Charity Digital agrees – always apply human oversight.
There are still sensitive areas where AI shouldn’t be used. Feeding confidential information about beneficiaries, contracts, and financial details into AI could expose charities to cybersecurity risks, even unknowingly.
In fact, the Society for Computers & Law notes that: “In the UK, 57% of organisations admit they cannot track sensitive data exchanges involving AI, amplifying the risk of breaches.”
When using AI, staff are generally asking generative platforms questions or asking them to synthesise data. A major consideration around this process is how the AI collects data and whether it’s shared or used for reference.
The UKSPA highlights that not only do most consumers not know how much personal data is collected by AI, but in many cases, they regret sharing data. Mitigating this risk is straightforward – do your due diligence on the AI, understand what’s being collected, and let beneficiaries choose what to share.
Having an AI policy isn’t enough to govern how colleagues use the tech. Training staff is critical to successful implementation.
Here we share some of the best resources to help charity teams navigate AI:
Charity Digital’s Artificial Intelligence hub shares articles, webinars, and podcasts designed to educate the sector on AI and its impacts
Salesforce’s workbook helps staff prepare for how the tech will impact their organisation and how to draft policies
Chartered Institute for Fundraising’s Introduction to AI Tools covers examples of AI and some of the ethical, privacy and organisational concerns
Charity Excellence Framework’s AI for Fundraising and Charity Innovation shares information including handy written frameworks and how AI might impact jobs and operations
Media Trust and NCVO’s AI Essentials Bootcamp includes live workshops and digital learning. The three-week course works well for all charities as it’s free of charge.