We explore how charities can adopt AI ethically, with insights from our Digital Fundraising Summit panel
Artificial intelligence (AI) ethics may seem an esoteric subject, but charities are increasingly aware of how vital it really is. For time-pushed, purpose-led organisations looking to adopt AI, practical guidance is needed to make sure responsibility stays at the core, where charities know it belongs.
Our Digital Fundraising Summit 2025 panel, ‘Building Ethical AI in Fundraising Solutions’, explored the most important practicalities of AI ethics that charities should know, to help you get started in the right way.
The session, hosted by Microsoft Elevate, featured speakers from Kerv, mhance, and TES, three of Microsoft’s UK partners. Each works directly with charities to help accelerate their missions using Microsoft solutions. In this article, we explore their key insights into how charities can use AI ethically in fundraising and beyond.
In 2025, many charities are dazzled by the potential of AI technologies and are eager to adopt them at speed to reap their benefits as soon as possible. But amid all the excitement, it’s important not to sidestep the ethical questions that can help prevent disaster further down the line.
This was a point raised in the panel by Mandeep Padan, Solutions Director for Social Enterprises at Kerv. To avoid rushing to adopt AI and inadvertently compromising your charity’s ethics, he shared a four-pillar approach:
Pillar 1: Strategy. Be clear and intentional about how the charity will use AI responsibly, aligning purposes, use cases, and the value of AI with the appropriate ethical principles and management.
Pillar 2: Governance and security. Manage AI risk by making sure data has the appropriate tagging, classification, security access, and control filters.
Pillar 3: Management. Rigorously test your charity’s AI projects.
Pillar 4: Implementation. Build the AI solution, covering innovation, scalability, alignment with your mission, and management of your charity’s AI knowledge.
When charities are diligently working towards their missions, with all of the many related challenges and opportunities, grappling with the wide-ranging and nebulous topic of AI ethics may, rightly, feel a bit overwhelming. So how can charities make it easier on themselves, while still managing the risks effectively?
Chris Wilson, Nonprofit Technology and Solution Strategist at TES, suggests thinking about it in the context of individual use cases for your charity. Start by zeroing in on the particular projects your charity intends to use AI for—and then you can start to articulate what the ethical risks are and how you can mitigate those risks.
Gone are the days when organisations can be passive in how they use digital technology, the panellists explained. The intense pace of change in AI requires charities to stay actively engaged with the ethics of their AI projects throughout their lifespan.
Adopting AI is an ongoing evolution, said James Glover, Chief Technology Officer at mhance. For example, a once perfectly functioning AI solution can go off the rails if you lose focus and begin to enter slightly inaccurate data. Likewise, government legislation on how we use AI could change, so charities need to understand the ethics of their AI projects and be ready to adapt them as needed.
It’s important to recognise that implementing and managing AI is a long-term journey, the panellists agreed. One way of managing this is to monitor the success of AI projects by how well they adhere to ethical parameters, as Chris Wilson highlighted.
Charities are responsible for using data ethically, such as protecting the privacy of stakeholders’ personal data, ensuring data is not used to make biased decisions, being inclusive with data, and avoiding harm to vulnerable communities when processing and using data.
Mandeep Padan outlined three areas that charities can focus on to ensure transparency. First is an architectural framework for ethics, including elements such as a strategy, governance, management, and AI tools. Second is having clear roles and responsibilities within the team, starting with commitment to AI ethics from senior leaders. Finally, charities need to have clear communication channels that lead right up to their stakeholders.
It’s important to know that everyone is still finding their way with AI, he noted. Similarly, when communicating about AI to stakeholders, James Glover emphasised that it’s best not to simply say, “We’ve done AI” as a box-ticking exercise, but rather communicate “We’re doing AI, and it’s a journey.”
Microsoft Elevate is committed to delivering affordable and innovative cloud solutions to help nonprofits tackle the world’s biggest challenges.