We explore how fundraisers can take their first steps with artificial intelligence (AI), drawing on insights from our Digital Fundraising Summit’s Executive Panel
While the demand for innovation in fundraising has never been more apparent, the good news is that fundraisers are matching today’s challenges with an appetite for ever-more resourceful and creative ways of working. It is no surprise, then, that many are turning their attention to the booming technology of artificial intelligence (AI) to produce efficiencies, unlock resources, and help create imaginative campaigns.
In response to this growing interest, our Digital Fundraising Summit 2023 Executive Panel, ‘The impact of AI on the nonprofit sector’, discussed how to use AI to address today’s key fundraising challenges, how to work with different internal teams on embracing AI effectively, and the future possibilities for AI in fundraising.
In this article, we explore the panel’s insights on the first steps charity fundraisers should take to respond to the emergence of AI.
In the panel, Lucy Squance, Director of Supporter Led Fundraising and Digital at Alzheimer’s Research UK, encourages fundraisers to get curious about both the benefits and risks of AI. This could involve talking to others about how they are using it, exploring different tools away from work to get familiar with their different uses, and signing up to regular news updates on the latest AI developments.
Alzheimer’s Research UK first got started with AI in its social media and content creation – for example, simplifying and shortening lengthy research for its impact hub and social posts. The charity has also been using AI to support marketing by analysing demographic information and engagement patterns. AI has helped with bid and pitch writing, as well as with producing engaging internal presentations.
Squance recommends that fundraisers learn about both the vulnerabilities and opportunities of AI, find the tools that will be most beneficial to them and their organisation, and master those specific tools.
In terms of working with others in charity teams on the use of AI, Sophie Green, Director of Nonprofit Cloud EMEA at Salesforce, suggests meeting internal concerns with curiosity, taking the time to listen to others and collaborate on finding a suitable way forward.
Lisa Chomette, Head of Partnerships at Charity Digital, notes the importance of being guided by the charity’s mission and goals when experimenting with AI.
The panel advises that fundraisers should not start using AI without first formally agreeing with senior leaders how the charity will approach the technology’s risks.
For example, at Alzheimer’s Research UK, leaders have adopted a medium risk appetite for AI: whilst the team is encouraged to use AI, leaders have also put in place controls on its use to protect the charity’s reputation, its employees, and all the data the charity has access to.
For Alzheimer’s Research UK, this means having an AI policy, having clear approval workflows, and carrying out appropriate checks on potential security and privacy risks. This way, the charity can make the most of AI’s benefits while ensuring it is meeting regulatory requirements and avoiding unnecessary risk.
Because AI relies on data as its fuel, Green emphasises the need for fundraisers to pay attention to their data strategy. Incomplete and inaccurate data restricts what is possible, so an effective data strategy underpins using AI effectively.
While a data strategy outlines your business plans for your data, it is also important to ensure that the charity has the right data infrastructure to support those plans, she says.
When it comes to managing risk, Green highlights the need for charity fundraisers to understand the AI tools they are using, as well as to use tools provided by companies whose ethics align with the charity’s own principles and values.
She explains that trust in AI means individuals and organisations can feel confident that an AI system is reliable, has integrity, and is ethical. This might mean looking at how accurate and consistent an AI’s outputs are, and at how transparent its algorithms are. It can also mean considering the ethical implications of that particular system for your work, including potential biases.
In a fundraising context, trusting the security of an AI system includes finding out whether it has secure data access and protects personally identifiable information.
Green advises that personally identifiable information should stay on trusted internal systems to avoid breaching regulations. Clear internal data governance and regulations can help make sure everyone on the team is on the same page when it comes to using personal data.