We explore how artificial intelligence can enhance cyber security for charities, with insight from a new report by Microsoft
Cyber security is a fundamental element of charity work. It underpins an organisation’s ability to fundraise, deliver services, and keep the data of its donors and beneficiaries safe.
With the advent of artificial intelligence (AI), cyber security has become even more important. While many charities are currently taking “active steps to adopt AI responsibly and strategically”, working with AI tools raises security risks around how data is used and protected.
This is a challenge charities must meet now. As Microsoft points out in its e-book AI-Enhanced Security Fundamentals for Nonprofits, for charities to get the most out of AI, they also have to “be confident that [their] data is safe both within and beyond [their] network”. And with more than three quarters of charities using AI tools in their day-to-day work, according to the 2025 Charity Digital Skills report, ensuring they are doing so securely should be a priority.
Microsoft recommends a “Zero Trust” approach to protect charity data against the rising tide of cyber threats. A Zero Trust philosophy strengthens a charity’s cyber security by treating every access request as untrusted until it is verified.
In this article, we explore what goes into a Zero Trust approach to data protection and how AI can be the solution to robust cyber security, with insight from Microsoft’s e-book.
According to Microsoft, a Zero Trust approach to cyber security must adhere to three principles:
Verify explicitly
Use least-privileged access
Assume a breach
Let’s look at each principle in more detail.
Verify explicitly: This principle involves continuously authenticating and authorising access to data based on “all available data points, including user identity, location, device health, service or workload, data classification, and anomalies”.
Use least-privileged access: This means only allowing volunteers and employees access to the precise information they need when they need it, employing what Microsoft calls “just-enough-access” and “just-in-time policies”.
Assume a breach: The third principle of a Zero Trust approach is clear: treat any situation as though a breach has already occurred to improve security and minimise the potential impact. This last step is particularly important given that 70% of nonprofits say they currently don’t have response capabilities in the event of a cyber breach.
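To make the three principles concrete, here is a minimal sketch of how they might combine into a single access decision. This is a conceptual illustration only: the function names, signals, and grant structure are hypothetical, not part of any Microsoft product or real access-control system.

```python
from datetime import datetime, timedelta

def verify_explicitly(request):
    """Principle 1: check every available signal, not just a password."""
    signals = [
        request["identity_verified"],   # e.g. multi-factor authentication passed
        request["device_healthy"],      # device meets security policy
        request["location_expected"],   # no anomalous sign-in location
    ]
    return all(signals)

def least_privileged_access(request, grants):
    """Principle 2: just-enough-access (only the exact resource needed)
    and just-in-time (only while a time-limited grant is valid)."""
    grant = grants.get((request["user"], request["resource"]))
    return grant is not None and datetime.now() < grant["expires"]

def authorise(request, grants):
    """Principle 3: assume breach. Deny by default: every request must
    pass every check, even if it comes from inside the network."""
    return verify_explicitly(request) and least_privileged_access(request, grants)

# Example: a volunteer with a one-hour grant to a donor database
grants = {
    ("volunteer_a", "donor_db"): {"expires": datetime.now() + timedelta(hours=1)},
}
request = {
    "user": "volunteer_a",
    "resource": "donor_db",
    "identity_verified": True,
    "device_healthy": True,
    "location_expected": True,
}
print(authorise(request, grants))  # → True: fully verified, within the grant window
```

The key design point is that access is the exception, not the default: remove any one signal, or let the grant expire, and the request is refused.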
Microsoft’s report also covers seven key risk areas that form a framework for tailoring a Zero Trust approach to a charity’s particular security challenges, organisational needs, and available resources, and shows how it can be implemented with Microsoft solutions.
Charities have a responsibility – both legal and ethical – to protect the data of their donors, beneficiaries, and other stakeholders. With AI changing how organisations use data, charities must consider new approaches to protect it. AI tools can help charities deliver more impact for their mission, but those gains are not worth it if the data the tools are fed is not kept secure.
Fortunately, as Microsoft points out, a Zero Trust security approach “helps effectively manage AI security risk” while mitigating “the impact of [cyber] attacks with a framework that’s simpler to understand, implement, and manage”.
With Zero Trust protocols in place, charities can confidently use generative AI tools in a secure environment, knowing that their data is safe within their network and that critical datasets remain safeguarded even when used with AI tools beyond it. Tools like Microsoft Purview can implement governance controls to ensure data is protected when used with AI, and policy controls, such as those limiting access to data to certain users, can be automated to ensure adherence.
AI-enabled tools can also support Zero Trust security, detecting anomalies and identifying threats before they become an issue. AI can give charities real-time insights to help them respond quickly to potential cyber threats, limiting their impact. Microsoft Security Copilot, for example, can investigate security alerts and offer charities a step-by-step response to solve the issue.
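The anomaly detection described above can be illustrated with a toy example. Real tools such as Microsoft Security Copilot are far more sophisticated; this sketch simply shows the underlying idea of flagging statistical outliers in activity data, with hypothetical figures.

```python
import statistics

def flag_anomalies(daily_sign_ins, threshold=3.0):
    """Flag days whose sign-in count sits more than `threshold`
    standard deviations from the historical mean."""
    mean = statistics.mean(daily_sign_ins)
    stdev = statistics.stdev(daily_sign_ins)
    return [
        (day, count)
        for day, count in enumerate(daily_sign_ins)
        if stdev and abs(count - mean) / stdev > threshold
    ]

# Two quiet weeks of sign-ins, then a sudden spike that may signal an attack
history = [42, 38, 45, 40, 44, 39, 41, 43, 37, 40, 44, 42, 39, 310]
print(flag_anomalies(history))  # → [(13, 310)]: only the spike is flagged
```

In practice, this kind of real-time flagging is what lets a small team respond to a potential breach before it spreads, rather than discovering it weeks later.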
It’s clear that AI can both pose a threat to cyber security and enhance it. While AI can support charities in achieving their goals, it is also helping cyber criminals automate cyber attacks and exploit vulnerabilities at speed. A Zero Trust approach accounts for both capabilities and keeps data secure.
To find out more about how charities can adopt a Zero Trust approach to meet the risks of AI and protect their data, download Microsoft’s e-book below.