Cyber criminals are taking advantage of artificial intelligence. Here’s what you need to know to keep your charity safe
Artificial Intelligence (AI) can provide a huge boost to your charity’s security, helping to keep you safe from cyber attacks. But malicious hackers can also make use of AI technology to attack your charity. That means that when it comes to cyber security, AI is something of a double-edged sword.
It is therefore essential to keep your charity protected as AI continues to develop. Charities can find a wide range of cyber security products available at a discount on the Charity Digital Exchange.
AI systems can be used to create very powerful security tools, because they can monitor vast amounts of data very quickly. They can also be trained to spot unusual patterns of behaviour, such as a large number of attempts to enter a password, or people logging in to computer systems in the middle of the night. When they do spot something anomalous, these tools can raise an alert or even take defensive action, such as shutting down a particular system.
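To make the idea concrete, here is a minimal, purely illustrative Python sketch of the kind of rule-based checks such a tool might run over login records. The log format, user names, and thresholds are all hypothetical, and real AI security tools are considerably more sophisticated.

```python
from datetime import datetime

# Hypothetical login events: (username, timestamp, login succeeded?)
events = [
    ("finance.lead", datetime(2024, 5, 2, 3, 14), False),
    ("finance.lead", datetime(2024, 5, 2, 3, 15), False),
    ("finance.lead", datetime(2024, 5, 2, 3, 16), False),
    ("finance.lead", datetime(2024, 5, 2, 3, 17), False),
    ("finance.lead", datetime(2024, 5, 2, 3, 18), False),
    ("volunteer.01", datetime(2024, 5, 2, 2, 40), True),
]

FAILED_ATTEMPT_LIMIT = 5   # alert after this many failed logins per user
QUIET_HOURS = range(0, 6)  # successful logins between 00:00 and 05:59 look unusual

def find_anomalies(events):
    """Flag repeated password failures and logins in the middle of the night."""
    alerts = []
    failures = {}
    for user, when, succeeded in events:
        if not succeeded:
            failures[user] = failures.get(user, 0) + 1
            if failures[user] == FAILED_ATTEMPT_LIMIT:
                alerts.append(f"{user}: repeated failed password attempts")
        elif when.hour in QUIET_HOURS:
            alerts.append(f"{user}: successful login at {when:%H:%M}")
    return alerts

for alert in find_anomalies(events):
    print("ALERT:", alert)
```

A genuine AI tool learns what "normal" looks like from historical data rather than relying on fixed thresholds, but the goal is the same: surface the unusual so that someone, or something, can respond.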
AI security tools can be particularly beneficial for smaller charities that may not have a large dedicated cyber security team. Sometimes the AI is embedded in security software that charities run themselves, but more often the tools are operated by "managed security service providers", who monitor charities' computers for cyber security threats on their behalf.
That’s the good news when it comes to AI and security. The bad news is that cyber criminals are also beginning to take advantage of AI technology to help them in their criminal endeavours.
For example, many charities suffer cyber security breaches when employees fall victim to phishing attacks, responding to malicious emails. Such emails can sometimes be spotted before they cause harm because they contain spelling and grammar mistakes.
But AI systems like ChatGPT, which are built on Large Language Models (LLMs), can make it much easier for criminals with a limited command of English to generate convincing content for their emails. In future, criminals are also likely to use AI to generate "deepfake" content, such as fake voice messages from charity bosses asking employees to make bank transfers.
Large Language Models also present a security risk because charity staff may use them to generate content such as reports and proposals. That can be a problem if staff submit confidential information to a publicly accessible AI system, where it may be stolen by hackers.
Another problem with AI systems is that they can be very hard to understand. An AI might make recommendations, for example by analysing data held by your charity to help decide which projects to pursue or where to allocate money, but it is not always obvious why it made the recommendations it did.
This opacity makes AI systems vulnerable to attack by criminals who want to manipulate their recommendations. One way to do this is "data poisoning": breaking into your charity's databases and injecting false data or altering existing records. If the AI processes the poisoned data, it can be steered towards recommendations that benefit the criminals rather than your charity.
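As a toy illustration, the sketch below shows how a handful of fabricated records can flip the output of a naive system that simply recommends the project with the highest average impact score. The project names and scores are entirely hypothetical.

```python
# Hypothetical records held by the charity: (project, reported impact score)
records = [
    ("food bank", 8.1), ("food bank", 7.9),
    ("youth club", 6.2), ("youth club", 6.5),
]

def recommend(records):
    """Naive 'AI': recommend the project with the highest average score."""
    scores = {}
    for project, score in records:
        scores.setdefault(project, []).append(score)
    return max(scores, key=lambda p: sum(scores[p]) / len(scores[p]))

print(recommend(records))  # "food bank" -- the honest answer

# An attacker with database access injects fabricated high scores...
records += [("youth club", 9.9)] * 10

print(recommend(records))  # "youth club" -- the poisoned answer
```

Real systems are far more complex, but the principle holds: if attackers can quietly change the data, they can change the decisions built on it.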
The problem is compounded by the fact that AI systems often rely on large amounts of data, which they ingest and process. For charities, this data could include private and personal information about constituents. If cyber criminals are able to break into your AI databases, they may be able to make off with a vast trove of confidential information that they can look to exploit.
At the moment the risk to charities is relatively low – both because the adoption of AI systems is in its infancy, and because cyber criminals are only beginning to exploit AI for their own advantage. But it’s a problem that has been recognised by the UK’s cyber security chief, National Cyber Security Centre CEO Lindy Cameron. “AI developers must predict possible attacks and identify ways to mitigate them. Failure to do so will risk designing vulnerabilities into future AI systems,” she warns.
But as time goes on, these risks will inevitably increase. Here are some things your charity should be doing to protect itself from the cyber security threats posed by AI systems.