Insights
Criminals are starting to exploit the power of artificial intelligence to commit fraud. Here’s what to watch for so you don’t get caught out
Deepfakes exploded into the public consciousness in 2018 thanks to a now-famous video in which former president of the United States Barack Obama appeared to deliver a warning about fake videos.
The twist, of course, was that the video itself was a deepfake. American filmmaker Jordan Peele used deepfake technology to manipulate existing video footage of Mr Obama so that the former president appeared to speak Peele’s words and mirror his head movements.
The word “deepfake” derives from the fact that it involves creating fake video or audio clips using “deep learning” – a form of artificial intelligence (AI) that imitates the workings of the human brain to process data.
A deepfake, then, has come to mean a video that appears to show someone in a situation they have never been in, or saying words they have never said. Since most people instinctively trust the video footage they see without considering that it might have been created by AI algorithms, the potential for mischief or criminal activity using deepfakes is enormous.
Just a couple of years ago, creating a deepfake involved sourcing a significant amount of video footage of the subject, along with several minutes of recordings of them speaking. Once this had been obtained, generating a deepfake video or recording could take many hours of processing time. That created a significant barrier to making deepfakes.
But thanks to advances in machine learning, convincing deepfake videos can now be made in a very short time using a single picture of the victim and just five seconds of their voice, according to Security Boulevard. Since many people post short videos or photographs of themselves on social media or on YouTube, it has become trivial for cyber criminals to create deepfakes of almost anyone they want.
Perhaps the easiest way for a criminal to use deepfake technology is to create a deepfake voice in real time that sounds like a charity leader. They could then make a phone call to another charity staff member and impersonate the charity leader. If the deepfake voice is convincing enough, the staff member might be tricked into carrying out an instruction such as transferring a large sum of the charity’s money to an overseas bank account.
This type of scam has already been carried out successfully: in 2019 the chief executive of a UK energy company was tricked by a deepfake voice impersonating the head of the firm’s German parent company, and was convinced to make a bank transfer of over €220,000 (£190,000).
As technology improves in the near future, it is not unthinkable that a criminal could invite a charity staff member to a meeting on Zoom. They could then impersonate the charity leader in video as well as audio using deepfake technology, and issue instructions for a bank transfer.
Since most people are accustomed to believing what they see, the potential for crimes based on deepfake impersonations could be very great indeed.
Impersonation can be used in other ways too. For example, it might be fairly easy to use voice impersonation over the phone for social engineering purposes – perhaps to get a charity worker to reveal an account password which would give a cyber criminal access to the charity’s network or confidential data.
Another way that cyber criminals could use deepfake technology is to create a video which appears to show a senior charity figure behaving inappropriately, or perhaps saying disparaging things about the charity and its use of donors’ funds.
The cyber criminal would then threaten to release the video on social media unless a payment is made. This kind of video could be extremely damaging to a charity because it erodes trust: many potential donors might not know whether to believe it, and could choose to donate to a different charity instead.
As an alternative to blackmail, a cyber criminal could create a deepfake of a well-known charity figurehead or spokesperson and mix it with genuine charity footage of projects that are under way. The deepfake could then appeal for donations and direct potential donors to a fake website or phone line.
Thanks to the rise of deepfake videos, everyone will need to learn how to spot them if they are not to be fooled. For charities, that means training staff to recognise the tell-tale signs.
According to security company Kaspersky, tell-tale signs of a deepfake video include:

- jerky movement
- shifts in lighting from one frame to the next
- shifts in skin tone
- strange blinking, or no blinking at all
- lips poorly synchronised with speech
- digital artefacts in the image
Creating deepfake audio is easier, and spotting it is harder. Perhaps the most reliable way to spot a deepfake voice is to listen out for words or phrases that the person being impersonated would be unlikely to say.
Because of the difficulty of spotting deepfake audio and, increasingly, deepfake video, charities need to take additional security steps, on top of their standard cyber security measures, to keep themselves safe.
The easiest way to do this is to create processes that require staff to obtain a second means of verification for any instructions they receive. Similar in concept to two-factor authentication, this would mean that a staff member who receives instructions by phone must verify them through a different “channel” – perhaps by email, by calling the person concerned on their mobile phone, or by checking the instructions in person. A simple sketch of how such a rule might work is shown below.
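To make the idea concrete, here is a minimal, hypothetical sketch in Python of a “second channel” rule for payment instructions. The names used (Instruction, is_safe_to_execute, the channel labels) are illustrative assumptions rather than any real system or product; the point is simply that a request may only proceed once it has been confirmed on a channel different from the one it arrived on.

```python
# Hypothetical illustration of "second channel" verification for payment
# instructions. None of these names refer to a real library or product.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Instruction:
    requester: str                       # who appears to be asking, e.g. "Chief Executive"
    amount_gbp: float                    # sum requested
    received_via: str                    # channel the request arrived on, e.g. "phone"
    confirmed_via: Optional[str] = None  # channel used to verify it, if any


def is_safe_to_execute(instruction: Instruction) -> bool:
    """Allow an instruction only once it has been confirmed on a different
    channel from the one it arrived on (phone vs email vs in person)."""
    return (
        instruction.confirmed_via is not None
        and instruction.confirmed_via != instruction.received_via
    )


# A phoned-in request is held until it is independently confirmed.
request = Instruction(requester="Chief Executive",
                      amount_gbp=190_000,
                      received_via="phone")
assert not is_safe_to_execute(request)   # blocked: no independent confirmation yet

request.confirmed_via = "in person"      # someone checks with the requester face to face
assert is_safe_to_execute(request)       # now the transfer can proceed
```

In practice this would usually be a written procedure rather than software, but the logic is the same: no single channel, however convincing the voice on it, should be enough on its own to authorise a payment.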