AI has a host of benefits and can be a powerful force for good. But what happens when AI is used to spread misinformation? Here's how charities can address misinformation and disinformation created with AI.
AI brings huge benefits. It can analyse vast amounts of data in order to make better decisions. It can help us to be more efficient and even creative. AI can also help organisations save money by automating routine tasks, streamlining recruitment processes, detecting unusual patterns and potential fraud, powering chatbots for customer service, and much more.
But AI also comes with challenges, including ethical ones. AI is increasingly being used on social media to spread misinformation and disinformation. In fact, AI-generated images and memes are a hot topic of the 2024 US election. While some are clearly satire, such as an image of former president and candidate Donald Trump playing guitar on stage while Darth Vader plays the drums, others are hyper-realistic and are creating confusion among voters.
And then there is the more malicious use of AI: creating deepfakes of candidates, as well as realistic photos of events that never took place, all in an attempt to deceive voters and influence the outcome of the election. It is deeply troubling.
Misinformation is inaccurate information spread by someone who does not realise it is untrue. It includes sharing social media posts that contain unverified claims or assumptions, but that the sharer believes to be true. Disinformation, on the other hand, is the deliberate spread of information known to be false.
How can charities be vigilant when it comes to spotting AI-generated content? What do they need to reconsider when tackling misinformation generated by AI? Here are four tips to tackle AI misinformation if it involves your charity or your cause.
Some social media platforms, such as Instagram, ask users to label images that have been created with AI. However, not everyone uses the labels as they should.
A recent example came when X owner Elon Musk shared an AI-generated image of US presidential candidate Kamala Harris, created by X's built-in AI assistant, Grok.
The image depicted Harris dressed as a dictator and was accompanied by the text, "Kamala vows to be a communist dictator on day one. Can you believe she wears that outfit!?"
Musk shared the post and image with his 197.5 million followers without declaring it as AI-generated. The tweet has had 83.7 million views to date.
One way to check the validity of an image circulating on social media is to use a reverse image search. That way you can find out whether it is actually an old image being reused.
There are several ways to do this, depending on whether you're using a phone or a computer. Zapier explains the different ways you can run a reverse image search.
If you're not sure whether something is true, do some due diligence before sharing it. Check the source of the news: is it a legitimate news organisation? If in doubt, visit its website and look at the 'about us' section.
Even if it is a legitimate news organisation, that doesn't always mean that what it is reporting has been verified. You can use sites such as BBC Verify and FactCheck.org to confirm whether a story is true. Also look at whether other reputable news organisations are reporting it. If none of them are, chances are it's fake news.
Once you have verified that the news is fake, or it contains false or misleading information, you should report it to help stop it spreading. Each social media platform has its own process for reporting content.
For example, on Facebook you need to click on the three dots at the top right of a post, then choose ‘report post’ or ‘report photo’. You will then be prompted to choose a category, such as ‘false information’.
If you know that misinformation is being spread on social media about your organisation or an area you work in (for example, claims that certain herbs cure cancer), you should actively debunk it.
Firstly, you should report the post as false information to the social media platform. You could also reply to the post stating that the information is false and linking to accurate information or a statement on your website.
You should also create your own post, but do not link to or share the original post, as that will only extend its reach. Your post should make your audience aware that false information is circulating on a specific social media platform and lay out the reasons why it is false.
Although it isn't related to AI, this tweet from the RNLI is an excellent example of how to debunk misinformation on social media.
We're proud of the lifesaving work our volunteers do in the Channel – we make no apology for it. Those we rescue are vulnerable people in danger & distress. Each of them is someone’s father, mother, son or daughter - every life is precious. This is why we launch: pic.twitter.com/lORd9NRpdP
— RNLI (@RNLI) July 28, 2021
They tweeted this in response to politician Nigel Farage calling them ‘a taxi service’ for refugees. Their response led to a surge in donations. If done well, debunking and challenging misinformation can lead to huge benefits for your charity.