From using your intuition to the best online AI content detectors, we explore the most effective ways to detect AI content
Artificial intelligence (AI) is everywhere. Facial recognition is a common component of biometric security in airports. Your social media feed is personalised through AI predictive algorithms, as are your Netflix or Apple TV recommendations. Google Maps uses AI to give you live traffic updates. From Alexa in your home to Siri on your phone, AI technology is very much a part of our everyday lives.
But the rapid advancement of AI and natural language generation (NLG) technologies makes AI-generated content difficult to detect.
In May 2023, Forbes highlighted the critical need to identify AI-generated content. A variety of AI watermarks and AI digital identifiers are in development, but “the timing of them becoming ready to use for AI screening is a moving target.”
Simply put, the issue is that AI-generated content can be used for a multitude of malicious purposes, including the spread of disinformation, propaganda, and the creation of deepfakes. The ability to identify AI-generated content is therefore essential.
At best, AI chatbots and content creators are the new plagiarism for those looking to create content quickly. At worst, they are a huge concern for the public good.
Charities often act in response to world events. From time to time, they may need to demonstrate support or comment on social media trends, or the opinions and policies of world leaders.
If a charity were to unwittingly repost, respond to, or take part in a false AI-generated narrative online, or accidentally show support for a piece of misleading information, it could have serious reputational repercussions.
Here are our top tips on how you can spot AI content.
At first glance, it can be hard to determine whether an article or image has been created by AI. But there are several indicators to look out for, particularly when it comes to AI language.
AI language is often repetitive, wordy, and bland. Copy created by AI content bots often lacks depth and analysis, too. If the article feels robotic, prescriptive, or regurgitative, or if it doesn’t venture beyond surface-level information, it has likely been created by AI.
Speaking of facts, look out for inaccurate or outdated information. Many AI tools create content based on prediction rather than verified knowledge (this helpful guide explains how that prediction process works).
Consequently, the output can be incorrect or at odds with the facts. Information can also be outdated: ChatGPT’s knowledge, for example, only extends to September 2021.
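To illustrate that idea, here is a deliberately over-simplified Python sketch of prediction-driven text generation. It is a toy bigram model, nothing like a real large language model, but it shows why predicted text can read fluently while bearing no relation to verified fact.

```python
# Illustrative only: a toy bigram model showing how "prediction" drives
# text generation. Real language models are vastly more complex, but the
# principle is the same: choose a likely next word, not a verified fact.
import random
from collections import defaultdict

corpus = "the charity raised funds the charity helped people the charity raised awareness".split()

# Count which words follow which in the training text
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 5) -> str:
    """Generate text by repeatedly choosing a plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausible, not necessarily true
    return " ".join(words)

print(generate("the"))  # e.g. "the charity raised funds the charity"
```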
When it comes to AI-generated images, look out for watermarks or signatures, textured backgrounds, text in backgrounds, and the overall sharpness of the image. Are there random brushstrokes throughout? Are some parts of the image keenly in focus while others are an indecipherable blur? Are the hands or features of people in the image asymmetrical?
A quick way to check whether an image may have been created using AI is to run a reverse image search on it. This will reveal whether the image, or anything resembling it, already appears elsewhere on the web.
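If you want to make this step repeatable, the short Python sketch below builds reverse image search links you can open in a browser. The query-string formats are assumptions based on how TinEye and Google Lens have historically accepted image URLs, so verify them against each service before relying on this.

```python
# Builds reverse image search URLs to open in a browser.
# The query-string formats below are assumptions based on how these
# services have historically accepted image URLs; check each site's
# current behaviour before relying on them.
from urllib.parse import quote

def reverse_search_urls(image_url: str) -> dict:
    encoded = quote(image_url, safe="")
    return {
        "tineye": f"https://tineye.com/search?url={encoded}",
        "google": f"https://lens.google.com/uploadbyurl?url={encoded}",
    }

for name, url in reverse_search_urls("https://example.org/suspect-image.jpg").items():
    print(f"{name}: {url}")
```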
Originality.ai, Writer.com, and GLTR are all examples of online AI content detectors. They claim to be able to quickly and accurately detect AI-generated content, but tests show their success rates vary considerably.
Overall, the AI-detection and plagiarism checker tool Originality.ai is the highest-reviewed AI content detector. According to their own tests, they are the most accurate AI detector, with 99% accuracy on GPT-4 and 83% on ChatGPT.
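Most of these detectors offer a web interface, and some also offer an HTTP API for checking content in bulk. The sketch below shows what such a call could look like in Python; the endpoint, header, and response field are illustrative assumptions rather than any provider’s documented API, so consult the documentation of your chosen tool for the real details.

```python
# Hypothetical sketch of calling an AI-content-detection API over HTTP.
# The endpoint, header, and response field below are illustrative
# assumptions, not the documented API of any specific provider.
import requests

API_KEY = "your-api-key"  # issued by the detection service
ENDPOINT = "https://api.example-detector.com/v1/scan"  # placeholder URL

def detect_ai_content(text: str) -> float:
    """Return the detector's estimated probability that `text` is AI-written."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["ai_probability"]  # assumed response field

score = detect_ai_content("Sample article text to check.")
print(f"Estimated probability of AI authorship: {score:.0%}")
```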
OpenAI, the company behind ChatGPT, have released their own AI classifier for identifying AI-written text. By their own admission: “Our classifier is not fully reliable [but] is significantly more reliable on text from more recent AI systems.”
In OpenAI’s evaluations, the classifier correctly identified 26% of AI-written text as “likely AI-written,” but incorrectly labelled human-written text as AI-written 9% of the time.
A detection toolkit is a great way to set an organisation-wide, standardised practice for detecting AI content.
Your toolkit might be made up of a series of initial ‘human checks’ (steps like a reverse image search and a read-through for bland language or outdated facts), with content that fails those checks then passed on to an AI content detector tool, as sketched below.
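As a rough illustration, the Python sketch below shows how that two-stage toolkit could be wired together. The individual checks and the 0.5 threshold are placeholder assumptions, not a prescribed standard.

```python
# Minimal sketch of a two-stage detection toolkit: cheap human-style
# checks first, then an AI detector only for content that raises flags.
# All checks and the 0.5 threshold are illustrative placeholders.

def looks_bland_or_repetitive(text: str) -> bool:
    # Placeholder heuristic: flag text that reuses the same words heavily
    words = text.lower().split()
    return len(set(words)) < len(words) * 0.5 if words else False

def cites_outdated_facts(text: str) -> bool:
    # Placeholder: in practice, a human fact-checks the key claims
    return False

def run_toolkit(text: str, detector) -> str:
    flags = [looks_bland_or_repetitive(text), cites_outdated_facts(text)]
    if not any(flags):
        return "passed human checks"
    # Only escalate flagged content to the detector tool
    score = detector(text)
    return "likely AI-generated" if score > 0.5 else "likely human-written"
```

Paired with a detector function like the earlier sketch, this keeps paid detector calls for the content that actually raises suspicion.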
In your toolkit, be sure to establish why it is important for your charity to distinguish between AI-generated and human-created content. Include factors such as limiting the spread of misinformation, protecting your reputation, reducing the risk of litigation, and ensuring necessary care and due diligence.
If you’re a charity that commissions content from external freelancers, check out Award Force’s guide to avoiding and detecting AI-generated content in your submissions.