Content bots are on the rise. But what are the risks of using AI-generated content?
Content bots are a form of artificial intelligence (AI) technology that uses natural language processing to create a variety of content, including copy, digital adverts, and even code.
ChatGPT, copy.ai, and Jasper.ai are all prominent examples of content bots, with ChatGPT being the most popular. Time Magazine called ChatGPT “the fastest growing web platform ever”.
The overwhelming benefit of using a content bot is the speed at which it can create assets and copy. According to the OpenAI community forum, ChatGPT API responses take around 20–30 seconds to complete a request, a speed that users there apparently consider “very slow”.
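For context, that response time is easy to measure yourself. Below is a minimal sketch that times a single request to the ChatGPT API; it assumes the official openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable, and the model name and prompt are illustrative placeholders rather than recommendations.

```python
import time

from openai import OpenAI  # official OpenAI Python client (assumed installed: pip install openai)

client = OpenAI()  # reads the OPENAI_API_KEY environment variable by default

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name; substitute whichever model you use
    messages=[
        {"role": "user", "content": "Draft a 150-word fundraising appeal for a local food bank."}
    ],
)
elapsed = time.perf_counter() - start

print(f"Request completed in {elapsed:.1f} seconds")
print(response.choices[0].message.content)
```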
But there are some serious risks when using content bots. Below, we explore some of the biggest concerns.
In a recent investigation, anti-misinformation outfit NewsGuard found almost 50 AI-generated ‘content farms’ across the web. The report claims some sites are producing hundreds of articles a day and “nearly all of the content features bland language and repetitive phrases, hallmarks of AI”.
A separate report by NiemanLab found around 40 AI-generated articles from the same site that contained almost like-for-like quotes and exactly the same regurgitated phrasing.
While charities may not use content bots for the same reasons as some ‘news’ outlets, content created by bots will almost always result in bland, repetitive copy.
The risk is that charities will expend more time and effort proofreading, fact-checking, and editing assets produced by AI content bots, which can negatively impact ROI.
Over the last decade or so, it has become increasingly clear just how easily human biases can make their way into AI systems. In some cases, there have even been reports of racism and sexism. The issue stems from the data set the AI has been trained on: in short, if the data set is biased, any generated content is likely to be biased, too.
Bias can be hard to detect. ChatGPT, the world’s fastest-growing content bot, is currently in its research and review phase. OpenAI, the creators of ChatGPT, claim that the bot’s language model has been “trained on a massive amount of text data from a variety of sources.”
But, according to MIT Technology Review, “bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it”. There are already concerns about how OpenAI trained its model in the first place, particularly with regard to GDPR. But we’ll get to the risks surrounding litigation later.
Charities have a moral obligation to uphold openness and integrity. Charities also have a responsibility to their beneficiaries and to “carry out their purposes for the public benefit”.
Bias is alienating. Bias can incite discrimination. And bias can create inequitable outcomes for vulnerable people within a society. Charities have an ethical duty to keep bias out of their external and internal communications at all costs.
Most AI training databases are amassed by scraping publicly available sources. The result is a vast repository of content, including articles, blogs, websites, photos, journals, and other public documents. In 2020, Forbes reported that Clearview AI had amassed a database of almost three billion images scraped from sites including Facebook and Twitter.
The issue, of course, is that this raises questions of copyright and ownership. Copyright law is a murky subject at the best of times. But in the UK, at least, copyright is automatic; it protects your work and precludes others from using it without explicit permission.
The UK-based comment site The Conversation states that the “law suggests content generated by an AI can be protected by copyright”. But content bots are notorious for failing to cite original sources, and AI training databases do not exclude copyrighted works.
The risk of litigation is not inconceivable. In the U.S., Microsoft, GitHub, and ChatGPT’s parent company OpenAI are already at the centre of a proposed class-action lawsuit.
The lawsuit alleges that these companies violated copyright law by knowingly using copyrighted open-source code to train their AI.
Unlike humans, AI content bots can comb through incredible amounts of data and information at lightning speed, finding sources, however unreliable, to support an argument or opinion. As a result, the risk of repeating inaccurate and unsubstantiated information is high.
In fact, one review of ChatGPT claims “it fails at basic math, can’t seem to answer simple logic questions, and will even go as far as to argue completely incorrect facts”.
Accidentally contributing to the spread of misinformation could cause considerable reputational damage. Charities owe it to their beneficiaries, and to basic charitable principles, to take due care with their campaigns and content.
In just three hours, our courses aim to enhance your soft and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options by clicking here.