We look at whether language-led generative A.I. tools can produce accurate, compelling copy to the same standard as charity comms and fundraising professionals.
The rise of generative artificial intelligence (A.I.) in 2023 has been phenomenal. ChatGPT now has over 100 million users, and debates on the ethics and efficacy of language-led A.I. tools continue to rage. Legislative processes for regulating A.I. are already underway in some regions and nations: the E.U. is working towards the first comprehensive legislation on A.I. and, in the U.K., A.I. has officially been classed as a security risk in the 2023 National Risk Register.
Regulation or not, generative language models and tools like ChatGPT, GPT-4, Jasper, Google Bard, and the new Microsoft Bing have been touted as the future of copywriting. Writers who earn a share of the global content marketing industry's $63bn-and-growing revenue are concerned that A.I. could take their jobs. In some cases, it already has.
In the third sector, massive content demands are often met by comms and fundraising teams. In some instances, charities even have a dedicated content team producing social media, video, email, and written content to meet the demands of multiple audiences, platforms, and channels. Some charities are already testing whether A.I. copywriting tools can save them time and money.
One of the most insidious risks of using A.I. is baked-in bias. If bias exists in the data used to train a large language model, it can be replicated in the outputs the model produces.
Researchers at M.I.T. examined some pre-trained models and found them “teeming with bias”, such as assuming that certain professions correspond with certain gender pronouns: secretary as feminine and lawyer as masculine, for example.
Generative language tools are also prone to “hallucinations” – presenting false information as fact. On one occasion, ChatGPT described a meeting between James Joyce and Vladimir Lenin that has never been confirmed.
Drawing on pre-existing sources also makes language-led A.I. tools vulnerable to copyright infringement and other intellectual property (I.P.) issues. According to McKinsey, I.P. rights issues should be one of the biggest concerns for any C.E.O.
Human checks on A.I. content can help to neutralise biases and address “hallucinations”, and there are also plagiarism tools that can detect copyright violations in a click.
But what about the risk that A.I. copy simply won’t get read?
In an online-first world, you have a few seconds to convince someone to read an article when they land on your website. A few factors determine whether they decide to hang around and read the full blog. According to HubSpot, the top three are:
Readers will also consider how long the article might take to read. Some publishers add an estimated read time below the byline to help with that decision.
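Read time is simpler than it sounds: divide the word count by an average reading speed. Here is a minimal sketch in Python, assuming a speed of around 200 words per minute (a common publishing convention, not a fixed standard):

```python
import math

def estimated_read_time(text: str, words_per_minute: int = 200) -> int:
    """Estimate read time in minutes, rounding up so short posts still show '1 min'."""
    word_count = len(text.split())
    return max(1, math.ceil(word_count / words_per_minute))

# e.g. an 1,100-word blog post -> "6 min read"
print(f"{estimated_read_time('word ' * 1100)} min read")
```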
All of these elements can be added to a prompt for an A.I. content tool to produce a good result. You can also prompt A.I. copywriters to ‘display the results in markdown’ so that they create headlines and subheads for you.
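For charities experimenting programmatically rather than in a chat window, the same prompting works through an API. A minimal sketch using OpenAI's Python library – the model name, brief, and prompt wording are all illustrative assumptions, not a recommended recipe:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative brief: audience, tone, length, and markdown output all in one prompt
prompt = (
    "Write a 600-word blog post for a U.K. animal welfare charity's supporters "
    "about our winter appeal. Use a warm, hopeful tone. "
    "Display the results in markdown with a headline and subheads."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model; any chat-completion model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```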
Steve Gamel, an award-winning journalist and author of a book on writing, says that, at a basic level, good writing is “perfect grammar; an active, powerful voice; a varied sentence structure and word choice. Putting a human face on the piece and quality, compelling storytelling.”
Generative A.I. tools are straight-A students. They write grammatically accurate, consistently formatted copy. They can be asked to mimic the voice of a writer or brand, and be supplied with raw materials like case studies to weave into their copy.
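In API terms, voice and raw material are usually passed alongside the request – for example, a style guide as a system message and a case study as supporting context. A sketch under the same assumptions as above, with the style notes and case study invented purely for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical style guide and case study, supplied as raw material for the model
style_guide = "Voice: plain-spoken, first person plural, short sentences. No jargon."
case_study = (
    "Amira, 34, called our helpline after losing her job; "
    "within a month she was back in work."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model, as before
    messages=[
        {"role": "system", "content": f"You write in our brand voice. {style_guide}"},
        {"role": "user", "content": f"Weave this case study into a 300-word appeal email:\n{case_study}"},
    ],
)
print(response.choices[0].message.content)
```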
So with a bit of human checking and getting the prompt right, A.I. can do as good a job as a human copywriter, right? Erm…not quite.
After testing ChatGPT's writing abilities, Ian Bogost, professor of computer science and engineering at Washington University and contributing writer at The Atlantic, said: “The bot’s output, while fluent and persuasive as text, is consistently uninteresting as prose.”
Bogost notes that, as well as following grammatical rules to the letter, ChatGPT uses the five-paragraph essay technique taught in schools. It’s a solid technique. It gets students top grades, but it doesn’t make them writers.
Writers are rebels. We break the rules and write three-word sentences or compare the feel of freshly laundered cotton to armies of tiny soldiers marching across our fingertips.
We have the privilege of hearing people’s experiences because we make them feel safe. We carry heavy stories and tell them with the dignity and honour they deserve – because we care.
Generative A.I. isn’t sentient. It doesn’t understand privilege or pain. It doesn’t respect you or the community you serve. It doesn’t have your values, it won’t advocate for you, and, frankly, its copy is a little bit boring.
Our courses aim, in just three hours, to enhance soft skills and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options by clicking here.