The ethics of generative artificial intelligence (AI)

Generative AI content has been the most talked-about topic of 2024. But the tech presents risks around ethics, accuracy, and quality. We explore the AI risks and show you how to mitigate them


Generative artificial intelligence (AI) was the tech talking point of 2024. And, without question, it will prove the talking point of 2025. People are turning to ChatGPT, Bard, Scribe, and a million other competing platforms to create their content. But there is a small problem: a lot of that content isn’t particularly good. Or, worse, the outputs are inaccurate, outdated, or even unethical.

 

We are seeing the bandwagon effect in full motion. People are rushing to use AI for everything, largely because others are using AI for everything – and telling everyone else to use AI for everything. Minimal thought is given to the accuracy and veracity of outputs, to ethical issues around copyright and plagiarism, or even to the basic quality of the content produced.

 

So, with all that in mind, we want to show readers how to use AI responsibly. We tackle accuracy issues and discuss how to avoid out-of-date and inaccurate outputs. We explore the ethics and show you how to mitigate introduced bias and the absence of originality. And we examine quality concerns, showing how human oversight helps to produce edifying, engaging, and entertaining content with generative AI.

 

Skip to: What is generative AI content?

Skip to: Producing ethical generative AI content

Skip to: Producing accurate generative AI content

Skip to: Producing high-quality generative AI content

 

 

What is generative artificial intelligence (AI) content?

 

We use AI every day, often without knowing. Google Maps is powered by AI. Search engines use AI. Email software now uses AI. AI decides your Netflix recommendations, dictates your social feeds, picks your ads, and selects your online shopping suggestions. AI helps you unlock your phone, send messages with predictive text, send emails with spell check, receive the right emails with AI-powered spam filters, and so on.

 

AI is everywhere, all of the time, and the tech has led to huge benefits. But one form of AI has become the most talked-about tech of the past few years: generative AI.

 

Generative AI depends on fast processing and intelligent algorithms, married with huge amounts of data to produce results based on prompts. The tech learns automatically from patterns or features of the data and uses that information to improve processing and algorithms – and produce better results.

 

Generative AI has myriad applications. It can, among other things, generate text (ChatGPT and Jasper), images (DeepAI and DALL·E 2), audio (Soundraw and Jukebox), video (Synthesia and Pictory), and so much more. There are many different (and often absurd) generative AI platforms. Indeed, you can use the AI aggregator, There’s an AI for that, to track the thousands of uses for generative AI – and this tool itself is, funnily enough, also dependent on AI.

 

Generative AI is revolutionising everyday work and the benefits are substantial. The tech can help you save time, cut costs, reduce errors, streamline processes, and improve operations. Its uses include analysing and reporting on data, research and due diligence, discovering trends and insights, brainstorming and ideation, and so much more.

 

But one use of generative AI has perhaps overshadowed all others: creating content. By content, we simply mean information published for consumption. That means various forms, such as short- and long-form articles, podcasts and webinars, e-Books, videos, infographics, and case studies. But, importantly, content here also means social posts, quick blogs, memes, internal documentation, even memos.

 

Drafting content, in all its forms, is perhaps the most common use of generative AI. And it is perhaps the one that presents the most challenges in terms of ethics, accuracy, and quality. Below we explore each of those challenges, describe the problems, and show you how to avoid them.

 

Let’s start with three key ethical concerns.

 

 

Producing ethical generative AI content

 

Introduced bias in generative AI

 

The problem: AI is capable of processing huge amounts of information, more than any human or indeed any collection of humans. But that information is neither neutral nor value-free. AI is created by humans, who decide on the inputs, and humans, as we know, are riddled with conscious and unconscious bias. AI systems learn and even extend that bias through their internal machine learning processes.

 

The issue becomes more concerning in the medium- to long-term. If generative AI systems produce bias, based on the initial human inputs, that bias may be used to create content in the future. The content will consequently feed into various generative AI platforms, further extending the bias.

 

Generative AI, in short, has inherited conscious and unconscious human bias through its inputs, extended that bias through machine learning and outputs, and perpetuated the same bias as the information it produces is consumed and fed back in.

 

We’ve seen plenty of examples of bias in AI. Sexism, ageism, classism, and racism are particularly prevalent in image generation. A Bloomberg analysis of more than 5,000 images produced by generative AI found myriad racial and gender disparities – more bias, indeed, than is found in the real world. High-paying jobs, for example, were dominated by images of men with lighter skin tones.

 

Generative AI had not only inherited existing human bias – the sort we all aim to change – but learned and extended it, producing more prejudice than exists in the actual world. That is worrying, considering the bias is extended with every use. And it is particularly worrying considering that experts suggest 90% of online content could be AI-generated within the next few years.

 

How to avoid the problem: The problem needs to be addressed at a regulatory level. Industry researchers have long been sounding the alarm on the risk of bias being baked into advanced AI models, and EU lawmakers are now considering safeguards to address these issues. In addition, Joe Biden’s recent Executive Order aims to produce a more ethical and bias-free approach to generative AI.

 

But, as legislators struggle to catch up, individuals can take various steps to mitigate AI bias. Start by working with transparent generative AI platforms – platforms that carefully curate and reveal the inputs they use, or at least demonstrate a clear commitment to avoiding bias.

 

Create an AI policy that defines the general rules of AI usage. The policy can offer best practice advice around responsibility, mindfulness, and privacy – and it should also reference the environment.

 

You could also avoid the problem by following our golden rule: apply meaningful human oversight to outputs. That means bringing scepticism, careful reading, and a cognisance of potential problems to everything the platform generates. Indeed, as you’ll see throughout this article, applying meaningful human oversight helps to mitigate most problems – and makes the most of the tech.

 

 

Copyright and plagiarism in generative AI

 

The problem: There are several potential copyright and data issues. The first, and perhaps most pressing, revolves around copyright infringement. It is an ongoing debate – not likely to be solved any time soon – whether the content used to inform AI systems violates existing copyright legislation.

 

There have already been claims. In late 2022, the now famous legal dispute, Andersen v. Stability AI et al., saw three artists sue multiple generative AI platforms, alleging the platforms used their work, without licence, to train their AI. And an even more high-profile case is currently in motion, with writers including George R. R. Martin, Jonathan Franzen, and Jodi Picoult suing OpenAI for copyright infringement.

 

These debates are not yet settled – the future of AI regulation will determine them. But the larger issue for individuals is using other people’s ideas and quotes without giving them credit. No one publishing content should depend on another person’s work without referencing that work.

 

How to avoid the problem: Avoiding copyright infringement is an evolving issue. The response from users will largely depend on court decisions and legislative changes. It may be that the onus rests on the individual, the user, rather than the platform, to avoid copyright infringement, in which case users are legally obliged to avoid infringement – a near impossibility, it seems, without using additional tech.

 

Regardless of how copyright law evolves, individuals should aim to credit people for their work. Even if it does not break any laws, even if the courts rule that all AI-generated content is fair game, crediting work is still the right thing to do. Failing to credit people may also damage your reputation: plenty of tools exist that can find instances of plagiarism, and they can quickly undermine your content.

 

A perhaps more difficult challenge is inadvertent plagiarism. No one can expect you to know everything. So the only thing you can do is practise caution and follow the golden rule: apply human oversight. All AI content should be checked for factual inaccuracies and potential plagiarism before publication, at the very least. Edits should be required, too, and ideally you will re-write.

 

Do not copy and paste, at least not for any content that you publish. Copying and pasting generative AI content constitutes passing off other content as your own, even if that content was produced by machine learning. And generative AI content is often dull, verbose, unoriginal, and quite clearly written by generative AI – so, even setting aside the ethical considerations, you’ll want to avoid the copy and paste.

 

For people who want to be particularly cautious, consider AI-spotting tools. These tools can identify generative AI content, find issues within that content, and provide myriad other functionalities, depending on the tool. Running your own articles through them may help you find issues.

 

 

Environmental issues in generative AI

 

The problem: The Verge recently reported a striking fact: if every search on Google used generative AI tools similar to ChatGPT, it would burn through roughly as much electricity annually as the country of Ireland. That is every lightbulb in Ireland, every device, every television, everything.

 

That’s a particularly scary statistic considering a) we are in the midst of a climate crisis, and b) people increasingly use AI platforms as search engines.

 

Generative AI is terrible for the environment. It depends on substantial computing power that is already making energy use at data centres skyrocket – and it will get worse, not better. The rise of AI poses serious environmental concerns – a point that seems routinely ignored in discussions of generative AI.

 

How to avoid the problem: Start by tracking your generative AI footprint. Not all data centres are equally carbon efficient – some depend on renewable energy sources, for example – so try to research the carbon footprint of the platforms you use. If a platform gives no indication of its footprint, that is probably not a good sign. Perhaps look for alternatives, if you can find similar ones.
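
 

And if your organisation runs or fine-tunes models on its own machines, you can measure the footprint directly. The sketch below uses the open-source codecarbon Python library as one illustrative option; the workload shown is a stand-in for whatever computation you actually run.

    # A minimal sketch of measuring the carbon footprint of local AI
    # workloads, using the open-source codecarbon library
    # (pip install codecarbon). The workload below is a placeholder
    # for your own model training or inference code.
    from codecarbon import EmissionsTracker

    tracker = EmissionsTracker(project_name="generative-ai-audit")
    tracker.start()

    # Placeholder workload - replace with your own computation.
    total = sum(i * i for i in range(10_000_000))

    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")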

 

You can also practise mindfulness about usage. Do you have to run your query through ChatGPT? Is it something you could search for on Google instead? Tech and climate have a strange relationship, in that the absence of anything physical makes tech seem harmless. But tech is harmful, and you should be mindful of the way you use it, in all instances. That is especially true of generative AI.

 

And remember: it’s perfectly acceptable not to use AI at all. 

 

 

Producing accurate generative AI content

 

Hallucinations in generative AI

 

The problem: Generative AI models often present inaccurate information as though it were correct, a result of limited information in the system, biases in training data, and issues with the algorithm. These outputs are commonly called ‘hallucinations’ and they present a huge problem.

 

A famous example was noted by a Guardian journalist. A researcher had come across an article, written by that Guardian journalist a few years previously. But the article proved elusive – there was no sign of it on the internet and the reporter could not remember writing the piece. It sounded real enough, sounded as though it was something the journalist could have written, but the article was nowhere to be found.

 

In fact, it had not been written. ChatGPT had simply made the entire thing up. That particular example is so fascinating because not only did the initial reader trust the veracity of the article, but so did the supposed writer who did not write it. That is the core problem with hallucinations: they seem very real.

 

Hallucinations occur because large language models (LLMs), upon which generative AI text tools are built, are fed huge amounts of information, broken down into smaller units, such as letters and words (often called tokens). LLMs use neural networks to work out how those units fit together, adjusting outputs each time the system produces predictions. But, importantly, the system never understands the words.

 

Generative AI grasps grammar rules, word associations, and even develops a form of semantic understanding, but it does not understand concepts. It is pattern matching, in essence, and if incorrect information matches the pattern, the AI may well produce that information. And, importantly, since the AI is often verbose and confident, it will relay that information in a way that is easy to believe.
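
 

To make the pattern-matching point concrete, here is a deliberately crude sketch in Python. It is not how production LLMs work – they use neural networks trained on billions of tokens, not word counts – but it shows the underlying principle: predicting the likeliest next word, with no notion of whether the result is true.

    from collections import Counter, defaultdict

    # Toy next-word predictor: count which word follows which in a tiny
    # corpus, then always emit the most frequent successor. The "model"
    # matches patterns; it has no idea what any word means.
    corpus = (
        "the cat sat on the mat . the cat ate the fish . "
        "the dog sat on the rug ."
    ).split()

    successors = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        successors[current_word][next_word] += 1

    def generate(word, length=6):
        output = [word]
        for _ in range(length):
            if word not in successors:
                break
            # Pick the statistically likeliest next word - whether the
            # resulting sentence is factually true never enters into it.
            word = successors[word].most_common(1)[0][0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # prints "the cat sat on the cat sat"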

 

Hallucinations cause various issues. They can spread misinformation in a particularly pernicious way: unconsciously. The recipient will believe, through little fault of their own, what they’ve read – and they may tell others, even use it in research to spread the misinformation further. They will particularly believe the information if it’s reproduced by trusted organisations that have not fact-checked their own content.

 

Hallucinations harm decision-making and risk reputational damage. They can lead to loss of trust at various levels: individual, organisational, even national. In short, they threaten the accuracy of information, harm reputations, and undermine our belief in the notion of truth.

 

How to avoid the problem: The first step is to use transparent AI platforms that boast diverse and representative training data. That will minimise the potential for inaccuracies. We’ve said it before, and we’ll say it again: AI systems are only as good as their inputs – use systems with curated inputs.

 

The second step is to follow up with the AI system, asking for more accuracy, or more detail on outputs. Generative AI models, such as ChatGPT, will often verify the claims in the content they produce. You’ll need to be specific in the follow-up prompts you use, but that should help you to spot hallucinations.
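
 

As a minimal sketch of that draft-then-verify loop, assuming the official openai Python client (the model name and prompt wording here are illustrative assumptions, not recommendations):

    # Assumes: pip install openai, and an API key in the OPENAI_API_KEY
    # environment variable. Model name and prompts are illustrative.
    from openai import OpenAI

    client = OpenAI()

    def ask(messages):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
        )
        return response.choices[0].message.content

    history = [{"role": "user",
                "content": "Draft 150 words on charity fundraising trends."}]
    draft = ask(history)
    history.append({"role": "assistant", "content": draft})

    # Follow-up prompt asking the model to audit its own claims. This can
    # surface hallucinations, but is no substitute for human fact-checking.
    history.append({"role": "user",
                    "content": "List every factual claim in your draft, and "
                               "state for each how confident you are and what "
                               "source a human should check it against."})
    print(ask(history))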

 

Finally, apply the golden rule: meaningful human oversight. Read through the outputs, question anything that looks dubious, and scrutinise any information with the potential to cause harm. You should always substantiate your writing and credit other sources, regardless of whether AI is involved – but using ChatGPT places a far greater onus on you, the user of the platform.

 

Fact-checking has never been more important. You can check any information against reputable sites, ensuring that the claims generated by the AI are also confirmed in other places. That will depend on using an editorial eye, querying any suspect information, and performing a little fact-finding mission.

 

You can use a fact-checker to achieve that, rather than searching manually. Some of the best fact-checkers on the market include Google Fact Check Explorer, ClaimBuster, and Snopes.
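
 

For teams that want to build checking into their workflow, Google Fact Check Explorer is backed by an API. Here is a rough sketch, assuming the requests library and an API key from the Google Cloud console; the response fields follow the v1alpha1 documentation and may change:

    import os
    import requests

    # Queries the Google Fact Check Tools API (the service behind Google
    # Fact Check Explorer) for published reviews of a claim. Assumes an
    # API key in the FACTCHECK_API_KEY environment variable.
    API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

    def check_claim(claim_text):
        params = {"query": claim_text,
                  "key": os.environ["FACTCHECK_API_KEY"]}
        data = requests.get(API_URL, params=params, timeout=10).json()
        for claim in data.get("claims", []):
            for review in claim.get("claimReview", []):
                publisher = review.get("publisher", {}).get("name", "unknown")
                print(f'{claim.get("text")}\n'
                      f'  {publisher}: {review.get("textualRating")} '
                      f'({review.get("url")})')

    check_claim("AI data centres use as much electricity as Ireland")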

 

 

Out-of-date information in generative AI 

 

The problem: Many generative AI systems, particularly text-based ones, depend on older inputs. ChatGPT, for example, initially depended on data up to September 2021, meaning that for months people relied on outputs that ignored everything published in the intervening two years.

 

This is commonly known as a “knowledge cut-off date”: anything published after that date will not specifically feature in outputs. ChatGPT now uses more up-to-date information, so its outputs may be less outdated, but many platforms still rely on old inputs. No generative AI platform uses fully up-to-date information, because platforms must be pre-trained on their inputs and that process is time-consuming. In the time the process takes, the platform falls out of date.

 

In addition, at the time of writing, few platforms offer chronological prioritisation: they are unable to prioritise the latest – and the best – information. So you might get something close to the latest information, but with a heightened risk of inaccuracy, as platforms that rush to include the freshest data may compress or skip the time needed for human oversight. Or you might get accurate information that is out of date.

 

How to avoid the problem: Meaningful human oversight prevents out-of-date content. You can start by checking the dates of AI inputs, if such information is available. Most systems, at the very least, will tell you their knowledge cut-off date: you may simply ask the system. We asked ChatGPT on 08 November 2023 and we received the following response: “My knowledge was last updated in January 2022. If there have been developments or changes since then, I may not be aware of them.”

 

Awareness of the knowledge cut-off means you can effectively use (or not use) the information provided, depending on the purpose of the information. You might not want to ask the AI platform about the latest football results. But you might feel confident asking about the French Revolution.

 

You can then, with adequate awareness, read through the outputs to find anything that looks suspect or raises red flags. You can also fine-tune prompts, depending on the platform, and instruct it to rely only on data from specific dates.
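
 

A date-bounded prompt, to give one illustrative example, might read: “Using only information you hold from before January 2022, and flagging anything you are unsure of, summarise the main debates around AI regulation.”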

 

Generative AI is perhaps better suited to “How to” content than to facts and stats. You can easily apply logic to “How to” content, ensuring it makes sense and that the outputs are interesting and helpful for your reader. But facts demand more work – double- and triple-checking – so it might sometimes prove simpler, easier, and more efficient to skip generative AI in that instance.

 

Presenting facts positions you, the author, as an authority. Depending on outdated, and potentially inaccurate, information undermines that authority – and presents an ethical issue.

 

An always-important tip: generative AI should not be used for everything. Yes, it can offer great inspiration on “How to” topics and, yes, it can summarise information on a given topic very effectively. But if you’re looking for the most up-to-date facts, formulating moral arguments, or writing creatively and aiming for originality, generative AI systems might prove of little use.

 

 

Producing high-quality generative AI content

 

Boring and stale prose in generative AI 

 

The problem: ChatGPT is the most used generative AI system. It is profoundly boring. We asked it whether it was boring and let the tech speak for itself:  “The outputs generated by this language model aim to be engaging and informative, tailored to meet your specific requirements and preferences.”

 

That was boring. So we wrote: “That was boring.” The tech responded: “I apologize if my previous response did not meet your expectations. If there’s a specific topic or style you’d prefer, please provide more details or let me know how I can better assist you and I’ll adjust my responses accordingly.”

 

Again, boring. Generative AI follows grammar and punctuation rules to perfection, along with various professional semantic rules, all of which produce outputs that are verbose, generic, unoriginal, and stale. Sentences are often the same length and repetitive, which is discouraged in copywriting.

 

The tech has no sense of humour. Attempts to give it one – as Elon Musk is currently trying to do – have proved, much like Elon Musk, lacking in self-awareness.

 

Generative AI does not do sarcasm. Its idiom is usually serious, its delivery confident, sometimes verging on arrogant. In short, it makes for a perfectly acceptable reading experience, but not an exceptional one, not one that is likely to make your audience return.

 

How to avoid the problem: Re-write, at the very least. Generative AI possesses many potential uses, but drafting engaging articles is not one of them. The tech can help you draft an article, provide inspiration, summarise complex information, but it should not be used to produce complete drafts.

 

It’s worth noting that more attentive readers quickly recognise AI-generated content and such readers are unlikely to return to your site. If they want AI-generated content, they can just go straight to the AI.

 

It is beyond the scope of this article to show people how to write. But the emergence of generative AI does encourage writers to write more creatively, with more daring. Writers need to break rules, make sentences unique and interesting, and avoid the verbosity upon which AI systems rely.

 

Originality will quickly distinguish you from generative AI copy. Tone of voice will also help you differentiate. Writers typically champion authoritative writing, but that may shift with the rise of AI, and writing that feels explorational, writing that does not pretend to know all the answers, may well prove more successful in the age of generative AI. That seems a welcome change.

 

You can improve the outputs of generative AI systems. The outputs will still require redrafting, but you can make them more applicable and more engaging. Start by fine-tuning your prompts. Consider all of the following when making the initial request from the generative AI platform (a sketch of a prompt built along these lines follows the list).

  • Clarity: Make it clear exactly what you want to read
  • Formatting: Give advice about format and structure
  • Language: Use, and demand, simple and concise language
  • Context: Provide background information to improve specificity
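
Here is one way to turn those four elements into a reusable prompt. The wording, field names, and the build_prompt helper are illustrative assumptions rather than a recommended template:

    # Builds a prompt that bakes in clarity, formatting, language, and
    # context. Everything here is an illustrative assumption - adapt the
    # wording to your platform and house style.
    def build_prompt(topic, audience, word_count, background):
        return (
            f"Write a {word_count}-word article on {topic} "       # clarity
            f"for {audience}.\n"
            "Structure it as an introduction, three subheaded "    # formatting
            "sections, and a short conclusion.\n"
            "Use simple, concise language and short sentences.\n"  # language
            f"Background for context: {background}"                # context
        )

    print(build_prompt(
        topic="digital fundraising",
        audience="small UK charities",
        word_count=600,
        background="Our readers have small teams and limited budgets.",
    ))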

On top of that, you can make outputs more engaging by adding playful prompts. These, at present, are perhaps not as useful as they seem. They are entertaining, and at times impressive, but help very little in terms of publishing content. Nonetheless, you can trial some of the following:

  • Write [proposed output] in the style of…[Virginia Woolf, Willy Wonka, melancholic spider, etc]
  • Write [a haiku, a limerick, a one-line joke, political satire, etc] about [proposed output]
  • Here is an essay. Now write an essay on [your proposed output] in the same style

There are plenty of other ways to generate more engaging AI content. The above will help boost initial engagement, perhaps provide greater insights, but you should still apply human oversight. Remember that, if you want to produce high-quality content, you’ll need to re-write outputs from generative AI.

 

 

Lack of creativity in generative AI

 

The problem: At present, most AI platforms notice trends in data and regurgitate those trends in response to prompts. A platform might spot patterns that humans have not found, or provide insights that humans have missed. But it does not present anything new.

 

AI lacks human creativity and the capacity for serious independent thought. The systems are typically programmed to perfect certain tasks and seldom allow for novel solutions. Generative AI content suffers precisely because it parrots available information, especially in terms of “how to” articles.

 

Asking ChatGPT to write an article on fundraising trends in 2024 will produce an article that largely rehashes pre-existing content on fundraising. The tech uses a huge amount of content to produce outputs, but much of that content is out-of-date and not particularly good.

 

How to avoid the problem: Generative AI, at present, should be used with caution. It’s great for certain things: summarising complex information, providing ideas for brainstorming, definitions, heading and subheading ideas, and so much more.

 

But, as mentioned above, the tech is not ready to write an entire article, especially one that is meant to provide new and interesting ideas to your audience. It is striking, in fact, how many ‘thought leaders’ produce generative AI content, essentially regurgitating ideas that already exist.

 

Content writers typically use AI in the act of drafting. They may refer to the system to overcome writer’s block, aiming for inspiration, but they seldom use the tech to draft entire sentences or paragraphs. The tech, at present, is just not good enough, not creative enough, to draft full articles.

 

 

The lack of the unpredictable in generative AI

 

The problem: Writing does have rules. The rules are in place to maximise comprehension, so that everyone who has learnt them can understand the writing. But the best writing breaks the rules, openly and drastically, while remaining comprehensible. Some of the best prose depends on rule-shattering linguistic devices to create a certain feeling, to convey a driving impulse, to shock or stun the reader, to imbue urgency, to reflect the mental state of the narrator, and so on. So, yes, prose should be concise and sharp and you should omit needless words, but you should also abandon every rule, at the right time.

 

Great writing is unpredictable. We enjoy reading because we don’t know what’s going to happen next. That might mean on the level of words, or the way words interact – alliteration, assonance, rhyme, repetition. It could mean at the sentence level, or the level of interacting sentences. One long sentence, for example, might be followed by a short sentence, which creates a rhythm, prevents monotony, and better engages the reader. And it works. Or the unpredictable can exist at the level of plot, with twists and turns and so on. Great writers use plot and prose to keep us on our toes. Generative AI, by its very nature, produces profoundly predictable content. It is trained to do so.

 

How to avoid the problem: Use generative AI for certain tasks, not others. It’s fine to ask generative AI systems to generate prose for internal policies, outgoing emails, title inspiration, or brainstorming sessions. But if you’re aiming to engage an audience, if you want readers to return to your content, if you want to create thought leadership content, generative AI is not the best route to success.

 

Use generative AI to produce initial ideas. But follow it up with writing and re-writing. Use AI to define architectural designs, describe a wedding dress, or provide marketing ideas. But do not rely on its every sentence and certainly do not publish every sentence.

 

As we’ve seen with all of the above, meaningful human oversight is a necessity. Ensure you participate actively in the creation of content, applying and embracing the unpredictable, rather than relying on the predictable generative AI to create all copy. The risks of relying completely on AI are many: it raises problems with ethics, accuracy, and quality. But applying human oversight to the tech can generate incredible results and allow you to reap many of its benefits.

 

