Generative AI content: The importance of ethics

Generative AI content has become all the rage in the past year. But generative AI raises various ethical issues, including introduced bias, copyright and plagiarism concerns, and environmental risks.

Generative artificial intelligence (AI) poses various ethical issues, and those issues differ between individual and organisational use. For organisations, generative AI needs more thorough attention, as organisations must adhere to related regulations. Organisations must also navigate legal and reputational challenges, especially if the content produced is inaccessible, inaccurate, or offensive.
Individuals are subject to fewer official regulations or constraints but, like organisations, should aim to practise ethical use of AI. Ethical use is not difficult: it depends only on human oversight, transparency, a willingness to engage with the content produced, and an editorial eye capable of catching errors.
Below, we explore three core ethical challenges posed by generative AI and suggest some of the best ways for individuals and organisations to navigate them. Let’s start with bias.
Ethical generative AI: Introduced bias
The problem: AI can process huge amounts of information, far more than any human could. But that information is not neutral. AI is created by humans, who decide on its inputs, and it often relies on information created by humans.
And humans, as we well know, are riddled with conscious or unconscious bias. AI systems learn and even extend that bias through internal machine learning processes.
The issue becomes even greater in the long term. If generative AI systems produce biased outputs, based on their initial inputs, that bias may be used to create content in the future. That content will likely feed back into various generative AI platforms, further extending the bias.
Generative AI, in short, has inherited conscious and unconscious human biases through its inputs, extended those biases through machine learning and outputs, and perpetuated them as the information it produces is consumed in turn.
We’ve seen plenty of examples of bias in AI. Sexism, ageism, classism, and racism are particularly prevalent in image generation. A Bloomberg analysis of more than 5,000 images produced by generative AI found myriad racial and gender disparities – worse, indeed, than those found in the real world. High-paying jobs, for example, were dominated by images of men with lighter skin tones.
Generative AI has not only inherited existing human bias – the sort we must aim to change – but has learned and extended those biases, producing more prejudice than exists in the actual world.
That’s worrying considering the bias is extended with every use. And that’s particularly worrying when you consider that experts suggest that 90% of content could be AI-generated in the next few years.
The problems exist in text, imagery, videos, and seemingly every form of generative AI. It is an ethical problem that every individual and every organisation should actively seek to avoid.
How to avoid the problem: The problem needs to be addressed at a regulatory level. Industry researchers have been ringing the alarm for years about the risk of bias being baked into advanced AI models, and EU lawmakers are now considering proposals for safeguards to address some of these issues. Joe Biden’s recent executive order on AI also aims to ensure a more ethical and bias-free approach.
But as legislators struggle to catch up, individuals can take various steps to mitigate AI bias. Start by working with transparent generative AI platforms – those that carefully curate and reveal the inputs they use, or at least demonstrate a clear commitment to avoiding bias.
Create an AI policy that defines the general rules of AI usage. The policy can offer best practice advice around responsibility, mindfulness, and privacy – and perhaps also mention the environment.
You could also avoid the problem simply by following our golden rule: apply meaningful human oversight to outputs. That means applying scepticism, careful reading, and an awareness of potential problems to everything the platform generates. Indeed, as you’ll see in this article, applying meaningful human oversight helps to mitigate most problems – and makes the most of the tech.
Ethical generative AI: Copyright and plagiarism
The problem: There are several potential copyright and data issues. The first, and perhaps most pressing, revolves around copyright infringement. It is an ongoing debate – not likely to be settled any time soon – whether the content used to train AI systems violates existing copyright legislation.
There have already been claims. In early 2023, the now famous legal dispute Andersen v. Stability AI et al. saw three artists sue multiple generative AI platforms, alleging their work had been used without licence to train AI. An even more high-profile case is currently in motion, with writers including George R. R. Martin, Jonathan Franzen, and Jodi Picoult suing OpenAI for copyright infringement.
These debates are not yet settled – and the future of AI regulation will determine them. But the larger issue for individuals is using other people’s ideas and quotes without giving them credit. No one publishing content should depend on another person’s work without referencing that work.
How to avoid the problem: Avoiding copyright infringement is an evolving issue, and the response from users will largely depend on court decisions and legislative changes. The onus may come to rest on the individual user, rather than the platform, to avoid copyright infringement – in which case users will be legally obliged to avoid it, a near impossibility, it seems, without additional tech.
Regardless of how copyright law evolves, individuals should aim to credit people for their work. Even if it does not break any laws, even if the courts rule that all AI-generated content is fair game, crediting work is still the right thing to do. Failing to credit people may also harm your reputation: plenty of tools exist that can find instances of plagiarism, and they can quickly undermine your content.
A perhaps more difficult challenge is inadvertent plagiarism. No one can expect you to know everything, so the only thing you can do is practise caution and follow the golden rule: apply human oversight. At the very least, all AI content should be checked for factual inaccuracies and potential plagiarism before publication. Edits should be made, too, and ideally you will rewrite.
Do not copy and paste, at least not for any content that you publish. Copying and pasting generative AI content constitutes passing other content off as your own, even if that content was produced by machine learning. And generative AI content is often dull, verbose, unoriginal, and quite clearly written by generative AI – so, even setting ethics aside, you’ll want to avoid copying and pasting.
For people who want to be particularly cautious, consider AI spotter tools. These tools can identify generative AI content, find issues within that content, and provide myriad other functions, depending on the tool. Running your own articles through them may help you find issues.
Ethical generative AI: The environmental issues
The problem: The Verge recently reported a striking fact: if every Google search used generative AI tools similar to ChatGPT, it would burn through roughly as much electricity annually as Ireland. That is every lightbulb in Ireland, every device, every television, everything.
That’s a particularly scary statistic considering a) we are in the midst of a climate crisis, and b) people increasingly use AI platforms as search engines.
Generative AI is terrible for the environment. It depends on substantial computing power that is already making energy use at data centres skyrocket – and the problem will get worse, not better. The rise of AI poses serious environmental concerns – a point routinely ignored in generative AI discussions.
How to avoid the problem: Start by tracking your generative AI footprint. Not all data centres are equally carbon efficient – some depend on renewable energy sources, for example – so try to research the carbon footprint of the platforms you are using. If a platform gives no indication of its footprint, that’s probably not a good sign. Perhaps look for alternatives, if you can find similar ones.
You can also practise mindfulness about usage. Do you have to run your query through ChatGPT, or is it something you could simply search for instead? Tech and climate have a strange relationship: the absence of anything physical makes tech seem harmless. But tech is harmful, and you should be mindful of how you use it, in all instances. That’s especially true of generative AI.
And remember: it’s perfectly acceptable not to use AI.