
The risks of generative artificial intelligence

From spreading misinformation to security concerns, we look at the harm that generative artificial intelligence can cause


We’ve explored artificial intelligence (AI) and chatbots in detail. This article focuses on generative AI, which has attracted a lot of attention in recent months following the launch of ChatGPT.

 

So, what is generative AI? It’s a type of artificial intelligence that can produce content such as text, images, audio, videos, and synthetic data.

 

TechTarget explains: “Generative AI often starts with a prompt that lets a user or data source submit a starting query or data set to guide content generation. Traditional AI algorithms process new data to return a simple result.”
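To make the idea of a prompt concrete, here is a minimal sketch of prompt-driven generation using the OpenAI Python client. The model name and the prompt text are illustrative assumptions, not recommendations; any chat-capable model behaves similarly.

# A minimal sketch, assuming the OpenAI Python client (pip install openai)
# and an API key available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The prompt is the "starting query" that guides what content is generated.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; any chat model works
    messages=[{"role": "user", "content": "Write two sentences about charity fundraising."}],
)
print(response.choices[0].message.content)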

 

There has been a lot of talk about the benefits and risks of generative AI for businesses and organisations. But there are also wider risks to society that need examining, as we discuss below.

 

 

Spreading misinformation

 

Generative AI tools don’t always generate accurate content. This is because they draw their information from internet sources, which are not always reliable.

 

A blog post from Deloitte warns that there is a great deal of uncertainty around generative AI: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. This raises the risk of spreading misinformation through the chatbot’s false sense of confidence.”

 

 

AI ‘hallucinations’

 

Hallucinations are errors that AI models can make: confident-sounding answers that are false or fabricated. AI models rely on training data to provide answers (training data is a collection of data used to train the algorithm to make accurate predictions), so if that data is incomplete, outdated, or biased, the model can fill the gaps with plausible-sounding inventions.

 

 

AI deepfakes

 

A deepfake is when AI is used to create convincing images, audio, and video hoaxes. TechTarget says: “Deepfakes often transform existing source content where one person is swapped for another. They also create entirely original content where someone is represented doing or saying something they didn’t do or say.”

 

Some famous examples of deepfakes include the fabricated photos of Donald Trump being arrested and of Pope Francis wearing a puffer jacket.

 

Deepfakes can spread false information from what appear to be trusted sources. In 2022, a deepfake video appeared to show Ukrainian president Volodymyr Zelenskyy asking his troops to surrender.

 

There are also concerns that deepfakes could be used to interfere with elections and to spread propaganda.

 

 

Copyright and AI 

 

There are concerns over transparency and copyright, as AI tools are trained on internet data to create an output. This can include work used without the original creator’s consent or credit, which amounts to plagiarism.

 

The European Union has drafted an agreement that requires companies to disclose any copyrighted material used to develop AI tools.

 

 

Bias in AI 

 

If an AI tool uses algorithms that are more likely to find or give weight to certain data sources over others, the content it produces could have a narrow perspective. These biased views, which can be sexist, racist, or ableist, could affect the way people think.

 

 

Generative AI and harmful content

 

Generative AI can be used to create and share harmful content that incites violence, hate speech and online harassment. It can also create non-consensual pornography.

 

 

Security concerns around AI

 

One of the major concerns with AI is trust and security: some countries, such as Italy, have banned ChatGPT, while others are rethinking their AI policies.

 

A Forbes article says that generative AI is likely to pose a risk to data privacy, as chatbots gather people’s personal data, which might be shared with third parties.

 

Generative AI tools could also be exploited by hackers to create new and complex types of malware and phishing schemes that bypass protection measures. This could lead to data breaches, financial losses, and reputational damage.

 

 

AI and fraud

 

Content produced by generative AI tools could be used for malicious purposes. This includes creating fake reviews, scams and other forms of online fraud.

 

 

Environmental costs of AI

 

There are increasing concerns about the impact technology, including generative AI, is having on the environment.

 

There is limited information available on the carbon footprint of a single query on a generative AI tool. Research suggests, though, that it is four to five times higher than that of a search engine query.

 

And the more queries AI models receive each day, the bigger the effect they will have on the environment.
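As a rough, back-of-envelope illustration of that scaling, the Python sketch below multiplies an assumed per-search baseline by the article’s four-to-five-times figure. The 0.2 g of CO2 per search baseline (a figure Google published in 2009) and the one-million-queries-a-day volume are illustrative assumptions, not measured data.

# Back-of-envelope sketch; all inputs are assumptions except the multiplier,
# which comes from the "four to five times" claim above.
SEARCH_QUERY_G_CO2 = 0.2      # assumed baseline per search engine query (grams)
AI_MULTIPLIER = 4.5           # "four to five times higher"
QUERIES_PER_DAY = 1_000_000   # hypothetical daily query volume

ai_query_g = SEARCH_QUERY_G_CO2 * AI_MULTIPLIER
daily_kg = ai_query_g * QUERIES_PER_DAY / 1000  # grams to kilograms
print(f"~{ai_query_g:.1f} g CO2 per AI query, ~{daily_kg:,.0f} kg CO2 per day")

On those assumptions, a million AI queries a day would emit roughly 900 kg of CO2, several times what the same number of search engine queries would produce.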

 

And what about developing an AI model? An AI researcher says that “the more powerful the AI, the more energy it takes to develop it”.

 

The Council on Foreign Relations says that training a single AI system can emit more than 250,000 pounds (around 113 tonnes) of carbon dioxide.

