From spreading misinformation to security concerns, we look at the harm that generative artificial intelligence can cause
We’ve explored artificial intelligence (AI) and chatbots in detail. This article focuses particularly on generative AI, which has had a lot of buzz around it in recent months with the launch of ChatGPT.
So, what is generative AI? It’s a type of artificial intelligence that can produce content such as text, images, audio, videos, and synthetic data.
TechTarget explains: “Generative AI often starts with a prompt that lets a user or data source submit a starting query or data set to guide content generation. Traditional AI algorithms process new data to return a simple result.”
There has been a lot of talk about the benefits and risks of generative AI to businesses and organisations. But there are also wider risks to society that need examining, as we discuss below.
Generative AI tools don’t always generate accurate content. This is because these tools draw their information from sources on the internet, which are not always accurate themselves.
A blog post from Deloitte says that there is too much uncertainty with generative AI: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. This raises the risk of spreading misinformation through the chatbot’s false sense of confidence.”
Hallucinations are errors in which AI models produce false or fabricated information and present it as fact. AI models rely on training data to provide answers (training data is a collection of data used to train the algorithm to make accurate predictions).
A deepfake is when AI is used to create convincing images, audio, and video hoaxes. TechTarget says: “Deepfakes often transform existing source content where one person is swapped for another. They also create entirely original content where someone is represented doing or saying something they didn’t do or say.”
Some famous examples of deepfakes include the fabricated images of Donald Trump being arrested and of Pope Francis wearing a puffer jacket.
Deepfakes can spread false information from what appears to be trusted sources. In 2022, a deepfake video showed Ukrainian president Volodymyr Zelenskyy asking his troops to surrender.
There are also concerns that deepfakes could be used to interfere with elections and to spread propaganda.
There are concerns over transparency and copyright, as AI tools are trained on internet data to create their output. This can include work used without the original creator’s permission, which amounts to plagiarism.
The European Union has drafted an agreement that requires companies to disclose any copyrighted material used to develop AI tools.
If an AI tool uses algorithms that make it more likely to find or give weight to certain data sources over others, the content it produces could reflect a narrow perspective. These biased views, including sexist, racist, or ableist ones, could affect the way people think.
Generative AI can be used to create and share harmful content that incites violence, hate speech and online harassment. It can also create non-consensual pornography.
One of the major concerns with AI is trust and security, with some countries, such as Italy, temporarily banning ChatGPT, and others rethinking their AI policies.
A Forbes article says that generative AI is likely to pose a risk to data privacy, as chatbots gather people’s personal data, which might be shared with third parties.
Generative AI tools could also be exploited by hackers. These tools could be used to create new and complex types of malware and phishing schemes that bypass protection measures. This could lead to data breaches, financial losses and reputational risks.
Content produced by generative AI tools could be used for malicious purposes. This includes creating fake reviews, scams and other forms of online fraud.
There are increasing concerns about the impact technology, including generative AI, is having on the environment.
There is limited information available on the carbon footprint of a single query to a generative AI tool, but research suggests it is four to five times higher than that of a search engine query.
As the number of queries AI models receive each day grows, so does their effect on the environment.
And what about developing an AI model? An AI researcher says that “the more powerful the AI, the more energy it takes to develop it”.
The Council on Foreign Relations says that training a single AI system can emit more than 250,000 pounds of carbon dioxide.