We explore how charity service users are impacted by artificial intelligence in 2025, including through AI transparency, literacy, bias, environmental impacts, and more
In 2025, nearly a third of charities are seeking to understand how their target audience is impacted by artificial intelligence (AI). That’s no surprise: since the technology burst into the mainstream in 2022, we have seen both fervent affirmations of its positive potential and pressing warnings of its capacity to harm. People’s hopes and fears about AI have been wide-ranging and passionate – but most agree that the mainstream use of AI signals a point of change for ordinary people, and society at large.
The truth is that AI is neither entirely saintly nor sinful. The reality is much more nuanced. AI technology has improved medical diagnosis, catching early signs of common preventable chronic diseases. It is being used to map environmental destruction and the emissions of greenhouse gases. Some UK charities have even started using AI to support change in their services.
But in 2025, AI also harms people and society. A lack of AI transparency and literacy prevents people from understanding AI, applying scrutiny to systems, and ensuring they reflect societal values. Biased AI systems worsen racial and gender inequality. AI data centres have environmental impacts that threaten the health of humans and animals living on the planet.
Exactly how AI will change life for people in the years to come depends on society’s response. Charities have the power to make unheard voices heard, to uphold society’s values, and to transform the harmful into the helpful. Charity service users are already being impacted by the emergence and use of AI, reshaping what they need from charity services. In this article, we explore how AI is impacting people’s lives, and the possible roles of charities.
Skip to: Find out how your charity’s service users are impacted by AI
Skip to: Facts about AI and UK service users
Skip to: How AI can help people
Skip to: AI transparency and AI literacy impact the power of the people
Skip to: AI bias impacts equality
Skip to: AI impacts the environment
Skip to: More impacts of AI in 2025
Skip to: How charities are using AI for service delivery in 2025
Due to limited AI transparency and literacy, not everyone knows how they are impacted by AI, so it can be easy to conclude that they are not directly affected. But with organisations of many kinds embracing the latest AI technologies, ordinary people are both positively and negatively affected in their daily lives.
By understanding some of AI’s biggest issues, which we explore below, charities can assess how their service delivery can meet the challenges of the AI era. But by going beyond the facts and working directly with their service users to understand impacts, charities can gain better insights to make services relevant and effective. That means involving service users in the process of learning about AI, listening to their hopes and concerns, and making space for them to drive change where possible.
Let’s start by outlining some of the attitudes of the UK public towards AI. As different demographics vary in their outlooks, this can help contextualise where AI risks and opportunities could lie for charity service users.
As an overview, it seems most people in the UK are undecided about AI, with 38% saying they feel neutral towards the technology, according to research by TalkTalk. A large portion of people also feel neutral about the trustworthiness of AI (37%), and a further 18% feel distrustful, reflecting a lack of widespread public confidence in the technology.
People over the age of 55 are most likely to use AI for internet searches, while Gen Z prefer searching on TikTok or Instagram to search engines like Google. These social media platforms also use AI in various ways.
Men feel around 6% more familiar with AI than women, although around 15% of both genders use AI daily.
Women are around 10% more likely than men to feel negative about the idea of AI companionship. According to research by Deloitte, women are more wary than men about how their personal data is used and protected, affecting their willingness to share data. This is the case particularly when it comes to engaging with generative AI around sensitive topics such as personal finances, relationships, and medical or mental health issues.
There is a regional digital divide in the UK, with people in economic hubs like London and Manchester reporting that they are more familiar with AI, and use it more often, than those in other regions such as the North East and Northern Ireland.
Non-working individuals are less familiar with AI than people employed in industries such as tech, construction, marketing, healthcare, and public services. This fits with the wider picture of digital exclusion, as people who are digitally excluded are two to three times more likely to be unemployed, according to research by Deloitte and the Digital Poverty Alliance. It could also correlate with age and retirement status.
AI presents genuine benefits to some people in day-to-day life, creating new potential ways for charities to help service users. The use of AI in assistive technology for people with disabilities is one example.
According to the Scottish AI Alliance, AI technologies such as ‘The Voice’ app can help people communicate through speech, while people who are blind or have visual impairments can use AI technologies such as Microsoft’s ‘Seeing AI’ app.
Because AI is good at making sense of patterns, people can use devices powered by AI to monitor their health and be alerted to signs of health issues if patterns change. AI can also help prevent falls in people’s homes, allowing older people to live independently for longer. Meanwhile, smart speakers, powered by AI technologies, can be used by people who have visual or mobility impairments, and help people manage their schedules, for example taking medications regularly. People can use generative AI to automate stressful administrative tasks, such as drafting a letter of complaint.
These are some examples of how charities might help their users make the most of AI. But at the same time, services may address AI’s harmful impacts. As we will explore, AI has a lot of room for improvement – in terms of the technology itself, how it is used, and how it is regulated. We explore below how AI is changing what people need, and the potential roles of charities as it develops.
A key problem with AI in the UK in 2025 revolves around the fact that it isn’t understood by most people. That means that people are excluded from making the most of the technology, steering clear of its risks, and having a say as it increasingly influences the decisions that affect our daily lives. All of this comes down to two main challenges: transparency and literacy. Below, we explore how people in the UK are impacted by a lack of AI transparency and literacy, and how charities could help.
AI transparency means making AI explainable. Every level, from how AI works to how it is employed to the impacts it has, should be explainable to non-experts, particularly as AI is increasingly used for decisions that affect our daily lives – even potentially life-or-death ones like diagnosing illnesses and controlling self-driving cars.
While AI technologies are not transparent by nature and can be hard to explain – particularly data-driven technologies like machine learning – it is still very much possible to explain how AI models reach a decision. And transparency is most important for AI technologies with high-impact uses, such as granting credit or recommending medical treatment. As AI expert Evert Haasdijk explains, “AI models that make high-impact decisions can only be allowed with the highest standards of transparency and responsibility.”
A few key problems have been highlighted: organisational leaders themselves are often “not really aware” of the exact technical work going on in their organisations. In addition, a lack of AI literacy, paired with increasingly user-friendly AI models, can lead to people using AI irresponsibly within an organisation without them or their leaders being aware of it. According to Haasdijk, in 2023 one bank made an inventory of all its models that use advanced or AI-powered algorithms and found a huge total of 20,000, most of which would not have been subject to any kind of regulation. This concern extends to charities, as Zoe Amar explores in her article, ‘Without a strategic approach to AI, we’re all exposed to risk’.
Currently, there is no single, commonly accepted framework for assessing AI. Organisations must follow GDPR to manage privacy in AI use, but there is nothing similar for managing AI ethics. That means the brunt of responsibility for the future ethics of AI falls on companies, organisations, and society at large.
AI transparency can help address public anxiety about AI, allowing people to understand the technologies that affect their lives, how they work, and the values that go into building them. AI solutions expert Stefan Van Duin reflects, “Not only do people want to understand how AI-based decisions are made. They want to be reassured that AI is used to benefit mankind and is not causing harm.”
Another key benefit of AI transparency is that it enables AI technologies to be improved when mistakes are found, and to be regulated. Transparency allows organisations to explain AI decision-making to the people it affects, making it easier to avoid unfair and discriminatory practice.
AI transparency allows ordinary people (any member of the public, with any level of digital literacy) to understand how these systems work, apply scrutiny, and make sure the decisions AI makes about them are fair and correct. As the Public Law Project has explained, AI transparency “allows for proper debate and consensus-building around the use of new technologies in the public interest”.
Ideas for how charities can help with AI transparency include:
AI literacy builds upon AI transparency: it means that people not only have relevant information about AI from organisations but that they are also able to understand and apply that information. It means expanding AI skills so that there is a diversity of people able to become experts and claim a stake in the development of this influential technology.
As it stands, certain AI narratives can make people feel fearful, such as those that emphasise the “brand new, frontier or complex nature of technologies”, explain Jeni Tennison and Tim Davis from Connected by Data. This fear comes from the narrative that AI’s creators are the only people who understand its impacts and that only they get to set the terms for its deployment and regulation.
On the other hand, learning about the limits of AI systems can help people feel more hopeful about AI, realising that it is not “magical” but “a tool they could wield”. Or, following on from the work of ClearCommunityWeb, involving communities in practical examples relevant to them also makes AI more accessible.
Around one in seven people are digitally excluded in the UK, meaning they can’t interact with the digital world fully when, where, and how they need. “We still have millions of people who are excluded from basic digital inclusion, let alone an AI version of society,” says Helen Milner, CEO of Good Things Foundation. And while AI itself has a long way to go, a lack of AI literacy in the UK is likely to leave those who are already digitally excluded even further behind.
As of early 2025, there have been no nationwide initiatives in the UK to increase AI literacy among the public. In comparison, other countries, like Finland, Scotland, and the USA, have made a start.
Tania Duarte, Founder of We and AI, and Ismael Kherroubi Garcia, Founder and CEO of Kairoi, note that because of this lack of AI education in the UK, public conversations default to referencing science fiction rather than starting from pragmatic evaluations of AI and its capabilities. They explain that the UK public is limited in its AI literacy because of its reliance on resources from tech companies, which have a vested interest in presenting AI in a commercially attractive light.
In contrast, they explain that credible and accessible national AI literacy initiatives would enable people to make the right decisions for themselves around AI, based on the ability to understand both technical and social practices and implications, question the quality of data and information sources, separate fact from opinion, and recognise the limits of data and AI. AI literacy can also support activism and movement building, and potentially help civic and democratic participation.
“Let’s make AI work for eight billion people, not eight billionaires,” says Rachel Coldicutt, founder of Careful Industries. With increased AI literacy, the public can have a meaningful voice in how AI is developed and used in society today. So, what are some ways charity services can help with AI literacy?
AI bias is a challenge partially obscured from public view due to problems around AI transparency and literacy – but it can still have a large impact. AI can be biased when there are flaws in the data used to build an AI system or when the system reflects the biases of its developers. With AI used behind the scenes in influential areas such as policing, healthcare, job recruitment, and more, biased AI systems, left unchecked, can lead to increased inequality. With many charities championing social equality, we explore below what their role could be in tackling AI bias in 2025.
AI has been used in the UK to try to predict where crime is most likely to happen and allocate resources to those areas. In 2024, a UK civil society coalition of 17 human-rights focused organisations urged the government to ban AI-powered predictive policing and biometric surveillance systems, on the basis that they are disproportionately used to target racialised, working class, and migrant communities.
Research has shown that AI-powered predictive policing tools are racially biased when trained on police data such as arrests and victim reports. MIT Technology Review explains that this is because AI systems determine that crime is most likely to recur in places where the most victim reports and arrests have previously happened. This is inaccurate because victim reports and arrests in particular areas can be shaped by other factors like racial bias and trust in the police. This type of bias means that algorithms can lead police forces to misallocate patrols, inaccurately designating some places as crime hotspots while others are under-served.
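To make this feedback loop concrete, here is a minimal sketch in Python. It is purely illustrative: the area names, crime rates, and arrest figures are invented assumptions, and it does not represent any real policing system. It simply shows that when patrols are allocated in proportion to past recorded arrests, and patrols in turn determine what gets recorded, an early skew in the data locks itself in.

```python
# Illustrative sketch only: two areas with identical underlying crime,
# but area_a starts with more recorded arrests (e.g. due to past over-policing).
true_crime_rate = {"area_a": 10, "area_b": 10}
recorded_arrests = {"area_a": 8.0, "area_b": 4.0}

for year in range(5):
    total = sum(recorded_arrests.values())
    # "Predictive" step: share 100 patrol units in proportion to past records
    patrols = {area: 100 * n / total for area, n in recorded_arrests.items()}
    # More patrols mean more of the same crime gets recorded, not more crime
    for area in recorded_arrests:
        detection_rate = min(1.0, patrols[area] / 100)
        recorded_arrests[area] += true_crime_rate[area] * detection_rate
    print(year, {area: round(p) for area, p in patrols.items()})

# The allocation stays skewed towards area_a in every year, even though the
# underlying crime rates are equal: the initial bias in the data never corrects.
```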
In 2025, Amnesty International UK has found that at least 33 police forces across the UK have used predictive profiling or risk prediction systems. Their report ‘Automated Racism – How police data and algorithms code discrimination into policing’ explores the issue in more depth.
There are also concerns about bias in the current use of AI for surveillance. Police can now identify individuals in very large crowds through AI-powered facial recognition technology, but 31 rights and race equality organisations and 65 MPs have voiced widespread resistance to this. The group’s concerns include that the technology is incompatible with human rights and has discriminatory impacts.
In 2019, the US National Institute of Standards and Technology found that the majority of commercial facial recognition systems exhibit bias, falsely identifying African American and Asian faces 10 to 100 times more often than white faces. The systems studied made the most errors when identifying Native Americans. They also struggled more to identify women than men, and falsely identified older adults up to 10 times more often than middle-aged adults.
In 2022, the House of Lords raised concerns over AI bias in policing and the law, citing evidence that bias is present “at every level of deployment”. The government rejected recommendations made by the House of Lords committee and in March 2025 the UK’s first permanent facial recognition cameras were reportedly installed in South London.
AI bias amplifies health inequalities. A widely used AI health system in the US was found to racially discriminate against Black patients. The bias happened because the system used health costs as a proxy for health needs, interpreting the systematically lower health spending among Black people in the US to mean that their needs for healthcare were lower.
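A minimal sketch can also make this proxy problem concrete. The example below is purely illustrative, with invented patients and figures, and is not the actual US system: ranking patients by past spending rather than by their actual health burden pushes a high-need but low-spend patient down the priority list.

```python
# Illustrative sketch only: why "health spending" is a poor proxy for "health need".
# (name, number_of_chronic_conditions, annual_health_spend)
patients = [
    ("patient_1", 5, 9000),  # high need, high spend
    ("patient_2", 5, 5500),  # same need, lower spend (e.g. poorer access to care)
    ("patient_3", 2, 6000),
    ("patient_4", 2, 3000),
]

# Proxy model: prioritise extra care by past spending
by_spend = [name for name, _, _ in sorted(patients, key=lambda p: p[2], reverse=True)]
# Need-based ranking: prioritise by actual health burden
by_need = [name for name, _, _ in sorted(patients, key=lambda p: p[1], reverse=True)]

print(by_spend)  # patient_2 falls below patient_3 despite having greater need
print(by_need)
```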
In the UK, research is underway to reduce bias in AI health prediction models which are trained on real-world patient data, in order to address ethnicity disparities in healthcare highlighted during the COVID-19 pandemic. The NHS is also working with the Ada Lovelace Institute on an AI Impact Assessment Tool to reduce bias in AI-driven health technologies.
In a recruitment context, AI systems can discriminate against applicants, for example by preferring applications with younger birthdates or with typically masculine hobbies listed. Marginalised groups often "fall through the cracks, because they have different hobbies, they went to different schools", says investigative reporter Hilke Schellmann.
A 2024 report published by the UK Information Commissioner’s Office (ICO) raised concerns over the use of AI in recruitment software because some tools can filter applicants based on protected characteristics like race, gender, and sexual orientation. Among a list of other harms, the ICO explained that the ability to filter out applicants with specific characteristics could lead to direct discrimination.
AI bias also affects the images the technology generates. When trained on images that aren’t diverse and representative, generative AI amplifies stereotypes and clichés. A Bloomberg analysis of more than 5,000 AI-generated images found worse racial and gender disparities than those present in the real world – with images of high-paying jobs, for example, disproportionately dominated by men with lighter skin tones.
Research has found that seeing biased images tends to make people themselves more biased towards others. In the case of gender bias, researchers expect this to “affect the well-being of, social status of and economic opportunities for not only women, who are systematically underrepresented in online images, but also men in female-typed categories such as care-oriented occupations”.
Concerns have also been raised about AI’s bias against those who are digitally excluded. People who lack basic access to digital technologies have a smaller digital footprint and therefore aren’t represented accurately in the datasets used to train AI. Where AI is used for decision-making, for example in public and health services, those decisions therefore won’t reflect people with lower digital access.
There are also criticisms of bias in Automatic Gender Recognition tools, which are incorporated into digital products, mainly behind the scenes, and which discriminate against transgender and non-binary people. The digital rights group Access Now, along with more than 60 other NGOs, is calling for a ban on the technology, reporting: “When you and your community are not represented, you lose the ability to advocate effectively for your fundamental rights and freedoms. That can affect everything from housing to employment to healthcare.”
Suggestions for charity services to address AI bias include:
Looking at the impacts of AI transparency, literacy, and bias is all about what happens when AI is in the hands of developers, organisations, and the public. But behind all of that design and use is the energy required to make AI actually work.
When thinking about AI, some imagine a fleet of sentient robots, while others picture an abstract ingredient, like fairy dust, carried on a stream of air and floating seamlessly into our devices. In fact, using AI requires continual computing energy in noisy, hungry facilities called data centres. These house large groups of networked computer servers, and they are evidence that the apparent magic of AI in fact demands real effort, labour, and extraction.
And because AI does in fact exist on the same physical plane as us mere humans, we are impacted by its physical demands. In this section, we explore what this means and how charities can help.
In 2024, the World Meteorological Organisation (WMO) reported that the clear signs of human-induced climate change had reached new heights, and that 2024 was likely the first calendar year to be more than 1.5°C above the pre-industrial era. “While a single year above 1.5 °C of warming does not indicate that the long-term temperature goals of the Paris Agreement are out of reach, it is a wake-up call that we are increasing the risks to our lives, economies, and to the planet,” says WMO Secretary-General Celeste Saulo.
In this context, the environmental impact of AI matters because the technology has added a new source of pressure on the environment since widespread use began in 2022, and because of the scale of those pressures. As we will explore, environmental harm takes place at every stage: building data centres, running them, and disposing of their electronic waste.
Constructing data centres is demanding on our natural resources. Building the servers requires a staggering mass of raw materials, with 800kg of raw materials going into making a single 2kg computer. The microchips that power AI also rely on minerals like lithium, cobalt, and nickel, which are often mined in environmentally destructive ways, causing deforestation, water pollution, and high carbon emissions.
When in use, AI data centres need a lot of energy to power their complex electronics, which usually still comes from burning fossil fuels. Doing so produces greenhouse gases that warm the planet. To put this into perspective, a request made through ChatGPT consumes 10 times the electricity of a Google Search, and a single data centre can consume the equivalent electricity of 50,000 homes.
AI is threatening tech companies’ environmental goals. Google’s greenhouse gas emissions increased by 48% between 2019 and 2024; the company has stated that reaching net zero by 2030 “won’t be easy”, and that there is “significant uncertainty” around reaching the target due to “the uncertainty around the future environmental impact of AI, which is complex and difficult to predict”.
Data centres use large amounts of water during construction and, when in use, to cool electrical components. Many data centres use cooling systems that pipe chilled water between the servers to prevent them from overheating. Globally, AI-related infrastructure may soon consume six times more water than Denmark, a country of six million inhabitants.
This adds to the growing global demand for water, which increases the need for energy-intensive water pumping, transportation, and treatment, and contributes to the degradation of important environments that store carbon and depend on water, like peatlands. Aside from climate impacts, this high use of water is unsustainable given that a quarter of humanity already lacks access to clean water and sanitation.
Finally, data centres produce e-waste: electrical equipment that is disposed of incorrectly. This often contains substances like mercury and lead, which pose hazards to humans and wildlife by leaking into the soil and groundwater, polluting nearby water sources.
The exact extent of the environmental impact of AI data centres cannot yet be measured, but we know that it is contributing to existing environmental problems. Even without the added pressures of AI data centres, the processes outlined above – mineral mining, greenhouse gas emissions, high water usage, and the disposal of e-waste – are already harming people and animals directly in many ways.
Looking to the future, AI growth in the UK specifically is expected to cause further environmental problems for the nation. In parts of the UK, especially the south, there is already a threat of water shortages due to climate change and population growth. But industry sources say plans to make the UK a “world leader” in AI could put already stretched supplies of drinking water in the UK under strain.
Researchers from the University of Loughborough have raised concerns that the UK’s commitment to a twenty-fold increase in public AI computing power by 2030 “presents an immense challenge for the country’s electricity system”. They say, “Without immediate and concerted efforts to expand renewable energy and improve efficiency, AI’s electricity demands could hinder the transition to a net zero future.”
Net zero means cutting greenhouse gas emissions to as close to zero as possible, with any remaining emissions reabsorbed from the atmosphere, so that overall no more is added. According to the UN, it is important to reach net zero as a planet by 2050 in order to avert the worst impacts of climate change and preserve a liveable planet.
Climate change is impacting both the global population and local people. In the UK, climate change is already harming our physical and mental health by increasing the number of deaths during periods of extreme heat, increasing post-traumatic stress disorder for people affected by flooding, and increasing the risk of catching certain infectious diseases.
In response to the above, what are some of the ways charities might help with the environmental impacts of AI?
Transparency, literacy, bias, and environmental harm are among the biggest challenges of AI today in how it impacts ordinary people and society. But they aren’t the only ones. There are a range of other important ways that charity service users are already being impacted by AI.
AI is used maliciously to commit violence and abuse against specific social groups, as well as being used in many types of cyber-attack. To the individuals affected, these can be some of the most direct and overt harms of AI. Find out more from Refuge, Signify, and the NSPCC in partnership with Childline.
When it comes to democracy, the Alan Turing Institute has found that AI “fuelled harmful narratives and spread disinformation” during 2024, a major year for elections. It discovered that generative AI tools were used to amplify conspiracy theories and sway public opinion, which has ultimately eroded trust in democratic institutions and heightened public fear of AI’s potential misuse. However, hearteningly, the Institute also says there is no evidence that AI impacted election results themselves in the UK, France, or Europe.
Away from the political sphere, AI-generated misinformation can impact people’s daily lives in realms like health and the climate. On the flip side, AI has also been used to identify misinformation, for example through Full Fact’s AI software.
AI has impacts in both the classroom and the workplace. For example, some are concerned that an overreliance on AI could lead to an erosion of teaching, writing, and reasoning skills, and “may fundamentally change the educational experience offered to young people”. In the workplace, those who are monitored by AI systems have reported high levels of anxiety.
In addition to the challenges of AI explored above, there are a range of ways the technology can be used to help people and society. It can help people with accessibility, it can instantly translate text and speech into other languages, it can provide more effective medical treatments, it can search online spaces to find out what service users need, it can help conserve biodiversity, and more.
When it comes to how charities are using AI for their services, most seem to be taking tentative steps. Overall, only 15% of charities are using AI tools in their service delivery, according to the Charity Digital Skills Report 2024. That includes 5% who are offering services built on AI tools, and 12% who say they are using AI tools behind the scenes.
These low numbers are likely influenced by AI skills, confidence, and trust among charities, with 40% keen to skill up on how to assess AI risks and adopt AI responsibly, and 38% keen to develop AI policies and governance. Charities like the Wildlife Trusts, WWF, and Homeless Link have started to lead the way by publicly sharing their knowledge on how to use AI responsibly.
Non-profit and grassroots organisations have had high hopes for AI, but in July 2024, research by the Joseph Rowntree Foundation and We and AI found that successes with AI were not evenly distributed among charities. Those less likely to use generative AI tended to be smaller, to lack existing digital privilege, or to have identified that AI would not be in line with their communities’ values.
When non-profits had successes with generative AI, the research found that these came at a cost to organisational cohesion, beneficiary trust, internal values, and, in some cases, the investment in and development of solutions which might be more appropriate, affordable, or transformational in the longer term.
In using AI for service delivery, charities face a range of practical and ethical challenges, some of which relate to those faced by service users discussed above. Among the challenges charities have faced is a lack of external support, such as pathways for them to feed into how and what decisions are made about AI in the UK, and independent spaces to discuss and learn from other charities.
As we’ve seen, there are a lot of ways that people’s lives are already being impacted and changed by the emergence of AI. Here, focusing on the potential role of charities, we have paid attention to AI’s darker side, the ways in which people’s lives can be changed for the worse. But that is only one part of the story. Service users can reflect most accurately on their own experiences, knowledge, and feelings about AI. Charities can help shape the future by making sure that those perspectives shine through to make a difference on a larger scale.
Our 2025 Reimagining Service Delivery Summit unlocked new perspectives on service delivery and how charities can maximise value to service users. Click here to watch the session recordings for free.
Our Reimagining Services Hub features regular articles, podcasts, and webinars to support charities in delivering services. Click here to learn more.
Our courses aim, in just three hours, to enhance soft skills and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options by clicking here.