
The ultimate guide to artificial intelligence

We explore everything in our guide to artificial intelligence (AI), looking at the past, present, and future of AI, and exploring the various legal, ethical, and regulatory AI challenges on the horizon


Artificial intelligence (AI) offers huge potential to us all, as individuals, as nations, and as a planet. AI can make our lives easier, tackle huge issues such as climate change and inequality, improve living standards for people across the world, and broadly create a much brighter future.

 

But AI also poses various ethical challenges – including issues with plagiarism, the loss of the human, and prejudice and bias – as well as myriad social challenges – including issues around wealth and power distribution, unemployment and inequality, environmental decline, and so much more.

 

The success or failure of AI – machines that boast human levels of intelligence – ultimately depends on human decisions. And it is vital, to ourselves and all future generations, that we make the right ones.

 

In this article, we explore all the key issues in our guide to AI. We look back to the past, examining the foundations and the many innovators that paved the way. We look at the present situation, exploring how AI has proliferated into every part of our lives, often without our knowledge.

 

And we look to the future, exploring the ways in which AI might develop, looking at various models of the future, and predicting potential trends. And, finally, we consider the ethical and regulatory challenges of AI, giving individuals and organisations all the information they need to make effective and informed decisions.

 

So, without further ado, let’s begin at the beginning, with some science fiction.

 

Skip to: The definition of artificial intelligence

Skip to: Different types of artificial intelligence

Skip to: The history of artificial intelligence

Skip to: The current state of artificial intelligence

Skip to: The future of artificial intelligence

Skip to: The ethics of artificial intelligence

Skip to: The regulation of artificial intelligence

Skip to: Final words on artificial intelligence

 

 

The definition of AI

 

AI has long occupied space in the human imagination. Prior to the birth of AI, science fiction was already grappling with the concept of artificially intelligent robots. Think of the Tin Man in The Wizard of Oz, the humanoid robot Maria in Metropolis, or the War-Robot in Master of the World.

 

The idea of AI grew and developed in later science fiction, with machines taking on more and more human characteristics, culminating in robots that seemed indistinguishable from humans, as seen in Philip K. Dick’s Do Androids Dream of Electric Sheep? or Ian McEwan’s Machines Like Me.

 

But fictional AI differs substantially from AI in real life. We use AI on a daily basis and, to our knowledge at least, we are not dealing with humanoid robots. AI does not mean intelligent machines that mimic or attempt to destroy the human race, but rather tools such as Google Maps and Siri: automated systems able to demonstrate aspects of human intelligence, or machines that act in a more ‘human’ way.

 

AI works by using iterative, fast processing and intelligent algorithms, married with huge amounts of data. The tech learns automatically from patterns or features in the data and uses that information to improve its processing and algorithms. AI acts as a simulation of human intelligence in machines that are programmed to think like humans.

 

Indeed, the term ‘artificial intelligence’ can be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving.

 

 

Different branches of AI

 

Artificial intelligence is a broad field, encompassing different technologies, methods, and intellectual theories, with debates raging on the ethics, philosophy, and application of AI. There are seven main branches of AI that are currently in application, many of which solve real-world problems, streamline processes, reduce costs, and save time. Below we define each of the branches and give examples of application.

 

 

Machine learning

 

Machine learning is a form of analytic model building – perhaps the most well-known form of AI currently in use – and it acts as an umbrella term encompassing some of the other branches mentioned below. Machine learning allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so.

 

Machine learning depends on inputs of historical data to predict present or future outputs, allowing machines to continuously evolve. Machine learning is all around us; predictive text is one example that we often take for granted.

 

A much-discussed commercial example is TensorFlow, a free and open-source software library for machine learning and artificial intelligence. The library has a variety of functions, including automatic differentiation, eager execution, and various optimisers for training neural networks.
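To make the idea concrete, here is a minimal sketch of machine learning in Python, using the scikit-learn library (our choice for illustration; the data and numbers are invented). The model is never given an explicit rule – it infers one from historical data and uses it to predict an unseen case.

```python
# A minimal machine learning sketch: the model infers a pattern from
# historical data rather than being explicitly programmed with a rule.
from sklearn.linear_model import LinearRegression

# Invented historical data: monthly ad spend (inputs) and donations (outcomes).
X = [[1], [2], [3], [4], [5]]
y = [52, 58, 61, 67, 73]

model = LinearRegression()
model.fit(X, y)                # learn the relationship from past data

# Predict the outcome for an input the model has never seen.
print(model.predict([[6]]))    # about 77.5, extrapolated from the pattern
```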

 

 

Neural networks

 

Neural networks are systems that learn through external inputs, relaying information between interconnected units, which SAS compares to neurons. Repeated processes find connections and derive meaning from previously meaningless data.

 

It is a form of machine learning that takes inspiration from the workings of the human brain. Applications of neural networks include sales forecasting, industrial process control, customer research, data validation, and even targeted marketing.
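As a toy illustration, the sketch below (Python, using scikit-learn’s MLPClassifier; the layer size and settings are arbitrary assumptions) trains a small network of interconnected units on the XOR problem – something a single unit cannot solve, but a hidden layer can.

```python
# A small neural network: interconnected units that learn from examples.
from sklearn.neural_network import MLPClassifier

# The XOR problem: not linearly separable, so no single unit can solve it,
# but a hidden layer of interconnected units usually can.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
net.fit(X, y)
print(net.predict(X))  # ideally [0 1 1 0], though convergence is not guaranteed
```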

 

 

Deep learning

 

Deep learning uses extensive neural networks with myriad layers of processing units.

 

The tech utilises vast advances in computer power and training techniques to learn complicated patterns from massive data sets. Face ID authentication is an example: biometric tech employs a deep learning framework to detect features from users’ faces and match them with previous records.

 

Deep learning tech also detects barcodes, text, and landmarks through camera devices.
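The defining feature is depth: many stacked layers of processing units. A minimal sketch of such a stack, using TensorFlow’s Keras API (mentioned above; the layer sizes here are illustrative assumptions, not a recommendation), might look like this:

```python
import tensorflow as tf

# A deep learning sketch: several stacked layers of processing units,
# of the kind used for tasks such as simple image recognition.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),            # e.g. a small greyscale image
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()   # prints the stack of layers and their parameter counts
```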

 

 

Natural language processing

 

Natural language processing is one of the more commonly used AI systems. It takes advantage of the ability of computers to analyse, understand, and generate human language – particularly around speech.

 

The most common form of natural language processing, at the moment, is the chatbot. At a more evolved stage of development, natural language processing allows humans to communicate with computers using normal language and to ask them to perform certain tasks.

 

The latest advances in AI chatbots showcase the potential – and risks – associated with natural language processing. 
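To show the simplest possible version of the idea, here is a toy keyword-matching chatbot in plain Python. Real natural language processing systems rely on statistical language models rather than hand-written keywords; this sketch (with invented responses) only illustrates the interface.

```python
# A toy chatbot: the crudest form of natural language processing,
# matching keywords in the user's text to canned responses.
RESPONSES = {
    "hello": "Hello! How can I help you today?",
    "donate": "You can donate through our website - thank you!",
    "hours": "We are open 9am to 5pm, Monday to Friday.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return "Sorry, I didn't understand. Could you rephrase?"

print(reply("Hello there"))
print(reply("How do I donate?"))
```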

 

 

Expert systems

 

An expert system uses AI to mimic the behaviour of humans or organisations that possess specific knowledge or experience. The system is not designed to replace particular roles, but to assist experts in specific complex decisions.

 

An expert system essentially aids the decision-making process by combining data, in-depth knowledge, facts, and heuristics. It is a machine with a narrow focus, typically trying to solve a particularly complex problem. Expert systems are typically employed in technical vocations, such as science, mechanics, mathematics, and medicine.

 

They are used to identify cancer in its early stages, for example, or to help chemists identify unknown organic molecules.
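At heart, an expert system is encoded rules plus facts. The miniature sketch below, in Python, shows the shape of the idea; the triage rules and thresholds are invented for illustration only.

```python
# A miniature expert system: encoded rules plus facts drive a decision,
# mimicking how a human expert might reason through a narrow problem.
def triage(symptoms: set) -> str:
    """Toy medical-triage rules (illustrative only, not medical advice)."""
    if {"chest pain", "shortness of breath"} <= symptoms:
        return "urgent: refer immediately"
    if "fever" in symptoms and "rash" in symptoms:
        return "refer to specialist"
    if "fever" in symptoms:
        return "monitor and re-assess in 24 hours"
    return "no action required"

print(triage({"fever", "rash"}))                       # refer to specialist
print(triage({"chest pain", "shortness of breath"}))   # urgent
```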

 

 

Robotics

 

Unsurprisingly, robotics is about robots. The main aim of robotics is to implement human intelligence in machines, with a particular emphasis on deploying such machines, or robots, to support human work and labour.

 

The robots rely on other forms of AI, but robotics ensures machines perform actions automatically or semi-automatically to the overall benefit of humans.

 

An obvious example of the application of robotics is self-driving cars, otherwise known as robotic cars (or robo-cars), which are capable of driving without human input and could save humans time and money.

 

 

Fuzzy logic

 

Fuzzy logic is a rule-based system and a form of AI that aids decision-making. Fuzzy logic uses experience, knowledge, and data to advance decision-making processes, assessing how true something might be on a scale of 0 to 1. Fuzzy logic will answer a question with a number, such as 0.4 or 0.8, aiming to move beyond the binary response of true and false and instead give degrees of truth for vague concepts.

 

The application of fuzzy logic often appears in low-level machines, particularly in consumer products, such as controlling exposure in cameras, air conditioning systems, and the timing of washing machines.
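A worked example makes the 0-to-1 scale concrete. The sketch below, in Python with invented thresholds, shows a fuzzy membership function for the vague concept of a ‘warm’ room, of the kind an air conditioning controller might use.

```python
# Fuzzy logic sketch: instead of a binary warm/cold answer, a membership
# function returns a degree of truth between 0 and 1.
def warmth(temperature_c: float) -> float:
    """Degree to which a room counts as 'warm' (0 = not at all, 1 = fully)."""
    if temperature_c <= 15:
        return 0.0
    if temperature_c >= 25:
        return 1.0
    return (temperature_c - 15) / 10   # linear ramp between 15C and 25C

# A fuzzy controller can then act proportionally, e.g. an air conditioner
# cooling harder the 'warmer' the room is judged to be.
for t in (14, 18, 22, 26):
    print(t, "->", round(warmth(t), 1))   # 0.0, 0.3, 0.7, 1.0
```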

 

So, as shown above, AI has many branches and many usages. Most likely, you will have used technology today that relies upon AI. The popularity of AI has boomed in recent years and organisations, charities, and individuals are trying to get involved.

 

But AI is the product of more than seven decades of work, stretching from hypothetical machines and wire-built robot mice to Google Maps, robo-cars, and ChatGPT. So, before we look at the future of AI and before we tackle the ethics, it is worth going back to the beginning and tracking the journey to the present.

 

 

The history of AI

 

Let’s go back to the beginning. Some of the earliest work on AI was performed by the British logician, computer pioneer, and the man currently occupying the £50 note: Alan Turing. In 1935, Turing described an abstract computing machine, one that possesses endless memory, is capable of sifting through that memory, one symbol at a time, reading what it finds, and learning from what it has read. That machine, according to Turing, would then be capable of autonomously writing new symbols.

 

The act of scanning is directed by instructions stored in the memory, and the machine can modify and optimise its own programme as it goes. That hypothetical machine, able to improve its own programme, has been dubbed the Turing machine, and it was proposed two decades before John McCarthy coined the term ‘artificial intelligence’ in 1956.
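The idea can be made concrete in a few lines of Python. The sketch below is a toy simulation of such a machine – a tape of symbols, a read/write head, and a table of instructions – and the rules are our own illustrative choice: this particular machine simply flips every bit it reads.

```python
# A minimal Turing-machine sketch: a tape of symbols, a read/write head,
# and a table of instructions. This toy machine flips every bit on the tape.
def run(tape, rules, state="start"):
    tape, head = list(tape), 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"   # "_" = blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

rules = {
    ("start", "0"): ("1", +1, "start"),   # read 0 -> write 1, move right
    ("start", "1"): ("0", +1, "start"),   # read 1 -> write 0, move right
    ("start", "_"): ("_", 0, "halt"),     # blank -> stop
}
print(run("0110", rules))   # prints 1001
```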

 

Various practical AI programmes were written prior to the coining of the term AI. Perhaps the first worth mentioning is Theseus, built by Claude Shannon in 1950. Theseus was a remote-controlled mouse that was able to find its way out of a labyrinth and remember the course it took. Other computers, which can now be seen as early precursors to AI, advanced the theory and paved the way.

 

This included a checkers programme written in 1951 by Christopher Strachey for the Ferranti Mark 1 computer, which showed a machine developing skills in the game. Not long after, Dietrich Prinz produced a similar programme for chess. It is no wonder that game AI continues to be an important measure of AI progress, even today, as many early and later AI developments are judged on gaming skills.

 

But ‘artificial intelligence’ was still not even coined as a term. That changed with the Dartmouth Conference, hosted by McCarthy and Marvin Minsky in 1956. The Conference is seen by many to represent the birth of AI, though it was only attended by a small number of people.

 

But the conference still brought together some of the top researchers, sparked insightful discussion on the future of computing, and even put the term ‘artificial intelligence’ into common usage. And the conference saw the introduction of Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist, a programme that many consider the first true example of AI, one that mimicked the problem-solving skills of a human.

 

AI flourished in the decades that followed. Computers stored more information, scanned information at an ever-increasing speed, and became cheaper. Machine learning improved, with early demonstrations of programmes such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showing progress in terms of problem solving and the interpretation of spoken language.

 

But progress was limited, as myriad obstacles arose. The main one was the lack of computational power: despite progress, the computers of the 1970s were exceptionally limited. The problem was that effective AI does not depend simply on standard computational power, but on exceptional power – the ability to store huge amounts of data and process countless combinations of it. Simply put, computers were too weak to exhibit human levels of intelligence.

 

AI saw a boost in the 80s. John Hopfield and David Rumelhart introduced deep learning to the masses, which allowed computers to learn using experience, and Edward Feigenbaum introduced expert systems, which made computers mimic the decision-making processes of experts.

 

These advances laid the groundwork for further successes and greater advances later down the line. And computer power saw a similar boom, boosting capacity and allowing the intellectual ideas behind AI to be put into practice.

 

In the 1990s and 2000s, AI thrived. Many of the earlier goals of AI were achieved, despite an absence of government funds. Consider, for example, one of the oft-referenced Game AI goals: the ability of a computer to defeat a grandmaster chess player.

 

IBM’s Deep Blue did just that in 1997, claiming a highly publicised victory over the reigning world chess champion Garry Kasparov. The grandmaster described how it felt to lose to Deep Blue: “Deep Blue was intelligent the way your programmable alarm clock is intelligent. Not that losing to a $10 million alarm clock made me feel any better.”

 

That same year saw speech recognition software implemented in Windows. And soon after, Kismet, a robot developed by Cynthia Breazeal, was able to recognise and respond to human emotion.

 

And, after 2010, we saw another boom, perhaps the largest, which continues to the present day, largely predicated on the huge volumes of data now available and the high-efficiency processors that accelerate the learning of increasingly complex algorithms. In short, computers became more powerful and data more accessible and extensive, and that has enabled considerable progress.

 

The new power led to significant developments, which we can trace through the various successes in game AI. In 2011, IBM’s Watson defeated two Jeopardy! champions. In 2012, Google X recognised cats in a video, which required more than 16,000 processors but burst open the field of deep learning.

 

In 2013, DeepMind’s system learned to beat many Atari games using a single model. In 2016, Google’s AlphaGo beat the European champion, the world champion, and then itself at the board game Go.

 

In 2017, Google’s AlphaZero became a master of chess, Go, and shogi. In 2019, Google’s AlphaStar achieved a ranking in the top 0.2% of players in StarCraft II – a complex, real-time strategy game – the first time an AI had reached the top tier of an e-sport’s rankings.

 

The above are just examples of game AI. But the power that drove such achievements has led to huge development in real-world applications, many of which we have already mentioned.

 

And in the 2020s, we have already seen a further acceleration, with AI growing faster than ever and public hype around the subject growing at a similar speed. And that brings us to the present moment, 2023, in a year when AI has become one of the most discussed topics in tech, and its usage has exploded.

 

 

The current state of AI

 

AI increasingly powers everyday tasks. AI systems determine whether you get a loan, whether you are eligible for government support, and whether you will get pushed through to a second-stage interview. AI is used to support surveillance, to track healthcare, to monitor dietary needs.

 

AI recommender systems determine what you see on social media, the ads you see on your browser, your recommendations on YouTube, the products shown to you in online shops, and the next series you might watch. AI is working on so many elements of everyday life, around the clock. It is a constant feature of contemporary life.

 

But it is not just the simple, everyday tasks. Now, AI is going much further. AI is helping to solve some of the hardest problems of mathematics and science. AI is playing an increasing role in finance, with huge firms now relying on AI-based algorithms to dictate investment choices. AI is used across the military, with threat monitoring, drones, automated target recognition systems, and autonomous vehicles.

 

AI is essential for cyber-security, as it studies patterns of cyber-attacks and forms protective strategies against them. AI is a huge part of the fight against climate change, improving modelling, creating better monitoring programmes, and analysing large and complex data sets based on environmental criteria. AI, in short, is growing faster than ever and contributing to human advancement in various fields.

 

But it is also posing novel philosophical and ethical problems, arriving faster than we have the chance to discuss them. The ethics of AI and the regulation of AI will ultimately define its future. But, based on the present situation, we can predict some future trends.

 

 

The future of AI

 

AI has evolved from Theseus, the robotic mouse mentioned earlier, to the most advanced systems we see today, such as DALL-E and PaLM, which produce photorealistic images and interpret, translate, and generate huge amounts of advanced language.

 

For most of the past six decades, the compute used to train AI systems grew in line with Moore’s Law, broadly doubling every 20 months. Since roughly 2010, however, that growth has accelerated, doubling roughly every six months. AI is destined to develop faster in the future – and it is already developing frighteningly fast.
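The difference between those two doubling rates is dramatic. A quick back-of-the-envelope calculation in Python shows the compounding effect over a decade:

```python
# Back-of-the-envelope growth comparison for the two doubling rates
# mentioned above (20 months vs 6 months).
months = 10 * 12                     # a decade

moore_growth = 2 ** (months / 20)    # doubling every 20 months
fast_growth = 2 ** (months / 6)      # doubling every 6 months

print(f"Over a decade: x{moore_growth:,.0f} vs x{fast_growth:,.0f}")
# roughly x64 under Moore's Law vs about x1,000,000 at the post-2010 rate
```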

 

Long-term trends can give us a sense of how AI might look in the future. The most widely discussed long-term trend, according to Our World in Data, comes from AI researcher Ajeya Cotra. Cotra aimed to find the point at which AI systems could match the capacity of the human brain. The latest estimation, based on a wealth of research and data, suggests that there is a 50% probability that AI matching human capacity will be developed by 2040. The result could be transformative.

 

It is important to note that Cotra may be wrong. She offers only one of many predictions, one of many models, some of which suggest even faster acceleration, some of which suggest slower growth. But they all suggest that AI will continue to grow. The simple fact we have to face is that AI is bound to play a huge role in the future, not just the future of tech but the future of humanity.

 

That is why debating the ethics and regulation becomes so important. We need to understand the ethical problems created by tech and uncover how we should approach the arguments. So, without further ado, let’s turn to some of the major ethical issues around AI and uncover the core arguments.

 

 

The ethics of AI

 

The rise of AI has been matched by increased philosophical argument around it. The benefits of AI are clear in various fields, but the technology is not without risk. The risks are becoming more notable as AI grows exponentially and plays an increasingly commercial role in society. Below we look at some of the core ethical challenges of AI and discuss some of the overarching debates.

 

 

Inequality

 

AI raises ethical questions around the general application of economics. That particularly concerns economic inequality and the role played by AI in lessening or widening that inequality.

 

Economic policies often aim to reduce the uneven distribution of wealth, but the wealth gap, in countries across the world, has widened over the past few decades. A common criticism of AI is that a company can drastically reduce its dependence on a human workforce, which may lead to far greater unemployment, increasing the wealth gap and condemning people to poverty.

 

Consider statistics from the World Economic Forum, which suggest that the three biggest companies in Detroit generated roughly the same revenues as the three biggest companies in Silicon Valley – but the Silicon Valley companies employed ten times fewer people.

 

A major concern stems from the fact that the individuals running AI-driven companies – which will likely make up more and more of the overall economy in years to come – stand to make far more money than everyone else.

 

Counter-arguments come from various economic and ethical positions. The economic arguments suggest, first, that the future is unwritten and that different economic environments allow for greater redistribution. It is an economic choice, perhaps best summed up by the position of Stephen Hawking: “Everyone can enjoy a life of leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution.”

 

The economy depends on humans making particular economic decisions and ensuring that people across the economy share the benefits of AI and automation. Inequality attributed to AI is due to political and economic choices, not the tech itself. The ethics concern the economy, not the tech.

 

There is a further economic counter-argument to increased inequality, stemming from the idea of creative destruction associated with the economist Joseph Schumpeter, who derived it from the work of Karl Marx. Creative destruction describes the process of industrial mutation and adaptation that revolutionises pre-existing economic structures, destroying the old structure and creating a new one.

 

Schumpeter, building on Marx and sociologist Werner Sombart, argued that creative destruction would undermine capitalism. But the term has been used in mainstream economics to describe a positive shift, creating new resources and the ability to reinvest in more productive ways.

 

Creative destruction reduces outdated and archaic structures and tech, according to one argument, and replaces them with more productive structures and tech that better serve people. The argument states that AI will eliminate some jobs, even creating unemployment in the short term, but that it will also create new jobs and, over time, reduce unemployment.

 

Being ethical in this case stems from making socially responsible economic decisions. The key is to ensure that economic decisions promote equality and that the benefits (and the risks) from AI are shared by everyone.

 

 

Bias and prejudice

 

Relying on machines to read data raises the question of data ethics, particularly around bias. AI is capable of processing masses of information, far more than humans, but it is not always neutral. Google is one of the leaders when it comes to AI. But one of Google’s services that depends on AI has provoked justifiable outcry. 

 

Google’s Photos service used AI to identify people, objects, and scenes, but the tech proved to have internal prejudices, mislabelling images in racially offensive ways. Separately, software used to predict future criminals has shown a clear and disturbing bias against black people.

 

AI is created by humans who have conscious and unconscious bias. That bias is then learnt and, in some instances, extended by AI. Other biases emerge from incomplete or unrepresentative data sets, or a reliance on erroneous or faulty information that reflects historical inequalities.

 

People in AI must mitigate the problem by maintaining representative and complete data sets and taking other precautionary measures. Regulation, discussed below, should aim to minimise data bias in AI.

 

But, at present, the onus rests for the most part with the organisations and people working with the data.
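One simple precaution, sketched below in Python with invented numbers, is to check how group shares in the training data compare with the population the model is meant to serve – a crude test, but a starting point for spotting under-representation.

```python
# A simple representativeness check: compare group proportions in the
# training data against the population they are meant to represent.
from collections import Counter

training_labels = ["A", "A", "A", "A", "B"]   # groups present in the data
population = {"A": 0.5, "B": 0.5}             # assumed true population shares

counts = Counter(training_labels)
total = sum(counts.values())
for group, target in population.items():
    share = counts[group] / total
    flag = "  <-- under-represented" if share < target * 0.8 else ""
    print(f"{group}: {share:.0%} of data vs {target:.0%} of population{flag}")
```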

 

 

Legal issues

 

Another ethical conundrum revolves around the legal system. The question of liability becomes increasingly complicated and confused when dealing with AI. Consider, for example, liability for robots replacing human soldiers, machines that may take the lives of innocent people. How would we assign blame?

 

Or take the most commonly cited example: driverless cars. Driverless cars have already created myriad legal problems – and the roll out has been minimal. The question is not even close to being solved. In the case of an accident, for example, does liability lie with the driver, even though they were not driving, or the manufacturer? It’s an ongoing debate, with fiercely opposing opinions.

 

It becomes even more complex when we consider punishment. AI can obscure the assignment of blame, making it easier to create plausible deniability and ultimately more difficult to hold individuals or organisations to account. Errors in AI implementation and integration can lead to horrific outcomes, costing lives, but punishment is difficult to enact, as culpability is difficult to place.

 

Another legal issue arises over AI rights. Machines are becoming increasingly life-like, possessing many qualities that we associate with humans, and ethical questions thus arise over our treatment of robots and machines. Should they be treated like animals of comparable intelligence? Should we acknowledge the suffering of robots? How should the legal system respond?

 

Debates rage over AI rights. The question is not whether AI should have the same rights as humans, though some have made that argument, but rather how to implement reasonable rights, justified by the rule of law and moral principles. As with much of the above, the decisions depend on the regulations enacted at a national and international level. It is clear, though, that the legal system will need to find ethical solutions that mitigate the risks and amplify the benefits of AI.

 

 

Climate change

 

Environmental damage has become an increasing concern, especially in the world of tech. We’ve written previously about the paradox around tech and the environment, showing that tech is both part of the problem and part of the solution. And AI sits neatly within that paradox.

 

According to the Council on Foreign Relations, for example, training a single AI system can emit more than 250,000 pounds of carbon dioxide. The use of AI across all sectors emits carbon dioxide at a level akin to the aviation industry.

 

According to a study carried out by the University of Massachusetts in 2019, developing an AI model for natural language processing can entail energy consumption equivalent to the emission of 280 tons of carbon dioxide – the energy cost of 125 round trips between New York and Beijing. And all of that is particularly concerning when you consider that, according to a recent OpenAI study, the amount of compute used to train the largest AI models doubles every three and a half months. That could cause huge problems in the future.

 

But AI also contributes hugely to tackling climate change. AI self-driving cars, for instance, may reduce emissions by 50% by 2050 by identifying energy-efficient routes. Employing AI in agriculture produces higher yields, avoiding waste and supporting local economies. AI-driven monitoring systems can increase the accountability of governments and other relevant bodies, ensuring they act in accordance with environmental standards.

 

A recent report estimated that initiatives like using AI to improve the efficiency of electric grids can help to significantly reduce overall emissions.

 

And, on top of all that, AI can help deal with the consequences of climate change, with AI-driven data analysis uncovering and predicting harsher weather conditions, helping form better reactions.

 

All of the above once again shows that the success of AI comes down to making the right ethical choices. AI raises economic, political, social, legal, and environmental dilemmas, and the future of civilisation will depend on making the right decisions within that framework. The decisions will depend on national and international governments working together to make AI work for us all.

 

 

The regulation of AI

 

Ethical questions lead us to consider regulation. National and international regulatory frameworks are necessary to amplify the benefits and minimise the risks of AI. AI must remain human-centred: AI should serve people, not the other way around. Legislative frameworks around AI must operate within the rule of law, ensuring consistent application, transparency, and accountability.

 

AI is difficult to regulate, partly because it is moving at such a rapid pace and partly because of disparate opinions on many of the above ethical issues. In addition, many decision-makers across the world are simply unaware of the potential and risks associated with AI, or of the ethical dilemmas it poses. Thus, the absence of regulation stems from delay, conflict, and confusion.

 

But there has been some progress. As cited in The Conversation, Australia has established the National AI Centre to develop the nation’s AI and digital ecosystem. Under that umbrella sits the Responsible AI Network, which aims to drive responsible practice and provide leadership on laws and standards, which other countries may choose to follow. But there is still no specific regulation governing AI and algorithmic decision-making, with the government opting for a light-touch approach.

 

The U.S. has adopted a similarly light strategy. Lawmakers have shown little enthusiasm for regulating AI and – like many countries across the world – they have attempted to regulate AI using legislation or regulation that already exists. The U.S. Chamber of Commerce has called for regulation of AI to ensure that it doesn’t hinder growth or threaten national security, but no action has been taken.

 

The British government boasts a pro-innovation approach to AI regulation. Its policy paper outlines six cross-sectoral AI governance principles and confirms that the British government, like most governments, has no plans to introduce new legislation to regulate AI. The principles – which include ‘Ensure AI is used safely’ and ‘Embed fairness into AI’ – add little to the conversation and nothing to the legal framework. They aid self-regulation but serve no other purpose.

 

Perhaps the most progressive AI legislation belongs to the EU, though its Artificial Intelligence Act has not yet been enacted. The AI Act proposes three risk categories for AI, assigning ‘unacceptable risk’ to systems that may face bans, ‘high risk’ to tools that demand specific legal oversight, and ‘no risk’ to applications that may be left largely unregulated. Critics suggest that the legislation has loopholes and exceptions, but ultimately the AI Act seems progressive. It at least offers a clear regulatory position on AI, unlike most other countries around the world.

 

It is worth mentioning the Recommendation on the Ethics of Artificial Intelligence, adopted by 193 United Nations Member States. The recommendation is the first global standard-setting instrument on the subject and aims to protect human rights and human dignity, providing an ethical framework from which countries can act. The instrument is supposed to encourage countries to build strong respect for the rule of law in the digital world and promote effective governance of AI.

 

It is a progressive framework, one that may inform regulation in the future. But, as we have seen, AI remains largely unregulated. The question of the future of AI will depend on innovation, depend on growth, depend on how the new tech is applied in the real world.

 

But it will depend, too, on the ethical arguments and the regulation governments enact to attempt to mitigate AI risks and amplify AI benefits. It will also depend on collaboration, as many of the most pressing challenges posed by AI are borderless and may well depend on universal standards that traverse national politics.

 

 

Final words on AI

 

Growth of AI is inevitable. The question of the future revolves around the direction of growth. AI should be used to make our lives better, to improve our standards of living, to support the people who need support.

 

We should take heed of the abovementioned ethical dilemmas, effectively regulate to maximise benefits and mitigate risks, work together on an international scale, and ensure that AI works to our collective gains.

 

The success or failure of AI – the machinery that boasts human-level intelligence – still depends on human decisions. And it is vital that we make the right ones.

