Insights
We look at why regulating AI may prove to be far harder than many people expect
The use of artificial intelligence (AI) is skyrocketing as businesses, governments, and charities adopt the technology. Some of the applications are relatively trivial – AI is used to provide recommendations to Amazon customers for things to buy, or to generate content for charity websites, for example.
But AI can also be used in more serious ways – helping doctors to diagnose diseases or helping charities with fundraising initiatives.
More sinisterly, AI can also be used by governments to keep tabs on citizens using facial recognition, or to predict which individuals are likely to commit a crime before they have done so.
Since AI is not going to go away – once invented, a technology cannot be “uninvented” – many bodies, including the EU and the UN, consider some form of regulation essential.
There are a variety of reasons why regulation may be desirable, including:
Ethical considerations: Ensuring that privacy and human rights are respected, and that AI systems do not incorporate bias or discrimination against particular groups of people.
Accountability and liability: To make it clear who is responsible in the event of accidents, injuries, discrimination, or errors caused by AI systems.
Transparency and explainability: Regulations that promote transparency and explainability can help build trust and ensure fairness in applications like finance, law, and health care.
Safety and risk mitigation: Regulation of AI in fields where there is risk of significant harm to humans, countries, or economies is seen by many as essential to avoid an “AI takes over the world” scenario.
Yet despite AI having been around for many years, and despite its explosion in capabilities recently, there is little in the way of regulation of AI at the moment. Nor does it seem that much regulation is on the cards.
The British government, for example, has no plans to introduce new legislation to regulate AI, and the United States government is adopting a similar stance. Both countries appear content, for the moment, to try to control the use of AI through existing privacy regulations and other laws.
A notable exception to this is the EU, which has prepared an AI Act that is likely to become law by the end of 2023. It concentrates on regulating the identifiable risks associated with the use of AI.
Essentially it classifies four types of AI risk – unacceptable, high, limited, and negligible – and prescribes the course of action that must be taken for each of these four types.
Unacceptable usage – such as using AI to identify people who are likely to commit a crime before they have done so – is banned.
High-risk applications, such as using AI with financial, health, and other personal data, will be heavily regulated to mitigate the risks, while limited-risk applications such as chatbots will be subject to very light-touch regulation. This will likely include informing users that they are interacting with an AI, to promote transparency and trust.
The UN has not gone as far as the EU, preferring to publish a Recommendation on the Ethics of Artificial Intelligence, which aims to protect human rights and human dignity and to provide an ethical framework from which countries can act.
Despite the EU’s attempt to regulate AI, it’s unlikely that a consistent global regulatory framework will emerge any time soon. That’s because numerous challenges must be overcome before that can happen.
Below we look at the major stumbling blocks to more widespread introduction of AI regulation.
Rapid advances in AI technology: The pace of AI development – for the time being at least – is much greater than lawmakers’ ability to draw up regulations. That means regulations are likely to be out of date by the time they come into force. Any regulations that are introduced must therefore be flexible and adaptable without being too vague to be useful. It is likely that more regulation will be introduced when the rapid pace of development slows and the technologies and applications of AI become more mature.
Lack of government expertise: Because AI technologies are evolving and improving quickly, it is extremely difficult for regulators to keep up with how they work, how they may be applied, and the potential risks they pose. Without sufficient expertise in the field of AI, the risk of drawing up impractical regulations becomes high.
Risk of stifling innovation: Overregulation can stifle innovation and slow down the development of AI technologies. Striking a balance between fostering innovation and safeguarding against risks is very hard for governments – especially when regulation in one country may result in expertise and jobs being transferred to another. Stifling innovation can also put a country at a disadvantage from a security and defence perspective.
Global consistency: Differing regulatory frameworks and standards can create confusion and hinder collaboration on AI-related issues. The temptation may therefore be for individual countries to do nothing until a global consensus emerges.
Unintended consequences: Regulations designed to address specific AI risks may have unintended consequences, stifling beneficial applications or hindering AI research. Anticipating and mitigating these unintended consequences is a significant challenge and it is not clear that the EU’s AI Act, for example, will enable the EU to reap all of the potential benefits of AI.
What all of this means is that, for the foreseeable future at least, AI will be a bit like the Wild West. That’s not to say that organisations can do whatever they like with AI, as data privacy and other laws still apply. But it may be a few years yet before the technology is tamed and more stringent regulations on how AI can be used begin to emerge.