Algorithmic systems are everywhere. How can charities and other social change organisations better understand the risks, impact and bias of algorithms?
This guest piece is contributed by Giselle Cory, Executive Director of DataKind UK, and Jenny Brennan of the Ada Lovelace Institute.
An algorithm is a series of instructions that describe how to complete a task. It doesn’t have to be undertaken by a computer - a recipe for tonight’s dinner is an algorithm too.
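To make the idea concrete, here is a minimal sketch in Python - the steps and timings are invented purely for illustration - showing a dinner recipe written as a series of instructions a computer could follow:

```python
# A recipe is an algorithm: a fixed series of instructions for completing a task.
# The steps and timings below are invented purely for illustration.

def cook_pasta_dinner():
    steps = [
        "Boil a large pan of salted water",
        "Add the pasta and cook for ten minutes",
        "Drain the pasta",
        "Stir through the sauce and serve",
    ]
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

cook_pasta_dinner()
```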
Most data science projects will use algorithms in one way or another. Here are some examples from DataKind’s projects with social change organisations:
When we say algorithmic system, we’re referring to a project like this: a system that uses one or more algorithms, and usually involves both computers and people.
Once you’ve started using an algorithmic system, it can be tricky to understand what impact it is having on your service, its beneficiaries and wider society. For example, in the case of The Welcome Centre food bank, the algorithm might tell you that a particular client is likely to need extra support.
But how do you know it is doing so correctly, rather than directing support away from those who need it or having unintended consequences? There are a few different approaches you can take to understand this impact. These are summarised in a recent report, Examining the Black Box, by the Ada Lovelace Institute and DataKind UK.
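As a concrete (and entirely hypothetical) illustration of what that checking can involve, the short sketch below compares a system’s "needs extra support" flags with what staff later recorded. The records and field names are invented for the example; they are not from The Welcome Centre’s system.

```python
# Hypothetical example: compare the algorithm's flags with outcomes recorded later.
# Each record pairs the system's prediction with what staff actually found.
records = [
    {"predicted_extra_support": True,  "actually_needed_support": True},
    {"predicted_extra_support": True,  "actually_needed_support": False},
    {"predicted_extra_support": False, "actually_needed_support": True},
    {"predicted_extra_support": False, "actually_needed_support": False},
]

flagged = [r for r in records if r["predicted_extra_support"]]
needed = [r for r in records if r["actually_needed_support"]]

# Of the people the system flagged, how many really needed extra support?
precision = sum(r["actually_needed_support"] for r in flagged) / len(flagged)
# Of the people who really needed support, how many did the system flag?
recall = sum(r["predicted_extra_support"] for r in needed) / len(needed)

print(f"Flagged clients who needed support: {precision:.0%}")
print(f"Clients needing support who were flagged: {recall:.0%}")
```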
In this article, we give an outline of these approaches and examples of where they’ve been used.
Before we get stuck in, it’s worth considering that these approaches are still being developed and working their way into mainstream data science processes. There’s still a lot to learn, and much to be gained by learning together. If you choose to make use of these approaches, consider sharing your experiences with other social change organisations so we can develop shared approaches and best practices for the social sector.
What do the regulators say?
A good place to start is to look at the guidance set out by regulators. Regulators across the world want to put in place rules and frameworks for organisations that use algorithms, and to have ways of checking that these rules are being followed.
At the moment, this type of regulation is nascent. A key UK regulator for the use of algorithmic systems is the Information Commissioner’s Office (ICO), which is responsible for information rights and data protection. It has so far published draft guidance for an AI auditing framework, but nothing has been finalised yet.
Nonetheless, this draft guidance gives a sense of what regulators are hoping to see, such as:
What are the risks?
Ideally, you would consider the risks of your algorithmic system before putting it into use. At that stage, you could carry out an algorithmic risk assessment.
This assesses the possible societal impacts of an algorithmic system before it is in use (with ongoing monitoring often advised). An algorithmic risk assessment looks at the whole system and how it interacts with the people it affects, in order to:
The AI Now Institute has recommendations for public sector organisations, which could be applied to charities and social enterprises too. These assessments are now required by the Canadian Government.
One of the challenges here is that we don’t know much about how these algorithmic risk assessments work in practice - and even less so for charities. By trying out an algorithmic risk assessment and sharing your experiences, you can help others learn from them.
[Image: The Ada Lovelace Institute / DataKind UK report looks at four types of assessment for algorithmic systems]
What is its impact?
Now that you’ve looked at the potential risks and made the decision to move forward with the algorithmic system, it’s time to start using it. After it’s been running for a little while, that’s when you can start looking at its real-world impact. This is when you might consider working with researchers or social scientists to do an algorithmic impact evaluation. This looks to understand the societal impacts of an algorithmic system on the people and/or communities it affects, once it’s in use.
An example of this is Stanford’s ‘Impact evaluation of a predictive risk modeling tool for Allegheny County’s Child Welfare Office’ in which researchers looked at an algorithmic system created for use by social workers in children’s services and whether the outcomes for children were better or worse after the system was introduced.
Check for bias
One common worry is the risk of harmful impacts from bias in algorithmic systems. In data science, bias refers to systematic error. For example, an algorithmic system might be much more likely to produce incorrect results for a particular group. Bias creeps into automated and human systems everywhere, and it can have far-reaching effects on society.
So how do you identify bias in your algorithmic system? A bias audit is a targeted assessment of whether a system shows bias in a particular way - such as producing different results for people of different genders or ethnicities. For instance, the ‘Gender Shades’ bias audit looked at facial recognition systems and found that they were worst at identifying whether a face was a man’s or a woman’s when the photo was of a darker-skinned woman.
Bias audits are not necessarily comprehensive, but if you think there is a risk of a particular bias - in your own system, or in a system made by someone else - that affects the people your charity works with, a bias audit is a way to test for it.
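As an illustration only, the sketch below shows the core idea of a simple bias audit: measuring how often a system gets things wrong for different groups and comparing the rates. The group labels and records are invented; a real audit would use your own system’s predictions and carefully chosen, lawfully collected demographic data.

```python
from collections import defaultdict

# Hypothetical predictions from an algorithmic system, with the outcome that
# actually happened and a demographic group for each person (all invented data).
records = [
    {"group": "A", "predicted": True,  "actual": True},
    {"group": "A", "predicted": False, "actual": False},
    {"group": "A", "predicted": True,  "actual": False},
    {"group": "B", "predicted": False, "actual": True},
    {"group": "B", "predicted": False, "actual": True},
    {"group": "B", "predicted": True,  "actual": True},
]

# A simple bias audit: compare the error rate for each group.
errors = defaultdict(int)
totals = defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    if r["predicted"] != r["actual"]:
        errors[r["group"]] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"Group {group}: error rate {rate:.0%} over {totals[group]} cases")

# Large, persistent gaps between groups are a sign of the systematic error
# (bias) described above and would warrant a closer look.
```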
Conclusion
Algorithmic systems can help organisations to better support people and communities, by aiding decisions on how to direct resources, ensuring help gets to where it is most needed. But - as with any tool used to aid decision-making - these systems can also do harm if not properly evaluated and monitored.
Taking the technical steps needed to implement and evaluate an algorithmic system can be challenging - but there is help available! DataKind UK supports social change organisations who wish to explore, implement and/or evaluate an algorithmic system (or any form of data science).
Our community of pro-bono data scientists offer their data expertise to charities, social enterprises and public sector bodies - for free. If you’d like to get data science support, please get in touch on contact@datakind.org.uk.