AI has been called the defining technology of our age. But what exactly is its role in the third sector? And how can the average small charity benefit?
The idea of an artificial mind that can think by itself has always loomed large in the human imagination - the ancient Greeks told myths of mechanical men. As an academic discipline, the field of Artificial Intelligence (AI) has been studied since the 1950s, after computer scientist Alan Turing first asked, “can machines do what we, as thinking entities, do?” But in 2018, AI is firmly out of the realms of science fiction or academic theory.
Last week UK prime minister Theresa May stood up in her keynote address to the World Economic Forum and announced her ambition to establish the UK as a “world leader” in AI, alongside plans for its ethical oversight.
These days, voice assistants like Google Now and Microsoft’s Cortana are in every smart device and computer, and smart speakers like the Amazon Echo and Google Home are selling in their tens of millions.
Voice-based virtual assistants might be the first thing that comes to mind when we think of AI today, but the term actually applies to a broad set of technologies programmed to mimic the cognitive function of human minds in many different ways.
Machine learning is an enabler for AI that gives computers the ability to learn and solve problems by themselves, improving as they receive more data. Systems are ‘trained’ on an initial set of data, and the algorithm is then left to refine itself over time.
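To make the ‘train, then improve’ idea concrete, here is a minimal toy sketch in Python: a tiny perceptron learns to separate two groups of points from labelled examples. Everything here (the data, the learning rate, the task) is invented for illustration and bears no relation to any production system.

```python
# Minimal illustration of "training" in machine learning: a tiny perceptron
# adjusts its weights each time it gets an example wrong, so its answers
# improve as it sees more labelled data.

def train(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b from labelled ((x, y), label) samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x, y), label in samples:
            pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
            err = label - pred          # 0 when the guess was correct
            w[0] += lr * err * x        # nudge the model towards the answer
            w[1] += lr * err * y
            b += lr * err
    return w, b

def predict(model, point):
    (w, b), (x, y) = model, point
    return 1 if w[0] * x + w[1] * y + b > 0 else 0

# Toy training data: label 1 if the point sits above the line y = x, else 0.
data = [((0, 1), 1), ((1, 2), 1), ((2, 4), 1),
        ((1, 0), 0), ((2, 1), 0), ((4, 2), 0)]
model = train(data)
print(predict(model, (0, 3)), predict(model, (3, 0)))  # prints: 1 0
```

Real systems use far richer models and vastly more data, but the loop is the same: compare the prediction with the known answer, adjust, repeat.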
This technology is now everywhere – from the natural language processing behind the likes of Siri and Alexa, to facial recognition used in biometric security at airports, smarter Google searches and traffic prediction on Google Maps, recommendations on Netflix and Spotify, and arrival time and location estimation in Uber. Google even has AI built into its latest camera phone.
As well as front-of-house service delivery for most of the major tech companies, AI is also being used behind the scenes across most industries as a cost-effective and reliable way to do an enormous number of data-related tasks – everything from detecting fraud and calculating risk in insurance, to monitoring customer satisfaction and targeting people with social media or ad campaigns.
Technology analyst Forrester predicts up to 80% of firms will rely on ‘insights-as-a-service‘ in at least some capacity in 2018, using machine learning to process, trend and analyse data. Some of these systems are even being put to work solving the world’s most difficult social problems, such as sustainably managing resources, reducing traffic congestion, diagnosing a medical condition before a doctor can, and preventing the spread of diseases like HIV.
In the charity sector, one area where AI might soon be replacing humans is online advice – the equivalent of retail-sector chatbots like Amazon’s, which already answer customer questions online in place of a human customer service agent. We’re already seeing examples of innovative charities experimenting with this.
Right now, chatbots are talking to people about climate advocacy, arthritis, homelessness, childcare and ageing. While most chatbots are still in their infancy and not able to accurately imitate human-to-human conversation, the big tech companies such as Google, Microsoft, Amazon and Facebook have all been putting heavy bets on chatbot services, and the technology is likely to become dramatically more useful and intelligent over the next few years.
As this happens, AI is likely to become a common way of interacting with organisations. Technology analyst Gartner projects that more than 85% of customer interactions around the world will be managed without a human by 2020. As Rhodri Davies, Head of Policy and Programme Leader at the Charities Aid Foundation (CAF)‘s Giving Thought, explains: “Very few people are going to turn around and say they’d much rather have this charity service provided by a chatbot or robot. But actually, in the longer term, the more we become used to taking advice from AI in one form or another, the less weird it is going to seem in other contexts.”
In cases where there is a large amount of information buried in the pages of a website, charities could use text-based AI conversations to present that information in a way that is easier to navigate and interact with, saving time for human helpdesk operators. These could also be online 24 hours a day, so someone in crisis in the middle of the night would not have to wait until the next morning to speak to a human for advice. They would be given the advice they need and then signposted to a human who could help them further during office hours.
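The simplest version of such an advice bot is just keyword matching against a knowledge base, with a human signpost as the fallback. The sketch below is a hypothetical illustration – the topics, answers and office hours are all invented, and a real deployment would use a proper conversational AI service rather than this toy matcher.

```python
# A sketch of a text-based helper: match a visitor's question against a
# small knowledge base and, when the bot cannot help, signpost to a human
# adviser during office hours. All topics and answers are invented examples.
import re

KNOWLEDGE_BASE = {
    ("opening", "hours", "open"): "Our advisers are available 9am-5pm, Monday to Friday.",
    ("donate", "donation", "give"): "You can donate online at any time via our donations page.",
    ("benefits", "allowance"): "Our benefits guide explains what support you may be entitled to.",
}

def answer(question):
    # Lower-case and strip punctuation so "open?" still matches "open".
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, reply in KNOWLEDGE_BASE.items():
        if words & set(keywords):          # any keyword present?
            return reply
    # Fall back to a human, available during office hours.
    return "I can't answer that yet - an adviser will reply when our office opens at 9am."

print(answer("When are you open?"))
print(answer("Something complicated"))
```

Even this crude approach captures the division of labour the article describes: the bot handles the information already buried in the website, and everything else is handed to a person.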
In this way, AI technologies will augment human capabilities where appropriate with a combined approach. “In some contexts, people may still want certain information presented by a human being, even if it is found by an algorithm,” says Davies. “AI and automation throw a focus on what humans are good at, and on what systems that can crunch large quantities of data quickly are good at. Repetitive tasks, data tasks - most of the time the systems are just better at them, so there’s little point having humans doing those same things as AI gets more sophisticated.”
The kinds of AI data-processing tasks that could be most useful for charities include things like identifying the best grant sources, reaching potential donors online in more personalised and intelligent ways, conducting research and unlocking hidden trends in beneficiaries’ behaviour and other data in order to serve them better.
HAL from 2001: A Space Odyssey, The Terminator, I, Robot. It’s a well-known science fiction trope: humans create AI beings, which rebel against their creators and try to annihilate us. Professor Stephen Hawking has even warned us that this could be our ultimate fate, should we fail to keep intelligent machines in check. But the ethical considerations of AI today fall a lot closer to home. In the near term, it will mean ensuring that AI tools are used fairly and appropriately for everybody, and the responsibility falls squarely on the humans that use them.
In 2016, Microsoft had to withdraw its AI experiment, a Twitter-based chatbot called Tay, after it began to spew racist and inflammatory Tweets, parroting the sentiments of Twitter users. It may have been funny, but the experiment raised serious questions. It showed that putting any technology out into the world and expecting it to act morally of its own accord will not work: like any tool, AI is only as good as the people who create and use it.
The sophistication of AI is encouraging the development of high-stakes applications such as self-driving cars, automated surgical assistants and stock market trading. Because pretty much any task that involves processing large amounts of data will soon fall to AI, it is inevitable that this data-driven decision making will have some major ethical implications.
One of the big things charities need to be aware of, says Davies, is algorithmic bias. “Algorithms themselves don’t have entrenched social biases, but if they are not designed with those things in mind and you let them go to work on data sets that contain historical or statistical bias, then they start to not only demonstrate the same bias but demonstrate it even more starkly.” It seems like one step away from the dystopia portrayed in the film Minority Report, where police use technology to convict people for crimes before they happen - but in the US, this has already had negative consequences on real people’s lives.
An AI system (COMPAS) has been used in risk profiling, to forecast which prisoners are likely to reoffend if released from jail, informing decisions about bail and sentencing. These systems have been found to inaccurately identify black defendants as future criminals more often than white defendants. Researchers from the Alan Turing Institute are working on building better AI systems that prevent unfair discrimination by modelling how these sorts of incidents occur.
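The mechanism Davies describes is easy to demonstrate with a toy model. The data below is entirely synthetic – it mimics only the shape of the problem, not COMPAS or any real criminal-justice dataset: if one group was historically policed more heavily, its recorded rearrest rate is higher, and even the simplest ‘model’ trained on those records will flag every member of that group.

```python
# Toy demonstration of how bias in historical data flows straight into a
# model's predictions. The records below are entirely synthetic.

# Synthetic "historical" records: (group, was_rearrested). Group B was
# historically policed more heavily, so its *recorded* rearrest rate is
# higher even if underlying behaviour were identical.
history = [("A", True)] * 20 + [("A", False)] * 80 + \
          [("B", True)] * 40 + [("B", False)] * 60

def train_base_rates(records):
    """'Train' the simplest possible model: per-group rearrest frequency."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [r for g, r in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def predict_high_risk(rates, group, threshold=0.3):
    # Anyone from a group whose historical rate beats the threshold is
    # flagged - so every member of group B is labelled high-risk.
    return rates[group] > threshold

rates = train_base_rates(history)   # rates: A -> 0.2, B -> 0.4
print(predict_high_risk(rates, "A"))   # prints: False
print(predict_high_risk(rates, "B"))   # prints: True
```

The bias was in the data before any algorithm touched it; the model simply learns it, applies it at scale, and presents it back with a veneer of objectivity.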
Matt Moorut, Head of Digital and Marketing at Charity Digital, says: “The ethical issues surrounding use of machine learning in profiling are particularly serious because human oversight is further removed, and because it allows you to profile a lot more people in one fell swoop than a human can.
“A lot of charities deal with really sensitive issues, which makes these issues even more acute as the damage done can be especially harmful.
“The flip side is that the technology can markedly improve operational efficiency, so the answer isn’t to avoid its use. The key is to agonise over all the variables and truly understand all of the implications of any given project going in, to make sure that any potential unintended effects are removed.”
Resources like this one from the BBC News Labs provide a good starting point for charities to gain a deeper understanding of the implications of AI, so they can start to engage in meaningful discussions and stay one step ahead.
“There is a risk that if it is left up to the big, household-name charities to harness AI, this could tip the balance even more in their favour,” warns Davies. It’s already challenging for a smaller charity to rise above the noise and ‘algorithmic bias’ of platforms such as Facebook.
A typical automated newsfeed on Facebook tends to ‘funnel’ a user’s experience, recommending content based on what they have done before, what their friends have done, or what people similar to them have done. In the context of charity fundraising and philanthropy, this may already be having a negative effect on smaller, less well-known charities dealing with awkward or less-popular causes.
“Online marketing always favoured the larger players,” says Moorut. “The way Google ranks results is evidence of this and it’s apparent with organic reach on Facebook too. Small charities can still cut through though – they just need to have a really good handle on what makes them unique and who their audience is, and obsessively focus messaging on that.”
“Algorithms and online marketing aside though, AI is becoming more affordable and simpler to pick up and use, which means small charities can benefit just as easily as the big guys – maybe even easier in some ways, as they often don’t have the structural baggage that comes with being a long-established organisation. Of course, the real value you can derive from AI stems from having good, clean data, which is always a struggle – whether you’re big or small.”
A large charity looking to develop AI capabilities may have in-house developers, strategists and content producers to help. But the rise of ‘AI-as-a-service’ could soon create a more level playing field for smaller organisations to experiment with AI.
There are various entry points for AI at different skill levels, for those who want to begin building on their own or just buy out of the box. Salesforce has recently announced cognitive capabilities in its products, for organisations that are already using Salesforce CRM and want to start making their processes smarter.
Microsoft offers Machine Learning Studio - a suite of on-demand services that includes many of the components for organisations to get started with machine learning. It’s hosted in the cloud and works directly from Azure, so its customers can make use of their existing cloud and data assets. The company has also released a number of Cognitive Services APIs that provide developers with ways of adding features like speech recognition, facial recognition and language understanding into their own apps.
Similarly, Amazon have a suite of machine learning services that plug into Amazon Web Services, providing pre-trained machine learning services on a pay-as-you-go basis. And specialist agencies, such as non-profit machine learning consultancy Wood for Trees, can help bridge the gap between charities, their data, and machine learning capabilities.
Charities they have helped include Parkinson’s UK, who used predictive modelling to boost their cash giving by almost £500,000, and Dogs Trust, who have seen higher response rates and generated significant extra income by plugging intelligence into their fundraising campaigns.
As CAF‘s blog ‘The future of doing good’ shows, there are AI partnerships like this beginning to form all over the charity sector. In its early days, building these kinds of cross-sector relationships will be crucial for charities to harness the power of AI. And representative organisations like CAF are trying to encourage information sharing between charities, tech corporations, consultancies and others involved in the AI world.
“The charity sector needs to crack that nut of putting charities that are genuinely interested in this stuff together with the people who have the technical skills,” says Davies. “This is the key. And it might start at small scale, but most disruptive tech does. A lot of what is happening is coming from the startup side, just a few people in a bedroom somewhere, rather than large corporates. Why shouldn’t the same hold true for charities?"
Our courses aim, in just three hours, to enhance soft skills and hard skills, boost your knowledge of finance and artificial intelligence, and supercharge your digital capabilities. Check out some of the incredible options by clicking here.