Microsoft's AI for Good programme
is a major new global scheme to apply Artificial Intelligence (AI), machine learning and data science to solving humanitarian issues, advancing global sustainability, and amplifying human capability. Microsoft provides the tech, resources and expertise to those with the ideas.
In our previous article, we looked at how Microsoft is working alongside researchers, startups, and non-profit organisations to help protect wildlife and the environment using data and AI.
We've also looked at a few of the groundbreaking AI for Good projects
they're supporting to improve the world for those with mobility impairments, dyslexia and visual impairments.
And now Microsoft has announced a new pillar of its AI for Good programme - AI for Cultural Heritage - which will support and invest in AI projects that help preserve and celebrate culture and the arts around the world.
This time we spoke with Microsoft experts Andrew Quinn, Global Digital Advisor, and Kate Rosenshine, Head of Data and AI Cloud Solution Architecture, Finance Services, for the inside story on how charities can start to work with tech companies like Microsoft to bring their ideas to life with AI.
> See also: Five amazing tech innovations using AI and the cloud
Charity Digital News: First off, what's the Microsoft AI for Good project all about?
At Microsoft, we have invested over $100 million in our AI for Good initiative to maximise the opportunities of AI for humanitarian, environmental and accessibility causes. As part of this initiative, Microsoft for Startups has been helping many young companies develop technology that can be used for social good. The inaugural cohort of 11 businesses in the AI for Good programme has launched, helping young companies develop technology that enables people with disabilities to safely navigate the areas they live in.
One of the recent graduates, WeWalk, has created a product that fits onto any cane and uses ultrasound sensors to warn visually impaired people of high obstacles such as tree branches. The device can also be paired with a mobile phone for navigation and other digital features.
WeWalk emerged out of a hackathon two years ago and has recently completed an initial production run, with roughly 1,500 units now being used across the world. Another example is Access Earth, which has created a free app that lets users find places that suit their own accessibility needs, based on the reviews of people who have already been there.
> See also: AbilityNet Tech4Good award winners revealed
CDN: What are some of the areas that you've seen AI make the most impact?
On a global scale, there are some truly transformative applications. In the agricultural space we've been working with the FarmBeats project, which looks at how to grow crops better using data. Sensors can measure the moisture levels and acidity (pH) of the soil, helping farmers understand the optimal times to water or fertilise crops for the best yield.
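To make the idea concrete, here is a minimal sketch of the kind of rule-based advice a FarmBeats-style system might derive from soil sensors. The field names, thresholds and function are illustrative assumptions for this article, not FarmBeats' actual API.

```python
# Hypothetical sketch: turn averaged soil-sensor readings into simple advice.
# Thresholds are illustrative, not agronomic recommendations.

def irrigation_advice(readings):
    """Return a list of advice strings from averaged soil readings."""
    moisture = sum(r["moisture"] for r in readings) / len(readings)  # % water content
    ph = sum(r["ph"] for r in readings) / len(readings)

    advice = []
    if moisture < 20.0:  # illustrative dryness threshold
        advice.append("water now")
    if ph < 5.5:
        advice.append("soil too acidic: consider liming")
    elif ph > 7.5:
        advice.append("soil too alkaline: consider an amendment")
    return advice or ["no action needed"]

readings = [
    {"moisture": 15.2, "ph": 5.1},
    {"moisture": 18.4, "ph": 5.3},
]
print(irrigation_advice(readings))
# ['water now', 'soil too acidic: consider liming']
```

A real deployment would replace the fixed thresholds with models trained on yield data, but the core loop - sense, aggregate, advise - is the same.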
Agriculture is a fascinating one. In Guatemala, big banks are starting to work with us to support farmers, which in turn supports the economy and discourages people from emigrating to the US and other places to find work. By using AI they can now advise the farmers where they might want to grow their crops for the best yield, which crops to grow, and even how saturated the market might be in certain areas, so they can get the best price for what they grow.
It's examples like this that really show the circular economy that can come about and the kinds of partnerships that can happen.
> See also: Artificial intelligence - the future of the charity sector
CDN: What ethical challenges do charities and social enterprises need to be aware of around the use of AI?
For many charities and social enterprises, who may be working with vulnerable populations, it is important to have an ethical framework to ensure they do not inadvertently harm the populations they are trying to serve. Establishing a clear code of ethics, commitments, and values around AI's development and use is foundational to ensuring that AI is created in a way which fosters trust.
There was a well-known example of an AI that was trained to answer whether a picture of an animal was a husky or a wolf. The question was: can AI detect the difference? It was getting it 100% correct, and what the researchers then realised was that the AI had become a snow detector - it wasn't detecting huskies or wolves, but only picking up on the fact that the huskies were always against a snow backdrop. It was just detecting snow!
Apply that to humans and there's a risk of bias. If you're looking at a housing application, for example, you need to ask 'are we over-indexing people based on their sex, their previous economic record, or other factors?' - and at least then you can make your own judgement and acknowledge the bias. So part of being ethical means complete transparency about where the data has come from and how the algorithms work.
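One simple way to start the over-indexing check described above is to compare outcome rates across groups in a decision dataset. The sketch below uses an invented housing-application dataset; the field names and groups are assumptions for illustration, not a complete fairness audit.

```python
# Minimal sketch of a disparity check: approval rate per group,
# to surface possible over-indexing on a sensitive attribute.
from collections import defaultdict

def approval_rates_by_group(records, group_key):
    """Approval rate per value of group_key in a list of decision records."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        approved[r[group_key]] += int(r["approved"])
    return {g: approved[g] / totals[g] for g in totals}

# Invented example data for illustration.
applications = [
    {"sex": "F", "approved": True},
    {"sex": "F", "approved": False},
    {"sex": "M", "approved": True},
    {"sex": "M", "approved": True},
]
print(approval_rates_by_group(applications, "sex"))
# {'F': 0.5, 'M': 1.0}
```

A large gap between groups does not prove bias on its own, but it flags exactly the kind of question - 'why is this group treated differently?' - that transparency about data and algorithms is meant to answer.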
> See also: How AI is helping make the world greener
CDN: How can charities without big budgets or access to tech partnerships start exploring AI?
I think charities are in just as strong a position as startups and companies, because they're probably speaking to lots of people and making many different connections. It just takes a bit of inquisitiveness and thinking outside of the box to consider the data they might have access to and be able to use.
The questions they need to ask are: what do we know about this situation, what data do we have right now, and what interventions would we want to apply in an ideal world? Then our team can come in and start to remove that technical barrier where a charity might think 'this can't be done.'
Microsoft has the money, the core technology and the expertise to apply to a vast range of problems. We just need charities to know the door is open and that we can help them work out that middle ground. It all starts with charities thinking outside the norm and asking "I wonder if we could do this?"