In recent years Artificial Intelligence (AI) has gone from being the preserve of science fiction plots to a reality in our everyday lives. From major breakthroughs in medical science to concerns about the ways in which social media is driving disinformation and division, automation and algorithms are reshaping how we live, work and communicate. As a result, we are increasingly aware not only of the opportunities this technology brings, but also the risks it poses.
So what does this mean for the charity sector?
What is AI?
When people currently talk about AI, what they generally mean is something involving machine learning (ML). This refers to a class of algorithms that are able to train themselves (or “learn”) to perform set tasks through a process of repeated trial, error and self-modification using large data sets.
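The core loop described above - make a guess, measure the error, adjust, repeat - can be illustrated with a deliberately tiny sketch (all data and numbers here are invented for illustration; real ML systems use vastly larger data sets and far more complex models):

```python
# Toy illustration of the "trial, error and self-modification" loop:
# a model starts with a guess for a single weight and repeatedly
# nudges it to reduce its prediction error on a small data set.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs paired with targets (y = 2x)

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.01  # how big each corrective nudge is

for step in range(1000):
    for x, target in data:
        prediction = weight * x              # trial: make a prediction
        error = prediction - target         # error: compare with the truth
        weight -= learning_rate * error * x  # self-modification: adjust

print(round(weight, 2))  # the learned weight converges towards 2.0
```

The same basic idea - iteratively adjusting internal parameters to reduce error on training data - underlies systems with billions of parameters rather than one.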
“Artificial intelligence” is not necessarily the most helpful terminology. Not only does it conjure unrealistic images of robots with human-level intelligence, but often things that are cited as examples of AI are really nothing of the sort.
The Ofqual grading fiasco last year, for instance, was caused by an algorithm, but one that merely automated existing, fairly basic human decision processes. It wasn’t AI in any meaningful sense.
The existing picture in the charity sector
Charities are generally not early adopters of emerging technology, and there is a range of barriers - from lack of technical expertise to low risk appetite. So it is perhaps unsurprising that the impact of ML on the charity sector has so far been relatively limited. Such impact as there has been falls into two main areas.
The first is the use of ML to produce new predictive models. This has happened primarily in areas where there are ready data sets and willing technology partners, with medical research being the most notable to date. We have seen partnerships like that between Parkinson’s UK and Benevolent AI, or that between the British Heart Foundation and the Alan Turing Institute, aimed at developing new methods of earlier identification and intervention for life-threatening diseases.
The second area in which AI is starting to have an impact on the charity sector is the use of automated chatbots and conversational assistants. Some organisations are using their own chatbots to raise awareness among supporters, like charity: water’s Walk With Yeshi, which used Facebook Messenger to give users information about the life of a young Ethiopian girl who has to collect water every day. Other chatbots, meanwhile, are being used to deliver services, such as Is This Ok?, which offers a chat advice service to teenagers at risk of criminal or sexual exploitation to help them find appropriate support quickly and discreetly. We are also beginning to see commercial providers enabling donations through voice interfaces (such as Amazon Alexa or Google Home), which may open up lucrative new opportunities for fundraising.
But how else might AI affect charities in coming years, for better or worse?
A lot of the early wins for AI in the commercial world have been in finding ways to automate “back office” processes. The application of ML has broadened the field of “Robotic Process Automation” (RPA) to include many tasks that have traditionally required human skills, such as image recognition or interpreting natural language, thereby increasing the scope for companies to reduce costs and free up human workers’ time. Many banks, for instance, now use image recognition algorithms to allow customers to deposit cheques using their phones; CV-screening algorithms are commonplace in HR; and a wide range of AI-powered translation and transcription tools are being used to facilitate events and meetings. These tools are often available in relatively low-cost forms, so there is certainly a lot more scope for charities to benefit from them.
As well as recognising images, speech and other types of content, AI systems are now also able to generate them. This may be by modifying existing content, or by creating it from scratch. Many will already be aware of the emergence of “deepfakes”: algorithmically-generated video or audio content that purports to show a real individual saying or doing something that has never in fact happened.
This has raised widespread issues: in addition to a worrying trend towards using deepfakes to create pornographic content depicting women without their knowledge or consent, many are concerned about the potential for this technology to be used to create disinformation that could influence public opinion and political discourse.
If these concerns can be mitigated, however, one benefit of generative AI may be to lower the cost of materials for marketing and communications, such as music or imagery involving people, as it will be possible to create them without the need for human composers or models.
Natural Language Processing & Collective Intelligence
Natural Language Processing (NLP) refers to the application of ML systems to the task of interpreting and producing human languages. This is a major growth area right now, and the capabilities of NLP systems have increased dramatically in the last few years. Most famously, the GPT-3 text generation algorithm has garnered widespread attention for its ability to produce written material that is often indistinguishable from human work.
But in addition to generating natural language, ML systems are now increasingly good at interpreting it too. This is a particularly powerful application, as it has the knock-on effect of vastly expanding the range of data that can be usefully harnessed. Traditionally, applying an algorithm required a “structured” data set in which entries had been formatted and labelled in the correct way (often through laborious human effort). NLP now allows much of that process to be automated, so that unstructured data in the form of everyday language can easily be turned into something a computer can understand and use. This means that media reports, social media and similar sources can now be used as data sets. The startup Signal, for instance, is using NLP to measure how well companies are perceived to meet ESG standards (which assess environmental and social responsibility and good governance, and are used to help investors make sure their money is used ethically). The AI does this by drawing on millions of news articles, allowing companies to benchmark themselves and enabling investors to track ESG performance on a daily basis.
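The basic move - turning free text into structured, comparable data - can be sketched in miniature, loosely in the spirit of the ESG-monitoring example. The headlines, companies and keyword lists below are all invented; real NLP systems use trained language models rather than simple keyword counts:

```python
# Toy sketch: turn unstructured text (headlines) into structured data
# (a per-company score) by tallying positive and negative keywords.

headlines = [
    ("Acme Corp", "Acme Corp praised for cutting emissions"),
    ("Acme Corp", "Acme Corp fined over pollution breach"),
    ("Widget Ltd", "Widget Ltd launches renewable energy programme"),
]

positive = {"praised", "renewable", "cutting"}  # invented keyword lists
negative = {"fined", "pollution", "breach"}

scores = {}
for company, text in headlines:
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    scores[company] = scores.get(company, 0) + score

print(scores)
```

Crude as it is, this shows why unstructured sources suddenly become usable as data: once text is mapped to numbers, everything downstream (benchmarking, tracking over time) is ordinary data analysis.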
As well as harnessing existing natural language data sources, NLP is also a key enabler of what is termed “collective intelligence”. This describes a range of approaches which seek to harness the power of drawing knowledge from large groups of people (sometimes termed the “wisdom of crowds”) to make predictions or identify innovations and ideas. The role of NLP is to automate the process of capturing inputs, and thus make it feasible to engage a sufficient number of people in the process. Collective intelligence is thought by many to have huge potential and is being explored across the public, private and civil society sectors. The social innovation funder Nesta, for instance, has established a Centre for Collective Intelligence Design to “explore how human and machine intelligence can be combined to develop innovative solutions to social challenges”.
NLP could have a range of powerful applications in the charity world - particularly when it comes to grantmaking. It could, for instance, enable funders to move away from rigid form-filling application processes (which are known to disadvantage smaller organisations and marginalised communities) by using NLP to allow applications to be made in a wider variety of forms using normal language (either in writing, or even verbally). More radically, perhaps, it could shift the entire balance of the grantmaking model: by using NLP to capture information from social media and other sources, or to harness collective intelligence input, funders would potentially be able to identify “under-the-radar” groups and organisations working on particular causes or in particular local areas and reach out to them (rather than waiting for them to make a grant application).
Patterns & Predictions
One area where current ML systems excel is in identifying patterns in big data, which can then be used to make predictions. As highlighted earlier, a number of charities are involved in efforts to do this in particular cause areas - like using ML trained on medical images to develop new ways of spotting extremely early warning signs of cancers and other conditions. But it might also have applications more generally in the charity sector. For instance, it might be possible to apply ML to data on existing grantmaking to identify patterns and predictive measures of success that could be used to inform future funding.
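A minimal, entirely hypothetical sketch of the grantmaking idea: group past grants by some feature and compare success rates. The data and the feature here are invented, and a real system would use far richer features and a proper statistical or ML model rather than a single comparison:

```python
# Toy sketch: look for a predictive pattern in (invented) historical
# grant data by comparing success rates across a single feature.

past_grants = [
    {"prior_funding": True,  "succeeded": True},
    {"prior_funding": True,  "succeeded": True},
    {"prior_funding": True,  "succeeded": False},
    {"prior_funding": False, "succeeded": True},
    {"prior_funding": False, "succeeded": False},
    {"prior_funding": False, "succeeded": False},
]

def success_rate(grants, prior):
    subset = [g for g in grants if g["prior_funding"] == prior]
    return sum(g["succeeded"] for g in subset) / len(subset)

with_prior = success_rate(past_grants, True)      # 2 of 3 succeeded
without_prior = success_rate(past_grants, False)  # 1 of 3 succeeded
print(round(with_prior, 2), round(without_prior, 2))
```

The point is not the arithmetic but the principle: once historical grant data is structured, patterns like this can be surfaced systematically and tested for predictive value.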
Looking slightly further ahead, as more and more real-world objects come to have ‘digital twins’ (online versions of themselves that replicate their features and properties exactly), new opportunities for experimentation, modelling and making data-driven predictions will emerge. We have already seen this happen in the form of the major advances that ML has recently facilitated in our understanding of protein folding (which is vital to many areas such as drug discovery, but which was traditionally thought to be almost impossibly difficult). Is it unrealistic to expect that, as we become better able to model complex social or environmental systems, we may likewise be able to test interventions digitally to find the best way of addressing issues, thus radically altering the nature of experimentation and innovation in the philanthropic world?
The wider relevance of AI for charities
There are many ways in which the various actors within the charity sector could potentially harness the power of AI to improve their efficiency and impact, as outlined above. However, the growing prevalence of the technology in other industries and sectors means that even those who do not choose to engage actively will increasingly find that their organisation and the people or communities they serve are affected in a variety of ways.
The growing use of conversational interfaces, for instance, is likely to change the nature of how we find and access information over the internet, so organisations that have had to figure out Search Engine Optimisation (SEO) will now have to get their heads around Voice Search Optimisation (VSO). Financial services firms, meanwhile, are increasingly using algorithms to assess eligibility for financial products, or to determine risk. Charities may in the future find themselves on the wrong side of automated decisions that affect their ability to get bank accounts or to send and receive money.
A major concern about AI is that the decision-making algorithms used are often proprietary, so we do not know how decisions are made – they are “black boxes”. It can be almost impossible to ascertain why a decision has been made or to challenge it after the fact.
For charities this is problematic. Automated financial decision-making systems have a long history of disadvantaging charities, purely because they do not look like profit-making companies and therefore do not fit well into the banks’ standard models. If these systems grow more powerful, charities may well face growing systemic disadvantage.
Examples like this highlight the potential risks that come from allowing automated systems to exert so much control over our lives. Many people, for instance, are now aware of the dangers of “algorithmic bias”. Where ML algorithms are trained on data sets that are not sufficiently rich, or which reflect historical biases around factors like race or gender, they come to exhibit those same biases (and often amplify them) over time. As charities consider how they can harness AI, how it might affect them as organisations, and how it might affect those they work with, they need to understand these kinds of challenges and what can be done to address them.
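The mechanism by which bias is inherited can be shown with a deliberately simple toy model. The data below is invented, and the “model” is nothing more than each group’s historical approval rate - but that is precisely the point: a system trained only on skewed history faithfully reproduces the skew:

```python
# Toy demonstration of algorithmic bias: a naive model that learns
# each group's historical approval rate will reproduce whatever bias
# the (invented) training data contains.

historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": tally each group's historical decisions.
totals, approvals = {}, {}
for group, approved in historical_decisions:
    totals[group] = totals.get(group, 0) + 1
    approvals[group] = approvals.get(group, 0) + int(approved)

def predict(group):
    # Approve whenever the group's historical approval rate is at least 50%.
    return approvals[group] / totals[group] >= 0.5

print(predict("group_a"), predict("group_b"))  # the historical skew persists
```

Real systems are far more complex, but the failure mode is the same: if the training data encodes past disadvantage, the model's “optimal” behaviour is to perpetuate it.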
This points to a wider truth: any technology as powerful as AI will bring significant unintended negative consequences alongside the positive opportunities it creates. Charities and funders, if they want to harness AI in their own work, therefore need to ensure that they are alive to the challenges it may pose and able to mitigate them appropriately. And even those organisations that have little interest in engaging actively with AI need to be aware of how it may affect them - and more importantly the people and communities they serve - so that they can speak up effectively on their behalf in the wider debate about the societal impact of AI. Only in that way can we hope to ensure that the technology is developed in a way that brings benefits for all, without causing harm.