AI can help you do some amazing things. But it carries two significant risks: that AI gives you the wrong answers, and that AI is used in ways that cause harm to individuals and society. For any organisation, validating its AI systems and monitoring for AI harms are essential aspects of AI governance.

These are key lessons from the development of AI across the commercial and public sectors. Groups such as SurvivAI, ValidateAI and The Operational Research (OR) Society’s AI Working Group have been formed to identify and disseminate best practice, helping AI users in all sectors mitigate these two risks.

However small or large your charity is, if you want to use AI then the first step is to establish effective AI governance structures and processes. These can be lightweight and agile, but they must be in place before AI applications are developed and deployed as part of business operations.

Building trust first with effective governance

AI governance is an essential aspect of building trust, managing AI-related risks, and ensuring that AI principles (such as fairness, accountability, and privacy) are implemented and adhered to.

When the GDPR was introduced, even the smallest of organisations (for example, allotment associations) had to make compliance arrangements, and this is a great place to start thinking about the responsible use of AI. Without data, AI is useless; AI ingests data and churns out decisions. These decisions might be fully automated as part of a charity’s business operations or they might be AI recommendations that are subject to human oversight. Thus, keeping data governance and AI governance arrangements closely coupled is not only efficient – it is essential. Charities – of any scale – must introduce effective governance structures and processes before embarking on their AI journey.
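
In practice, coupling the two can start very simply: for instance, an AI pipeline should only ever see records that donors have consented to being used in this way. The sketch below illustrates the idea in Python; the field names are hypothetical, not a prescribed schema.

```python
# A minimal sketch of coupling data governance to AI use: only
# records carrying the relevant consent flag reach an AI pipeline.
# The field names are hypothetical.
donors = [
    {"id": 1, "consented_to_ai": True, "gifts_last_year": 4},
    {"id": 2, "consented_to_ai": False, "gifts_last_year": 7},
]

def ai_training_set(records):
    """Data governance gate applied before any AI processing."""
    return [r for r in records if r.get("consented_to_ai")]

print(ai_training_set(donors))  # donor 2 is excluded
```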

Regular giving and legacy pledgers

Examples of using AI, or more specifically machine learning, can be borrowed from the commercial sector and adapted to fundraising and donor journeys. Many commercial organisations build ‘churn’ models to analyse which customers might stop a subscription or service, and the same techniques can be applied to regular givers, with those assessed as “more likely to stop giving” receiving a different communication plan. Another example: legacy pledgers are among the most committed charity donors and offer a vital source of income for many charities.

However, not everyone in a donor base is the right person to approach with a legacy ask; machine learning allows us to find people similar to previous legacy pledgers, based on their interactions with the charity.
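
To make this concrete, here is a minimal sketch of such a propensity model using scikit-learn. The donor features, the synthetic data and the 0.5 threshold are all hypothetical stand-ins for what a charity would derive from its CRM.

```python
# A minimal churn/propensity sketch using scikit-learn.
# Feature names, synthetic data and the threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical donor features.
X = np.column_stack([
    rng.integers(0, 24, n),   # recency (months since last gift)
    rng.integers(0, 12, n),   # frequency (gifts in the last year)
    rng.integers(0, 30, n),   # tenure (years as a supporter)
    rng.integers(0, 5, n),    # engagement (events attended)
])
# Hypothetical label: 1 = the donor stopped their regular gift.
y = (X[:, 0] > 12).astype(int) ^ (rng.random(n) < 0.1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Donors scored above the threshold get a different communication plan.
at_risk = model.predict_proba(X_test)[:, 1] > 0.5
```

Trained instead on past legacy pledgers as the positive class, the same pattern gives the lookalike model for legacy asks described above.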

Applying AI principles to the subjects of AI – donors

Whilst these projects are a good use of charitable resource and ethical in their aims, trust must be established between the charity and its audience if they are also to be ethical in their means. SurvivAI (a collaboration between academics and a major ‘ethical AI’ consultancy) argues that we need to go beyond AI principles for organisations and also apply these AI principles to the subjects of AI (for example, donors and fundraisers).

SurvivAI propose an AI trust gateway in which the ethical principles of organisations lead to the identification of principles for AI subjects:

  • A charity should ensure it is transparent in its use of AI and ensure that donors are aware that AI is being deployed, and for what purposes. Transparency of the charity does not guarantee awareness in the donor, i.e., the use of AI by a charity must be communicated effectively to donors. Good practice might well involve complete transparency on a) what data a charity holds about each donor and b) how this information is used, both by AI and non-AI approaches. Further, this usage may change over time and donors will need to be kept abreast of new developments. Lastly, donors should be given the ability to opt out of their personal data being used for AI purposes;
  • A charity should be able to explain how AI-driven decisions are arrived at, in such a way that an AI subject can understand how an AI-driven outcome has been reached for them. Explainability of AI decisions by the charity does not guarantee understanding by the donor. For example, the explanation should avoid technical language that the donor does not have the time or the background to make sense of (a minimal sketch of such a plain-language explanation follows this list);
  • A charity should provide a channel for donors to contest the use of AI and the decision outcomes, i.e., donors must be able to dispute an outcome. The provision of a contestability channel by the charity does not guarantee an effective disputation process for the donor. For example, the channel might be difficult to access; the organisation might take a long time to process disputes and respond, set a high bar for accepting a disputed decision, make disputing costly, or lack teeth (e.g., the imposition of penalties and the redress of harm) once an outcome has been successfully disputed.
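
To make the explainability point concrete, here is a minimal sketch that turns a fitted linear model’s per-feature contributions into a short, non-technical explanation for a donor. The feature names, coefficients, wording and example values are all hypothetical.

```python
# A minimal sketch of a plain-language explanation for a donor,
# built from a linear model's per-feature contributions.
# Feature names, coefficients, wording and values are hypothetical.
def explain(coefs: dict, donor: dict, top_k: int = 2) -> str:
    """Describe the top drivers of a score in non-technical language."""
    contributions = {f: coefs[f] * donor[f] for f in coefs}
    top = sorted(contributions, key=lambda f: abs(contributions[f]),
                 reverse=True)[:top_k]
    phrases = {
        "months_since_last_gift": "how recently you last gave",
        "gifts_last_year": "how often you gave in the past year",
        "events_attended": "the events you attended",
    }
    reasons = " and ".join(phrases[f] for f in top)
    return f"This suggestion was mainly based on {reasons}."

coefs = {"months_since_last_gift": 0.8,
         "gifts_last_year": -0.5,
         "events_attended": -0.3}
donor = {"months_since_last_gift": 14, "gifts_last_year": 1,
         "events_attended": 0}
print(explain(coefs, donor))
```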

Including the user experience when using AI

Ethical principles and intentions are all well and good, but by themselves they do not translate into trust with donors or into avenues for the redress of harm – intended or unintended – caused by the use of AI. That said, establishing an AI trust gateway need not be onerous. While a charity must think about transparency, explainability, and contestability in its deployment of AI applications, it should also bear in mind the other side of the fence, i.e., the user experience, which is concerned with awareness, understandability, and disputability. Here, a focus on user interface design and design thinking techniques can add considerable value and lead to AI implementations that are more effective in achieving a charity’s goals.

Governance must cover the lifecycle of AI solutions

From an internal perspective, governance plays an important role over the entire lifecycle of AI solutions. During the development, testing, deployment and operational use of AI systems, it is essential to have clear processes and structures in place that guide when to use and when not to use data sets, that regulate under which circumstances an AI model remains valid and thus can be applied, that put regular checks in place, and that clearly state fallback options in scenarios where an AI approach is not appropriate.
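
As an illustration, such processes can be encoded as a gate around the model: if recent performance has degraded, or an input falls outside the data the model was trained on, the system falls back rather than applying the model. Everything in this sketch (thresholds, checks and the fallback action) is a hypothetical placeholder.

```python
# A minimal sketch of a validity gate with a fallback option.
# The thresholds, checks and fallback action are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GatedModel:
    predict: Callable[[dict], float]
    feature_ranges: dict   # per-feature (lo, hi) ranges seen in training
    recent_auc: float      # performance measured on recent labelled data
    min_auc: float = 0.7

    def decide(self, record: dict):
        # Check 1: is the model still valid on recent data?
        if self.recent_auc < self.min_auc:
            return "FALLBACK: human review (model no longer valid)"
        # Check 2: is this input within the data the model has seen?
        for feature, (lo, hi) in self.feature_ranges.items():
            if not lo <= record[feature] <= hi:
                return f"FALLBACK: human review ({feature} out of range)"
        return self.predict(record)

gate = GatedModel(
    predict=lambda r: 0.42,   # stand-in for a real model
    feature_ranges={"months_since_last_gift": (0, 24)},
    recent_auc=0.78,
)
print(gate.decide({"months_since_last_gift": 40}))  # triggers the fallback
```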

A fundamental aspect of good internal governance is AI validation, which ensures that systems are fit for purpose, safe, reliable, timely, maintainable and trustworthy.

ValidateAI’s five principles

ValidateAI, a community interest company working cross-sector and in partnership with the OR Society, argues that AI systems will only fulfil their promise for society if they can be relied upon to address five high-level principles, framed here as questions:

  1. Has the objective of the AI application been properly formulated?
  2. Is the AI system free of software bugs?
  3. Is the AI system based on properly representative data?
  4. Can the AI system cope with anomalies and inevitable data glitches? (A minimal guard of this kind is sketched after the list.)
  5. Are the decisions and actions recommended by the AI system sufficiently accurate?
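
Principle 4 lends itself to a small illustration, as flagged in the list. A hypothetical guard that screens records for glitches before they reach the model might look like this; the field names and rules are illustrative only.

```python
# A minimal sketch of screening records for data glitches before
# scoring (principle 4). Field names and rules are hypothetical.
def screen(record: dict) -> list:
    """Return a list of problems; an empty list means safe to score."""
    problems = []
    age = record.get("age")
    gifts = record.get("gifts_last_year")
    if age is None:
        problems.append("age: missing")
    elif not 18 <= age <= 110:
        problems.append("age: implausible value")
    if gifts is None:
        problems.append("gifts_last_year: missing")
    elif gifts < 0:
        problems.append("gifts_last_year: negative count")
    return problems

glitchy = {"age": 240, "gifts_last_year": None}
print(screen(glitchy))  # flagged records go to human review, not the model
```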

How to manage AI systems during COVID-19?

Taking these five principles into account, ValidateAI has applied its process to the major concern of managing AI systems during the COVID-19 pandemic (for details, see the 2020 ValidateAI white paper). This analysis highlights the need for an effective approach to the monitoring, remedial re-alignment, and stress testing of AI systems, activities that in normal times are typically overlooked. This approach works hand in hand with practitioner-centric standards and frameworks to evidence technical rigour, effective governance and ethical acceptability. Much work remains to be done to design what these approaches will look like and how they will be implemented in organisations, in what are still the very early stages of the AI revolution.
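
One common monitoring check (our illustration, not something prescribed in the white paper) is the population stability index (PSI), which flags when the data a model now sees has drifted away from the data it was trained on:

```python
# A minimal sketch of drift monitoring using the population
# stability index (PSI). The 0.2 alert threshold is a common
# rule of thumb, not a prescription from the white paper.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of one feature; higher = more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
training = rng.normal(10, 2, 5000)   # feature at training time
live = rng.normal(12, 2, 5000)       # same feature in live data

score = psi(training, live)
if score > 0.2:
    print(f"PSI {score:.2f}: significant drift, trigger re-alignment")
```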

AI done well has great power. AI done badly can harm not just the people affected by the AI analysis, but also the organisation’s reputation and the trust of its stakeholders. Before any charity starts to use AI, it must establish how it will manage AI risks through establishing AI governance structures.

Governance for charities encompasses an external focus on creating an AI trust gateway for donors and an internal focus on AI validation. This oversight might be achieved through a large charity’s cross-functional governance committee, or a small charity’s head of operations reporting directly to the Board. Good oversight and governance can enable you to mitigate the risks and harness the power of AI safely.

The OR Society is a charity and learned society that promotes education in and awareness of Operational Research (the application of scientific methods including AI to organisational decision-making). Its Pro Bono Service has been serving charities for over 10 years.

Mathias Kern

Mathias Kern is senior research manager for resource management technologies and optimisation in BT’s Applied Research team. He is an experienced industrial researcher and business modelling specialist, particularly interested in applying Artificial Intelligence, optimisation and simulation techniques to real-life problems.

Shakeel Khan

Shakeel Khan is Artificial Intelligence Capability Building Lead at HM Revenue and Customs and co-founder of Validate AI CIC. He has 25+ years of related experience leading AI projects in industry and government, as well as supporting international capability building. He works closely with academics to promote cutting-edge AI applications and to champion AI validation methodologies that build trust.

Richard Vidgen

Richard Vidgen is Emeritus Professor of Business Analytics at the University of New South Wales Business School (UNSW), Sydney, Emeritus Professor of Systems Thinking at the University of Hull, and a visiting professor at the University for the Creative Arts. His current research focuses on the management, organisational and ethical aspects of AI. He is a member of the UK Operational Research Society’s Analytics Development Group and a joint editor in chief for the Journal of Business Analytics. He is an associate at Ethical AI Advisory and a founder of SurvivAI.


