CCU095: Quantifying and mitigating bias and health inequalities induced by clinical risk models

Project lead:
Honghan Wu, University College London

Artificial intelligence (AI) refers to computer systems that aim to perform tasks typically requiring human intelligence, such as problem-solving and decision-making. AI is a rapidly growing area of technology and innovation, sparking excitement about its potential to improve healthcare outcomes. This has led to increased investment in AI to develop tools and methods for tackling problems such as forecasting patients' future health or assisting clinicians with time-consuming tasks. However, AI models are only as good as the data they are trained on and their underlying design. AI learns by recognising patterns in data, so the quality of that data directly affects its ability to make accurate and representative predictions. This matters especially in sensitive decision-making settings like healthcare, where discussion with, and a complex understanding of, the patient are required but may not be reflected in the available data.

There are growing concerns that the suggestions and predictions from these models could be biased (i.e., making different decisions for people of different backgrounds despite similar clinical situations) against certain populations, which could undermine the fairness of patient care in practice, particularly in decision-making surrounding the COVID-19 pandemic. This bias may originate from choices made when gathering, processing and/or storing data: for example, language gaps leading to poor communication and recording of symptoms; under-served communities having poorer access to care and therefore being under-represented; and structural and systemic discrimination. Similarly, the design of the methods and models themselves can introduce bias through choices such as how patients are grouped together and which features in the data are used to make predictions about a patient's health.

The improvements in healthcare outcomes promised by AI are unlikely to be delivered fairly if poor choices are made in these areas. In some cases, AI tools could end up worsening existing biases and fairness problems in the delivery of care. This poses a serious risk where tools are built and deployed in a rush to save resources or improve efficiency within the NHS. It is therefore important to develop comprehensive checks and audits so that we fully understand the impact of these tools on patient care going forward.

A recent study published in Science showed that a widely used AI tool in the US rated black patients as healthier than equally sick white patients, leading to a mismatch in the hospital beds, medication and staff time allocated to each group. This is one example among many, and such problems are not limited to racial groups: they can also affect people according to their gender, sex, age, income, socioeconomic background and more.

This project aims to:

  1. Investigate national-level NHS datasets to discover bias in the data, so that AI developers can account for these issues when designing tools. We will also work with the maintainers of these datasets so that the issues can be resolved in future through changes to the way data is collected;
  2. Review and analyse widely used risk prediction models built by the clinical AI community for use on NHS data, focusing on models that predict COVID-19-related risks for people with cardiovascular disease and diabetes. We will draw on what we learn in step one, as well as a new tool we have developed for measuring the fairness of AI models' predictions in terms of the resources allocated relative to a patient's deterioration (a simplified sketch of this kind of measure follows this list);
  3. After identifying sources of inequity and bias in steps one and two, we will review the current best approaches for mitigating bias in AI tools. We will use this information to create our own approach for doing so within the NHS / clinical context, with the goal of making it interpretable and open to auditing.
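To make the fairness measure in step two more concrete, the sketch below illustrates one way an allocation-versus-deterioration comparison could be computed at the group level. It is a minimal illustration under our own assumptions, not the project's actual tool: the function name, column names, grouping variable and example data are all hypothetical.

```python
# Illustrative sketch (hypothetical, not the project's actual tool):
# compare the healthcare resources allocated to each demographic group
# against how much patients in that group actually deteriorated.
import pandas as pd


def allocation_deterioration_gap(df: pd.DataFrame,
                                 group_col: str = "ethnicity",
                                 resource_col: str = "resources_allocated",
                                 deterioration_col: str = "deterioration_score") -> pd.DataFrame:
    """For each group, compute mean resources allocated per unit of observed
    deterioration; large gaps between groups suggest inequitable allocation."""
    per_group = df.groupby(group_col).agg(
        mean_resources=(resource_col, "mean"),
        mean_deterioration=(deterioration_col, "mean"),
    )
    per_group["resources_per_deterioration"] = (
        per_group["mean_resources"] / per_group["mean_deterioration"]
    )
    # Express each group's ratio relative to the best-served group,
    # so 1.0 means parity with the group receiving the most resources
    # per unit of deterioration.
    best = per_group["resources_per_deterioration"].max()
    per_group["relative_to_best_served"] = (
        per_group["resources_per_deterioration"] / best
    )
    return per_group


if __name__ == "__main__":
    # Hypothetical example: two groups deteriorate similarly but
    # receive different levels of resources, producing a visible gap.
    df = pd.DataFrame({
        "ethnicity": ["A", "A", "B", "B"],
        "resources_allocated": [10.0, 12.0, 6.0, 7.0],
        "deterioration_score": [3.0, 3.2, 3.1, 2.9],
    })
    print(allocation_deterioration_gap(df))
```

In a metric of this style, a group receiving noticeably fewer resources per unit of deterioration than the best-served group would flag a potential inequity worth investigating further.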

Our work will lead to actionable insights on real-world clinical data and risk models. We can use these insights to suggest ways of making AI tools fairer for all patients living in England. We will highlight the causes of bias in AI tools to encourage policy and NHS action to address the resulting disparities in care. These insights will help ensure that proper auditing is carried out alongside any further development of AI tools aimed at assisting with or replacing decision-making currently done by doctors. Tools and frameworks for resolving bias in care must be developed alongside the AI methods themselves, so that the decisions they influence are as beneficial and as safe as possible for the patients affected.
