
Artificial intelligence and ethics in health research


Article taken from Ethics Notes, November 2023 issue. Read the full issue.

By Associate Professor Mangor Pedersen
Department of Psychology and Neuroscience, Auckland University of Technology

Last year, I was struck by a paper by Urbina and colleagues on safety in artificial intelligence (AI) 1. Their company uses AI tools to generate chemical compounds as potential new treatments, including treatments for Ebola. After being invited to a conference on AI risk, they inverted their ‘good’ AI algorithms to train a model intended to do ‘bad’. Within hours, it generated tens of thousands of chemical compounds that could be used for chemical warfare. The authors admitted that they had never considered that their tools could be used to cause harm.

Not all health research carries such ‘extreme’ risk as the example above. Still, it serves as a reminder to all of us in health research to consider known (and potentially unknown) risks in our data. AI brings new ethical challenges because of its predictive and generative properties. Especially as we move towards AI-based clinical decision support, I believe we need updated frameworks for dealing with challenges such as:

  • Who is responsible for AI-based decisions – developers, users and/or clinicians?
  • Should clinical decisions be made based on human and/or machine-based information?
  • AI, particularly generative AI, can generate biased results. For example, how do we deal with rare diseases in AI frameworks that need large amounts of data for robust training?

These points are not easy to address, but in the remainder of this article I will focus on two issues worth tackling in this context, especially as we are in the infancy of AI-based health research: bias and transparency in contemporary AI research.

In this context, I use bias to mean ‘unacceptable’ input data to AI models, leading to predictive outputs that do not reflect the population. Stable Diffusion, a remarkable image generator released last year, has received much attention for its biases. When asked to generate images of doctors, Stable Diffusion mainly produces stereotyped images of male doctors. This does not reflect reality, as approximately half of doctors in OECD countries are female. Generative AI bias also extends to socioeconomic and racial stereotypes 2. Data bias is a common issue in medical science, and even large-scale population studies such as the UK Biobank suffer from intrinsic data biases 3. In addition to ensuring varied recruitment from the population, one way to reduce bias in AI models is to include important subject-specific information in the models. For example, Wang et al. 4 showed that including demographic, clinical, genetic and cognitive scores led to better predictions from brain imaging data – showcasing the importance of accounting for potential sources of bias within AI models. A minimal sketch of this idea follows below.
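
As a loose illustration (and not the method used by Wang et al.), the sketch below compares a model trained on imaging features alone with one that also receives subject-specific covariates such as age and sex. The data are synthetic and the variable names are hypothetical; it is only meant to show where such covariates would enter a model.

    # Illustrative sketch with synthetic data: does adding subject-specific
    # covariates (age, sex) change predictive performance compared with
    # imaging features alone?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 500
    age = rng.normal(60, 10, n)
    sex = rng.integers(0, 2, n)                      # 0/1 coding, purely illustrative
    imaging = rng.normal(0, 1, (n, 20)) + 0.02 * age[:, None]   # imaging partly driven by age
    outcome = (0.05 * age + 0.5 * sex + imaging[:, 0]
               + rng.normal(0, 1, n) > 4).astype(int)

    X_imaging_only = imaging
    X_with_covariates = np.column_stack([imaging, age, sex])

    for name, X in [("imaging only", X_imaging_only),
                    ("imaging + covariates", X_with_covariates)]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, outcome, cv=5).mean()
        print(f"{name}: mean cross-validated accuracy = {acc:.2f}")
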

Transparency shifts the focus from data to people. For AI to become a successful clinical decision support tool, we need transparent models, meaning that we can explain how the models are built and why they reach their predictions. Educating an AI-minded workforce beyond computer science will likely determine AI's success in health. Augmented intelligence is a term for a symbiotic relationship in which humans and machines work together 5. The idea behind augmented intelligence is to leverage human expertise and interpretability in AI, letting computers do what they do well, namely mining large amounts of data. In turn, I hope the next wave of medical AI tools will have increased transparency and explainability, so that, as users, we retain control over and understand the predictions the AI models give us.
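
One simple, generic form of such explainability is to ask which inputs a fitted model actually relies on. The sketch below uses permutation importance on a standard public dataset purely as an illustration; the dataset and model choices are assumptions for the example, not a clinical tool.

    # Illustrative sketch: fit a model on a public dataset, then measure how
    # much held-out accuracy drops when each input feature is shuffled
    # (permutation importance) - one simple window into why a model predicts
    # what it does.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    # List the five features whose shuffling hurts accuracy the most.
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
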

At this point, a question arises: how do we deal with emerging AI-ethics issues as medical researchers? As the AI lead of the Australian Epilepsy Project, I have witnessed rapid changes in AI over the last few years and felt the increasing need to improve AI ethics principles in health. We in the Australian Epilepsy Project have outlined five pillars of AI ethics (transparency, justice and fairness, non-maleficence, responsibility, and sustainability 6) that we will actively monitor and continuously assess throughout the project as we move into an AI-assisted future in health sciences. We believe such frameworks will safeguard clinical research projects using AI for clinical decision support. Regulatory approaches such as the EU’s AI Act – https://artificialintelligenceact.eu/the-act/ – are welcome in this context, as they will help ensure that health-related AI is used for ‘good’. Although it is the researcher’s responsibility to recognise and report potentially adverse, or even harmful, outcomes of AI-based research (as exemplified by Urbina et al.’s case, where chemical compounds could be used for warfare), more work is likely needed to develop procedures around AI-safety reporting. In Aotearoa, the publication of the Trusted Research Guidelines 7 for Institutions and Researchers signals increasing institutional awareness of a range of research risks, including potential dual use, and provides one starting point for the work ahead.

References:
1. Urbina, F., Lentzos, F., Invernizzi, C. & Ekins, S. Dual use of artificial-intelligence-powered drug discovery. Nat Mach Intell 4, 189–191 (2022).
2. Bianchi, F. et al. Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency 1493–1504 (Association for Computing Machinery, 2023). doi:10.1145/3593013.3594095.
3. Fry, A. et al. Comparison of sociodemographic and health-related characteristics of UK Biobank participants with those of the general population. Am J Epidemiol 186, 1026–1034 (2017).
4. Wang, R., Chaudhari, P. & Davatzikos, C. Bias in machine learning models can be significantly mitigated by careful training: Evidence from neuroimaging studies. Proceedings of the National Academy of Sciences 120, e2211613120 (2023).
5. Gennatas, E. D. et al. Expert-augmented machine learning. Proceedings of the National Academy of Sciences 117, 4571–4577 (2020).
6. Pedersen, M. et al. Artificial intelligence is changing the ethics of medicine: Reflections from the Australian Epilepsy Project. Preprint at https://doi.org/10.31219/osf.io/kag75 (2023).
7. Protective Security Requirements, Science New Zealand & Te Pōkai Tara Universities New Zealand. Trusted Research: Guidance for Institutions and Researchers. https://protectivesecurity.govt.nz/assets/Campaigns/PSR-ResearchGuidancespreads-17Mar21.pdf (2021).