“Patient harm can be avoided if machine learning is used”
Machine learning can support clinicians, for example by recommending treatments. But what does it take for a doctor to trust a computer? Jonathan Rebane has delved deeply into this topic and successfully defended his PhD thesis.
Congratulations, Dr. Jonathan Rebane! Tell us about your work.
“Thank you! The big picture is that machine learning has the ability to analyse and understand complex relations in data – far deeper than any human ever could. For healthcare, this means that machine learning can point out important information that a clinician might not see. As a result, unnecessary patient harm can be avoided if machine learning is used in medicine at the right time and place. This is especially true for so-called ADEs – Adverse Drug Events. They include all types of harms and injuries that can happen from the use of medications. ADEs occur frequently in healthcare, but many of them can be prevented.”
Can you summarise the scientific contribution of your PhD thesis?
“The thesis provides new ways of extracting information from complex health data so that machine learning models can be as accurate as possible when making decisions about patient health. The thesis also investigates and provides new explanations to help clinicians understand the logic behind machine learning models. These model explanations can help clinicians to better trust machine learning decisions about whether a patient is sick or healthy. This high level of trust is necessary for clinicians to support the adoption of machine learning in medicine.”
Patients, clinicians and society as a whole can benefit from these results
What are the implications of your results?
“The goal of this thesis was to create accurate machine learning models that can still be interpreted and trusted by medical professionals. Two framework approaches were found to outperform similar competitors in their respective domains. I also provide a new interpretability method which can be applied to some medical data types to help clinicians better understand machine learning predictions. Studies into the feasibility of using deep learning for ADE prediction in medicine showed promising results in terms of both clinician understanding and predictive performance.”
“Overall, patients, clinicians and society as a whole can benefit from these results through improved quality of care and greater trust in medical practice. But this domain requires more research, new approaches, and implementation in real applications for this impact to scale within society.”
Are machine learning tools used in hospitals today?
“Some hospital systems have used or evaluated ADE alerts as part of clinical decision support systems. However, the vast majority of these systems are rule based. Machine learning systems often do not reach production because clinicians do not understand or trust them. Machine learning has the benefit of learning from the data in a way that rules do not – rules have to be manually encoded by experts. Machine learning also has the potential to outperform the accuracy of using rules alone.”
How did you develop your interest in machine learning, and why did you decide to become a PhD student?
“I became interested during my undergraduate studies in cognitive science at Queen’s University in Canada, my home country. I studied intelligence from multiple angles, including neuroscience, philosophy, AI, and computer science. I became most interested in how intelligence could be built with computer programs, and I was naturally fascinated by how far the intelligence of such programs could extend beyond human intelligence. Practically speaking, I wanted to apply advanced intelligence for the good of society, and this naturally led to using machine learning in medicine.”
“I did the PhD to build a personal tool set of methods and knowledge. I wanted to better understand how to apply scientific methodology to machine learning, and to be confident in executing my own projects in the field.”
I apply what I’ve learned from my PhD journey
What will you do next?
“I’m working with a startup which is focused on responsible AI. We are building a software-as-a-service platform which will integrate with both AI and management systems to ensure responsible AI governance within organisations. For example, the platform detects risks and mitigates issues such as bias, discrimination, and privacy vulnerabilities. This service will be crucial for organisations that use high-risk AI and need to conform to the upcoming regulations put forth in the EU AI Act.”
“In this work, I apply what I’ve learned from my PhD journey. This includes being able to achieve valid and reliable results, understanding the importance of having such results to back up what you say, being able to take on independent projects where the answers are not clear from the outset, and making a large impact at scale.”
A shorter version of this article is available in Swedish
More about the thesis
Jonathan Rebane successfully defended his PhD thesis “Learning from Complex Medical Data Sources” at the Department of Computer and Systems Sciences (DSV), Stockholm University, October 28, 2022.
Panagiotis Papapetrou, DSV, has been his principal supervisor and Isak Samsten, DSV, has been his supervisor.
Myra Spiliopoulou, Otto von Guericke University of Magdeburg, Germany, was the opponent at the defence. Members of the examining committee were Arno Knobbe, Leiden University, the Netherlands, Indrė Žliobaitė, Helsinki University, Stanley Greenstein, Stockholm University, and Hercules Dalianis, Stockholm University (alternate member).
Jonathan Rebane’s PhD thesis can be downloaded from Diva
He explains his research area in a blog post from 2020
Contact information for Jonathan Rebane
Last updated: November 9, 2022
Source: Department of Computer and Systems Sciences, DSV