Medicine 5.0: machine learning algorithms in healthcare by Medica Magazine

Interview with Prof. Alena Buyx, Director of the Institute for History and Ethics of Medicine, Chair of Ethics in Medicine and Health Technologies, Technical University of Munich (TUM) and Member of the German Ethics Council

Artificial intelligence is hailed as medicine's great hope: it is meant to unburden medical professionals, save time and money, and perform tasks reliably and tirelessly. But before AI algorithms are allowed to diagnose diseases, many technical and ethical questions still need answers. Find out how AI and "Medicine 5.0" can transform healthcare at the MEDICA 2019 trade fair in the MEDICA ECON FORUM by TK.

In this interview with MEDICA-tradefair.com, Prof. Alena Buyx talks about machine learning black box algorithms and describes the challenges they pose for medicine and politics.

Prof. Buyx, what is "Medicine 5.0"?

Prof. Alena Buyx: "Medicine 5.0" refers to machine learning algorithms that can also make autonomous decisions. "Autonomous" in this setting means that they can teach themselves processing rules from big data, that is, large data sets, and subsequently use those rules to make a diagnosis and recommend treatment. However, we do not always understand what lies behind each individual step that leads to these recommendations.

Do these algorithms operate like black boxes whose processes we do not know?

Buyx: Medicine tries to avoid the use of classic black box algorithms. This is an ethical requirement, and one I advocate as well: we must still be able to understand what happens inside the black box. That does not necessarily mean tracing every single step, but we should be able to identify the decision parameters an algorithm uses. These should be medical criteria, not criteria that happen to be statistically significant yet are not clinically relevant.

A well-known example is an algorithm that was tasked with diagnosing tuberculosis from X-ray images. Among other things, the algorithm looked at the edges of the images and taught itself that images from mobile X-ray machines frequently show abnormalities consistent with tuberculosis. The actual reason is that mobile systems are more commonly used in countries with a high TB burden. Needless to say, that is an entirely non-medical factor that does not improve clinical accuracy. Such a black box must not be allowed. The algorithm needs annotation that describes its rough decision criteria, and we must be able to change it so that it no longer uses a given criterion.
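To make this concrete, here is a minimal, hypothetical sketch of how such a non-medical shortcut could be surfaced with permutation feature importance in scikit-learn. The data are synthetic and the feature names ("opacity_score", "mobile_scanner") are invented for illustration; they are not from the study Prof. Buyx describes.

```python
# Hypothetical sketch: surfacing a non-medical "shortcut" feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# "mobile_scanner" is the non-medical confound: portable machines are used
# more often in high-burden settings, so it correlates with the label.
mobile_scanner = rng.integers(0, 2, n)
tb = (rng.random(n) < 0.1 + 0.4 * mobile_scanner).astype(int)
# "opacity_score" stands in for a genuinely medical image feature.
opacity_score = tb + rng.normal(0, 0.8, n)
X = np.column_stack([opacity_score, mobile_scanner])
X_tr, X_te, y_tr, y_te = train_test_split(X, tb, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["opacity_score", "mobile_scanner"], imp.importances_mean):
    print(f"{name}: importance {score:.3f}")
# A large importance for "mobile_scanner" reveals the shortcut; dropping
# that column and retraining removes the criterion from the model.
```

A substantial importance score for the confound is exactly the kind of non-medical decision parameter Prof. Buyx says must be identified and then engineered out.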

How would these algorithms impact the healthcare system?

Buyx: So far, autonomous algorithms have not seen extensive practical use because they are not yet good enough and still involve many challenges. If we manage to design ethical algorithms, they could trigger a positive transformation in medicine. But only if they can diagnose more accurately than a physician or make authoritative therapeutic recommendations, and if we have enough insight into why they do so and whether their conclusions rest on a sound medical foundation. Then they could free up time for patients, avoid mistakes and reduce costs.

You have briefly touched on it earlier: Which ethical concerns or problems do these algorithms raise?

Buyx: First, the algorithms must be backed by thorough evidence and perform reliably and accurately to avoid risks and harm. We simply must not fall prey to an obsession with technology. Second, this must not feed the broad misconception that algorithms and AI will replace doctors or other healthcare professionals. Algorithms perform a specific, well-defined task and cannot search for the other characteristics a doctor sees when examining a patient, meaning they cannot make a complete differential diagnosis.

Third, there must not be any algorithmic bias stemming from the data sets or the programming. We have all heard about facial recognition algorithms whose training data are not as diverse as the real world. That is why these algorithms are good at identifying the faces of white men but struggle to recognize the faces of women or people of color. We will have to correct that through the training data.
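A hedged illustration of where "correcting via the training data" starts: the sketch below audits a model's accuracy per demographic group on synthetic data. The group labels and the skewed 90/10 split are assumptions made up for this example.

```python
# Hypothetical sketch: a simple subgroup performance audit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 3000
# Group "B" is deliberately under-represented (10%) and noisier, mimicking
# a training set that is less diverse than the real world.
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
X = rng.normal(size=(n, 5))
noise = np.where(group == "B", rng.normal(0, 2.0, n), 0.0)
y = (X[:, 0] + noise > 0).astype(int)

split = 2000
clf = LogisticRegression().fit(X[:split], y[:split])
pred = clf.predict(X[split:])
for g in ["A", "B"]:
    mask = group[split:] == g
    print(g, f"accuracy={accuracy_score(y[split:][mask], pred[mask]):.2f}")
# A clear accuracy gap between the groups is the cue to rebalance or
# extend the training data before any clinical use.
```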

We also need to consider how and to what extent we educate patients about the role of algorithms, and how we safeguard patient autonomy once algorithms, deployed as assistance systems, attain the status of an actual medical consultation.

What do policymakers have to do to create the right framework?

Buyx: Policymakers must definitely provide a framework if these types of algorithms are to be approved as medical devices. Needless to say, the approval process differs from that for an ultrasound machine. One of the major tasks in this regard is to make these processes sustainable, ethical and socially responsible. The biggest challenge lies in the commercial realm and in health-related apps: the requirements in this area are nowhere near as strict as those for medical devices.

A number of mental health apps use artificially intelligent algorithms. We have to decide how to manage the situation when these apps engage consumers or patients directly, without physician supervision or involvement. If apps are meant to provide clinical support that was previously (and rightfully) within a physician's scope of responsibility, we must ensure that they are classified and treated as potential medical devices.

Details

  • Am Staad, 40474 Düsseldorf, Germany
  • Medica Magazine - Timo Roth translated by Elena O'Meara