Emanuele Ratti (Institute of Philosophy and Scientific Method, Johannes Kepler University Linz) “Explainable AI and medicine”
28 April | 17:00 - 18:00
Talk: Explainable AI and medicine
Emanuele Ratti is a philosopher based at the Institute of Philosophy and Scientific Method at Johannes Kepler University Linz. Before his current appointment, he worked at the University of Notre Dame, and he holds a PhD in ethics and foundations of the life sciences from the European School of Molecular Medicine (SEMM) in Milan.
His research lies in the history and philosophy of science and technology (biomedicine and data science). In particular, he is interested in how data science and biomedicine shape one another, in both epistemic and non-epistemic terms.
In the past few years, several scholars have been critical of the use of machine learning systems (MLS) in medicine, for three reasons in particular. First, MLSs are theory agnostic. Second, MLSs do not track any causal relationship. Finally, MLSs are black boxes. For all these reasons, it has been claimed that MLSs should be able to provide explanations of how they work – the so-called Explainable AI (XAI). Recently, Alex John London has argued that these reasons do not stand up to scrutiny: as long as MLSs are thoroughly validated by means of rigorous empirical testing, we do not need XAI in medicine. London's view rests on three assumptions: (1) we should treat MLSs as akin to pharmaceuticals, for which we do not need an understanding of how they work, only evidence that they work; (2) XAI plays one role in medicine, which is to assess reliability and safety; (3) MLSs have unlimited interoperability and low transfer costs. In this talk, I will question London's assumptions and elaborate an account of XAI that I call 'explanation-by-translation'. In a nutshell, XAI's goal is to integrate MLS tools into medical practice; and in order to fulfill this integration task, XAI translates or represents MLS findings in a way that is compatible with the conceptual and representational apparatus of the system of practice into which the MLS has to be integrated. I will illustrate 'explanation-by-translation' in action in medical diagnosis, and I will show how this account helps us understand, in different contexts, whether we need XAI, what XAI has to explain, and how XAI has to explain it.
Please find the video of the talk here:
Ann-Sophie Barwich (Indiana University Bloomington, USA), "The Limits of Current Machine Learning Models in Olfaction" | 13 October | 17:00 - 18:30
Maya J. Goldenberg (University of Guelph, Canada), "A War on Science? Rethinking Vaccine Hesitancy and Refusal" | 19 October | 17:30 - 19:00