Tag: Philosophy and Biology Seminar

Sarah-Maria Fendt (Professor of Oncology, KU Leuven, Belgium), Metabolic rewiring driving metastasis formation

Sarah-Maria Fendt has been a Principal Investigator at the VIB Center for Cancer Biology and Professor of Oncology at KU Leuven, Belgium, since 2013. Her lab is interested in elucidating general regulatory principles of metabolism and in understanding cancer metabolism during metastasis formation as well as during altered whole-body physiology. To pursue these questions, her group draws on its expertise in metabolomics and fluxomics. The lab's research is currently funded by multiple national and international grants and by industry, including an ERC Consolidator Grant. She has received several awards, including the EMBO Gold Medal.
Research focus of Fendt's lab: Do cancer cells have plasticity in their lipid metabolism?

Most tumours have an aberrantly activated lipid metabolism that enables them to synthesize, elongate and desaturate fatty acids to support proliferation. However, only particular subsets of cancer cells are sensitive to approaches that target fatty acid metabolism and, in particular, fatty acid desaturation. This suggests that many cancer cells contain an unexplored plasticity in their fatty acid metabolism. Here we discovered that some cancer cells can exploit an alternative fatty acid desaturation pathway. We identify various cancer cell lines, mouse hepatocellular carcinomas, and primary human liver and lung carcinomas that desaturate palmitate to the unusual fatty acid sapienate to support membrane biosynthesis during proliferation. Accordingly, we found that sapienate biosynthesis enables cancer cells to bypass the known fatty acid desaturation pathway that is dependent on stearoyl-CoA desaturase. Thus, only by targeting both desaturation pathways is the in vitro and in vivo proliferation of cancer cells that synthesize sapienate impaired. Our discovery explains metabolic plasticity in fatty acid desaturation and constitutes an unexplored metabolic rewiring in cancers.

Ned Block (Silver Professor of Philosophy and Psychology, New York University, USA), Perception is non-conceptual

Ned Block is Silver Professor in the Departments of Philosophy and Psychology and the Center for Neural Science at New York University (NYU), New York, USA.
 
Abstract
This talk will argue that the reason that perception is fundamentally different from cognition is that perception is non-conceptual whereas cognition is conceptual.  I will review evidence that infants between the ages of 6 and 11 months can see colors but cannot accomplish even the simplest kinds of cognition involving colors.  Children of the same ages can see shapes and also exhibit cognition with shape concepts.  I will argue that the upshot is that color perception of these infants is non-conceptual and that one can extrapolate from this finding to all of perception.

David Bilder (UC Berkeley, USA), Ancient origins of tumor-host interactions: insights from the Drosophila model

The Bilder Lab (University of California, Berkeley, USA) studies the molecules and mechanisms that govern the polarity, growth, and morphogenesis of epithelia, the fundamental tissue of all animals and the major constituent of human organs. They also use Drosophila cancer models as a simple system to understand both how epithelial organization prevents tumor formation and how tumors actually kill their hosts.
Example of recent work:
Bilder et al., Tumour-host interactions through the lens of Drosophila, Nature Reviews Cancer (2021)
There is a large gap between the deep understanding of mechanisms driving tumour growth and the reasons why patients ultimately die of cancer. It is now appreciated that interactions between the tumour and surrounding non-tumour (sometimes referred to as host) cells play critical roles in mortality as well as tumour progression, but much remains unknown about the underlying molecular mechanisms, especially those that act beyond the tumour microenvironment. Drosophila has a track record of high-impact discoveries about cell-autonomous growth regulation, and is now well suited to probe the mysteries of tumour-host interactions. Here, we review current knowledge about how fly tumours interact with microenvironmental stroma, circulating innate immune cells and distant organs to influence disease progression. We also discuss reciprocal regulation between tumours and host physiology, with a particular focus on paraneoplasias. The fly's simplicity, along with the ability to study lethality directly, provides an opportunity to shed new light on how cancer actually kills.

John Dupré (Egenis, University of Exeter, UK), What are viruses? Parasites, processes, parts or all of the above?

John Dupré is Professor of Philosophy of Science at the University of Exeter (UK), with a main focus on philosophy of biology. He is the Director of Egenis, the Centre for the Study of Life Sciences.

 
Abstract
People still often think that viruses are tiny little things that cause disease by parasitizing larger organisms. Here I argue that viruses are not things, but processes, and while some do, of course, cause serious disease, many or even most may be important positive contributors to larger biological systems. Finally, returning to the mistaken characterization of viruses as things rather than processes, I show how this erroneous reification may have seriously harmful consequences for research.
 

Stephen M. Downes (University of Utah, USA), An Early History of the Heritability Coefficient Applied to Humans (1918–1960)

Stephen M. Downes is a Full Professor in the Philosophy Department at the University of Utah (USA). Most of his work is in philosophy of science with special focus on philosophy of biology, philosophy of social science and models and modeling across the sciences. He is also an Adjunct Professor in the School of Biological Sciences at the University of Utah, and a member of the PhilInBioMed network.


Stephen M. Downes (in collaboration with Eric Turkheimer)
 
Abstract
Fisher’s 1918 paper accomplished two distinct goals: unifying discrete Mendelian genetics with continuous biometric phenotypes, and quantifying the variance components of variation in complex human characteristics. The former contributed to the foundation of modern quantitative genetics; the latter was adopted by social scientists interested in pursuing Galtonian nature-nurture questions about the biological and social origins of human behavior, especially human intelligence. This historical divergence has produced competing notions of the estimation of variance ratios referred to as heritability. Jay Lush showed that they could be applied to selective breeding on the farm, while the early twin geneticists used them as a descriptive statistic for the degree of genetic determination in complex human traits. Here we trace the early history (1918 to 1960) of the heritability coefficient now used by social scientists.
Keywords
Behavior genetics · Heritability · Heritability coefficient · Human behavior genetics
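For readers unfamiliar with the quantity the abstract traces, the heritability coefficient is simply a variance ratio, h² = V_G / V_P. A minimal sketch (not from the paper; the trait correlations below are hypothetical) shows the classical twin-study estimator, Falconer's formula, which doubles the difference between monozygotic and dizygotic twin correlations:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Estimate heritability from twin correlations: h^2 = 2 * (r_MZ - r_DZ).

    r_mz: phenotypic correlation between monozygotic (identical) twins
    r_dz: phenotypic correlation between dizygotic (fraternal) twins
    """
    return 2.0 * (r_mz - r_dz)

# Hypothetical correlations for some complex trait:
h2 = falconer_h2(r_mz=0.80, r_dz=0.50)
print(h2)  # about 0.6, i.e. ~60% of phenotypic variance attributed to genes
```

This is the "descriptive statistic" sense of heritability used by the early twin geneticists; as the abstract notes, it describes a population-level variance ratio, not a degree of genetic determination in any individual.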

Elliott Sober (University of Wisconsin–Madison, USA), Natural selection, random mutations and gradualism: Fisher, Kimura, and connecting the dots

Elliott Sober is Hans Reichenbach Professor and William F. Vilas Research Professor in the Department of Philosophy at the University of Wisconsin–Madison, USA. He is one of the founders of the field of philosophy of biology, a major philosopher “in” science, and a specialist in evolutionary biology. He is also a member of the PhilInBioMed Scientific Committee.
 
Abstract
Evolutionary gradualism, the randomness of mutations, and the hypothesis that natural selection exerts a pervasive influence on evolutionary outcomes are pairwise logically independent. Can the claims about selection and mutation be used to formulate an argument for gradualism? In his Genetical Theory of Natural Selection, R.A. Fisher made an important start at this project in his famous “geometric argument” about the fitness consequences of random mutations that have different sizes of phenotypic effect. Kimura’s theory of how the probability of fixation depends on both the selection coefficient and the effective population size shows that Fisher’s argument for gradualism was mistaken. Here we analyze Fisher’s argument and explain how Kimura’s theory leads to a conclusion that Fisher did not anticipate. We identify a fallacy that reasoning about fitness differences and their consequences for evolution should avoid. We distinguish forward-directed from backward-directed versions of gradualism. The backward-directed thesis may be correct, but the forward-directed thesis is not.
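The Kimura result the abstract relies on can be sketched numerically. The sketch below (illustrative only; the parameter values are hypothetical) uses Kimura's standard diffusion approximation for the fixation probability of a mutant with selection coefficient s, effective population size Ne, and initial frequency p:

```python
import math

def fixation_probability(s: float, Ne: float, p: float) -> float:
    """Kimura's diffusion approximation:
    P(fix) = (1 - exp(-4*Ne*s*p)) / (1 - exp(-4*Ne*s)).
    In the neutral limit (s -> 0) this reduces to p, the initial frequency.
    """
    if abs(s) < 1e-12:
        return p  # neutral mutation: fixation probability equals initial frequency
    return (1.0 - math.exp(-4.0 * Ne * s * p)) / (1.0 - math.exp(-4.0 * Ne * s))

Ne = 10_000
p0 = 1.0 / (2 * Ne)  # a single new mutant copy in a diploid population

# A beneficial mutation of small effect: fixation probability is roughly 2s,
# far above the neutral value p0 but still small in absolute terms.
print(fixation_probability(0.01, Ne, p0))

# A deleterious mutation in a large population: fixation is vanishingly rare.
print(fixation_probability(-0.001, Ne, p0))
```

The interplay the abstract points to is visible here: whether selection or drift dominates a mutation's fate depends on the product Ne·s, not on s alone, which is what undercuts Fisher's purely geometric argument.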

Johanna Joyce (Univ. Lausanne, Switzerland), Exploring & Therapeutically Exploiting the Tumor Microenvironment

Johanna Joyce is a cancer biologist and geneticist, and her research interests focus on exploring the critical functions of the tumor microenvironment in regulating cancer progression, metastasis and therapeutic response, with the ultimate goal of exploiting this knowledge to devise rational and effective therapies.
Her fascination with cancer genetics began during her undergraduate degree in Genetics at Trinity College Dublin, and continued during her PhD at the University of Cambridge, UK, where she investigated dysregulation of genomic imprinting in cancer predisposition syndromes. She did her postdoc at the University of California, San Francisco, in Doug Hanahan’s lab, focusing on mechanisms of tumor angiogenesis and invasion in pancreatic cancers.
In December 2004, she started her lab at Memorial Sloan Kettering Cancer Center, New York, USA and was promoted through the ranks to tenured Professor and Full Member in 2014.
In January 2016, she was recruited to the University of Lausanne, Switzerland and the Ludwig Institute of Cancer Research. Her lab continues to unravel the complex mechanisms of communication between cancer cells and their microenvironment that regulate tumor progression, metastasis, and response to anti-cancer therapy. They are especially intrigued by the study of brain tumors – including glioblastoma and brain metastases – with the ultimate goal of developing effective new therapies against these deadly cancers.
The seminar will be organised via Zoom (ID: 882 6482 8610)
 
Abstract
Cancers do not arise within a vacuum; rather they develop and grow within complex organs and tissue environments that critically regulate the fate of tumor cells at each sequential step of malignant progression. The tumor microenvironment (TME) can be viewed as an intricate ecosystem populated by diverse innate and adaptive immune cell types, stromal cells, extracellular matrix, blood and lymphatic vessel networks that are embedded along with the cancer cells. While bidirectional communication between cells and their microenvironment is critical for normal tissue homeostasis, this active dialog can become subverted in cancer leading to tumor initiation and progression. Through their exposure to tumor-derived molecules, normal cells can become “educated” to actually promote cancer development. As a consequence of this tumor-mediated education, TME cells produce a plethora of growth factors, chemokines, and matrix-degrading enzymes that together enhance the proliferation and invasion of the tumor. Moreover, these conscripted normal cells also provide a support system for cancer cells to fall back on following traditional therapies such as chemotherapy and radiation, and additionally contribute to a general immune-suppressive state, thus limiting the efficacy of immunotherapies. Consequently, multi-targeted approaches in which co-opted cells in the microenvironment are “re-educated” to actively fight the cancer represent a promising strategy for the effective long-term treatment of this devastating disease.
 

Emanuele Ratti (Institute of Philosophy and Scientific Method, Johannes Kepler University Linz, Austria), Explainable AI and medicine

Speaker:
Emanuele Ratti is a philosopher based in the Institute of Philosophy and Scientific Method at Johannes Kepler University Linz. Before his current appointment, he worked at the University of Notre Dame, and he holds a PhD in ethics and foundations of the life sciences from the European School of Molecular Medicine (SEMM), in Milan.
His research trajectory is in history and philosophy of science and technology (biomedicine and data science). In particular, he is interested in how data science and biomedicine shape one another, both in epistemic and non-epistemic terms.
Abstract
In the past few years, several scholars have criticized the use of machine learning systems (MLSs) in medicine, for three main reasons. First, MLSs are theory-agnostic. Second, MLSs do not track any causal relationship. Finally, MLSs are black boxes. For all these reasons, it has been claimed that MLSs should be able to provide explanations of how they work – the so-called Explainable AI (XAI). Recently, Alex John London has claimed that these reasons do not stand up to scrutiny: as long as MLSs are thoroughly validated by means of rigorous empirical testing, we do not need XAI in medicine. London’s view is based on three assumptions: (1) we should treat MLSs as akin to pharmaceuticals, for which we do not need an understanding of how they work, only evidence that they work; (2) XAI plays one role in medicine, which is to assess reliability and safety; (3) MLSs have unlimited interoperability and low transfer costs. In this talk, I will question London’s assumptions and elaborate an account of XAI that I call ‘explanation-by-translation’. In a nutshell, XAI’s goal is to integrate MLS tools into medical practice; to fulfill this integration task, XAI translates or represents MLS findings in a way that is compatible with the conceptual and representational apparatus of the system of practice into which the MLS has to be integrated. I will illustrate ‘explanation-by-translation’ in action in medical diagnosis, and I will show how this account helps us understand, in different contexts, whether we need XAI, what XAI has to explain, and how XAI has to explain it.