AI & Health Conference 2023: From Paper to Practice

Join us for a day full of insights into the latest research in AI for health, with a special focus on how to bring these findings into practice

By VU Campus Center for AI & Health

Date and time

Fri, 2 Jun 2023 09:30 - 17:30 CEST

Location

Amstelzaal VUmc

De Boelelaan 1118, 1081 HZ Amsterdam, Netherlands

Refund Policy

Contact the organiser to request a refund.
Eventbrite's fee is nonrefundable.

About this event

Registrations for this event have now closed.

On June 2nd, we will gather in the Amstelzaal at the Amsterdam UMC (location VUmc) to hear about the latest research in AI for health and health care, and explore the practical implementation of academic research in clinical settings. Our event features research presentations, a panel discussion and technical workshops, plus ample opportunities to network and connect with colleagues from the AI & Health community. We look forward to seeing you there!

This event is organised by the VU Campus Center for AI & Health and Amsterdam Medical Data Science. Cover image credit: Alan Warburton / Better Images of AI

### PROGRAMME ###

See below for abstracts

09:30 - 10:15: Registration & coffee

10:15 - 11:25: Welcome by Prof. Mark Hoogendoorn and Dr. Paul Elbers; opening presentation 'Medical Foundation Models' by Prof. Martijn Schut

11:30 - 12:30: Parallel session 1a: 'Regression discontinuity designs with applications to orthopaedic data' by Dr. Stéphanie van der Pas

11:30 - 12:30: Parallel session 1b: 'Data Sharing for Digital Health' by Dr. Marieke Bak and Dr. Ronald Cornet

12:30 - 13:30: Lunch

13:30 - 14:30: Parallel session 2a: 'Reinforcement Learning in the medical domain' by Dr. Vincent Francois-Lavet

13:30 - 14:30: Parallel session 2b: 'Integrating research into practice: where science and startups meet' by Tariq Dam, MD, Marketa Ciharova MSc and Dr. Ward van Breda

14:35 - 15:35: Parallel session 3a: 'AI for health monitoring and coaching' by Dr. Maryam Amir Haeri, Dr. Armağan Karahanoğlu and Dr. Tessa Beinema

14:35 - 15:35: Parallel session 3b: 'Towards the deployment of AI tools for diagnostic imaging into practice' by Dr. Sharon Ong

15:35 - 15:50: Coffee

15:50 - 17:10: Panel session: 'How can AI really generate impact in health and health care?' by Dr. Stéphanie van den Berg, Prof. Martijn Schut, Dr. Wilson Silva, Dr. Stefano Trebeschi, Tariq Dam MD and Dr. Halima Mouhib (moderator)

17:10: Closing words, then drinks

### PRESENTATION DETAILS ###

10:25 - 11:25 - Amstel

prof. dr. Martijn Schut (Amsterdam UMC) - Medical Foundation Models

11:30 - 12:30 - Amstel

dr. Stéphanie van der Pas (Amsterdam UMC) - Causal conclusions from a cut-off

It is generally hard to establish the true cause of something. Are outcomes after a certain treatment better because the treatment is so good, or because only patients with a better prognosis received it? As it turns out, we can learn a surprising amount when a cut-off is used. Suppose for example that patients aged 65 or younger receive one treatment while older patients receive another. If the 64-year-olds have better outcomes on average than the 66-year-olds, we may ascribe this difference to the treatment. This is the intuition behind the regression discontinuity design. I will explain how the regression discontinuity design works, how the data to which it can be applied differs from data for which propensity score matching would be appropriate, and share some experiences from applying the regression discontinuity design to data from the Dutch Arthroplasty Register.
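For context, a minimal sketch of the idea behind a sharp regression discontinuity estimate, using simulated data and hypothetical variable names (not the speaker's code or the registry data): fit local linear regressions on each side of the cut-off and take the jump between them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: treatment assigned by an age cut-off at 65.
age = rng.uniform(50, 80, 2000)
treated = (age <= 65).astype(float)          # younger patients get treatment A
outcome = 10 + 0.1 * (age - 65) + 2.0 * treated + rng.normal(0, 1, age.size)

# Sharp regression discontinuity: fit local linear regressions on each side
# of the cut-off within a bandwidth, and take the jump at the cut-off.
cutoff, bandwidth = 65.0, 5.0
left = (age > cutoff - bandwidth) & (age <= cutoff)    # treated side
right = (age > cutoff) & (age < cutoff + bandwidth)    # control side

fit_left = np.polyfit(age[left] - cutoff, outcome[left], 1)
fit_right = np.polyfit(age[right] - cutoff, outcome[right], 1)

# Predicted outcomes at the cut-off from each side; their difference
# estimates the local treatment effect (close to the simulated 2.0 here).
effect = np.polyval(fit_left, 0.0) - np.polyval(fit_right, 0.0)
print(f"Estimated effect at the cut-off: {effect:.2f}")
```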

11:30 - 12:30 - Waver

dr. Marieke Bak & dr. Ronald Cornet (Amsterdam UMC) - Data sharing for digital health: technical and ethical aspects

13:30 - 14:30 - Amstel

dr. Vincent Francois-Lavet (VU) - Introduction to reinforcement learning (RL) and its applications to healthcare

This presentation will introduce the reinforcement learning paradigm, which is the part of machine learning that deals with learning a sequence of decisions to achieve some goals. The presentation will also discuss how these algorithms can be used in the domain of healthcare.
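As a rough illustration of the paradigm (a toy sketch, not part of the presentation; the environment, rewards and hyperparameters below are entirely made up), tabular Q-learning learns the value of each state-action pair from simulated interaction and then acts greedily:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Markov decision process: 3 patient states, 2 actions (e.g. two dosing
# choices). Transition probabilities and rewards are invented for illustration.
n_states, n_actions = 3, 2
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> next-state probs
R = rng.normal(0, 1, size=(n_states, n_actions))                  # immediate reward

# Tabular Q-learning: update value estimates from simulated interaction.
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

s = 0
for step in range(50_000):
    # Epsilon-greedy action selection.
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    s_next = rng.choice(n_states, p=P[s, a])
    target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    s = s_next

print("Greedy policy per state:", Q.argmax(axis=1))
```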

13:30 - 14:30 - Waver

Tariq Dam MD (Amsterdam UMC & VU) - Machine learning in the ICU

Imagine a bustling Intensive Care Unit (ICU), where patients' lives hang in the balance, and healthcare professionals work tirelessly to provide the best care possible. Despite the extraordinary efforts, there are gaps and challenges that limit our ability to leverage the vast amounts of critical data generated in these units. A call for collaboration during the COVID pandemic led to the first nationwide sharing of invaluable data on ICU patients. Together with industry partner Pacmed, we built a secure and scalable platform for transforming data from multiple sources into a multi-center research database. This included overcoming challenges in interoperability, data standardization and more, for which the joint effort of academia and industry proved successful. Based on this success, we continue to share data under ICUdata to improve the quality of care and treatment of ICU patients.

13:30 - 14:30 - Waver

dr. Ward van Breda (VU) & Marketa Ciharova (VU) - Machine-learning prediction of mental states using vocal, facial and physiological cues: Experience from start-up/university collaboration

Machine learning algorithms have recently experienced fundamental progress thanks to important theoretical and technical breakthroughs, such as new deep learning architectures, as well as increased computational capacity available to researchers. These methods, if found reliable and valid, may directly contribute to the prediction of mental states, and subsequently the prognosis of mental disorder onset, course and/or treatment response in the future. A novel source of data for the purpose of machine-learning-based predictive modelling has been multi-modal natural behaviour in the form of unstructured communicative responses captured in text, audio or video modalities. At the beginning of the presentation, it will be explained how such data can be used in machine-learning prediction of mental states. Subsequently, results and methods of two current studies will be reported. First, results of a scoping review of studies (N = 101) using text-, audio- and video-based machine-learning algorithms for automatic prediction of anxiety and PTSD, providing an overview of the current state of the art of the field, in a narrative accessible to health professionals. Second, methods and preliminary results of an experiment will be described, in which the Trier Social Stress Test was administered to college students (n = 16), their voice and facial expressions were recorded, and cardiovascular and electrodermal physiology were measured by an ambulatory system. Performance of an artificial deep neural network algorithm based on vocal and facial cues in predicting current levels of stress was evaluated, using the State-Trait Anxiety Inventory (STAI)-State and Subjective Units of Distress Scale (SUDS), completed by the participants at several time points, as ground truth. Moreover, it was explored whether adding physiological data to the prediction would improve the performance of the algorithm.
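Purely as an illustration of the kind of multimodal model described above (a hypothetical sketch, not the study's architecture; feature dimensions, layer sizes and data are invented), a late-fusion network could encode vocal, facial and optionally physiological features separately and regress a stress score:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical feature dimensions (not from the study): per time point, a
# vector of vocal features, facial-expression features and, optionally,
# physiological features; the target is a self-reported stress score
# (e.g. a SUDS rating rescaled to 0..1).
D_VOICE, D_FACE, D_PHYS = 40, 17, 4

class StressRegressor(nn.Module):
    """Late-fusion network: encode each modality, concatenate, regress."""
    def __init__(self, use_physiology: bool = False):
        super().__init__()
        self.use_physiology = use_physiology
        self.voice_enc = nn.Sequential(nn.Linear(D_VOICE, 32), nn.ReLU())
        self.face_enc = nn.Sequential(nn.Linear(D_FACE, 32), nn.ReLU())
        self.phys_enc = nn.Sequential(nn.Linear(D_PHYS, 8), nn.ReLU())
        fused = 64 + (8 if use_physiology else 0)
        self.head = nn.Sequential(nn.Linear(fused, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, voice, face, phys=None):
        parts = [self.voice_enc(voice), self.face_enc(face)]
        if self.use_physiology:
            parts.append(self.phys_enc(phys))
        return self.head(torch.cat(parts, dim=-1)).squeeze(-1)

# Random stand-in data, just to show the shape of the training loop.
voice, face, phys = torch.randn(128, D_VOICE), torch.randn(128, D_FACE), torch.randn(128, D_PHYS)
stress = torch.rand(128)

model = StressRegressor(use_physiology=True)  # toggle to compare with/without physiology
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(model(voice, face, phys), stress)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"Final training MSE: {loss.item():.3f}")
```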

14:35 - 15:35 - Amstel

dr. Armağan Karahanoğlu (TU Twente) - User perspectives in sense-making of stress data

Recently, a novel perspective has emerged in the field of self-tracking of health that emphasises the qualitative, subjective, and social aspects of learning about the self. This view suggests that data should not merely represent health indicators in numerical forms but should also enable individuals to gather meaningful insights from the data, facilitating a more holistic understanding of the self and informing actionable behavioural decisions. Relatedly, this presentation will delve into the sense-making processes involved in stress self-tracking. In the presentation, I will introduce the “data sense making in the self-tracking framework”, which emerged from a systematic review we conducted in 2021. I will examine four sense-making modes and seven activities individuals engage in to make sense of their technology-collected health data. In the end, I will discuss how stress measurement studies can benefit from a human-centred approach to stress-data sense-making.

14:35 - 15:35 - Amstel

dr. Tessa Beinema (TU Twente) - Personalised intelligent interactions with conversational agents in eHealth

eHealth technology has the potential to support people in their daily life, for example, as a means of prevention or for self-management in between visits with healthcare professionals. However, user engagement and personally relevant content are essential prerequisites for such tools to be effective. Conversational agents can be used to provide engaging and interactive support, but their communication should be tailored to individual users. In this presentation I will present our work towards personalised intelligent interactions with conversational agents in eHealth, mainly within the domain of lifestyle coaching.

14:35 - 15:35 - Amstel

dr. Maryam Amir Haeri (TU Twente) - From Anomalies to Diagnosis: Combining Unsupervised and Supervised Deep Learning for Detection of Interictal Epileptiform Discharges

Interictal epileptiform discharges (IEDs) in EEG recordings are important signatures of epilepsy as their presence is strongly associated with an increased risk of seizures. IEDs are relatively short-duration events (typically 70-250 ms) that can be viewed as stochastic anomalies in such recordings. Currently, visual analysis of the EEG by clinical experts is the gold standard. This process, however, is time-consuming, error-prone, and associated with a long learning period. Automating the detection of IEDs has the potential to significantly reduce review time and may serve to complement visual analysis. Supervised deep learning methods have shown potential for this purpose, but the scarcity of annotated data has limited their performance, which motivates exploring unsupervised and semi-supervised approaches that do not require extensive expert annotations. In this study, we aim to investigate the power of unsupervised and semi-supervised deep learning in detecting IEDs. By leveraging these approaches, we aim to overcome the limitations posed by the scarcity of annotated data and explore alternative methods for accurate and efficient IED detection.
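To illustrate the unsupervised side of such an approach (a toy sketch under invented assumptions, not the study's model or data), an autoencoder trained only on "normal" EEG windows can flag windows with high reconstruction error as candidate anomalies:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Hypothetical synthetic data: 1-second EEG windows (256 samples). "Normal"
# windows are noisy oscillations; a few windows get a short high-amplitude
# transient added to mimic a spike-like event. Illustrative only.
t = torch.linspace(0, 1, 256)
normal = torch.sin(2 * torch.pi * 10 * t) + 0.1 * torch.randn(1000, 256)
anomalous = normal[:20].clone()
anomalous[:, 120:130] += 3.0          # short high-amplitude transient

# Small autoencoder trained only on "normal" windows; windows it cannot
# reconstruct well are flagged as anomalies (unsupervised detection).
model = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 256),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    recon = model(normal)
    loss = ((recon - normal) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    err_normal = ((model(normal) - normal) ** 2).mean(dim=1)
    err_anom = ((model(anomalous) - anomalous) ** 2).mean(dim=1)

# Flag windows whose reconstruction error is far above the "normal" range.
threshold = err_normal.mean() + 3 * err_normal.std()
print("Flagged anomalous windows:", int((err_anom > threshold).sum()), "of", len(err_anom))
```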

14:35 - 15:35 - Waver

dr. Sharon Ong (Tilburg U) - Towards the deployment of AI tools for diagnostic imaging into practice

The number of patients and healthcare costs in hospitals are increasing substantially. Almost 90% of clinical data collected in ongoing patient care are images. These images include X-rays, CT-scans and MRI images. With the large amount of data from hospitals, my group develops and deploys algorithms for medical image analysis. Among our applications are the following: detection and localization of bone tumors from CT-scans, prediction of patients' outcomes after surgery from MRI images combined with clinical information and cognitive test performance scores, and detection of scaphoid fractures from X-rays. Our tools can aid in clinical decision-making. In radiology, our tools can relieve radiologists from repetitive, time-consuming tasks and detect findings which a radiologist may miss. Hence, radiologists or residents can use AI to enhance learning.

15:50 - 17:15 - Amstel

Panel session:

  • dr. Stéphanie van den Berg - Department of Learning, Data analytics and Technology (UTwente)
  • prof. dr. Martijn Schut - Translational Artificial Intelligence in Laboratory Medicine (AmsterdamUMC and VU)
  • Tariq Dam MD - Intensive Care Medicine (AmsterdamUMC), Computer Science (VU) and Pacmed BV
  • dr. Wilson Silva - Radiology Lab (Netherlands Cancer Institute)
  • dr. Stefano Trebeschi - Radiology Lab (Netherlands Cancer Institute)
  • dr. Halima Mouhib (VU - moderator)
