Stockholm University

Panagiotis Papapetrou
Professor, Deputy Head of Department

About me

I am currently Professor of Data Science at the Department of Computer and Systems Sciences (DSV) at Stockholm University, and Deputy Head of the same department since January 2022.

In addition, I am head of the Data Science group at Stockholm University and Adjunct Professor at the Department of Computer Science at Aalto University, Finland. Finally, I am a board member of the Swedish Artificial Intelligence Society (SAIS).

Below is a short biography:

  • April 2017 – present: Professor, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
  • December 2013 – March 2017: Associate Professor (Docent), Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
  • September 2013 – November 2013: Senior Lecturer (permanent position), Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden
  • September 2012 – November 2013: Lecturer and Programme Director of the IT Applications programme, Department of Computer Science and Information Systems, School of Business, Economics and Informatics, Birkbeck, University of London, United Kingdom
  • September 2009 – August 2012: Postdoctoral researcher, Department of Computer Science, Aalto University, Finland
  • June 2009: Received a Ph.D. in Computer Science
  • September 2006: Received an M.A. in Computer Science
  • January 2004: Admitted to the MA/PhD program at the Department of Computer Science, Boston University, USA
  • June 2003: Received a B.Sc. in Computer Science from the Department of Computer Science, University of Ioannina, Greece

Teaching

Previous courses at Stockholm University

  • DAMI: Data Mining (autumn 2013-2022)
  • ML: Machine Learning (spring 2022-2024)
  • DSHI: Data Science for Health Informatics (spring 2018-2019)
  • HIPI/PROHI: Projects in Health Informatics (autumn 2013, spring 2014-2019)
  • 5HI021: Current Research and Trends in Health Informatics (autumn 2018-2021)
  • BI: Business Intelligence (autumn 2014-2015)

Previous courses at other institutions

  • Algorithmic Methods of Data Mining, Aalto University, Finland (2010-2011)
  • Introduction to Computer Science, Boston University, USA (2008-2009)

Research

My research focuses on algorithmic data mining for large and complex datasets. I am particularly interested in the following areas:

  • time series classification and forecasting
  • interpretable and explainable machine learning
  • querying and mining of large and complex sequences
  • learning from electronic health records

Read more about the research activities and current advances within the research subject Artificial Intelligence and Data Science, as well as the Data Science group at DSV.

 

Doctoral students

Current:

  • Franco Rugolon
  • Lena Mondrejevski 
  • Maria Movin
  • Sayeh Sobhani
  • Tim Kreuzer
  • Ali Beikmohammadi (co-supervisor)
  • Guilherme Dinis Chaliane Junior (co-supervisor)

Graduated:

  • Alejandro Kuratomi (2024, co-supervisor)
  • Maria Bampa (2024)
  • Zed Lee (2024)
  • Luis Quintero (2024, co-supervisor)
  • Muhammad Afzaal (2024, co-supervisor)
  • Jonathan Rebane (2022)
  • Irvin Homem (2018)
  • Mohammad Jaber (2015)

 

Editorial memberships

  • Machine Learning Journal: Action Editor, since 2024
  • Data Mining and Knowledge Discovery: Action Editor, since 2018
  • ECML/PKDD Journal Track: Guest Editorial Board for DAMI and MACH, since 2014
  • Data Mining and Knowledge Discovery: Editorial Board, 2016-2017
  • Big Data Research: Special Issue Guest Editor (Big Data in Epidemics), 2020-2021
  • PVLDB: Reviewer Board, 2016

 

Program committee memberships

Area chair and senior PC member

  • SDM: SIAM Conference on Data Mining, 2024 (Area Chair)
  • ECML/PKDD: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2020-2022 (Area Chair)
  • IJCAI: International Joint Conference on Artificial Intelligence, 2017-2022 (Senior PC member)
  • ICDM: IEEE International Conference on Data Mining, 2021 (Area Chair)
  • ICDE: International Conference on Data Engineering, 2020 (PC Vice-Chair)
  • IDA: International Symposium on Intelligent Data Analysis, 2012-2016, 2017-2020 (Senior PC member)

PC member

  • SIGKDD: ACM Conference on Knowledge Discovery and Data Mining, 2011-2012, 2018-2024
  • SDM: SIAM Conference on Data Mining, 2022
  • ICDM: IEEE International Conference on Data Mining, 2020, 2022
  • ECML/PKDD: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2011-2019
  • AAAI: AAAI Conference on Artificial Intelligence, 2017-2018
  • Discovery Science: International Conference on Discovery Science, 2011-2013, 2016, 2018
  • CIKM: ACM International Conference on Information and Knowledge Management, 2010-2015, 2018-2020
  • VLDB: International Conference on Very Large Data Bases, 2016-2017
  • SSDBM: IEEE Conference on Scientific and Statistical Database Management, 2010

 

Tutorials at international conferences

 

Regular reviewer for the following journals

  • DAMI: Data Mining and Knowledge Discovery
  • MACH: Machine Learning
  • TKDE: IEEE Transactions on Knowledge and Data Engineering
  • TODS: ACM Transactions on Database Systems
  • TKDD: ACM Transactions on Knowledge Discovery and Data Mining
  • Neuroinformatics
  • KAIS: Knowledge and Information Systems

 

Conference/workshop organization

 

Research projects

Publications

Learn more about my research by visiting DBLP or Google Scholar.

A selection from the Stockholm University publication database

  • CounterFair: Group Counterfactuals for Bias Detection, Mitigation and Subgroup Identification

    2024. Alejandro Kuratomi Hernandez (et al.). IEEE International Conference on Data Mining (ICDM)

    Conference

    Counterfactual explanations can be used as a means to explain a model's decision process and to provide recommendations to users on how to improve their current status. The difficulty of applying these counterfactual recommendations from the user's perspective, also known as burden, may be used to assess the model's algorithmic fairness and to provide fair recommendations among different sensitive feature groups. We propose a novel model-agnostic, mathematical programming-based, group counterfactual algorithm that can: (1) detect biases via group counterfactual burden, (2) produce fair recommendations among sensitive groups and (3) identify relevant subgroups of instances through shared counterfactuals. We analyze these capabilities from the perspective of recourse fairness, and empirically compare our proposed method with the state-of-the-art algorithms for group counterfactual generation in order to assess the bias identification and the capabilities in group counterfactual effectiveness and burden minimization.

    Read more about CounterFair
  • Glacier: guided locally constrained counterfactual explanations for time series classification

    2024. Zhendong Wang (et al.). Machine Learning 113, 4639-4669

    Article

    In machine learning applications, there is a need to obtain predictive models of high performance and, most importantly, to allow end-users and practitioners to understand and act on their predictions. One way to obtain such understanding is via counterfactuals, that provide sample-based explanations in the form of recommendations on which features need to be modified from a test example so that the classification outcome of a given classifier changes from an undesired outcome to a desired one. This paper focuses on the domain of time series classification, more specifically, on defining counterfactual explanations for univariate time series. We propose Glacier, a model-agnostic method for generating locally-constrained counterfactual explanations for time series classification using gradient search either on the original space or on a latent space that is learned through an auto-encoder. An additional flexibility of our method is the inclusion of constraints on the counterfactual generation process that favour applying changes to particular time series points or segments while discouraging changing others. The main purpose of these constraints is to ensure more reliable counterfactuals, while increasing the efficiency of the counterfactual generation process. Two particular types of constraints are considered, i.e., example-specific constraints and global constraints. We conduct extensive experiments on 40 datasets from the UCR archive, comparing different instantiations of Glacier against three competitors. Our findings suggest that Glacier outperforms the three competitors in terms of two common metrics for counterfactuals, i.e., proximity and compactness. Moreover, Glacier obtains comparable counterfactual validity compared to the best of the three competitors. Finally, when comparing the unconstrained variant of Glacier to the constraint-based variants, we conclude that the inclusion of example-specific and global constraints yields a good performance while demonstrating the trade-off between the different metrics.

    Read more about Glacier (see also the illustrative counterfactual sketch at the end of this page)
  • Ijuice: integer JUstIfied counterfactual explanations

    2024. Alejandro Kuratomi Hernandez (et al.). Machine Learning 113, 5731-5771

    Article

    Counterfactual explanations modify the feature values of an instance in order to alter its prediction from an undesired to a desired label. As such, they are highly useful for providing trustworthy interpretations of decision-making in domains where complex and opaque machine learning algorithms are utilized. To guarantee their quality and promote user trust, they need to satisfy the faithfulness desideratum, when supported by the data distribution. We hereby propose a counterfactual generation algorithm for mixed-feature spaces that prioritizes faithfulness through k-justification, a novel counterfactual property introduced in this paper. The proposed algorithm employs a graph representation of the search space and provides counterfactuals by solving an integer program. In addition, the algorithm is classifier-agnostic and is not dependent on the order in which the feature space is explored. In our empirical evaluation, we demonstrate that it guarantees k-justification while showing comparable performance to state-of-the-art methods in feasibility, sparsity, and proximity.

    Read more about Ijuice
  • M-ClustEHR: A multimodal clustering approach for electronic health records

    2024. Maria Bampa (et al.). Artificial Intelligence in Medicine 154

    Article

    Sepsis refers to a potentially life-threatening situation where the immune system of the human body has an extreme response to an infection. In the presence of underlying comorbidities, the situation can become even worse and result in death. Employing unsupervised machine learning techniques, such as clustering, can assist in providing a better understanding of patient phenotypes by unveiling subgroups characterized by distinct sepsis progression and treatment patterns. More concretely, this study introduces M-ClustEHR, a clustering approach that utilizes medical data of multiple modalities by employing a multimodal autoencoder for learning comprehensive sepsis patient representations. M-ClustEHR consistently outperforms traditional clustering approaches in terms of several internal clustering performance metrics, as well as cluster stability in identifying phenotypes in the sepsis cohort. The unveiled patterns, supported by existing medical literature and clinicians, highlight the importance of multimodal clustering for advancing personalized sepsis care.

    Read more about M-ClustEHR
  • Counterfactual Explanations for Time Series Forecasting

    2024. Zhendong Wang (et al.). 2023 IEEE International Conference on Data Mining (ICDM), 1391-1396

    Conference

    Among recent developments in time series forecasting methods, deep forecasting models have gained popularity as they can utilize hidden feature patterns in time series to improve forecasting performance. Nevertheless, the majority of current deep forecasting models are opaque, hence making it challenging to interpret the results. While counterfactual explanations have been extensively employed as a post-hoc approach for explaining classification models, their application to forecasting models still remains underexplored. In this paper, we formulate the novel problem of counterfactual generation for time series forecasting, and propose an algorithm, called ForecastCF, that solves the problem by applying gradient-based perturbations to the original time series. The perturbations are further guided by imposing constraints to the forecasted values. We experimentally evaluate ForecastCF using four state-of-the-art deep model architectures and compare to two baselines. ForecastCF outperforms the baselines in terms of counterfactual validity and data manifold closeness, while generating meaningful and relevant counterfactuals for various forecasting tasks.

    Read more about Counterfactual Explanations for Time Series Forecasting
  • COMET: Constrained Counterfactual Explanations for Patient Glucose Multivariate Forecasting

    2024. Zhendong Wang (et al.). Annual IEEE Symposium on Computer-Based Medical Systems, 502-507

    Conference

    Applying deep learning models for healthcare-related forecasting applications has been widely adopted, such as leveraging glucose monitoring data of diabetes patients to predict hyperglycaemic or hypoglycaemic events. However, most deep learning models are considered black-boxes; hence, the model predictions are not interpretable and may not offer actionable insights into medical practitioners’ decisions. Previous work has shown that counterfactual explanations can be applied in forecasting tasks by suggesting counterfactual changes in time series inputs to achieve the desired forecasting outcome. This study proposes a generalized multivariate forecasting setup of counterfactual generation by introducing a novel approach, COMET, which imposes three domain-specific constraint mechanisms to provide counterfactual explanations for glucose forecasting. Moreover, we conduct the experimental evaluation using two diabetes patient datasets to demonstrate the effectiveness of our proposed approach in generating realistic counterfactual changes in comparison with a baseline approach. Our qualitative analysis evaluates examples to validate that the counterfactual samples are clinically relevant and can effectively lead the patients to achieve a normal range of predicted glucose levels by suggesting changes to the treatment variables.

    Read more about COMET
  • Artificial intelligence in digital twins—A systematic literature review

    2024. Tim Kreuzer, Panagiotis Papapetrou, Jelena Zdravkovic. Data & Knowledge Engineering 151

    Article

    Artificial intelligence and digital twins have become more popular in recent years and have seen usage across different application domains for various scenarios. This study reviews the literature at the intersection of the two fields, where digital twins integrate an artificial intelligence component. We follow a systematic literature review approach, analyzing a total of 149 related studies. In the assessed literature, a variety of problems are approached with an artificial intelligence-integrated digital twin, demonstrating its applicability across different fields. Our findings indicate that there is a lack of in-depth modeling approaches regarding the digital twin, while many articles focus on the implementation and testing of the artificial intelligence component. The majority of publications do not demonstrate a virtual-to-physical connection between the digital twin and the real-world system. Further, only a small portion of studies base their digital twin on real-time data from a physical system, implementing a physical-to-virtual connection.

    Read more about Artificial intelligence in digital twins—A systematic literature review
  • MASICU: A Multimodal Attention-based classifier for Sepsis mortality prediction in the ICU

    2024. Lena Mondrejevski (et al.). 2024 IEEE 37th International Symposium on Computer-Based Medical Systems (CBMS), 326-331

    Conference

    Sepsis poses a significant threat to public health, causing millions of deaths annually. While treatable with timely intervention, accurately identifying at-risk patients remains challenging due to the condition’s complexity. Traditional scoring systems have been utilized, but their effectiveness has waned over time. Recognizing the need for comprehensive assessment, we introduce MASICU, a novel machine learning model architecture tailored for predicting ICU sepsis mortality. MASICU is a novel multimodal, attention-based classification model that integrates interpretability within an ICU setting. Our model incorporates multiple modalities and multimodal fusion strategies and prioritizes interpretability through different attention mechanisms. By leveraging both static and temporal features, MASICU offers a holistic view of the patient’s clinical status, enhancing predictive accuracy while providing clinically relevant insights.

    Read more about MASICU
  • Z-Time: efficient and effective interpretable multivariate time series classification

    2024. Zed Lee, Tony Lindgren, Panagiotis Papapetrou. Data Mining and Knowledge Discovery 38 (1), 206-236

    Article

    Multivariate time series classification has become popular due to its prevalence in many real-world applications. However, most state-of-the-art focuses on improving classification performance, with the best-performing models typically opaque. Interpretable multivariate time series classifiers have been recently introduced, but none can maintain sufficient levels of efficiency and effectiveness together with interpretability. We introduce Z-Time, a novel algorithm for effective and efficient interpretable multivariate time series classification. Z-Time employs temporal abstraction and temporal relations of event intervals to create interpretable features across multiple time series dimensions. In our experimental evaluation on the UEA multivariate time series datasets, Z-Time achieves comparable effectiveness to state-of-the-art non-interpretable multivariate classifiers while being faster than all interpretable multivariate classifiers. We also demonstrate that Z-Time is more robust to missing values and inter-dimensional orders, compared to its interpretable competitors.

    Read more about Z-Time
  • Explaining Black Box Reinforcement Learning Agents Through Counterfactual Policies

    2023. Maria Movin (et al.). Advances in Intelligent Data Analysis XXI, 314-326

    Conference

    Despite the increased attention to explainable AI, explainability methods for understanding reinforcement learning (RL) agents have not been extensively studied. Failing to understand the agent’s behavior may cause reduced productivity in human-agent collaborations, or mistrust in automated RL systems. RL agents are trained to optimize a long term cumulative reward, and in this work we formulate a novel problem on how to generate explanations on when an agent could have taken another action to optimize an alternative reward. More concretely, we aim at answering the question: What does an RL agent need to do differently to achieve an alternative target outcome? We introduce the concept of a counterfactual policy, as a policy trained to explain in which states a black box agent could have taken an alternative action to achieve another desired outcome. The usefulness of counterfactual policies is demonstrated in two experiments with different use-cases, and the results suggest that our solution can provide interpretable explanations.

    Read more about Explaining Black Box Reinforcement Learning Agents Through Counterfactual Policies
  • Finding Local Groupings of Time Series

    2023. Zed Lee, Marco Trincavelli, Panagiotis Papapetrou. Machine Learning and Knowledge Discovery in Databases, 70-86

    Conference

    Collections of time series can be grouped over time both globally, over their whole time span, as well as locally, over several common time ranges, depending on the similarity patterns they share. In addition, local groupings can be persistent over time, defining associations of local groupings. In this paper, we introduce Z-Grouping, a novel framework for finding local groupings and their associations. Our solution converts time series to a set of event label channels by applying a temporal abstraction function and finds local groupings of maximized time span and time series instance members. A grouping-instance matrix structure is also exploited to detect associations of contiguous local groupings sharing common member instances. Finally, the validity of each local grouping is assessed against predefined global groupings. We demonstrate the ability of Z-Grouping to find local groupings without size constraints on time ranges on a synthetic dataset, three real-world datasets, and 128 UCR datasets, against four competitors.

    Read more about Finding Local Groupings of Time Series
  • Measuring the Burden of (Un)fairness Using Counterfactuals

    2023. Alejandro Kuratomi Hernandez (et al.). Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 402-417

    Conference

    In this paper, we use counterfactual explanations to offer a new perspective on fairness, that, besides accuracy, accounts also for the difficulty or burden to achieve fairness. We first gather a set of fairness-related datasets and implement a classifier to extract the set of false negative test instances to generate different counterfactual explanations on them. We subsequently calculate two measures: the false negative ratio of the set of test instances, and the distance (also called burden) from these instances to their corresponding counterfactuals, aggregated by sensitive feature groups. The first measure is an accuracy-based estimation of the classifier biases against sensitive groups, whilst the second is a counterfactual-based assessment of the difficulty each of these groups has of reaching their corresponding desired ground truth label. We promote the idea that a counterfactual and an accuracy-based fairness measure may assess fairness in a more holistic manner, whilst also providing interpretability. We then propose and evaluate, on these datasets, a measure called Normalized Accuracy Weighted Burden, which is more consistent than only its accuracy or its counterfactual components alone, considering both false negative ratios and counterfactual distance per sensitive feature. We believe this measure would be more adequate to assess classifier fairness and promote the design of better performing algorithms.

    Read more about Measuring the Burden of (Un)fairness Using Counterfactuals
  • ORANGE: Opposite-label soRting for tANGent Explanations in heterogeneous spaces

    2023. Alejandro Kuratomi Hernandez (et al.). 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), 1-10

    Conference

    Most real-world datasets have a heterogeneous feature space composed of binary, categorical, ordinal, and continuous features. However, the currently available local surrogate explainability algorithms do not consider this aspect, generating infeasible neighborhood centers which may provide erroneous explanations. To overcome this issue, we propose ORANGE, a local surrogate explainability algorithm that generates high-accuracy and high-fidelity explanations in heterogeneous spaces. ORANGE has three main components: (1) it searches for the closest feasible counterfactual point to a given instance of interest by considering feasible values in the features to ensure that the explanation is built around the closest feasible instance and not any, potentially non-existent instance in space; (2) it generates a set of neighboring points around this close feasible point based on the correlations among features to ensure that the relationship among features is preserved inside the neighborhood; and (3) the generated instances are weighted, firstly based on their distance to the decision boundary, and secondly based on the disagreement between the predicted labels of the global model and a surrogate model trained on the neighborhood. Our extensive experiments on synthetic and public datasets show that the performance achieved by ORANGE is best-in-class in both explanation accuracy and fidelity.

    Read more about ORANGE
  • Style-transfer counterfactual explanations: An application to mortality prevention of ICU patients

    2023. Zhendong Wang (et al.). Artificial Intelligence in Medicine 135

    Article

    In recent years, machine learning methods have been rapidly adopted in the medical domain. However, current state-of-the-art medical mining methods usually produce opaque, black-box models. To address the lack of model transparency, substantial attention has been given to developing interpretable machine learning models. In the medical domain, counterfactuals can provide example-based explanations for predictions, and show practitioners the modifications required to change a prediction from an undesired to a desired state. In this paper, we propose a counterfactual solution MedSeqCF for preventing the mortality of three cohorts of ICU patients, by representing their electronic health records as medical event sequences, and generating counterfactuals by adopting and employing a text style-transfer technique. We propose three model augmentations for MedSeqCF to integrate additional medical knowledge for generating more trustworthy counterfactuals. Experimental results on the MIMIC-III dataset strongly suggest that augmented style-transfer methods can be effectively adapted for the problem of counterfactual explanations in healthcare applications and can further improve the model performance in terms of validity, BLEU-4, local outlier factor, and edit distance. In addition, our qualitative analysis of the results by consultation with medical experts suggests that our style-transfer solutions can generate clinically relevant and actionable counterfactual explanations.

    Read more about Style-transfer counterfactual explanations
  • Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines

    2023. Rami Mochaourab (et al.). Machine Learning and Knowledge Discovery in Databases, 662-666

    Conference

    We demonstrate the construction of robust counterfactual explanations for support vector machines (SVM), where the privacy mechanism that publicly releases the classifier guarantees differential privacy. Privacy preservation is essential when dealing with sensitive data, such as in applications within the health domain. In addition, providing explanations for machine learning predictions is an important requirement within so-called high risk applications, as referred to in the EU AI Act. Thus, the innovative aspects of this work correspond to studying the interaction between three desired aspects: accuracy, privacy, and explainability. The SVM classification accuracy is affected by the privacy mechanism through the introduced perturbations in the classifier weights. Consequently, we need to consider a trade-off between accuracy and privacy. In addition, counterfactual explanations, which quantify the smallest changes to selected data instances in order to change their classification, may become not credible when we have data privacy guarantees. Hence, robustness for counterfactual explanations is needed in order to create confidence about the credibility of the explanations. Our demonstrator provides an interactive environment to show the interplay between the considered aspects of accuracy, privacy, and explainability.

    Read more about Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines
  • EpidRLearn: Learning Intervention Strategies for Epidemics with Reinforcement Learning

    2022. Maria Bampa (et al.). Artificial Intelligence in Medicine, 189-199

    Conference

    Epidemics of infectious diseases can pose a serious threat to public health and the global economy. Despite scientific advances, containment and mitigation of infectious diseases remain a challenging task. In this paper, we investigate the potential of reinforcement learning as a decision making tool for epidemic control by constructing a deep Reinforcement Learning simulator, called EpidRLearn, composed of a contact-based, age-structured extension of the SEIR compartmental model, referred to as C-SEIR. We evaluate EpidRLearn by comparing the learned policies to two deterministic policy baselines. We further assess our reward function by integrating an alternative reward into our deep RL model. The experimental evaluation indicates that deep reinforcement learning has the potential of learning useful policies under complex epidemiological models and large state spaces for the mitigation of infectious diseases, with a focus on COVID-19.

    Read more about EpidRLearn
  • Excite-O-Meter: an Open-Source Unity Plugin to Analyze Heart Activity and Movement Trajectories in Custom VR Environments

    2022. Luis Quintero (et al.). 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 46-47

    Conference

    This article explains the new features of the Excite-O-Meter, an open-source tool that enables the collection of bodily data, real-time feature extraction, and post-session data visualization in any custom VR environment developed in Unity. Besides analyzing heart activity, the tool now supports multidimensional time series to study motion trajectories in VR. The paper presents the main functionalities and discusses the relevance of the tool for behavioral and psychophysiological research.

    Read more about Excite-O-Meter
  • FLICU: A Federated Learning Workflow for Intensive Care Unit Mortality Prediction

    2022. Lena Mondrejevski (et al.). 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS), 32-37

    Conference

    Although Machine Learning can be seen as a promising tool to improve clinical decision-making, it remains limited by access to healthcare data. Healthcare data is sensitive, requiring strict privacy practices, and typically stored in data silos, making traditional Machine Learning challenging. Federated Learning can counteract those limitations by training Machine Learning models over data silos while keeping the sensitive data localized. This study proposes a Federated Learning workflow for Intensive Care Unit mortality prediction. Hereby, the applicability of Federated Learning as an alternative to Centralized Machine Learning and Local Machine Learning is investigated by introducing Federated Learning to the binary classification problem of predicting Intensive Care Unit mortality. We extract multivariate time series data from the MIMIC-III database (lab values and vital signs), and benchmark the predictive performance of four deep sequential classifiers (FRNN, LSTM, GRU, and 1DCNN) varying the patient history window lengths (8h, 16h, 24h, and 48h) and the number of Federated Learning clients (2, 4, and 8). The experiments demonstrate that both Centralized Machine Learning and Federated Learning are comparable in terms of AUPRC and F1-score. Furthermore, the federated approach shows superior performance over Local Machine Learning. Thus, Federated Learning can be seen as a valid and privacy-preserving alternative to Centralized Machine Learning for classifying Intensive Care Unit mortality when the sharing of sensitive patient data between hospitals is not possible.

    Read more about FLICU
  • JUICE: JUstIfied Counterfactual Explanations

    2022. Alejandro Kuratomi Hernandez (et al.). Discovery Science, 493-508

    Conference

    Complex, highly accurate machine learning algorithms support decision-making processes with large and intricate datasets. However, these models have low explainability. Counterfactual explanation is a technique that tries to find a set of feature changes on a given instance to modify the model's prediction output from an undesired to a desired class. To obtain better explanations, it is crucial to generate faithful counterfactuals, supported by and connected to observations and the knowledge constructed on them. In this study, we propose a novel counterfactual generation algorithm that provides faithfulness by justification, which may increase developers' and users' trust in the explanations by supporting the counterfactuals with a known observation. The proposed algorithm guarantees justification for mixed-features spaces and we show it performs similarly with respect to state-of-the-art algorithms across other metrics such as proximity, sparsity, and feasibility. Finally, we introduce the first model-agnostic algorithm to verify counterfactual justification in mixed-features spaces.

    Read more about JUICE
  • Post Hoc Explainability for Time Series Classification. Toward a signal processing perspective

    2022. Rami Mochaourab (et al.). IEEE Signal Processing Magazine 39 (4), 119-129

    Article

    Time series data correspond to observations of phenomena that are recorded over time [1]. Such data are encountered regularly in a wide range of applications, such as speech and music recognition, monitoring health and medical diagnosis, financial analysis, motion tracking, and shape identification, to name a few. With such a diversity of applications and the large variations in their characteristics, time series classification is a complex and challenging task. One of the fundamental steps in the design of time series classifiers is that of defining or constructing the discriminant features that help differentiate between classes. This is typically achieved by designing novel representation techniques [2] that transform the raw time series data to a new data domain, where subsequently a classifier is trained on the transformed data, such as one-nearest neighbors [3] or random forests [4]. In recent time series classification approaches, deep neural network models have been employed that are able to jointly learn a representation of time series and perform classification [5]. In many of these sophisticated approaches, the discriminant features tend to be complicated to analyze and interpret, given the high degree of nonlinearity.

    Read more about Post Hoc Explainability for Time Series Classification. Toward a signal processing perspective
  • Automated Grading of Exam Responses: An Extensive Classification Benchmark

    2021. Jimmy Ljungman (et al.). Discovery Science, 3-18

    Conference

    Automated grading of free-text exam responses is a very challenging task due to the complex nature of the problem, such as lack of training data and biased ground-truth of the graders. In this paper, we focus on the automated grading of free-text responses. We formulate the problem as a binary classification problem of two class labels: low- and high-grade. We present a benchmark on four machine learning methods using three experiment protocols on two real-world datasets, one from Cyber-crime exams in Arabic and one from Data Mining exams in English that is presented for the first time in this work. By providing various metrics for binary classification and answer ranking, we illustrate the benefits and drawbacks of the benchmarked methods. Our results suggest that standard models with individual word representations can in some cases achieve competitive predictive performance against deep neural language models using context-based representations on both binary classification and answer ranking for free-text response grading tasks. Lastly, we discuss the pedagogical implications of our findings by identifying potential pitfalls and challenges when building predictive models for such tasks.

    Read more about Automated Grading of Exam Responses
  • Assessing the Clinical Validity of Attention-based and SHAP Temporal Explanations for Adverse Drug Event Predictions

    2021. Jonathan Rebane (et al.). 2021 IEEE 34th International Symposium on Computer-Based Medical Systems, 235-240

    Conference

    Attention mechanisms form the basis of providing temporal explanations for a variety of state-of-the-art recurrent neural network (RNN) based architectures. However, evidence is lacking that attention mechanisms are capable of providing sufficiently valid medical explanations. In this study we focus on the quality of temporal explanations for the medical problem of adverse drug event (ADE) prediction by comparing explanations globally and locally provided by an attention-based RNN architecture against those provided by a more basic RNN using the post-hoc SHAP framework, a popular alternative option which adheres to several desirable explainability properties. The validity of this comparison is supported by medical expert knowledge gathered for the purpose of this study. This investigation has uncovered that these explanation methods both possess appropriateness for ADE explanations and may be used complementarily, due to SHAP providing more clinically appropriate global explanations and attention mechanisms capturing more clinically appropriate local explanations. Additional feedback from medical experts reveals that SHAP may be more applicable to real-time clinical encounters, in which efficiency must be prioritised, over attention explanations which possess properties more appropriate for offline analyses.

    Read more about Assessing the Clinical Validity of Attention-based and SHAP Temporal Explanations for Adverse Drug Event Predictions
  • Random subspace and random projection nearest neighbor ensembles for high dimensional data

    2022. Sampath Deegalla (et al.). Expert Systems with Applications 191

    Article

    The random subspace and the random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both the random subspace and the random projection method lead to improvements in predictive performance compared to using the standard nearest neighbor classifier, while the best method to use depends on the type of data considered; for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as attribute to instance ratio and Fisher’s discriminant ratio, provides some more detailed indications on what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.

    Read more about Random subspace and random projection nearest neighbor ensembles for high dimensional data
  • Learning Time Series Counterfactuals via Latent Space Representations

    2021. Zhendong Wang (et al.). Discovery Science, 369-384

    Conference

    Counterfactual explanations provide sample-based explanations of which features need to be modified in the original sample to change the classification result from an undesired state to a desired one; hence, they provide interpretability of the model. Previous work on LatentCF presents an algorithm for image data that employs auto-encoder models to directly transform original samples into counterfactuals in a latent space representation. In our paper, we adapt the approach to time series classification and propose an improved algorithm named LatentCF++ which introduces additional constraints in the counterfactual generation process. We conduct an extensive experiment on a total of 40 datasets from the UCR archive, comparing to current state-of-the-art methods. Based on our evaluation metrics, we show that the LatentCF++ framework can with high probability generate valid counterfactuals and achieve comparable explanations to current state-of-the-art. Our proposed approach can also generate counterfactuals that are considerably closer to the decision boundary in terms of margin difference.

    Read more about Learning Time Series Counterfactuals via Latent Space Representations
  • RTEX: A novel framework for ranking, tagging, and explanatory diagnostic captioning of radiography exams

    2021. Vasiliki Kougia (et al.). Journal of the American Medical Informatics Association (JAMIA) 28 (8), 1651-1659

    Article

    Objective: The study sought to assist practitioners in identifying and prioritizing radiography exams that are more likely to contain abnormalities, and provide them with a diagnosis in order to manage heavy workload more efficiently (eg, during a pandemic) or avoid mistakes due to tiredness.

    Materials and Methods: This article introduces RTEx, a novel framework for (1) ranking radiography exams based on their probability to be abnormal, (2) generating abnormality tags for abnormal exams, and (3) providing a diagnostic explanation in natural language for each abnormal exam. Our framework consists of deep learning and retrieval methods and is assessed on 2 publicly available datasets.

    Results: For ranking, RTEx outperforms its competitors in terms of nDCG@k. The tagging component outperforms 2 strong competitor methods in terms of F1. Moreover, the diagnostic captioning component, which exploits the predicted tags to constrain the captioning process, outperforms 4 captioning competitors with respect to clinical precision and recall.

    Discussion: RTEx prioritizes abnormal exams toward the improvement of the healthcare workflow by introducing a ranking method. Also, for each abnormal radiography exam RTEx generates a set of abnormality tags alongside a diagnostic text to explain the tags and guide the medical expert. Human evaluation of the produced text shows that employing the generated tags offers consistency to the clinical correctness and that the sentences of each text have high clinical accuracy.

    Conclusions: This is the first framework that successfully combines 3 tasks: ranking, tagging, and diagnostic captioning with focus on radiography exams that contain abnormalities.

    Read more about RTEX
  • SMILE

    2021. Jonathan Rebane (et al.). Data Mining and Knowledge Discovery 35 (1), 372-399

    Article

    In this paper, we study the problem of classification of sequences of temporal intervals. Our main contribution is a novel framework, which we call SMILE, for extracting relevant features from interval sequences to construct classifiers. SMILE introduces the notion of utilizing random temporal abstraction features, which we define as e-lets, as a means to capture information pertaining to class-discriminatory events which occur across the span of complete interval sequences. Our empirical evaluation is applied to a wide array of benchmark data sets and fourteen novel datasets for adverse drug event detection. We demonstrate how the introduction of simple sequential features, followed by progressively more complex features, each improve classification performance. Importantly, this investigation demonstrates that SMILE significantly improves AUC performance over the current state-of-the-art. The investigation also reveals that the selection of underlying classification algorithm is important to achieve superior predictive performance, and how the number of features influences the performance of our framework.

    Read more about SMILE
  • Z-Embedding: A Spectral Representation of Event Intervals for Efficient Clustering and Classification

    2021. Zed Lee, Sarunas Girdzijauskas, Panagiotis Papapetrou. Machine Learning and Knowledge Discovery in Databases, 710-726

    Conference

    Sequences of event intervals occur in several application domains, while their inherent complexity hinders scalable solutions to tasks such as clustering and classification. In this paper, we propose a novel spectral embedding representation of event interval sequences that relies on bipartite graphs. More concretely, each event interval sequence is represented by a bipartite graph by following three main steps: (1) creating a hash table that can quickly convert a collection of event interval sequences into a bipartite graph representation, (2) creating and regularizing a bi-adjacency matrix corresponding to the bipartite graph, (3) defining a spectral embedding mapping on the bi-adjacency matrix. In addition, we show that substantial improvements can be achieved with regard to classification performance through pruning parameters that capture the nature of the relations formed by the event intervals. We demonstrate through extensive experimental evaluation on five real-world datasets that our approach can obtain runtime speedups of up to two orders of magnitude compared to other state-of-the-art methods and similar or better clustering and classification performance.

    Read more about Z-Embedding
  • Z-Miner

    2020. Zed Lee, Tony Lindgren, Panagiotis Papapetrou. KDD '20, 524-534

    Conference

    Mining frequent patterns of event intervals from a large collection of interval sequences is a problem that appears in several application domains. In this paper, we propose Z-Miner, a novel algorithm for solving this problem that addresses the deficiencies of existing competitors by employing two novel data structures: Z-Table, a hierarchical hash-based data structure for time-efficient candidate generation and support count, and Z-Arrangement, a data structure for efficient memory consumption. The proposed algorithm is able to handle patterns with repetitions of the same event label, allowing for gap and error tolerance constraints, as well as keeping track of the exact occurrences of the extracted frequent patterns. Our experimental evaluation on eight real-world and six synthetic datasets demonstrates the superiority of Z-Miner against four state-of-the-art competitors in terms of runtime efficiency and memory footprint.

    Read more about Z-Miner
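Several of the entries above (Z-Miner, Z-Embedding, Z-Time, Z-Grouping) operate on sequences of labelled event intervals rather than raw numeric time series. As a purely illustrative, minimal sketch of the kind of temporal abstraction that produces such intervals, the following Python snippet maps a numeric series to symbolic labels and merges runs of equal labels into (label, start, end) intervals. The thresholds, labels, and data are invented for the example, and the code is not taken from any of the publications listed above.

```python
# Minimal illustrative sketch (not from the publications above): turn a numeric
# time series into labelled event intervals via a simple temporal abstraction.
import numpy as np

def temporal_abstraction(series, low, high):
    """Map each value to a symbol ('low', 'normal', 'high') and merge runs of
    consecutive equal symbols into event intervals (label, start, end)."""
    labels = np.where(series < low, "low",
                      np.where(series > high, "high", "normal"))
    intervals, start = [], 0
    for t in range(1, len(labels) + 1):
        # Close the current interval when the label changes or the series ends.
        if t == len(labels) or labels[t] != labels[start]:
            intervals.append((str(labels[start]), start, t - 1))
            start = t
    return intervals

# Hypothetical hourly measurements and thresholds, chosen only for illustration.
rng = np.random.default_rng(1)
values = rng.normal(loc=6.0, scale=2.0, size=24)
print(temporal_abstraction(values, low=4.0, high=8.0))
```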

Show all publications by Panagiotis Papapetrou at Stockholm University
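Many of the publications listed above (for example Glacier, LatentCF++, ForecastCF, and COMET) revolve around counterfactual explanations: perturbing an input so that a model's prediction changes from an undesired to a desired outcome while staying close to the original. The sketch below is a rough, self-contained illustration of that general idea using gradient descent against a toy differentiable (logistic-regression) classifier; the weights, loss, and hyperparameters are invented for the example, and this is not the implementation used in any of the papers.

```python
# Toy illustration (not from the publications above): gradient-based search for
# a counterfactual of a univariate time series under a linear classifier.
import numpy as np

rng = np.random.default_rng(0)
T = 50
w = rng.normal(size=T)   # hypothetical, pre-trained classifier weights
b = 0.0                  # bias term

def predict_proba(x):
    """Probability of the 'desired' class under a logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def counterfactual(x, target=0.9, lam=0.1, lr=0.05, steps=500):
    """Perturb x by gradient descent until the desired-class probability
    exceeds `target`; a quadratic penalty (weight `lam`) keeps the
    counterfactual close to the original series (proximity)."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        if p >= target:
            break
        # Gradient of the loss  -log(p) + lam * ||x_cf - x||^2  w.r.t. x_cf.
        grad = -(1.0 - p) * w + 2.0 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = rng.normal(size=T)          # an input series to be explained
x_cf = counterfactual(x)
print(f"original p = {predict_proba(x):.2f}, counterfactual p = {predict_proba(x_cf):.2f}")
print(f"mean absolute change per time point: {np.abs(x_cf - x).mean():.3f}")
```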