Alejandro Kuratomi Hernandez, Teaching Assistant
About me
I am a Ph.D. student at the Department of Computer and Systems Sciences (DSV) at Stockholm University. My main research areas are Applied Machine Learning and Machine Learning Interpretability. I hold an M.Sc. in Mechatronics from Kungliga Tekniska Högskolan (KTH) in Stockholm, Sweden, a B.Sc. in Mechanical Engineering, and a B.Sc. in Industrial Engineering from Universidad de Los Andes in Bogotá, Colombia. I have four years of professional experience and enjoy collaborating with industry, helping develop Artificial Intelligence projects that add value to companies across different industrial sectors. More information on my research interests, experience, and publications is available at: https://alku.blogs.dsv.su.se/
Teaching
2024
- Teaching Assistant: Machine Learning (VT 2024), M.Sc. Course, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden. Direct supervisor: Professor Panagiotis Papapetrou.
2023
- Master Thesis Main Supervisor: Master thesis title: Image Counterfactual Explanations using Deep Generative Adversarial Networks (15 ECTS). Supervised student: Ning Wang.
- Master Thesis Main Supervisor: Master thesis title: A Distance Measure For Both Continuous and Categorical features in a Data Vector. (30 ECTS). Supervised students: Salam Hilmi, Elina Zake.
- Master Thesis Main Supervisor: Master thesis title: Application of Inherently Interpretable, Highly Accurate Machine Learning. (30 ECTS). Supervised students: Maria Luiza Chirita, Bernardo Cunha de Miranda.
- Master Thesis Main Supervisor: Master thesis title: Developing a Highly Accurate, Locally Interpretable Neural Network for Medical Image Analysis (30 ECTS). Supervised student: Rony Ventura.
- Master Thesis Main Supervisor: Master thesis title: Improving XAI Explanations for Clinical Decision-Making – the Physicians’ Perspective (30 ECTS). Supervised student: Ulf Lesley.
- Teaching Assistant: Principles and Foundations of Artificial Intelligence (HT 2023), M. Sc. Course, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden. Direct supervisor: Tony Lindgren, Ph.D.
2022
- Master Thesis Co-supervisor: Master thesis title: Design of an Interpretability Measuring Scale (30 ECTS). Supervised student: Tauri Viil. Main supervisor: Jaakko Hollmén, Ph.D.
- Master Thesis Co-supervisor: Master thesis title: Discovering Characteristics and Actionable Features for the Prevention of Cancer (30 ECTS). Supervised student: Sigrid Sandström. Main supervisor: Jaakko Hollmén, Ph.D.
- Teaching Assistant: Programming for Data Science (HT 2022), M. Sc. Course, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden. Direct supervisor: Jaakko Hollmén, Ph.D.
2021
- Master Thesis Co-supervisor: Master thesis title: Specifying and Weighting Criteria to Choose a Long-Term Romantic Partner (30 ECTS). Supervised students: Maria de las Mercedes Contreras Arteaga, Katrina Marie Novakovic. Main supervisor: Sindri Magnússon, Ph.D.
- Master Thesis Co-supervisor: Master thesis title: Machine Learning post-hoc, model-agnostic interpretable algorithm comparison (30 ECTS). Supervised student: Nina Brudermanns. Main supervisor: Jaakko Hollmén, Ph.D.
- Teaching Assistant: Programming for Data Science (HT 2021), M. Sc. Course, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden. Direct supervisor: Jaakko Hollmén, Ph.D.
2020
- Teaching Assistant: Programming for Data Science (HT 2020), M. Sc. Course, Department of Computer and Systems Sciences, Faculty of Social Sciences, Stockholm University, Sweden. Direct supervisor: Jaakko Hollmén, Ph.D.
2018
- Teaching Assistant: Robust Mechatronics, M.Sc. Course, Department of Machine Design, School of Industrial Engineering and Management, KTH – Royal Institute of Technology, Sweden. Direct supervisor: Mikael Hellgren, Ph.D.
Publications
A selection from Stockholm University publication database
- Prediction of Global Navigation Satellite System Positioning Errors with Guarantees
2021. Alejandro Kuratomi Hernandez, Tony Lindgren, Panagiotis Papapetrou. Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track, 562-578
Conference paper. Intelligent Transportation Systems employ different localization technologies, such as the Global Navigation Satellite System. This system transmits signals between satellites and receiver devices on the ground, which can estimate their position on the Earth’s surface. The accuracy of this positioning estimate, or the positioning error estimation, is of utmost importance for the efficient and safe operation of autonomous vehicles, which require not only the position estimate but also an estimation of their operation margin. This paper proposes a workflow for positioning error estimation using a random forest regressor along with a post-hoc conformal prediction framework. The latter is calibrated on the random forest out-of-bag samples to transform the obtained positioning error estimates into predicted integrity intervals, which are confidence intervals on the positioning error prediction with at least 99.999% confidence. The performance is measured as the number of ground truth positioning errors inside the predicted integrity intervals. An extensive experimental evaluation is performed on real-world and synthetic data in terms of root mean square error between predicted and ground truth positioning errors. Our solution results in an improvement of 73% compared to earlier research, while providing statistical guarantees on the prediction.
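As an illustration of the interval construction described in the abstract, here is a minimal split-conformal sketch in Python. This is not the paper’s pipeline: the paper calibrates on random-forest out-of-bag samples, whereas this sketch assumes a held-out set of calibration residuals, and the function name `conformal_interval` is invented for illustration.

```python
import math

def conformal_interval(cal_residuals, y_pred, confidence=0.99999):
    """Turn a point prediction into a symmetric interval with the requested
    coverage, using absolute calibration residuals as nonconformity scores."""
    scores = sorted(abs(r) for r in cal_residuals)
    n = len(scores)
    # Standard split-conformal quantile index: ceil((n + 1) * confidence), capped at n.
    k = min(n, math.ceil((n + 1) * confidence))
    q = scores[k - 1]
    return (y_pred - q, y_pred + q)

# Toy example with five residuals and a 90% target (the paper targets 99.999%):
lo, hi = conformal_interval([0.1, -0.2, 0.3, -0.4, 0.5], 10.0, confidence=0.9)
```

With five residuals and a 90% target, the quantile is the largest score, 0.5, giving the interval (9.5, 10.5); at 99.999% confidence the quantile would require a far larger calibration set.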
- JUICE: JUstIfied Counterfactual Explanations
2022. Alejandro Kuratomi Hernandez (et al.). Discovery Science, 493-508
Conference paper. Complex, highly accurate machine learning algorithms support decision-making processes with large and intricate datasets. However, these models have low explainability. Counterfactual explanation is a technique that tries to find a set of feature changes on a given instance that shifts the model’s prediction output from an undesired to a desired class. To obtain better explanations, it is crucial to generate faithful counterfactuals, supported by and connected to observations and the knowledge constructed on them. In this study, we propose a novel counterfactual generation algorithm that provides faithfulness by justification, which may increase developers’ and users’ trust in the explanations by supporting the counterfactuals with a known observation. The proposed algorithm guarantees justification for mixed-feature spaces, and we show it performs similarly to state-of-the-art algorithms across other metrics such as proximity, sparsity, and feasibility. Finally, we introduce the first model-agnostic algorithm to verify counterfactual justification in mixed-feature spaces.
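The justification idea above can be sketched as anchoring a counterfactual on a real observation. The following hypothetical Python helper is not the JUICE algorithm itself; the function name and the squared-Euclidean distance are illustrative assumptions. It simply returns the closest training instance that the model already assigns the desired class:

```python
def nearest_justifying_instance(x, X_train, predict, desired_class):
    """Closest observed instance that the model labels with the desired class;
    such a known observation can back (justify) a counterfactual explanation."""
    candidates = [z for z in X_train if predict(z) == desired_class]
    if not candidates:
        return None  # no observed support for the desired class
    return min(candidates,
               key=lambda z: sum((a - b) ** 2 for a, b in zip(x, z)))

# Toy usage: a model that predicts class 1 when the first feature is positive.
anchor = nearest_justifying_instance(
    (-0.5, 0.0),
    [(-1.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    predict=lambda z: int(z[0] > 0),
    desired_class=1,
)
```

Here `anchor` is `(1.0, 0.0)`: the nearest observed point already classified as the desired class, around which a justified counterfactual could be built.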
- ORANGE: Opposite-label soRting for tANGent Explanations in heterogeneous spaces
2023. Alejandro Kuratomi Hernandez (et al.). 2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), 1-10
Conference paper. Most real-world datasets have a heterogeneous feature space composed of binary, categorical, ordinal, and continuous features. However, the currently available local surrogate explainability algorithms do not consider this aspect, generating infeasible neighborhood centers which may provide erroneous explanations. To overcome this issue, we propose ORANGE, a local surrogate explainability algorithm that generates high-accuracy and high-fidelity explanations in heterogeneous spaces. ORANGE has three main components: (1) it searches for the closest feasible counterfactual point to a given instance of interest by considering feasible values in the features, ensuring that the explanation is built around the closest feasible instance and not an arbitrary, potentially non-existent point in space; (2) it generates a set of neighboring points around this close feasible point based on the correlations among features, ensuring that the relationships among features are preserved inside the neighborhood; and (3) the generated instances are weighted, firstly based on their distance to the decision boundary, and secondly based on the disagreement between the predicted labels of the global model and a surrogate model trained on the neighborhood. Our extensive experiments on synthetic and public datasets show that ORANGE achieves best-in-class performance in both explanation accuracy and fidelity.
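Component (3) above weights neighborhood samples by boundary proximity and model–surrogate disagreement. A rough Python sketch of that weighting idea follows; the function name and the specific weighting formulas are assumptions for illustration, not the authors’ implementation:

```python
def sample_weights(global_proba, surrogate_labels, global_labels):
    """Weight each neighborhood sample higher when the global model's
    positive-class probability is near 0.5 (close to the decision boundary)
    and when the surrogate and global predictions disagree."""
    weights = []
    for p, s_lab, g_lab in zip(global_proba, surrogate_labels, global_labels):
        boundary_w = 1.0 - 2.0 * abs(p - 0.5)        # 1 at the boundary, 0 far away
        disagree_w = 1.0 if s_lab != g_lab else 0.5  # emphasize disagreement
        weights.append(boundary_w * disagree_w)
    return weights
```

A sample sitting on the decision boundary where the surrogate disagrees gets full weight; a confidently classified sample where both models agree gets weight near zero, so the surrogate is refit where it matters most.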
- Measuring the Burden of (Un)fairness Using Counterfactuals
2023. Alejandro Kuratomi Hernandez (et al.). Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 402-417
Conference paper. In this paper, we use counterfactual explanations to offer a new perspective on fairness that, besides accuracy, also accounts for the difficulty or burden of achieving fairness. We first gather a set of fairness-related datasets, implement a classifier, extract the set of false negative test instances, and generate different counterfactual explanations on them. We subsequently calculate two measures: the false negative ratio of the set of test instances, and the distance (also called burden) from these instances to their corresponding counterfactuals, aggregated by sensitive feature groups. The first measure is an accuracy-based estimation of the classifier’s biases against sensitive groups, whilst the second is a counterfactual-based assessment of the difficulty each of these groups has in reaching its corresponding desired ground truth label. We promote the idea that a counterfactual and an accuracy-based fairness measure may assess fairness in a more holistic manner, whilst also providing interpretability. We then propose and evaluate, on these datasets, a measure called Normalized Accuracy Weighted Burden, which considers both false negative ratios and counterfactual distances per sensitive feature and is more consistent than either its accuracy or its counterfactual component alone. We believe this measure is better suited to assessing classifier fairness and can promote the design of better performing algorithms.
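The two ingredients described above can be computed per sensitive group as in the following Python sketch. The exact Normalized Accuracy Weighted Burden formula is defined in the paper; this sketch only reports the false negative ratio and the mean counterfactual distance (burden) for each group, and its record layout and names are illustrative assumptions.

```python
def group_fnr_and_burden(records):
    """records: iterable of (group, y_true, y_pred, cf_distance) tuples,
    where cf_distance is the distance to the counterfactual for false
    negatives (None when unavailable). Returns {group: (fnr, mean_burden)}."""
    stats = {}
    for group, y_true, y_pred, cf_dist in records:
        s = stats.setdefault(group, {"fn": 0, "pos": 0, "dists": []})
        if y_true == 1:                      # ground-truth positive
            s["pos"] += 1
            if y_pred == 0:                  # false negative
                s["fn"] += 1
                if cf_dist is not None:
                    s["dists"].append(cf_dist)
    out = {}
    for g, s in stats.items():
        fnr = s["fn"] / s["pos"] if s["pos"] else 0.0
        burden = sum(s["dists"]) / len(s["dists"]) if s["dists"] else 0.0
        out[g] = (fnr, burden)
    return out

# Toy example: group "b" is both misclassified more often and farther
# from its counterfactuals, i.e. it carries a heavier burden.
per_group = group_fnr_and_burden([
    ("a", 1, 0, 2.0),
    ("a", 1, 1, None),
    ("b", 1, 0, 4.0),
    ("b", 1, 0, 2.0),
])
```

Combining the two numbers per group, as the paper’s measure does, captures both how often a group is wrongly denied the desired label and how hard it is for that group to reach it.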