My work centers on machine learning, with a focus on two themes that matter for many applications: learning from a small number of examples and explaining the decisions of trained models. I am also interested in graph signal processing, both for its potential in data modelling and for its capacity to enhance learning methods. Below is a selection of articles illustrating my contributions in these areas.

Explainable machine learning for genomic data

In this paper, I examine machine learning models for classifying phenotypes from bulk RNA-sequencing data. In particular, I study the limitations of a posteriori (post-hoc) explanations of these models' predictions, emphasizing both methodological and biological considerations.

A Comparative Analysis of Gene Expression Profiling by Statistical and Machine Learning Approaches
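As an illustration of the kind of post-hoc explanation the paper scrutinizes, here is a minimal permutation-importance sketch in NumPy. The function name and interface are illustrative, not the paper's implementation: shuffling one gene's expression column and measuring the resulting drop in accuracy yields a model-agnostic importance score for that gene.

```python
import numpy as np

def permutation_importance(model_predict, X, y, n_repeats=10, seed=0):
    """Post-hoc, model-agnostic importance score per feature (gene).

    Measures how much accuracy drops when a single feature column is
    shuffled, breaking its association with the phenotype labels.
    model_predict: callable mapping (n_samples, n_features) -> labels.
    (Illustrative sketch; not the method evaluated in the paper.)
    """
    rng = np.random.default_rng(seed)
    baseline = (model_predict(X) == y).mean()
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j only
            drops.append(baseline - (model_predict(Xp) == y).mean())
        scores[j] = np.mean(drops)
    return scores
```

A caveat the paper's framing suggests: correlated genes share importance under such schemes, so a low score does not imply biological irrelevance.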

Learning with few examples on neuroimaging data

To probe the relationship between brain activity and cognitive functions, I propose adapting few-shot learning techniques from computer vision to neuroimaging data, particularly functional magnetic resonance imaging (fMRI). By addressing the scarcity of annotated data in this domain, this approach can support analyses in clinical and cognitive neuroscience where large labelled datasets are rarely available.

Few-shot Decoding of Brain Activation Maps
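Few-shot settings like the one above are often handled with nearest-class-mean (prototypical) classifiers on learned embeddings. Below is a minimal NumPy sketch under that assumption; the names and data layout are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Nearest-class-mean (prototypical) few-shot classifier.

    support_x: (n_support, d) feature vectors (e.g. embeddings of
    brain activation maps); support_y: integer class labels;
    query_x: (n_query, d) features to classify.
    (Illustrative sketch, not the paper's exact method.)
    """
    classes = np.unique(support_y)
    # One prototype per class: the mean of its support embeddings.
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # Assign each query to the class of the nearest prototype.
    dists = np.linalg.norm(
        query_x[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]
```

The appeal in neuroimaging is that only the embedding needs large-scale training; classifying a new cognitive task then requires just a handful of labelled maps.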

Evaluation of supervised models trained with few examples

In this study, I investigate methodologies for evaluating the generalization performance of supervised models trained with few annotated examples. Because small datasets lack diversity, held-out accuracy on a single split can be misleading; I therefore explore evaluation techniques that yield more robust assessments of model performance.

Predicting the Generalization Ability of a Few-Shot Classifier
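A common way to obtain robust estimates in this regime is to average accuracy over many randomly sampled few-shot episodes and report a confidence interval rather than a single number. The sketch below assumes precomputed feature vectors and a nearest-class-mean classifier; it is an illustration of episodic evaluation, not the protocol proposed in the paper.

```python
import numpy as np

def episodic_accuracy(features, labels, n_episodes=200, n_way=2,
                      n_shot=5, n_query=5, seed=0):
    """Estimate few-shot accuracy by averaging over random episodes.

    Each episode draws n_way classes with n_shot support and n_query
    query examples each, fits a nearest-class-mean classifier on the
    support set, and scores it on the query set. Returns the mean
    accuracy and the half-width of a 95% confidence interval.
    (Illustrative sketch; parameter names are assumptions.)
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    accs = []
    for _ in range(n_episodes):
        chosen = rng.choice(classes, size=n_way, replace=False)
        protos, queries, q_labels = [], [], []
        for i, c in enumerate(chosen):
            idx = rng.permutation(np.flatnonzero(labels == c))
            protos.append(features[idx[:n_shot]].mean(axis=0))
            queries.append(features[idx[n_shot:n_shot + n_query]])
            q_labels.append(np.full(n_query, i))
        protos = np.stack(protos)
        queries = np.concatenate(queries)
        q_labels = np.concatenate(q_labels)
        dists = np.linalg.norm(queries[:, None] - protos[None], axis=-1)
        accs.append((dists.argmin(axis=1) == q_labels).mean())
    accs = np.asarray(accs)
    # 95% CI half-width, normal approximation across episodes.
    return accs.mean(), 1.96 * accs.std(ddof=1) / np.sqrt(n_episodes)
```

Reporting the interval alongside the mean makes the variance induced by the choice of support examples visible, which is precisely what a single split hides.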

Improving deep learning models using graph signal theory

This paper introduces a novel loss function designed to improve deep learning architectures on classification tasks. Drawing on graph signal processing, the loss promotes the smoothness of label signals on similarity graphs, so that examples connected by strong edges tend to share labels, while remaining computationally efficient to optimize.

Introducing Graph Smoothness Loss for Training Deep Learning Architectures
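The smoothness criterion at the heart of this idea can be written as the Laplacian quadratic form tr(Yᵀ L Y), where Y is the one-hot label signal and L the Laplacian of a similarity graph over the batch. The NumPy sketch below computes this quantity, assuming a Gaussian-kernel similarity graph; the kernel choice and function name are illustrative, not the exact construction used in the paper.

```python
import numpy as np

def graph_smoothness_loss(features, labels, sigma=1.0):
    """Laplacian quadratic form of the label signal on a similarity graph.

    Builds a Gaussian-kernel similarity graph over the batch features,
    encodes labels as one-hot signals Y, and returns
        tr(Y^T L Y) = (1/2) * sum_{i,j} w_ij * ||y_i - y_j||^2,
    which is small when similar examples share labels.
    (Illustrative sketch; kernel and interface are assumptions.)
    """
    # Pairwise squared distances between feature vectors.
    sq = np.sum((features[:, None] - features[None]) ** 2, axis=-1)
    w = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)  # no self-loops
    laplacian = np.diag(w.sum(axis=1)) - w
    y = np.eye(labels.max() + 1)[labels]  # one-hot label signal
    return np.trace(y.T @ laplacian @ y)
```

Since L is positive semidefinite, the loss is non-negative and reaches zero only when every strongly connected pair of examples carries the same label, which is the separation the training objective encourages.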