Proposal for an 18-month post-doc on explainability within the ARD Junon project

When:
01/12/2024 (all day)

Offer related to the Action/Network: HELP

Laboratory/Company: LIFAT, Université de Tours
Duration: 18 months
Contact: nicolas.labroche@univ-tours.fr
Publication deadline: 2024-12-01

Context:
The Junon project aims to build AI tools for making predictions about environmental objects. The first issue, and the main scientific challenge of the Prediction project within Junon, is to propose new predictive AI methods adapted to the specific features of environmental problems, such as multi-source and heterogeneous data, non-stationarity, adaptation for digital-twin transfer, and reusability and evaluation of the proposed models.

This post-doc takes place within the Prediction project, and more precisely within the evaluation process of AI models for digital twins, through the development of new algorithms for the explainability of AI/ML models, a field also termed eXplainable Artificial Intelligence (XAI for short).

Current XAI approaches suffer from two main limitations. The first is related to the complexity of the explanation process itself, which involves the particular characteristics of “the training data, the precise shape of the decision surface, and the selection of one explanatory algorithm over another”. As a result, there is a risk of accepting plausible explanations that only reflect spurious correlations between the internal layers of a deep learning (DL) model and its input features. Bordt (2022) emphasizes the need for explanation methods “that cast doubt on certain features of AI systems”. These results call for (i) a more thorough consideration of inner relationships in the data and of how models use this information, and (ii) methods to assist users in selecting explanation methods based on objective metrics. The second limitation is the lack “of XAI approaches tackling real-world machine learning issues” that would “help to clarify what is currently feasible and what is not feasible when employing XAI techniques”.

Topic:
The Junon post-doc aims to address fundamental issues related to the quality and applicability of explanations produced for DL models, driven by the recent vision of actionable explainable AI (aXAI). In particular, we focus on more expressive forms of explanation that can answer not only why questions but also action-guiding questions such as how-to and what-if, as illustrated hereafter (and in the sketch that follows the list):

• why: why do we obtain a specific prediction, given the features of input observations?
• how-to: what are the necessary actions to change the prediction of a specific input observation?
• what-if: what are the necessary and minimal sets of actions on input observations required to obtain an alternative prediction?
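
To make these question types concrete, here is a minimal, self-contained sketch on a toy tabular classifier, assuming only numpy and scikit-learn: permutation importance stands in for a why explanation, and a brute-force single-feature search stands in for a what-if/how-to counterfactual. The dataset, model, and search granularity are arbitrary placeholder choices, not the project's method.

```python
# Minimal sketch (not the project's method) of the three question types on a
# toy tabular classifier. Assumes only numpy and scikit-learn; the dataset,
# model, and search granularity are arbitrary placeholder choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
x = X[0]  # the observation to explain

# "why": which features drive the model's predictions (permutation influence)?
why = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("why (feature influence):", why.importances_mean.round(3))

# "what-if" / "how-to": cheapest single-feature change that flips the
# prediction of x, found by brute force over a grid of candidate values.
original_class = model.predict(x.reshape(1, -1))[0]
best_action = None  # (cost, feature index, new value)
for j in range(X.shape[1]):
    for v in np.linspace(X[:, j].min(), X[:, j].max(), num=50):
        x_cf = x.copy()
        x_cf[j] = v
        if model.predict(x_cf.reshape(1, -1))[0] != original_class:
            cost = abs(v - x[j])
            if best_action is None or cost < best_action[0]:
                best_action = (cost, j, v)
print("what-if / how-to (minimal one-feature action):", best_action)
```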

During the post-doc, we envision several research questions attached to the aforementioned objectives of quality and applicability of XAI approaches:

Benchmark existing quality metrics (Nauta, 2023) for the task of explanation exploration

The recruited post-doc will contribute a preliminary literature survey on explanation quality metrics for feature-influence, counterfactual, or other causal explanation methods. Emphasis will be placed on coherence with the predictive model and on plausibility as alignment with user knowledge (see next item), but also on accuracy with respect to ground truth and on the diversity of alternative solutions. The post-doc may produce a reference implementation library for these metrics.
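
As an illustration of what one such metric could look like, here is a hedged sketch of deletion-based faithfulness: how much the predicted probability drops when the features an explanation ranks highest are ablated. The ablation-by-mean baseline and the use of the model's built-in importances as a stand-in explanation are assumptions for the toy example, not choices made by the project.

```python
# Hedged sketch of one candidate quality metric: deletion-based faithfulness
# (probability drop after ablating the explanation's top-k features).
# The mean-value ablation baseline is an assumption of this toy example.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def deletion_faithfulness(model, X, x, ranking, k):
    """Probability drop for the predicted class after replacing the top-k
    features of `ranking` with their dataset mean (higher = more faithful)."""
    cls = model.predict(x.reshape(1, -1))[0]
    p_before = model.predict_proba(x.reshape(1, -1))[0, cls]
    x_ablated = x.copy()
    x_ablated[ranking[:k]] = X[:, ranking[:k]].mean(axis=0)
    p_after = model.predict_proba(x_ablated.reshape(1, -1))[0, cls]
    return p_before - p_after

x = X[0]
ranking = np.argsort(model.feature_importances_)[::-1]  # stand-in explanation
print("faithfulness@3:", round(deletion_faithfulness(model, X, x, ranking, 3), 3))
```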

Model prior knowledge from BRGM experts and derive new quality metrics for explanations based on their coverage, novelty, or interestingness

The modeling of prior knowledge will likely be based on causal models and knowledge graphs. Novelty and interestingness metrics will take inspiration from Exploratory Data Analysis (EDA), which has a long record of research on guiding the user towards interesting patterns or insights hidden in very large databases.
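
A toy sketch of what coverage and novelty could mean against expert prior knowledge is given below. Here the prior is reduced to a plain set of feature names known to influence the target; a real implementation would use a causal graph or knowledge graph, and the hydrological feature names are hypothetical placeholders.

```python
# Toy sketch of coverage/novelty metrics against expert prior knowledge,
# encoded here as a plain set of feature names believed to drive the target.
# Feature names are hypothetical placeholders, not BRGM data.
expert_prior = {"rainfall", "soil_permeability", "pumping_rate"}

def coverage(explained_features, prior):
    """Share of expert-known drivers that the explanation recovers."""
    return len(set(explained_features) & prior) / len(prior)

def novelty(explained_features, prior):
    """Share of explained features absent from expert knowledge
    (candidate new insights, or spurious correlations to verify)."""
    feats = set(explained_features)
    return len(feats - prior) / len(feats)

top_features = ["rainfall", "river_level", "pumping_rate"]  # from some XAI method
print("coverage:", coverage(top_features, expert_prior))  # 2/3
print("novelty:", novelty(top_features, expert_prior))    # 1/3
```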

Build a user-oriented protocol and carry out user studies with BRGM experts

The goal is to qualify how well the quality measures match the experts' expectations, and whether it is possible to learn better explanation quality measures (i.e., measures that depend on the context of the analysis and on previous analyses, as is done in exploratory data analysis). Different use cases will be considered, such as debugging the ML model for digital twins, understanding the predicted parameters of a simulation model like Gardenia, and observing common and distinctive important factors from one predictive problem to the next (e.g., analysis of similarities between predictions of underground water resources in different places).
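
One way the analysis step of such a study could look, as a sketch under assumed data: test whether an automatic quality metric agrees with expert ratings using rank correlation. The scores below are made-up placeholders, not project data.

```python
# Sketch of the analysis step of a user study: does an automatic quality
# metric rank explanations the way experts do? Scores are made-up placeholders.
from scipy.stats import spearmanr

metric_scores = [0.82, 0.40, 0.65, 0.91, 0.30]  # per-explanation metric values
expert_ratings = [4, 2, 3, 5, 1]                # same explanations, rated 1-5

rho, pval = spearmanr(metric_scores, expert_ratings)
print(f"Spearman rho={rho:.2f}, p={pval:.3f}")
```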

In conjunction with the ARD Junon project team, the successful candidate will be responsible for implementing the research program outlined in the preceding three points, in consultation with their supervisor and the project partners. The successful candidate will also take part in the activities of research groups interested in XAI, such as (but not limited to) Explain’AI (GT EGC), Help (GdR Madics), and Explicon (GdR RADIA).

Candidate profile:
PhD in Computer Science

Education and required skills:
PhD in Computer Science, specializing in artificial intelligence (explainability, possibly deep learning), with experience in processing temporal data (multivariate time series and multivariate event sequences). Experience working with libraries offering implementations of XAI and deep learning models. Experience in designing user study protocols would be appreciated.

Work address:
The hired candidate will work in Tours, at the Faculté des Sciences et Techniques, avenue Monge, 37200 Tours. This is a fully on-site position, although one day a week of remote work may be allowed by the supervisor.

Attached document: 202410140745_post_doc_Junon.pdf