IHM '22 – Workshop HCI and XAI

Human-Computer Interaction and Explainability in Artificial Intelligence

April 05, 2022

Namur, Belgium

Goal of the Workshop

Explainability in artificial intelligence (AI), and in particular in machine learning (ML), is a rapidly growing research area today. This is due to multiple factors stemming from the needs of different stakeholders in the development and use of ML techniques. These include developers (research and industry), service providers (private and public companies), end-users and third parties (audit structures, regulators, etc.).

Explainability in ML obviously concerns the technical capacity to understand how the different types of ML models work, but it also concerns the intelligibility of the generated explanations for their users and targeted contexts of use. It thus joins the concerns of researchers in human-computer interaction (HCI). Indeed, the explainability of ML models places the user at its center and therefore cannot do without advances in HCI. Unfortunately, despite this clear kinship, collaborations between HCI and ML researchers remain scarce. The general objective of this workshop is therefore to bring together HCI and ML researchers in order to provide an overview of research activities on explainability in ML involving both communities.

The workshop will take place during the 33rd International Francophone Conference on Human-Computer Interaction, which will be held from April 05 to 08, 2022 in Namur, Belgium.

Program

09:00-09:30 – Welcome: welcome of the participants.

09:30-12:30 – Presentations: The aim is to offer a panorama of research activities on the theme carried out by researchers in the HCI and ML communities. This panorama will allow researchers to highlight their research projects (completed or in progress), but also to better understand this research area, especially with a view to potential collaborations. In this perspective, local initiatives (e.g. TRAIL and ARIAC), as well as international ones, will also be presented.

List of interventions:

  • 09:30: Opening remarks
  • 09:40: Presentation of the workshop theme, by Benoît Frenay (UNamur, Belgium) and Bruno Dumas (UNamur, Belgium), slides.
  • 10:00: Links between trust and acceptability in an AI system, by Alexandre Agossah (Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR, Digital Design Lab, L'École de Design Nantes Atlantique, Groupe Sigma, France), Lucie Lévêque (Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR, France), Frédérique Krupa (Digital Design Lab, L'École de Design Nantes Atlantique, France), Guillaume Deconde (Groupe Sigma, France), Matthieu Perreira Da Silva (Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR, France) and Patrick Le Callet (Nantes Université, Ecole Centrale Nantes, CNRS, LS2N, UMR, France), slides.
  • 10:15: A Hybrid Approach for Ontologically Explainable Classifiers, by Grégory Bourguin (LISIC, France), Arnaud Lewandowski (LISIC, France), Mourad Bouneffa (LISIC, France) and Adeel Ahmad (LISIC, France), slides.
  • 10:30: Explainable feature selection in Self-Service BI with Ontology-based Recommender Systems, by Sarah Pinon (NaDI, UNamur, Belgium), Isabelle Linden (NaDI, UNamur, Belgium) and Corentin Burnay (NaDI, UNamur, Belgium), slides.
  • 10:45: Towards Informed Decision-making: Triggering Curiosity in Explanations to Non-expert Users, by Astrid Bertrand (Télécom Paris, France), slides.
  • 11:00: Break
  • 11:30: On explaining inductive logic learning with the Andante notebook, by Simon Jacquet (NaDI, UNamur, Belgium), Sarah Pinon (NaDI, UNamur, Belgium), Jean-Marie Jacquet (NaDI, UNamur, Belgium), Isabelle Linden (NaDI, UNamur, Belgium) and Wim Vanhoof (NaDI, UNamur, Belgium), slides.
  • 11:45: Why are linear explanations not always satisfactory?, by Julien Delaunay (Inria/IRISA, France), Luis Galarraga (Inria/IRISA, France) and Christine Largouët (Agrocampus Ouest/IRISA, France), slides.
  • 12:00: BIOT: An R Package for Explaining the Axes of MDS Visualizations, by Rebecca Marion (UNamur, Belgium), Adrien Bibal (UCLouvain, Belgium), Rainer von Sachs (UCLouvain, Belgium) and Benoît Frénay (UNamur, Belgium), slides.
  • 12:15: Interpreting time-series predictions obtained with a Transformer model: a study of imbalance prices in electricity markets, by Jeremie Bottieau (PSMR, UMons, Belgium), Zacharie De Grève (PSMR, UMons, Belgium), François Vallée (PSMR, UMons, Belgium) and Jean-François Toubeau (PSMR, UMons, Belgium), slides.

13:30-18:00 – World Café & Networking: To encourage exchanges on research ideas, activities, events, initiatives, etc. in the field, a world café will be organized. Its principle is to divide the participants into thematic discussion tables. Periodically, the participants are invited to change tables, and therefore themes. Once the cycle is over, participants return to their starting tables to structure the results of the discussions for presentation to the entire group. Finally, a networking session will allow participants to share their research projects and ideas and to set up possible collaborations.

Participate

To participate in the workshop, you must register on the conference website.

Register for the workshop

To facilitate the organization of the networking session, preliminary registration is requested.

Register for the networking

Submit a Presentation

The expected presentations concern research projects (completed or in progress), existing tools, and local or international initiatives, groups, or laboratories related to the theme of the workshop. Presentations will last 15 minutes. Finally, although the official language of IHM '22 is French, presentations in English are accepted.

Non-exhaustive List of Topics:

  • Taking users into account in the design of explainability methods,
  • Design of interfaces supporting explainability methods,
  • Modalities for presenting explanations,
  • Methods/practices for evaluating explainability methods that take users into account or involve them (online evaluation, user studies, case studies, etc.),
  • Perception and interpretation of explanations by users, underlying mental models,
  • Perceived qualities (trust, transparency, etc.),
  • Identification of properties/qualities related to the notion of explanation,
  • Study of the use of explainability tools and methods,
  • Combination of ML, information visualization, and HCI models,
  • Interaction between users and ML models through explanations,
  • Etc.

Interested participants should send a title and abstract (maximum 1 page, including references, in conference format) by Friday, February 18, 2022. Submissions will be evaluated by the workshop organizers, and notifications of acceptance will be sent to authors on Friday, February 25, 2022. Final versions are due by Friday, March 04, 2022. Finally, for those who wish, a compilation of the abstracts will be published on arXiv.

Submit

Contact

Do not hesitate to contact us if needed.

Contact