Bringing Explainability to Autoencoding Neural Networks Encoding Aircraft Trajectories

Paper ID

SIDs-2023-39

Conference

SESAR Innovation Days

Year

2023

Theme

Machine learning and artificial intelligence

Keywords

Autoencoders, eXplainable Artificial Intelligence, Interpretability

Authors

Zakaria Ezzahed, Antoine Chevrot, Christophe Hurter and Xavier Olive

DOI

https://doi.org/10.61009/SID.2023.1.23

Abstract

Autoencoders, a class of neural networks, have emerged as a valuable tool for anomaly detection and trajectory clustering: they produce a compressed latent space that captures the essential features of the data. However, their lack of interpretability poses challenges in the context of ATM, where clear-cut explanations are crucial. In this paper, we investigate this issue by exploring visual methods to enhance the interpretability of autoencoders applied to aircraft trajectory data. We propose techniques to extract meaningful information from the structure of the latent space and to promote a better understanding of the behaviour of generative models. We present insights from two datasets, a simplified and a real-world one, and evaluate the structure of the latent space produced by autoencoders. Furthermore, we introduce suggestions for more realistic trajectory generation based on Variational Autoencoders (VAEs). This study offers valuable recommendations to developers in the field of ATM, fostering improved interpretability, and thus safety, for generative AI in air traffic management.
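
To illustrate the kind of model the abstract refers to, the sketch below shows a minimal variational autoencoder that compresses fixed-length aircraft trajectories into a low-dimensional latent space and can decode latent samples back into trajectories. It is not the authors' implementation; the framework (PyTorch), layer sizes, and names (TrajectoryVAE, vae_loss, n_points, latent_dim) are illustrative assumptions.

# Minimal sketch, not the paper's code: a VAE over fixed-length 2-D
# trajectories (n_points latitude/longitude samples, flattened).
# All architectural choices here are assumptions for illustration.
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    def __init__(self, n_points=64, latent_dim=2):
        super().__init__()
        in_dim = n_points * 2  # each trajectory: n_points (lat, lon) pairs, flattened
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(128, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage: encode a batch of trajectories to inspect the latent space,
# or decode samples drawn from the prior to generate new trajectories.
model = TrajectoryVAE()
batch = torch.randn(8, 64 * 2)  # placeholder for real trajectory data
x_hat, mu, logvar = model(batch)
loss = vae_loss(x_hat, batch, mu, logvar)

With a two-dimensional latent space as in this sketch, the encoded trajectories can be plotted directly, which is the starting point for the visual interpretability methods the abstract describes.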