Bringing Explainability to Autoencoding Neural Networks Encoding Aircraft Trajectories
Abstract
Autoencoders, a class of neural networks, have emerged as a valuable tool for anomaly detection and trajectory clustering: they produce a compressed latent space that captures the essential features of the data. However, their lack of interpretability poses challenges in the context of air traffic management (ATM), where clear-cut explanations are crucial. In this paper, we address this issue by exploring visual methods to enhance the interpretability of autoencoders applied to aircraft trajectory data. We propose techniques to extract meaningful information from the structure of the latent space and to promote a better understanding of the behaviour of generative models. We present insights from two datasets, one simplified and one real-world, and evaluate the structure of the latent space of the corresponding autoencoders. Furthermore, we introduce suggestions for more realistic trajectory generation based on Variational Autoencoders (VAE). This study offers valuable recommendations to developers in the field of ATM, fostering improved interpretability, and thus safety, for generative AI in air traffic management.
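For readers less familiar with the architecture discussed above, the following is a minimal sketch, not the implementation used in this paper, of how a VAE can encode fixed-length aircraft trajectories into a low-dimensional latent space suitable for visual inspection. The class name, network sizes, the assumption of trajectories resampled to 64 points with (latitude, longitude, altitude) features, and the use of PyTorch are all illustrative choices, not taken from the paper.

```python
# Illustrative sketch only: a small VAE over flattened, normalised trajectories.
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    def __init__(self, n_points=64, n_features=3, latent_dim=2):
        super().__init__()
        input_dim = n_points * n_features
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(64, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x.flatten(start_dim=1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_loss = nn.functional.mse_loss(recon, x.flatten(start_dim=1))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

if __name__ == "__main__":
    model = TrajectoryVAE()
    batch = torch.randn(32, 64, 3)   # placeholder for real, normalised trajectories
    recon, mu, logvar = model(batch)
    print(mu.shape)                  # torch.Size([32, 2]): a plottable 2-D latent space
```

With a two-dimensional latent space as above, the encoded means can be scattered directly in a plot, which is the kind of latent-space visualisation this paper builds on; decoding points sampled from the latent space yields the generated trajectories whose realism is discussed later.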