Human-Interpretable Input for Machine Learning in Air Traffic Control
Abstract
Increasing airspace demand requires greater effectiveness and efficiency from the Air Traffic Control (ATC) system. Automation, and specifically Machine Learning (ML), presents good prospects for increasing system performance and decreasing the workload of air traffic controllers (ATCOs). AI, however, is typically a “black box”, making it hard to include in a socio-technical environment. This exploratory research aims to increase operator trust and acceptance and to move towards a more “cooperative” approach to automation in ATC. It builds upon previous efforts by applying two different approaches to AI-human interaction: Strategically Conformal AI and Explainable AI. Strategic Conformance aims to increase acceptance by producing individual-sensitive advisories, while Explainable AI focuses on producing more optimal solutions and providing clear explanations for them. In this article, we propose the use of a single visual representation for tactical conflict detection and resolution, the Solution Space Diagram (SSD), to serve as a common ground for both explainable and conformal AI. Through this research, it has become clear that both optimality and conformance need to be carefully defined. Likewise, training the AI agents requires a large amount of data to be available, and displaying their solutions in a human-interpretable way, while maintaining optimality, presents its own unique challenges.