Human Interpretable Input for Machine Learning in Air Traffic Control

Paper ID

SIDs-2021-92

Conference

SESAR Innovation Days

Year

2021

Theme

Human Factors

Keywords

Decision Support Systems, Human-Machine Interaction, Machine Learning

Authors

Tiago Monteiro Nunes, Erik-Jan van Kampen, Clark Borst, Brian Hilburn and Carl Westin

Abstract

Increasing airspace demand requires greater effectiveness and efficiency in the ATC system. Automation, and specifically Machine Learning (ML), presents good prospects for increasing system performance and decreasing the workload of air traffic controllers (ATCOs). AI, however, is typically a “black box”, making it hard to integrate into a socio-technical environment. This exploratory research aims to increase operator trust and acceptance and to move towards a more “cooperative” approach to automation in ATC. It builds upon previous efforts by applying two different approaches to AI-human interaction: Strategically Conformal AI and Explainable AI. Strategic Conformance aims to increase acceptance by producing individual-sensitive advisories. Explainable AI focuses on producing more optimal solutions and providing clear explanations for them. In this article, we propose the use of a single visual representation for tactical conflict detection and resolution, the Solution Space Diagram (SSD), to serve as common ground for both explainable and conformal AI. Through this research, it has become clear that both optimality and conformance need to be defined carefully. Likewise, training the AI agents requires large amounts of data, and displaying their solutions in a human-interpretable way while maintaining optimality presents its own unique challenges.