Measuring Speech Recognition And Understanding Performance in Air Traffic Control Domain Beyond Word Error Rates

Paper ID

SIDs-2021-02

Conference

SESAR Innovation Days

Year

2021

Theme

Human Factors

Project Name

SESAR 2020 ER4 project HAAWAII, SESAR 2020 IR Wave 1 project PJ16 CWP HMI, SESAR 2020 IR Wave 2 project PJ05-W2 DTT, SESAR 2020 IR Wave 2 project PJ10-W2 PROSA

Keywords:

air traffic control, ATC, command recognition rate, language understanding, unclassified word rate, word error rate

Authors

Hartmut Helmke, Shruthi Shetty, Matthias Kleinert, Oliver Ohneiser, Heiko Ehr, Amrutha Prasad, Petr Motlicek, Aneta Cerna and Christian Windisch

DOI

Project Numbers

734141, 874464, 874470, 884287

Abstract

Applying Automatic Speech Recognition (ASR) in the domain of analogue voice communication between air traffic controllers (ATCos) and pilots involves more end-user requirements than just transforming spoken words into text. Perfect word recognition is useless for, e.g., readback error detection support, as long as the semantic interpretation is wrong. For an ATCo it is of almost no importance whether the words of a greeting are correctly recognized. A wrong recognition of a greeting should, however, not disturb the correct recognition of, e.g., a “descend” command. More important is the correct semantic interpretation. What, however, is the correct semantic interpretation, especially when ATCos or pilots deviate more or less from the published standard phraseology? To compare the performance of different speech recognition applications, 14 European partners from the Air Traffic Management (ATM) domain have recently agreed on a common set of rules, i.e., an ontology, on how to annotate the speech utterances of an ATCo on the semantic level. This paper first presents the new metric of “unclassified word rate”, then extends the ontology to pilot utterances, and introduces the metrics of command recognition rate, command recognition error rate, and command recognition rejection rate. This enables the comparison of different speech recognition and understanding instances on the semantic level. The implementation used in this paper achieves a command recognition rate better than 96% for Prague Approach, even though the word error rate is above 2.5%, based on more than 12,000 ATCo commands recorded in both operational and lab environments. This outperforms previously published rates by 2% absolute.
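
To make the metrics named above concrete, the following is a minimal, hypothetical Python sketch of how a word error rate and command-level recognition rates of this kind are commonly computed. The function names, the example transmission, the example counts, and the choice of denominator (total number of gold-standard commands) are assumptions made for illustration only; the paper's exact definitions are not given in this abstract.

```python
# Hypothetical sketch, not the paper's reference implementation.
from dataclasses import dataclass


def word_error_rate(reference: list[str], hypothesis: list[str]) -> float:
    """Levenshtein-based WER: (substitutions + deletions + insertions) / reference length."""
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = minimum edits to turn reference[:i] into hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[n][m] / max(n, 1)


@dataclass
class CommandCounts:
    recognized: int = 0  # extracted command matches the gold annotation
    erroneous: int = 0   # a command was extracted, but it is wrong
    rejected: int = 0    # no command was extracted for a gold command


def command_rates(counts: CommandCounts) -> dict[str, float]:
    """Rates relative to the total number of gold-standard commands (illustrative definition)."""
    total = counts.recognized + counts.erroneous + counts.rejected
    if total == 0:
        return {"command_recognition_rate": 0.0,
                "command_recognition_error_rate": 0.0,
                "command_recognition_rejection_rate": 0.0}
    return {
        "command_recognition_rate": counts.recognized / total,
        "command_recognition_error_rate": counts.erroneous / total,
        "command_recognition_rejection_rate": counts.rejected / total,
    }


if __name__ == "__main__":
    # Made-up example transmission; the hypothesis drops the word "flight".
    ref = "lufthansa three two alpha descend flight level one two zero".split()
    hyp = "lufthansa three two alpha descend level one two zero".split()
    print(f"WER: {word_error_rate(ref, hyp):.3f}")  # 1 deletion / 10 words = 0.100

    # Illustrative counts, not the paper's results.
    print(command_rates(CommandCounts(recognized=193, erroneous=4, rejected=3)))
```

The word-level metric alone cannot distinguish a harmless misrecognized greeting from a misrecognized “descend” command, which is why the command-level rates are reported separately over annotated commands rather than over words.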