Measuring Speech Recognition And Understanding Performance in Air Traffic Control Domain Beyond Word Error Rates
Abstract
Applying Automatic Speech Recognition (ASR) in the domain of analogue voice communication between air traffic controllers (ATCos) and pilots involves more end-user requirements than just transforming spoken words into text. Perfect word recognition is useless for, e.g., readback error detection support as long as the semantic interpretation is wrong. For an ATCo it is of almost no importance whether the words of a greeting are correctly recognized. A wrong recognition of a greeting should, however, not disturb the correct recognition of, e.g., a “descend” command. More important is the correct semantic interpretation. What, however, is the correct semantic interpretation, especially when ATCos or pilots deviate more or less from published standard phraseology? For comparing the performance of different speech recognition applications, 14 European partners from the Air Traffic Management (ATM) domain have recently agreed on a common set of rules, i.e., an ontology on how to annotate the speech utterances of an ATCo on the semantic level. This paper first presents the new metric of “unclassified word rate”, extends the ontology to pilot utterances, and introduces the metrics of command recognition rate, command recognition error rate, and command recognition rejection rate. This enables the comparison of different speech recognition and understanding instances on the semantic level. The implementation used in this paper achieves a command recognition rate better than 96% for Prague Approach, even though the word error rate is above 2.5%, based on more than 12,000 ATCo commands recorded in both operational and laboratory environments. This outperforms previously published rates by 2% absolute.
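To make the semantic-level metrics concrete, the following Python fragment is a minimal sketch of how command recognition rate, command recognition error rate, and command recognition rejection rate could be computed from annotated commands. The tuple layout (callsign, command type, value), the function name command_rates, and the matching rules (an exact match counts as recognized, the same callsign and type with a different value counts as an error, a command not extracted at all counts as a rejection) are assumptions for illustration, not the evaluation procedure of the paper.

# Minimal sketch (not the paper's reference implementation): command annotations
# are assumed to be tuples of (callsign, command type, value), e.g.
# ("CSA123", "DESCEND", "4000 ft"); the matching rules below are illustrative only.

def command_rates(gold_utterances, recognized_utterances):
    """Compute semantic-level rates over parallel lists of annotated utterances."""
    total = recognized = errors = rejections = 0
    for gold, hyp in zip(gold_utterances, recognized_utterances):
        for cmd in gold:
            total += 1
            if cmd in hyp:
                recognized += 1                       # command fully recognized
            elif any(h[:2] == cmd[:2] for h in hyp):  # same callsign and type,
                errors += 1                           # but wrong value -> error
            else:
                rejections += 1                       # command not extracted -> rejection
    return {
        "command recognition rate": recognized / total,
        "command recognition error rate": errors / total,
        "command recognition rejection rate": rejections / total,
    }


if __name__ == "__main__":
    gold = [
        [("CSA123", "DESCEND", "4000 ft")],
        [("DLH5KM", "REDUCE", "180 kt"), ("DLH5KM", "TURN_LEFT", "270")],
    ]
    hyp = [
        [("CSA123", "DESCEND", "4000 ft")],
        [("DLH5KM", "REDUCE", "170 kt")],   # wrong value -> error; TURN_LEFT missed
    ]
    print(command_rates(gold, hyp))

In an actual evaluation, the matching would follow the agreed ontology, including the handling of multiple commands per utterance and of pilot readbacks.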