Grammar Based Speaker Role Identification for Air Traffic Control Speech Recognition

Paper ID

SIDs-2022-068

Conference

SESAR Innovation Days

Year

2022

Theme

Automatic Speech Recognition I

Project Name

Clean Sky 2 project ATCO2, SESAR 2020 ER4 project HAAWAII

Keywords

Air Traffic Management, Assistant Based Speech Recognition, Kaldi, multitask acoustic modeling, speaker role classification

Authors

Amrutha Prasad, Juan Pablo Zuluaga Gómez, Petr Motlicek, Seyyed Saeed Sarfjoo, Iuliia Nigmatulina, Oliver Ohneiser and Hartmut Helmke

DOI

Project Number

884287

Project Number

864702

Abstract

Automatic Speech Recognition (ASR) for air traffic control is generally trained by pooling Air Traffic Controller (ATCO) and pilot data into one set. This is motivated by the fact that pilots' voice communications are scarcer than ATCOs'. Due to this data imbalance and other reasons (e.g., varying acoustic conditions), the speech from ATCOs is usually recognized more accurately than that from pilots. Automatically identifying the speaker roles is a challenging task, especially for noisy voice recordings collected with Very High Frequency (VHF) receivers or when the push-to-talk (PTT) signal is unavailable, i.e., both audio channels are mixed. In this work, we propose to (1) automatically segment the ATCO and pilot data based on an intuitive approach exploiting ASR transcripts and (2) subsequently treat the automatic recognition of ATCOs' and pilots' voice as two separate tasks. Our work is performed on VHF audio data with high noise levels, i.e., signal-to-noise ratios (SNR) below 15 dB, as this data is recognized to be helpful for various speech-based machine-learning tasks. Specifically, the speaker role identification module is a simple yet efficient knowledge-based system exploiting the grammar defined by the International Civil Aviation Organization (ICAO). The system accepts text as input, either manually verified annotations or automatically generated transcripts. The developed approach achieves an average speaker role identification accuracy of about 83%. Finally, we show that training acoustic models separately for ATCOs and pilots, or using a multitask approach, is well suited to the noisy data and outperforms the traditional ASR system in which all data are pooled together.
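To illustrate the grammar-based idea described in the abstract, the sketch below shows a minimal, hypothetical rule-based speaker-role classifier over transcript text. It relies on the ICAO phraseology convention that an ATCO typically opens a transmission with the aircraft callsign followed by an instruction, whereas a pilot reads the instruction back and closes with the callsign. The callsign lexicon, command keywords, and window sizes are illustrative assumptions, not the authors' actual implementation.

```python
import re

# Illustrative lexicons (assumptions): airline radiotelephony designators and
# a few command verbs typical of ATCO instructions.
AIRLINE_DESIGNATORS = {"lufthansa", "speedbird", "ryanair", "swiss", "austrian"}
COMMAND_KEYWORDS = {"descend", "climb", "turn", "contact", "cleared", "reduce", "maintain"}

def starts_with_callsign(words):
    """Heuristic: does the utterance begin with an airline designator?"""
    return bool(words) and words[0] in AIRLINE_DESIGNATORS

def ends_with_callsign(words):
    """Heuristic: does an airline designator appear among the last few words?"""
    return any(w in AIRLINE_DESIGNATORS for w in words[-4:])

def classify_speaker_role(transcript: str) -> str:
    """Return 'ATCO', 'PILOT', or 'UNK' for a single transmission."""
    words = re.findall(r"[a-z]+", transcript.lower())
    has_command = any(w in COMMAND_KEYWORDS for w in words)
    if starts_with_callsign(words) and has_command:
        return "ATCO"    # callsign first, then instruction
    if ends_with_callsign(words):
        return "PILOT"   # readback closed by own callsign
    return "UNK"         # ambiguous; left for downstream handling

if __name__ == "__main__":
    print(classify_speaker_role("lufthansa three two alpha descend flight level one two zero"))
    print(classify_speaker_role("descending flight level one two zero lufthansa three two alpha"))
```

In this toy form the classifier only needs text input, so it can be applied to manually verified annotations as well as automatic transcripts, mirroring the workflow summarized in the abstract.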