Easy Adaptation of Speech Recognition to Different Air Traffic Control Environments using the DeepSpeech Engine
Abstract
Recognizing and understanding human speech has become commonplace through systems such as Alexa®, the Google Assistant, or Siri®. Speech also plays a major role in air traffic control (ATC), as voice communication between air traffic controllers (ATCos) and pilots is essential for ensuring safe and efficient air traffic. This communication is still analogue, and ATCos are forced to re-enter the same communication content into digital systems via additional input devices. Automatic speech recognition (ASR) is a solution for automating this digitization process and an important step towards optimizing the ATCo workflow. This paper investigates the applicability of DeepSpeech, an open-source, easy-to-adapt, end-to-end speech recognition engine from the Mozilla Corporation, as a speech-to-text solution for ATC speech. Different training approaches are explored, such as training a model from scratch and adapting a model pre-trained on non-ATC speech. Model adaptation is performed using techniques such as fine-tuning, transfer learning, and layer freezing. Furthermore, the effect of employing an additional language model in conjunction with the end-to-end trained model is evaluated and shown to yield a considerable relative improvement of 61% in word error rate. Overall, a word error rate of 6.0% is achieved on voice recordings from operational and simulation environments of different airspaces, resulting in command recognition rates between 85% and 97%. These results show that DeepSpeech is a highly relevant solution for ATC speech, especially considering that its adaptation mechanisms are easy to use even for non-experts in speech recognition.
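As a minimal illustration of the decoding setup discussed in the abstract, the sketch below uses the Mozilla DeepSpeech 0.9.x Python bindings to transcribe a single 16 kHz recording, once with the end-to-end acoustic model alone and once with an external language model (scorer) enabled. The file names (atc_model.pbmm, atc.scorer, utterance.wav) are hypothetical placeholders and not artifacts produced in this work; the snippet only demonstrates the standard DeepSpeech API, not the paper's training pipeline.

```python
# Sketch: decoding one ATC utterance with and without an external scorer,
# using the Mozilla DeepSpeech 0.9.x Python package. All file names are
# hypothetical placeholders.
import wave

import numpy as np
import deepspeech

MODEL_PATH = "atc_model.pbmm"   # hypothetical acoustic model adapted to ATC speech
SCORER_PATH = "atc.scorer"      # hypothetical ATC language model (external scorer)

model = deepspeech.Model(MODEL_PATH)

# Read a 16-bit mono recording of a controller-pilot transmission;
# its sample rate must match the model's expected rate (16 kHz).
with wave.open("utterance.wav", "rb") as wav:
    assert wav.getframerate() == model.sampleRate()
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

# Decoding with the end-to-end model alone.
print("acoustic model only:", model.stt(audio))

# Beam-search decoding with the external language model enabled; this is
# the configuration reported above to lower the word error rate.
model.enableExternalScorer(SCORER_PATH)
print("with language model:", model.stt(audio))
```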