Our paper "COMPARISON OF LANGUAGE MODELS TRAINED ON WRITTEN TEXTS AND SPEECH TRANSCRIPTS IN THE CONTEXT OF AUTOMATIC SPEECH RECOGNITION" has been accepted at the FedCSIS conference. The paper describes our statistical experiments on the differences between using speech transcripts and other texts for language modeling in speech recognition.
We investigate whether language models used in automatic speech recognition (ASR) should be trained on speech transcripts rather than on written texts. By computing the log-likelihood statistic for part-of-speech (POS) n-grams, we show that there are significant differences between written texts and speech transcripts. We also evaluate the performance of language models trained on speech transcripts and on written texts in an ASR task, and show that the former yield greater word error reduction rates (WERR), even when trained on much smaller corpora. For our experiments we used the manually annotated one-million-word subcorpus of the National Corpus of Polish and an HTK acoustic model.
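The paper does not reproduce the formula here, but the log-likelihood comparison of POS n-gram frequencies across two corpora can be sketched with Dunning's G² statistic, a standard choice for this kind of corpus comparison. The function and the example counts below are illustrative assumptions, not figures from the paper:

```python
import math

def log_likelihood(a: int, b: int, c: int, d: int) -> float:
    """Dunning's log-likelihood (G^2) statistic for comparing the
    frequency of an item (e.g. a POS n-gram) across two corpora.

    a, b -- counts of the item in corpus 1 and corpus 2
    c, d -- total token counts of corpus 1 and corpus 2
    """
    # Expected counts under the null hypothesis that the item is
    # equally frequent (per token) in both corpora.
    e1 = c * (a + b) / (c + d)
    e2 = d * (a + b) / (c + d)
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

# Hypothetical example: a POS trigram observed 120 times in a
# 500k-token transcript corpus vs 80 times in a 1M-token written
# corpus. G^2 above 3.84 is significant at p < 0.05 (1 d.f.).
g2 = log_likelihood(120, 80, 500_000, 1_000_000)
```

A value of G² above the chi-squared critical threshold marks the n-gram as significantly over- or under-represented in one of the corpora.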