Fisher Language Model

The design choices behind the various language models built on the Fisher Corpus are described below. The implementation uses the SRILM toolkit, and the scripts are based on this gigaword LM recipe.

Source Text

The language model is built using only the repaired training data transcriptions, where each unique partial word is treated as a separate word.

N-gram order

2-gram and 3-gram models are built. 4-gram and higher-order models do not yield significant improvements and are not worth the extra computational resources they require in an ASR system; essentially nothing is gained beyond 5-grams (going from 4-grams to 5-grams lowers entropy by only 0.06 bits<ref name="goodman2001abit"/>).
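As a rough sketch of how the two orders are built (file names here are placeholders, not taken from the actual scripts), the order is just a parameter to SRILM's ngram-count; the vocabulary and smoothing options discussed in the following sections are added to the same command:

<pre>
# Hypothetical file names; the real scripts follow the gigaword LM recipe.
ngram-count -order 2 -text fisher.train.txt -lm fisher.2gram.lm
ngram-count -order 3 -text fisher.train.txt -lm fisher.3gram.lm
</pre>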

Vocab Sizes

The LMs generated use only the N most frequent words, mapping the rest to the UNK token, where N is one of:

N      Reason
500    SVitchboard vocab size used by the JHU06 workshop
9800   Switchboard vocab size used by the JANUS speech recognition group
20k    WSJ/NAB vocab size used in the 1995 ARPA continuous speech evaluation
~70k   All repaired words in the training data

Which of these sizes will actually be used in the recognizer remains to be seen.
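One way to derive such a top-N word list is to count unigrams with ngram-count and keep the most frequent entries (a sketch using standard Unix tools; file names are assumptions):

<pre>
# Count word unigrams, sort by frequency, keep the 20k most frequent types.
ngram-count -order 1 -text fisher.train.txt -write fisher.1cnt
sort -k2,2nr fisher.1cnt | cut -f1 | head -n 20000 > vocab.20k.txt
</pre>

Passing this list to ngram-count via -vocab, together with -unk, maps every out-of-vocabulary word to the unknown-word token (see the command in the next section).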

Smoothing

Modified Kneser-Ney smoothing is used, since it performs best across a variety of n-gram orders and training corpus sizes<ref name="chen1998empirical">Stanley Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, August 1998.</ref>.
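Putting the order, vocabulary, and smoothing choices together, a sketch of the resulting ngram-count invocation (in SRILM, -kndiscount selects modified Kneser-Ney discounting and -interpolate interpolates higher- and lower-order estimates; file names remain placeholders):

<pre>
ngram-count -order 3 -text fisher.train.txt \
    -vocab vocab.20k.txt -unk \
    -kndiscount -interpolate \
    -lm fisher.3gram.kn.lm
</pre>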

Caching, Clustering and all that

These techniques are not worth trying, since trigram models are already fairly space-efficient (see Section 11.2, "All hope abandon, ye who enter here", in <ref name="goodman2001abit">Joshua Goodman. A Bit of Progress in Language Modeling, Extended Version. Microsoft Research Technical Report MSR-TR-2001-72, 2001.</ref>).

In particular, the author has this to say about the best smoothing strategy, one he himself helped develop:

"Kneser-Ney smoothing leads to improvements in theory, but in practice, most language models are built with high count cutoffs, to conserve space, and speed the search; with high count cutoffs, smoothing doesn’t matter."

Model Quality

The quality of each model is measured by its wikipedia:Perplexity.
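Perplexity on a held-out transcription set can be computed with SRILM's ngram tool (a sketch; the held-out file name is an assumption):

<pre>
# Reports OOV count, log probability, and perplexity for the held-out text.
ngram -order 3 -lm fisher.3gram.kn.lm -unk -ppl fisher.heldout.txt
</pre>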


<references/>
