Fisher Language Model

The design choices behind the various language models built on the Fisher Corpus are described below. The implementation uses the SRILM toolkit, and some of the scripts are adapted from the Gigaword LM recipe. The main script is go_all.sh; the supporting scripts are here.

Source Text

The language model is built using only the repaired training data transcriptions, where each unique partial word is treated as a separate word.

N-gram order

2-gram, 3-gram and 4-gram models are built. Higher orders are not worth the extra computational resources they require in an ASR system: essentially nothing is gained beyond 4-grams (going from 4-grams to 5-grams lowers entropy by only 0.06 bits<ref name="goodman2001abit"/>).
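
For concreteness, a minimal sketch of building the three orders directly with ngram-count (the file names train.txt and fisher.*.lm are placeholders, not taken from go_all.sh):

 # Build 2-, 3- and 4-gram models from the repaired transcriptions
 # (one utterance per line), with modified Kneser-Ney smoothing.
 for ORDER in 2 3 4; do
     ngram-count -order $ORDER -text train.txt -unk \
                 -kndiscount -interpolate \
                 -lm fisher.${ORDER}gram.lm
 done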

Vocab Sizes

The LMs are generated using only the N most frequent words, mapping the rest to the UNK token, where N takes one of the following values:

{| class="wikitable"
! N !! Comparable vocab sizes
|-
| 500 || SVitchboard vocab size used by the JHU06 workshop
|-
| 1000 ||
|-
| 5000 ||
|-
| 10000 || Similar to the Switchboard vocab size used by the JANUS speech recognition group (9800)
|-
| 20k || WSJ/NAB vocab size used in the 1995 ARPA continuous speech evaluation
|-
| 70957 || All repaired words in the training data
|}

Which of these sizes will actually be used in the recognizer still remains to be seen.
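
A sketch of how a size-N vocabulary could be selected and applied, assuming the repaired transcriptions are in train.txt (the file names and the choice N=5000 are placeholders, not taken from the actual scripts):

 # Dump unigram counts, keep the N most frequent words, and build an LM
 # restricted to that vocabulary; everything else is mapped to the unk token.
 # (<s> and </s> also show up in the counts; keeping them in the list is harmless.)
 N=5000
 ngram-count -order 1 -text train.txt -write train.1cnt
 sort -k2,2nr train.1cnt | head -n $N | cut -f1 > vocab.${N}.txt
 ngram-count -order 3 -text train.txt -vocab vocab.${N}.txt -unk \
             -kndiscount -interpolate -lm fisher.3gram.${N}.lm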

Smoothing

Modified Kneser-Ney smoothing is used, since it performs best across a variety of n-gram counts and training-corpus sizes<ref name="chen1998empirical">Stanley Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical report TR-10-98, Harvard University, August 1998.</ref>.
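
For reference, the interpolated form of modified Kneser-Ney from <ref name="chen1998empirical"/> is

<math>
p_{\mathrm{KN}}(w_i \mid w_{i-n+1}^{i-1}) \;=\;
\frac{\max\!\bigl(c(w_{i-n+1}^{i}) - D(c(w_{i-n+1}^{i})),\, 0\bigr)}{\sum_{w_i} c(w_{i-n+1}^{i})}
\;+\; \gamma(w_{i-n+1}^{i-1})\, p_{\mathrm{KN}}(w_i \mid w_{i-n+2}^{i-1}),
</math>

where <math>D(c)</math> is 0 for a count of 0 and takes one of three estimated values <math>D_1</math>, <math>D_2</math>, <math>D_{3+}</math> for counts of 1, 2, and 3 or more; <math>\gamma(\cdot)</math> is the backoff weight that makes the distribution sum to one, and the lower-order distributions are built from continuation counts rather than raw counts. These are the discounts that SRILM's -kndiscount options estimate from count-of-count statistics.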

Pruning

Some experiments
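
The page does not record which pruning method was tried; as an illustration only, SRILM's entropy-based pruning can be applied to an already-built model like this (the threshold and file names are placeholders):

 # Drop n-grams whose removal changes the model's perplexity
 # by less than the given relative threshold.
 ngram -order 3 -lm fisher.3gram.lm -prune 1e-8 -write-lm fisher.3gram.pruned.lm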

Differences between ngram-count and make-big-lm

There is a subtle difference between ngram-count and make-big-lm that I have not been able to track down.

 make-big-lm -name .big -order 3 -sort -read ./LM/counts/ngrams -kndiscount -lm try -unk -debug 1

and

 ngram-count -order 3 -read LM/counts/ngrams \
             -kn1 LM/counts/kn1-3.txt -kn2 LM/counts/kn2-3.txt -kn3 LM/counts/kn3-3.txt \
             -kn4 LM/counts/kn4-3.txt -kn5 LM/counts/kn5-3.txt -kn6 LM/counts/kn6-3.txt \
             -kn7 LM/counts/kn7-3.txt -kn8 LM/counts/kn8-3.txt -kn9 LM/counts/kn9-3.txt \
             -kndiscount1 -kndiscount2 -lm try -interpolate -debug 1

give different results, even though the KN discounts are identical in both cases above.

ngram-count seems to do better on vocab sizes < 20k, and make-big-lm is slightly better on the full vocab.

I suspect it has something to do with make-big-lm allocating 0.05 of the total probability to the unk token for some reason, while ngram-count allocates almost nothing to it.
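
One way to check is to compare the unigram entry for the unk token in the two ARPA-format outputs (a sketch; the file names are placeholders, since both commands above write to the same name, try):

 # Print the \1-grams: entry for <unk>; the first field is the log10 probability.
 for LM in try.ngram-count try.make-big-lm; do
     echo "$LM:"
     awk '/^\\1-grams:/ {f=1; next} /^\\/ {f=0} f && $2 == "<unk>"' "$LM"
 done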


The cross-entropy (bits per word) and out-of-vocabulary (OOV) percentages on the dev and test sets for the language models generated by the original Gigaword recipe script. The cross-entropy values can be compared with those reported in <ref name="chen1998empirical"/>.

{| class="wikitable"
! n-gram order !! vocab !! ngram counts !! dev entropy !! dev OOV % !! test entropy !! test OOV %
|-
| 2-gram || 5k || ngram 1=5000<br/>ngram 2=580572<br/>ngram 3=0 || 6.924 || 3.117 || 6.958 || 2.917
|-
| 2-gram || 20k || ngram 1=20000<br/>ngram 2=980894<br/>ngram 3=0 || 7.200 || 0.810 || 7.217 || 0.748
|-
| 2-gram || all || ngram 1=70957<br/>ngram 2=1180725<br/>ngram 3=0 || 7.286 || 0.271 || 7.297 || 0.254
|-
| 3-gram || 5k || ngram 1=5000<br/>ngram 2=776554<br/>ngram 3=3234773 || 6.530 || 3.117 || 6.566 || 2.917
|-
| 3-gram || 20k || ngram 1=20000<br/>ngram 2=1176182<br/>ngram 3=4038261 || 6.809 || 0.810 || 6.827 || 0.748
|-
| 3-gram || all || ngram 1=70957<br/>ngram 2=1342196<br/>ngram 3=4201973 || 6.898 || 0.271 || 6.910 || 0.254
|}

Caching, Clustering and all that

Not worth trying, since trigrams are already fairly space-efficient (see Section 11.2, "All hope abandon, ye who enter here", in <ref name="goodman2001abit">Joshua Goodman. A Bit of Progress in Language Modeling, Extended Version. Microsoft Research Technical Report MSR-TR-2001-72, 2001.</ref>).

In particular, they have this to say about the best smoothing strategy they themselves have developed:

"Kneser-Ney smoothing leads to improvements in theory, but in practice, most language models are built with high count cutoffs, to conserve space, and speed the search; with high count cutoffs, smoothing doesn’t matter."

Model Quality

Model quality is summarized by the [[wikipedia:Perplexity|perplexity]] (equivalently, the cross-entropy reported in the table above) of the models on the dev and test sets.
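
Perplexity and the cross-entropy <math>H</math> reported above (in bits per word) are related by

<math>
PP = 2^{H},
</math>

so, for example, the 3-gram 5k model's dev entropy of 6.530 bits corresponds to a perplexity of roughly <math>2^{6.530} \approx 92</math>.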


<references/>
