Fisher Corpus
From SpeechWiki
The Fisher corpus is still relatively new and rough, and this page is meant to help people quickly build a basic speech recognizer with it.
Train/Devel/Test partition
I've split the entire Fisher corpus 80/10/10 percent into Train/Devel/Test partitions.
The utterance ID file is in filelists/uttIds.txt, and the splits are as follows:
Set | Conversation Sides | Lines in uttIds.txt |
---|---|---|
Training | 00001A to 09360B | |
Devel | 09361A to 10530B | |
Test | 10531A to 11699B | |
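The ID ranges in the table above can be turned into a simple partition lookup. A minimal sketch, assuming a conversation-side ID is a 5-digit conversation number followed by a channel letter (the function name is mine):

```python
def partition(side_id: str) -> str:
    """Map a Fisher conversation-side ID (e.g. "09361A") to its partition
    using the ID ranges from the table above."""
    num = int(side_id[:5])
    if num <= 9360:
        return "train"
    elif num <= 10530:
        return "devel"
    else:
        return "test"

print(partition("00001A"))  # train
print(partition("09361B"))  # devel
print(partition("11699A"))  # test
```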
Dictionaries
total non-empty utterances | 2223159 | |
---|---|---|
total uncertain words or phrases enclosed in (( )) (e.g. (( NO WAY ))) | 283935 | |
total word tokens in corpus (including uncertain words) | 21905137 | 100% |
total non-speech markers enclosed in [] (e.g. [LAUGH]) | 559629 | 2.555% |
total partial words (starting or ending in -) | 153098 | 0.6990% |
total partial words that could be repaired | 101550 | 0.4636% |
total unique words | 64924 | 100% |
---|---|---|
unique words occurring once in the corpus | 23192 | 35.72% |
unique words occurring once or twice in the corpus | 31272 | 48.17% |
corpus coverage if the vocabulary excludes words occurring once in the corpus | 99.894% | |
corpus coverage if the vocabulary excludes words occurring once or twice in the corpus | 99.857% | |
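The coverage figures above come from token counts: coverage is the fraction of corpus tokens whose word type stays in the vocabulary. A toy sketch of the computation (the data here is illustrative, not Fisher):

```python
from collections import Counter

# Toy corpus: "c" and "e" are singletons, so a vocabulary that drops
# words occurring only once loses exactly those two tokens.
tokens = "a a a b b c d d d d e".split()
counts = Counter(tokens)
total = sum(counts.values())

# Tokens covered by a vocabulary that excludes singleton words.
covered = sum(n for n in counts.values() if n > 1)
coverage = covered / total
print(f"{coverage:.3%}")  # 81.818%
```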
In Fisher, partial words (those starting or ending with a '-') often have the complete word in the vicinity (within 6 words on the same conversation side). I've repaired such words by replacing the '-' with the missing part of the word, taken from the nearby complete word that shares the non-'-' portion, and enclosing the recovered part in [] brackets. Statistics for this new vocabulary are below:
total unique words | 79742 | 100% |
---|---|---|
unique words occurring once in the corpus | 32703 | 41.01% |
unique words occurring once or twice in the corpus | 42967 | 53.88% |
words not in the cmudict 0.6 dictionary | 18588 | 23.3% |
Corpus coverage by words not in the cmudict 0.6 dictionary | ?? |
Note that there are two uses for the [] brackets:
- A complete word in [] brackets denotes a non-speech event, e.g. [LAUGH] or [SIGH].
- A word with only part of it enclosed in [] brackets denotes a partial word, with the bracketed part missing from the audio (e.g. RA[THER] => R AE).
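The repair heuristic described above can be sketched as follows. This is a hedged reconstruction from the description, not the actual script; the window size of 6 matches the text, and the matching rule (a nearby complete word sharing the non-'-' stem) is an assumption:

```python
def repair(tokens, window=6):
    """Replace partial words ("RA-" or "-THER") with repaired forms
    ("RA[THER]") when a complete word sharing the non-'-' part appears
    within `window` words on the same conversation side."""
    out = list(tokens)
    for i, tok in enumerate(out):
        if not (tok.endswith("-") or tok.startswith("-")):
            continue
        stem = tok.strip("-")
        lo, hi = max(0, i - window), min(len(out), i + window + 1)
        for j in range(lo, hi):
            cand = out[j]
            if j == i or "-" in cand or "[" in cand:
                continue  # skip the partial word itself and other partials
            if tok.endswith("-") and cand.startswith(stem) and cand != stem:
                out[i] = stem + "[" + cand[len(stem):] + "]"
                break
            if tok.startswith("-") and cand.endswith(stem) and cand != stem:
                out[i] = "[" + cand[:len(cand) - len(stem)] + "]" + stem
                break
    return out

print(repair(["I", "RA-", "I", "RATHER", "LIKE", "IT"]))
# → ['I', 'RA[THER]', 'I', 'RATHER', 'LIKE', 'IT']
```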
Phonetic Dictionary
A word's pronunciation was derived using the Phonetic Transcription Tool, in this order of preference:
- If a word is in the dictionary, use the dictionary definition.
- If a word contains numbers, spell out the single digits.
- If a word contains underscores, treat it as an acronym and spell out the single letters.
- If the word is a partial word (has [] brackets) but the whole word is in the dictionary, do forced alignment.
- Otherwise, do Viterbi decoding.
- The Phonetic Transcription Tool still could not handle some of the words. These I transcribed by hand, and they are listed here.
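The preference order above amounts to a dispatch over word shapes. A hypothetical sketch, where `forced_align` and `viterbi_decode` are stand-ins for the Phonetic Transcription Tool back-ends (their interfaces are assumed, not documented):

```python
DIGIT_NAMES = {"0": "ZERO", "1": "ONE", "2": "TWO", "3": "THREE",
               "4": "FOUR", "5": "FIVE", "6": "SIX", "7": "SEVEN",
               "8": "EIGHT", "9": "NINE"}

def pronounce(word, cmudict, forced_align, viterbi_decode):
    if word in cmudict:                      # 1. dictionary lookup
        return cmudict[word]
    if any(ch.isdigit() for ch in word):     # 2. spell out single digits
        parts = [DIGIT_NAMES.get(ch, ch) for ch in word]
        return " ".join(cmudict.get(p, p) for p in parts)
    if "_" in word:                          # 3. acronym: single letters
        return " ".join(cmudict.get(ch, ch) for ch in word if ch != "_")
    whole = word.replace("[", "").replace("]", "")
    if "[" in word and whole in cmudict:     # 4. partial word: forced alignment
        return forced_align(word, cmudict[whole])
    return viterbi_decode(word)              # 5. fall back to Viterbi decoding

# Toy usage with stand-in back-ends:
cmu = {"CAT": "K AE T", "RATHER": "R AE DH ER", "TWO": "T UW"}
fa = lambda w, phones: phones                # stand-in forced aligner
vd = lambda w: "SPN"                         # stand-in Viterbi decoder
print(pronounce("CAT", cmu, fa, vd))         # K AE T
print(pronounce("RA[THER]", cmu, fa, vd))    # R AE DH ER
```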
The final dictionary containing every word in the repaired Fisher corpus is here.
Mixed Unit Dictionary
Language Model
Acoustic Model
There are two sets of PLP feature vectors created for the entire corpus.
PLPs for MLP classifiers
PLPs created in exactly the same way as the training data for the MLPs described in <ref name="frankel2007articulatory">J. Frankel et al., “Articulatory feature classifiers trained on 2,000 hours of telephone speech,” ICASSP, 2007</ref>. The HCopy config file to generate PLP features for MLP input is here. This way, we can use the MLPs presented in the above paper for segmenting the speech for time-shrinking experiments.
mean and variance normalized, ARMAed PLPs for gaussian mixtures
The second set of features is used to build the Gaussian mixture models. The features are PLPs with deltas and accelerations, generated with this HCopy config. The following aspects are slightly non-standard:
- The mel-frequency filter bank is constructed only over the band 125-3800 Hz, not over the entire telephone-speech range of 0-4000 Hz. There is some slight benefit to this found in <ref name="MVA">MVA: a noise-robust feature processing scheme</ref>, although in <ref name="Hain1998Htk"/> band-limiting has an ambiguous effect on accuracy.
- The 0th cepstral coefficient is used instead of the log-energy, again due to experiments in <ref name="MVA"/>.
At this point, only the frames corresponding to transcribed audio are extracted, and the following steps are performed on those frames alone. The features are still stored in one file per conversation side.
Normalization
- The cepstral coefficients, deltas and accelerations are each normalized to zero mean and unit variance, as in <ref name="MVA"/>. This differs from the HTK book, which normalizes only the static coefficients and computes the deltas and accelerations afterwards (without re-normalizing them). Normalization is done per conversation side, as recommended in <ref name="Hain1998Htk">Hain 1998, The 1998 HTK System For Transcription of Conversational Telephone Speech</ref>.
- Finally, an order-2 ARMA filter is applied. The whole process is made easy by this MVA program written by Chia-ping Chen.
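The normalization steps above can be sketched in NumPy. This is a hedged reconstruction of the MVA-style pipeline, assuming the standard ARMA smoothing form (each output frame averages the previous M outputs plus the current and next M inputs); the actual MVA program may differ in edge handling:

```python
import numpy as np

def mvn(feats):
    """Per-conversation-side mean/variance normalization: zero mean,
    unit variance per dimension, applied to statics, deltas and
    accelerations alike (feats: num_frames x num_dims)."""
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0)
    return (feats - mu) / np.maximum(sigma, 1e-8)

def arma(feats, order=2):
    """Order-M ARMA smoothing: y[t] = (y[t-M..t-1] + x[t..t+M]) / (2M+1).
    Frames within M of either edge are left unfiltered."""
    T, _ = feats.shape
    out = np.copy(feats)
    for t in range(order, T - order):
        out[t] = (out[t - order:t].sum(axis=0)
                  + feats[t:t + order + 1].sum(axis=0)) / (2 * order + 1)
    return out

# Toy usage: 100 frames of 39-dim features (13 PLPs + deltas + accels).
x = np.random.randn(100, 39)
y = arma(mvn(x))
print(y.shape)  # (100, 39)
```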
<references/>