Background

From SpeechWiki

Revision as of 22:37, 9 February 2010 by Hkim17 (Talk | contribs)

There are several motivations for generating a set of articulatory feature-level transcriptions:

  • To serve as reference for measuring feature classifier accuracy
  • To train pronunciation models separately from acoustic models
  • To study asynchrony and reduction effects

In the past, classifier accuracy has been measured by comparison against a reference phonetic transcription, assuming some canonical mapping from phones to feature values. However, especially for conversational speech, we cannot assume that such a mapping yields accurate reference feature values: coarticulation and reduction cause the actual feature values to deviate too often from their canonical, phone-based values.
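The phone-based evaluation described above can be sketched as follows. This is a minimal illustration, not the actual procedure or phone set used here: the phone labels, feature names, and table entries are hypothetical, and a real mapping would cover a full phone inventory.

```python
# Sketch: derive reference articulatory feature labels from a phone-level
# transcription via a canonical phone-to-feature table, then score a
# classifier's frame-level output against them. All entries below are
# illustrative assumptions, not an actual mapping from this project.

# Hypothetical canonical mapping: phone -> {feature: value}
PHONE_TO_FEATURES = {
    "b":  {"nasality": "non-nasal", "voicing": "voiced"},
    "m":  {"nasality": "nasal",     "voicing": "voiced"},
    "aa": {"nasality": "non-nasal", "voicing": "voiced"},
}

def phones_to_feature_frames(phone_frames, feature):
    """Map a per-frame phone sequence to per-frame values of one feature."""
    return [PHONE_TO_FEATURES[p][feature] for p in phone_frames]

def frame_accuracy(predicted, reference):
    """Fraction of frames on which the classifier agrees with the reference."""
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Usage: a phone transcription expanded to frames vs. classifier output.
phone_frames = ["b", "aa", "aa", "m", "m"]
ref = phones_to_feature_frames(phone_frames, "nasality")
hyp = ["non-nasal", "non-nasal", "nasal", "nasal", "nasal"]
print(frame_accuracy(hyp, ref))  # prints 0.8
```

The limitation noted above is visible in this toy example: the frame scored as an "error" may in fact be correct if the speaker nasalized the vowel before /m/, which the canonical table cannot represent.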

We are not aware of any data set that has been labeled at the feature level. There are, of course, corpora of measured articulation, such as MOCHA or the Wisconsin X-ray microbeam database. Such corpora could serve a similar purpose, but the mapping from physical measurements to discrete feature values is non-trivial, and the measurements often omit important information, such as nasality. This motivates us to generate this new data set.
