HMM-Based Speech Synthesis
Erica Cooper
CS4706, Spring 2011

Concatenative Synthesis

HMM Synthesis
- A parametric model
- Can train on mixed data from many speakers
- The model takes up very little space
- Speaker adaptation

HMMs
- Some hidden process has generated a visible observation.
- Hidden states have transition probabilities and emission probabilities.

HMM Synthesis
- Every phoneme-in-context is represented by an HMM.
- Example sentences: "The cat is on the mat." / "The cat is near the door."
- Context features: <phone=/th/, next_phone=/ax/, word='the', next_word='cat', num_syllables=6, ...>
- Acoustic features extracted: f0, spectrum, duration
- Train an HMM on these examples.
- Each state outputs acoustic features (a spectrum, an f0, and a duration).

HMM Synthesis
- Many contextual features = data sparsity.
- Cluster similar-sounding phones, e.g. 'bog' and 'dog': the /aa/ in both has similar acoustic features, even though the contexts differ slightly.
- Build one HMM that produces both, trained on examples of both.

Experiments: Google, Summer 2010
- Can we train on lots of mixed data? (~1 utterance per speaker)
- More data vs. better data
- Training data: 15k utterances from Google Voice Search
- Example query: "ace hardware rural supply"

More Data vs. Better Data
- Voice Search utterances filtered by speech recognition confidence score:

  Confidence threshold | Utterances kept
  50%                  | 6849
  75%                  | 4887
  90%                  | 3100
  95%                  | 2010
  99%                  | 200

Future Work
- Speaker adaptation
- Phonetically balanced training data
- Listening experiments
- Parallelization
- Other sources of data
- Voices for more languages
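The slides describe HMMs abstractly: hidden states with transition and emission probabilities generating visible observations. Below is a minimal sketch of that idea; the states, symbols, and probability values are invented toy numbers (a real synthesis model like HTS emits continuous spectra, f0, and durations, not discrete symbols), and `forward` is just the standard forward algorithm for scoring an observation sequence.

```python
# Toy discrete HMM: hidden states with transition and emission
# probabilities, as in the slides. All numbers here are made up
# for illustration.

def forward(obs, states, start_p, trans_p, emit_p):
    """Total likelihood of an observation sequence (forward algorithm)."""
    # Initialize with start probability * emission of the first symbol.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Sum over all predecessor states, then emit the next symbol.
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

states = ("S1", "S2")
start_p = {"S1": 0.8, "S2": 0.2}
trans_p = {"S1": {"S1": 0.6, "S2": 0.4},
           "S2": {"S1": 0.3, "S2": 0.7}}
emit_p = {"S1": {"a": 0.7, "b": 0.3},
          "S2": {"a": 0.1, "b": 0.9}}

likelihood = forward(("a", "b"), states, start_p, trans_p, emit_p)
```

In training, such likelihoods (via Baum-Welch) are what fit the per-phone-in-context models; in synthesis, the emission distributions are what generate the spectrum, f0, and duration parameters.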
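The "More Data vs. Better Data" experiment filters Voice Search utterances by speech recognition confidence before training. A sketch of that filtering step, assuming the corpus is available as (text, confidence) pairs; the scores and all utterances except the slides' example query are hypothetical:

```python
# Keep only utterances whose ASR confidence meets a threshold,
# as in the "More Data vs. Better Data" experiment.

def filter_by_confidence(corpus, threshold):
    """Return the (text, confidence) pairs at or above the threshold."""
    return [(text, conf) for text, conf in corpus if conf >= threshold]

corpus = [
    ("ace hardware rural supply", 0.97),  # query from the slides; score made up
    ("pizza near me", 0.55),              # hypothetical utterance
    ("weather tomorrow", 0.88),           # hypothetical utterance
]

kept = filter_by_confidence(corpus, 0.75)  # drops the low-confidence utterance
```

Raising the threshold trades quantity for quality, which is exactly the sweep in the table above (50% keeps 6849 utterances, 99% keeps only 200).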