Search Funnelback University

Search powered by Funnelback
1 - 10 of 19 search results for katalk:PC53 24 / |u:mi.eng.cam.ac.uk, where 0 match all words and 19 match some words.
  1. Results that match 1 of 2 words

  2. 5 Apr 2016: Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, pp. ... IEEE, 2015, pp. 4315–4319. [24] Mark J. F. Gales, “Cluster adaptive training of hidden Markov models,” Speech and Audio Processing, IEEE
  3. 24 Jun 2016: Such lattices can be efficiently generated using standard HMM-based approaches [24]. It is simple to notice that the inference problem in equation (5) or its lattice-based approximation includes equation (4) ... [24] J. J. Odell, “The use of context in
  4. Investigation of multilingual speech-to-text systems for use in…

    mi.eng.cam.ac.uk/~kmk/presentations/UEdin_Feb14_Knill.pdf
    12 May 2016: CUED Lorelei Team, BABEL Program. Seminar at Edinburgh University, February 2014. 24.
  5. template.dvi

    mi.eng.cam.ac.uk/~ar527/ragni_is2016.pdf
    10 Nov 2016: Data-based schemes instead make use of data to initialise [27], train [20, 23] or adapt [24] the ... The amount of training data in VLLP conditions is 31,959 and 24,703 words.
  6. 12 Jul 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  7. Knill_CUEDSeminar_20140403.dvi

    mi.eng.cam.ac.uk/~kmk/presentations/CUED_Apr14_Knill.pdf
    12 May 2016: CUED Lorelei Team, Babel Program. Seminar at Cambridge University, April 2014. 24.
  8. System Combination with Log-linear Models

    mi.eng.cam.ac.uk/~mjfg/icassp16_yang.pdf
    5 Apr 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  9. slides_part2.dvi

    mi.eng.cam.ac.uk/~kmk/presentations/TutorialIC_Sep2015_part2_Knill.pdf
    12 May 2016: unintelligible, mispronounced, fragment words. • Convert PCM, 48 kHz, 24-bit to A-law, 8 kHz, 8-bit. ... • Use the hidden state values as a compact history representation [23, 24].
  10. Multi-Language Neural Network Language Models

    mi.eng.cam.ac.uk/~mjfg/interspeech16_MLNNLMs.pdf
    26 Sep 2016: Data-based schemes instead make use of data to initialise [27], train [20, 23] or adapt [24] the ... The amount of training data in VLLP conditions is 31,959 and 24,703 words.
  11. slides_part1.dvi

    mi.eng.cam.ac.uk/~kmk/presentations/TutorialIC_Sep2015_part1_Knill.pdf
    12 May 2016: Maximum Mutual Information (MMI) [23, 24]: maximise Fmmi(λ) = (1/R) ... Theory, 1991. Cambridge University Engineering Department. 60. DNNs for Speech Processing. [24] P.
