Search Funnelback University
1 - 20 of 24 search results for TALK:ZA31 24 / |u:mi.eng.cam.ac.uk where 0 match all words and 24 match some words.
  1. Results that match 1 of 2 words

  2. UKspeech2017

    mi.eng.cam.ac.uk/UKSpeech2017/posters/y_wang.pdf
    17 Nov 2017: L1, proficiency level, recording. Spontaneous responses increase difficulty, e.g. disfluencies. Transcribing is challenging: inter-annotator error rate about 24.7%.
  3. Deep Learning for Speech Recognition

    mi.eng.cam.ac.uk/~mjfg/LxMLS17.pdf
    29 Nov 2017: Network Interpretation [24]. Standard /ay/, Stimulated /ay/. • Deep learning usually highly distributed - hard to interpret • awkward to adapt/understand/regularise • modify training - add stimulation regularisation • improves ASR performance.
  4. Template.dvi

    mi.eng.cam.ac.uk/~ar527/chen_icassp2017a.pdf
    22 Mar 2017: Mongolian FLP 511K 24.0K - 4.19 12.19. WEB 139M 199.8K 0.93 2.10 5.62. ... 24, no. 11, pp. 2146–2157, 2016. [15] Xie Chen, Yongqiang Wang, Xunying Liu, Mark Gales, and P.
  5. .poster_jeremy_v2.tex.dvi

    mi.eng.cam.ac.uk/UKSpeech2017/posters/j_wong.pdf
    17 Nov 2017: 207V: separate 45.8 46.0 46.6, MT 47.7 47.8 47.3, MT-TS 45.7 45.7 46.3. AMI: separate 24.5 24.6 24.6, MT 25.4 25.5 ... 25.1, MT-TS 24.3 24.4 24.6.
  6. STIMULATED TRAINING FOR AUTOMATIC SPEECH RECOGNITION AND KEYWORD…

    mi.eng.cam.ac.uk/~ar527/ragni_icassp2017b.pdf
    22 Mar 2017: These were trained on FLP data of 24 Babel languages and CTS data of 4 additional languages, English, Spanish, Arabic and Mandarin, released by LDC. ... Stacked Hybrids were trained with and without stimulated training using monophone initialisation
  7. Automatic Analysis of Motivational Interviewing with Diabetes Patients…

    mi.eng.cam.ac.uk/UKSpeech2017/posters/x_wei.pdf
    20 Nov 2017: Results: Model, Senone No., WER (%). Baseline DNN-HMM 3981 53.13. MI adapted DNN 3981 47.24. ... Hhit NF NREF PRC RCL. lium 35 144 93 0.24 0.38; ivector 42 159 93 0.26 0.45.
  8. 22 Mar 2017: FLP Web FLP Web (#) ASR KWS. Swahili 294 – 24.4 0 8.2 8.5 19.6. Dholuo 467 1,217 17.5 18.8 6.1 3.0 10.0. Amharic 388 ... 4, pp. 1738–1752, 1990. [24] P. Ghahremani, B. BabaAli, D. Povey, K.
  9. Future Word Contexts in Neural Network Language Models

    mi.eng.cam.ac.uk/UKSpeech2017/posters/x_chen.pdf
    17 Nov 2017: dev eval, words (w/s), PPL. ng4 - - 80.4 23.8 24.2; uni-rnn - 4.5K 66.8 21.7 22.1. ... ng4 - 23.8 23.5 24.2 23.9; uni-rnn - 21.7 21.5 21.9 21.7.
  10. Experimental Studies on Teacher-student Training of Deep Neural…

    mi.eng.cam.ac.uk/UKSpeech2017/posters/q_li.pdf
    20 Nov 2017: [Plot: PER (%) for 3-layer (100/250/500) and 4-layer (500) baseline vs. student systems] ... PER (%): 7-layer (500) 24.55 23.55; RNN 23.84 20.59; Ensemble 23.73
  11. Use of Graphemic Lexicons for Spoken Language Assessment

    mi.eng.cam.ac.uk/UKSpeech2017/posters/k_knill.pdf
    17 Nov 2017: Decoder, Gujarati, Mixed (word): %PER %GER %PER %GER. Ph 25.8 24.9 33.9 32.9; Gr 29.0 23.7 36.6 30.8.
  12. Modular Construction of Complex Deep Learning Architectures in HTK

    mi.eng.cam.ac.uk/UKSpeech2017/posters/f_kreyssig.pdf
    20 Nov 2017: All models used 24 log-Mel filter bank coefficients with their Δ and ΔΔ values as input features, except the CNN which used 40 without any.
  13. A learned emotion space for emotion recognition and emotive speech…

    mi.eng.cam.ac.uk/UKSpeech2017/posters/z_hodari_poster.pdf
    23 Dec 2017: Table 1: Performance classifying happy, sad, angry, neutral. Model, Inputs, Accuracy: Random N/A 24.14%; Most common N/A 33.00%; LSTM eGeMAPS LLDs 43.17%; TD-CNN Spectrogram
  14. Template.dvi

    mi.eng.cam.ac.uk/~mjfg/CUED-Chen-RNNLMKWS.pdf
    22 Mar 2017: Mongolian FLP 511K 24.0K - 4.19 12.19. WEB 139M 199.8K 0.93 2.10 5.62. ... 24, no. 11, pp. 2146–2157, 2016. [15] Xie Chen, Yongqiang Wang, Xunying Liu, Mark Gales, and P.
  15. Low-Resource Speech Recognition and Keyword-Spotting

    mi.eng.cam.ac.uk/~mjfg/SPECOM_2017.pdf
    29 Nov 2017: 23/63: Stimulated Systems. /ey/ /em/ /sil/ /sh/ /ow/ /ay/. 24/63: Stimulated Network Training.
  16. MORPH-TO-WORD TRANSDUCTION FOR ACCURATE AND EFFICIENT AUTOMATIC SPEECH …

    mi.eng.cam.ac.uk/~mjfg/CUED-Ragni-Morph-To-Word.pdf
    22 Mar 2017: FLP Web FLP Web (#) ASR KWS. Swahili 294 – 24.4 0 8.2 8.5 19.6. Dholuo 467 1,217 17.5 18.8 6.1 3.0 10.0. Amharic 388 ... 4, pp. 1738–1752, 1990. [24] P. Ghahremani, B. BabaAli, D. Povey, K.
  17. STIMULATED TRAINING FOR AUTOMATIC SPEECH RECOGNITION AND KEYWORD…

    mi.eng.cam.ac.uk/~mjfg/CUED-Ragni-Stimulated-ASR-KWS.pdf
    22 Mar 2017: These were trained on FLP data of 24 Babel languages and CTS data of 4 additional languages, English, Spanish, Arabic and Mandarin, released by LDC. ... Stacked Hybrids were trained with and without stimulated training using monophone initialisation
  18. 24 May 2017: ∏_{t=1}^{T} p(y_t | h_t, h̃_t) (1.24), where the normalisation term ensures that this is a valid PDF. ... 24. F_cml(λ; D) = Σ_{i=1}^{n} log p(w^(i)_{1:L^(i)} | Y^(i)_{1:T^(i)}; λ) (1.100).
  19. How Does the Femoral Cortex Depend on Bone Shape? A ...

    mi.eng.cam.ac.uk/reports/svr-ftp/gee_tr704.pdf
    15 Jun 2017: How Does the Femoral Cortex Depend on Bone Shape? A Methodology for the Joint Analysis of Surface Texture and Shape. A. H. Gee, G. M. Treece and K. E. S. Poole. CUED/F-INFENG/TR 704, 15 June 2017. Cambridge University Engineering Department, Trumpington
  20. University of Cambridge Engineering Part IB Information Engineering…

    mi.eng.cam.ac.uk/~cipolla/lectures/PartIB/old/2017-DNN-lecture-3.pdf
    18 May 2017: the 9 filters create 9 images, which have 9 × 24 × 24 = 5184 pixels, and thus we need 51,840 parameters to reduce those.
  21. IB-interestpoints.dvi

    mi.eng.cam.ac.uk/~cipolla/lectures/PartIB/old/2017-IB-handout2.pdf
    18 May 2017: outliers in the output of the corner detector. (Engineering Part IB: Paper 8 Image Matching, p. 24.)
