Search Funnelback University

Search powered by Funnelback
1–50 of 468 search results for katalk:za31 24 / / / / / |u:mi.eng.cam.ac.uk, where 0 match all words and 468 match some words.
  1. Results that match 1 of 2 words

  2. 20 Feb 2018: Trial / # users / average # calls / median # calls: AMT 140 6.5 2; Cambridge 17 24.4 20. ... vol. 24, no. 2, pp. 150–174, 2010. [7] Amazon, “Amazon Mechanical Turk,” 2011.
  3. PowerPoint presentation

    mi.eng.cam.ac.uk/UKSpeech2017/posters/e_tsunoo.pdf
    3 Jul 2018: 24,000 fishermen a year. Mostly in storms. And not every country keeps accurate records.
  4. 4 Nov 2018: [23] L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996. [24] O. Siohan, B. ... IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(8):1438–1449, 2016. Introduction. Graphemic English systems.
  5. Simplifying very deep convolutional neural network architectures for…

    mi.eng.cam.ac.uk/UKSpeech2017/posters/j_rownicka.pdf
    3 Jul 2018: training set of Aurora4. Model A B C D AVG: DNN/clntr 2.71 43.00 24.06 58.66 45.48; VDCNN-max-4FC/clntr 2.32 35.99 21.20 ... 24, no. 12, pp. 2263–2276, Dec. 2016. Contact: j.m.rownicka@sms.ed.ac.uk.
  6. 20 Feb 2018: error rate of 33.2%, and for the first- and second-order derivatives the error rates of the classifiers are 33.1% and 24.2%, respectively. ... 24.8 42.4 par 22.7 13.1 21.7 32.4 32.7 25.2 27.4 45.1.
  7. 15 Jun 2018: Significant WER improvements were observed after interpolating with the n-gram LM for n-best rescoring – a common practice for speech recognition [8, 24, 25]. ... [24] S. Kombrink, T. Mikolov, M. Karafiát, and L. Burget, “Recurrent neural network
  8. ICSLPDataCollection-10

    mi.eng.cam.ac.uk/~sjy/papers/wiyo04b.pdf
    20 Feb 2018: Per-turn WER / Per-dialog WER: None 2 6 24 83% 0% 0%; Low 4 12 48 83% 32% 28%; Med 4 12 48 77% 46% 41%; Hi 2 6 24 ... Dataset. Metrics (task & user sat). R2. Significant predictors. ALL User-S 52% 1.03 Task; ALL User-C 60% 5.29 Task – 1.54
  9. Template.dvi

    mi.eng.cam.ac.uk/~ar527/chen_asru2017.pdf
    15 Jun 2018: LM rescore, dev eval (Vit CN Vit CN): ng4 - 23.8 23.5 24.2 23.9. ... LM #succ words dev eval: ng4 23.8 24.2; uni-rnn - 21.7 22.1.
  10. The Effect of Cognitive Load on a Statistical Dialogue ...

    mi.eng.cam.ac.uk/~sjy/papers/gtht12.pdf
    20 Feb 2018: Computer Speech and Language, 24(4):562–588. O. Tsimhoni, D. Smith, and P. Green. ... Computer Speech and Language, 24(2):150–174.
  11. paper.dvi

    mi.eng.cam.ac.uk/~ar527/ragni_is2018a.pdf
    15 Jun 2018: The stage one system used an HTK [24] configuration that had been previously employed for all Babel tasks [25, 26], multi-genre English broadcast transcription [27] and many others. ... 25, no. 3, pp. 373–377, 2017. [24] S. J. Young, G.
  12. 20 Feb 2018: The Knowledge Engineering Review, Vol. 00:0, 1–24. © 2006, Cambridge University Press. DOI: 10.1017/S000000000000000. Printed in the United Kingdom.
  13. 15 Jun 2018: In [24] phonetic pronunciation features consisting of a set of phone-pair distances were proposed for vowels and applied to read speech. ... 8, no. 4, pp. 369–394, 1994. [24] N. Minematsu, S. Asakawa, and K.
  14. 20 Feb 2018: wwpos 24.52 11.29 18.47; wspos 11.33 4.91 3.31; wpofs 1.13 4.82 8.82; wppofs 24.27 6.49 10.54; wonset 15.08 0.33 ... 47.3% 52.7%. 75.3% 24.7%. Figure 3: Categorical quality ratings for spectral conversion, duration conversion, HMM-based contour generation.
  15. 15 Jun 2018: Word-level confidence scores are returned from the Kaldi [24] decoder, which are frame weighted and undergo a piece-wise mapping for use in error detection. ... 3660–3664. [24] D. Povey et al., “The Kaldi Speech Recognition Toolkit,” in Proc. of the
  16. 20 Feb 2018: 3.3. The agenda-based simulated user. The agenda-based user simulator [24, 25] factorises the user state into an agenda and a goal. ... [23] TopTable, “TopTable,” 2012, https://www.toptable.com. [24] J. Schatzmann, Statistical User and Error Modelling
  17. ./plot_entropy.eps

    mi.eng.cam.ac.uk/~ar527/chen_is2017.pdf
    15 Jun 2018: 24, no. 8, pp. 1438–1449, 2016.
  18. 20 Feb 2018: In control tests by human users, the success rate of the system was 24.5% higher than the baseline Let’s Go! ... Compared to the BASELINE system, the BUDSLETSGO system improves the dialogue success rate by 24.5% and the word error rate by 9.7%.
  19. 20 Feb 2018: 24, no. 4, pp. 562–588, 2010. [13] M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young, “POMDP-based dialogue manager adaptation
  20. 20 Feb 2018: SFR / Name Reward Success #Turns: best prior 8.66 0.35 85.40 2.19 8.32 0.20; adapted 9.62 0.30 89.60 1.90 8.24 0.19. ... The system was deployed in a telephone-based set-up, with subjects recruited via Amazon MTurk, and a recurrent neural network model was used
  21. 20 Feb 2018: The static feature set comprised 24 Mel-cepstral coefficients, logarithm of F0 and aperiodic energy components in five frequency. ... 12 sentences were then randomly selected to make up a test set for each listener, leading to 24 wave-file pairs (12 for
  22. 20 Feb 2018: Computer Speech and Language, 24(4):562–588. B. Thomson, M. Gašić, M. Henderson, P. Tsiakoulis, and S. Young. ... Computer Speech and Language, 24(2):150–174. B. Zhang, Q. Cai, J. Mao, E. Chang, and B. Guo. 2001.
  23. 20 Feb 2018: S. Young et al. / Computer Speech and Language 24 (2010) 150–174 151. ... 152 S. Young et al. / Computer Speech and Language 24 (2010) 150–174.
  24. 20 Feb 2018: scr-10% 2.24 2.03 2.00 1.92. p < 0.05, p < 0.005. Table 2: Human evaluation for utterance quality in two domains.
  25. IEEE TRANS. ON ASLP, TO APPEAR, 2011 1 Continuous ...

    mi.eng.cam.ac.uk/~sjy/papers/yuyo11.pdf
    20 Feb 2018: This mixed excitation model has been shown to give significant improvements in the quality of the synthesized speech [24]. ... 63.5% 36.5% Male. CF-HMM. 75.5% 24.5%. 0% 25% 50% 75% 100%. Female.
  26. main.dvi

    mi.eng.cam.ac.uk/~sjy/papers/youn07
    20 Feb 2018: Hence, an iterative algorithm can be implemented which repeatedly scans through the vocabulary, testing each word to see if moving it to some other class would increase the likelihood [24]. ... The d p nuisance dimensions are modelled by a
  27. 20 Feb 2018: To obtain a closed-form solution of (24), the policy π must be differentiable with respect to θ. ... 10. To lower the variance of the estimate of the gradient, a constant baseline, B, can be introduced into (24) without introducing any bias [22].
  28. 20 Feb 2018: An advantage of this sparsification approach is that it enables non-positive definite kernel functions to be used in the approximation, for example see [24]. ... It has already been shown that active learning has the potential to lead to faster learning [24]
  29. 20 Feb 2018: Computer Speech & Language, 24(4):562–588, 2010. Y. Tokuda, T. Yoshimura, T. ... Computer Speech and Language, 24(2):150–174, 2010.
  30. crosseval_diff-reward2b.ps

    mi.eng.cam.ac.uk/~sjy/papers/kgjm10.pdf
    20 Feb 2018: Yu. 2009. The Hidden Information State model: a practical framework for POMDP-based spoken dialogue management. Computer Speech and Language, 24(2):150–174.
  31. PHONETIC AND GRAPHEMIC SYSTEMS FOR MULTI-GENRE BROADCAST TRANSCRIPTION …

    mi.eng.cam.ac.uk/~mjfg/ALTA/publications/ICASSP2018_YuWang.pdf
    12 Sep 2018: [23] L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996. [24] O. Siohan, B. ... IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(8):1438–1449, 2016. Introduction. Graphemic English systems.
  32. 3 Jul 2018: shows that the CEDM learns to address a relation in up to 24.5% of all dialogues for r = 1.0. ... Computer Speech & Language, 24(2):150–174. Steve J. Young, Milica Gašić, Blaise Thomson, and Jason D.
  33. 20 Feb 2018: 24, no. 4, Oct. 2010. [16] S. J. Young, G. Evermann, M.
  34. 20 Feb 2018: The feature set includes 24 spectral coefficients, log F0 and 5 aperiodic component features.
  35. 20 Feb 2018: In Proceedings of ACL, 2017. [24] Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. ... Computer Speech & Language, 24(2):150–174, 2010. [53] Steve Young, Milica
  36. 20 Feb 2018: 564 B. Thomson, S. Young / Computer Speech and Language 24 (2010) 562–588. ... B. Thomson, S. Young / Computer Speech and Language 24 (2010) 562–588 565.
  37. yeyo06.dvi

    mi.eng.cam.ac.uk/~sjy/papers/yeyo06.pdf
    20 Feb 2018: g^(i)_{jq} = Σ_{t=1}^{T} v^(t)_{ii} d^(t)_{jq}, j, q = 1, …, (d−1) (24). ... Markel, “Distance measures for speech processing”, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-24, no. 5, pp. 380–391, October 1976.
  38. 20 Feb 2018: 24, no. 4, pp. 562–588, 2010. [3] R. Sutton and A.
  39. 20 Feb 2018: The COMMUNICATOR systems, in contrast, only request between 24% and 43% of the unknown slots in each state.
  40. An Expressive Text-Driven 3D Talking Head

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2013-SIGGRAPH-3D-expressive-head.pdf
    13 Mar 2018: 2005. Expressive speech-driven facial animation. ACM TOG 24, 4, 1283–1302. WANG, L., HAN, W., SOONG, F., AND HUO, Q.
  41. Uncertain RanSaC — Ben Tordoff and Roberto Cipolla, Department of…

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2005-MVA-Tordoff.pdf
    13 Mar 2018: Comm. ACM, 24(6):381–395, 1981. [5] G.H. Golub and C.F. Van Loan, editors. ... Int. Journal of Computer Vision, 24(3):271–300, September 1997. [16] G. Xu and Z.
  42. 91_20090306_170604

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2009-MVA-Mavaddat.pdf
    13 Mar 2018: 95. Table 2: Feature definitions. Features 1–24: Differences of mean and standard deviation features based on Yuille and Chen box features. ... Features 24–82: Differences of mean and standard deviation features of 18 blocks, denoted as ‘Extended
  43. 15 Jun 2018: The acoustic models are trained on 108.6 hours of BULATS test data (Gujarati L1 speakers) using the HTK v3.5 toolkit [24, 25]. A Kneser-Ney trigram language model is trained ... INTERSPEECH, 2015. [24] S. Young et al., The HTK Book (for HTK Version 3.4.1).
  44. Photo-Realistic Expressive Text to Talking Head Synthesis Vincent…

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2013-Interspeech-EVTTS.pdf
    13 Mar 2018: [6] Cao, Y., Tien, W., Faloutsos, P. and Pighin, F., “Expressive speech-driven facial animation”, ACM TOG, 24(4):1283–1302, 2005.
  45. 20 Feb 2018: 24]. 3: Available at http://mi.eng.cam.ac.uk/˜farm2/emphasis. 4: Cohen’s Kappa cannot be used here because the phrases are not distinct elements. ... Interspeech, 2010, pp. 410–413. [24] S. Young, G. Evermann, M. Gales, T.
  46. 20 Feb 2018: partition explicitly records the fact that x = a and the existing partition is updated to record the fact that x = ā [24].
  47. Boosted Manifold Principal Angles for Image Set-Based Recognition…

    mi.eng.cam.ac.uk/~cipolla/publications/article/2007-PR-Kim.pdf
    13 Mar 2018: recognition from face motion manifolds. Image and Vision Computing, 24(5), 2006. (in press). ... [24] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification.
  48. An Expressive Text-Driven 3D Talking Head

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2013-Siggraph-Expressive-3D-VTTS.pdf
    13 Mar 2018: 2005. Expressive speech-driven facial animation. ACM TOG 24, 4, 1283–1302. WANG, L., HAN, W., SOONG, F., AND HUO, Q.
  49. 20 Feb 2018: [Thomson and Young 2010] Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech and Language, 24:562–588.
  50. Efficiently Combining Contour and Texture Cues for Object Recognition…

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/2008-BMVC-Shotton.pdf
    13 Mar 2018: Puzicha. Shape matching and object recognition using shape contexts. PAMI, 24(4):509–522, 2002. ... PAMI, 24(5), 2002. [4] P. Dollár, Z. Tu, H. Tao, and S.
  51. Camera calibration from vanishing points in images of architectural…

    mi.eng.cam.ac.uk/~cipolla/publications/inproceedings/1999-BMVC-photobuilder-copy.pdf
    13 Mar 2018: scaling parameters, i. In particular: [u1 u2 u3 … u4; v1 v2 v3 … v4; 1 1 1 … 1] = [p11 p12 p13 p14; p21 p22 p23 p24; p31 p32 p33 p34] …
