Search Funnelback University
151 - 200 of 1,000 search results for katalk:za33 24 |u:mi.eng.cam.ac.uk where 0 match all words and 1,000 match some words.
  1. Results that match 1 of 2 words

  2. 20 Feb 2018: 1.2% 2.0% Request 17.4% 24.5% 18.4% 24.4%. ... 24, no. 4, pp. 562–588, 2010. [22] J. Peters and S. Schaal, “Natural Actor-Critic,” Neurocomputing, vol.
  3. 20 Feb 2018: on Man corpora) a 91.40 90.17 90.24. 90.20 Auto (Google MT) 90.81 90.77 87.72 89.223. ... 24, no. 2, pp. 150–174, April 2010. [8] P. Koehn, H.
  4. 20 Feb 2018: 0.24 0.22 0.20 0.18 0.16 Log-likelihood per micro-turn. ... 24, no. 4, pp. 562–588, 2010. [14] SpaceBook. EC FP7/2011-16, grant number 270019.
  5. acl2010.dvi

    mi.eng.cam.ac.uk/~sjy/papers/gjkm10.pdf
    20 Feb 2018: Computer Speech and Language, 24(2):150–174.
  6. tech.dvi

    mi.eng.cam.ac.uk/~sjy/papers/bghk13.pdf
    20 Feb 2018: 24, pp. 562–588, 2010. [3] G. Aist, J. Allen, E. Campana, C. ... [24] M. Henderson, M. Gašić, B. Thomson, P. Tsiakoulis, K. Yu, and S.
  7. 8 Sep 2010: Speech Lang., vol. 24, no. 4, pp. 648–662, 2010. [8] B. Taskar, “Learning structured prediction models: a large margin approach,” Ph.D.
  8. sigdial11_sdc10-Feb27-V2

    mi.eng.cam.ac.uk/~sjy/papers/bbch11.pdf
    20 Feb 2018: 6% 24.6% 14.7% 9.6%. ... Length (s) Turns/call Words/turn. SYS1 control 155 18.29 2.87 (2.84) SYS1 live 111 16.24 2.15 (1.03) SYS2 control 147 17.57 1.63
  9. 20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [5] B. Thomson and S. Young, “Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems,” Computer Speech and Language, vol. ... 24, no. 4, pp. 562–588, 2010. [6] M. Gašić, C. Breslin, M.
  10. 20 Feb 2018: Computer Speech and Language, 24:562–588. Jason D. Williams and Steve Young. 2007.
  11. 20 Feb 2018: In most cases, a data-driven approach is followed, either by detecting/annotating emphasized words in existing corpora [23, 10] or by collecting speech corpora specifically designed for emphasis modeling [24]. ... Appointment Booking Task
  12. Uncertainty management for on-line optimisation of a…

    mi.eng.cam.ac.uk/~sjy/papers/dgcg11.pdf
    20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [6] W. Eckert, E. Levin, and R. ... [9] O. Pietquin, M. Geist, S. Chandramohan, and H. Frezza-Buet, “Sample-Efficient Batch Reinforcement Learning for Dialogue Management Optimization,” ACM Transactions on Speech
  13. 5 Apr 2016: Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19–24, 2015, 2015, pp. ... IEEE, 2015, pp. 4315–4319. [24] Mark J. F. Gales, “Cluster adaptive training of hidden Markov models,” Speech and Audio Processing, IEEE
  14. Learning Domain-Independent Dialogue Policies via…

    mi.eng.cam.ac.uk/~sjy/papers/wsws15.pdf
    20 Feb 2018: Computer Speech and Language, 24(4):562–588. Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the Dialog State Tracking Challenge: On the believability of observed information.
  15. 20 Feb 2018: 24, pp. 562–588, 2010. [20] M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Computer Science Review, vol. ... abs/1412.2306, 2014. [24] G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L.
  16. 20 Feb 2018: When errors are correlated belief tracking is less accurate because it tends to over-estimate alternatives in the N-best list [24]. ... 24, no. 4, pp. 562–588, 2010. 17. T. Minka, “Expectation Propagation for Approximate Bayesian Inference,” in
  17. is-05-hvs6_final

    mi.eng.cam.ac.uk/~sjy/papers/seyo05.pdf
    20 Feb 2018: 52 class n-gram 26.3 25.0 24.9. HVS_52 21.7 20.4 20.1. Table 1: Perplexity for models of varied stack depths trained for 250 iterations.
  18. poyosp08

    mi.eng.cam.ac.uk/~sjy/papers/dpyo08.pdf
    20 Feb 2018: T - R 0.54 0.22 0.24. R - O 0.52 0.31 0.17.
  19. 29 Sep 2016: 24 of 67. S2S: Generative Models [5, 6]. • Consider two sequences of lengths L and T: input: x1:T = {x1, x2, . . . , ... 47 of 67. ASR: Sequence Training [24]. • Cross-Entropy using fixed alignment: standard criterion (RNN).
  20. LEARNING BETWEEN DIFFERENT TEACHER AND STUDENT MODELS IN ASR ...

    mi.eng.cam.ac.uk/~mjfg/ALTA/ASRU2019_TS.pdf
    20 Dec 2019: The derivatives of the per-frame observation log-likelihoods with respect to the parameters are [24]. ... Work in [24] suggests several methods to improve gradient descent training of a GMM.
  21. yokou

    mi.eng.cam.ac.uk/~sjy/papers/toyo09.pdf
    20 Feb 2018: where x_q^(d) = 2(c_q^(d) − ⟨c_q^(d)⟩)(v(c_q) − v(c)) p_v^(d). (24). 4.3.
  22. 12 Apr 2022: default: cout << "Trading error: trading system failure." << endl; exit(-1); } } ... 24.
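    The snippet above is a fragment of a C++ switch statement. A minimal self-contained reconstruction follows; the enclosing function, the Status enum, and the other case labels are assumptions for illustration, not part of the indexed source. Only the default branch is taken from the snippet.

        #include <cstdlib>
        #include <iostream>
        using namespace std;

        // Hypothetical context for the indexed fragment. Only the default
        // branch comes from the snippet; the enum and other cases are assumed.
        enum Status { OK, RETRY };

        void checkStatus(Status status) {
            switch (status) {
            case OK:
                break;                       // normal operation, nothing to report
            case RETRY:
                cout << "Trading warning: retrying." << endl;
                break;
            default:                         // the branch shown in the snippet
                cout << "Trading error: trading system failure." << endl;
                exit(-1);                    // abort on an unrecognised status
            }
        }

        int main() {
            checkStatus(OK);                 // usage example: succeeds silently
            return 0;
        }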
  23. Online_ASRU11.dvi

    mi.eng.cam.ac.uk/~sjy/papers/gjty11.pdf
    20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [9] B. Thomson and S. ... 24, no. 4, pp. 562–588, 2010. [10] M. Gašić, S. Keizer, F.
  24. 20 Feb 2018: 24, no. 4, pp. 562–588, 2010. [13] M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young, “POMDP-based dialogue manager adaptation
  25. 12 Jul 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  26. JOINT MODELLING OF VOICING LABEL AND CONTINUOUS F0 FOR ...

    mi.eng.cam.ac.uk/~sjy/papers/yuyo11a.pdf
    20 Feb 2018: Mixed excitation using STRAIGHT was employed [12]. The speech features used were 24 Mel-Cepstral spectral coefficients, the logarithm of F0, and aperiodic components in five frequency bands (0 to 1, 1
  27. System Combination with Log-linear Models

    mi.eng.cam.ac.uk/~mjfg/icassp16_yang.pdf
    5 Apr 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  28. 20 Feb 2018: Sec. 2. the reward [20, 21, 22, 23] by using the PARADISE framework [24]. ... 2356–2361. [24] M. Walker, D. J. Litman, C. A. Kamm, and A.
  29. Low-Resource Speech Recognition and Keyword-Spotting

    mi.eng.cam.ac.uk/~mjfg/SPECOM_2017.pdf
    29 Nov 2017: 23/63. Stimulated Systems. /ey/ /em/ /sil/ /sh/ /ow/ /ay/. 24/63. Stimulated Network Training.
  30. 13 Jun 2013: Better performance could be achieved by gradually increasing C. Equation (16) is also known as the training criterion of the structural SVM [23, 24]. ... ACM, 2004. [24] Shi-Xiong Zhang and Mark Gales, “Structured SVMs for automatic speech
  31. slides.dvi

    mi.eng.cam.ac.uk/~mjfg/Bilbao14/talk.pdf
    25 Jun 2014: Cambridge University Engineering Department, eNTERFACE June 2014, 24. Controllable and Adaptable Statistical Parametric Speech Synthesis Systems. ... Integrated Expressive Speech Training [24]. Training. Expressive State Prediction. Extraction. Acoustic
  32. Tsiakoulis_Pirros_1293

    mi.eng.cam.ac.uk/~sjy/papers/tghp12.pdf
    20 Feb 2018: Proc. ICASSP, Taipei, Taiwan. Thomson, B. & Young, S. (2010) “Bayesian Update of Dialogue State: A POMDP framework for spoken dialogue systems.” Computer Speech and Language 24(4):562–588.
  33. 20 Feb 2018: 24, no. 4, pp. 562–588, 2010. [2] R. Sutton and A.
  34. main.dvi

    mi.eng.cam.ac.uk/~sjy/papers/youn07
    20 Feb 2018: Hence, an iterative algorithm can be implemented which repeatedly scans through the vocabulary, testing each word to see if moving it to some other class would increase the likelihood [24]. ... The d − p nuisance dimensions are modelled by a
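    The snippet above describes the exchange-style clustering loop used for class-based language models. A skeleton of that loop is sketched below, under the assumption that a likelihoodGain helper (stubbed here, not from the indexed paper) scores each candidate move; reference [24] in the snippet is the source for the actual algorithm.

        #include <vector>

        // Stub: a real implementation would compute the change in training-data
        // log-likelihood from moving `word` between the two classes.
        double likelihoodGain(int word, int fromClass, int toClass) {
            (void)word; (void)fromClass; (void)toClass;
            return 0.0;
        }

        // Sketch of the loop described in the snippet: repeatedly scan the
        // vocabulary and move each word to the class that most increases the
        // likelihood, until a full pass produces no improving move.
        void exchangeClustering(std::vector<int>& classOf, int numClasses) {
            bool changed = true;
            while (changed) {
                changed = false;
                for (int w = 0; w < (int)classOf.size(); ++w) {
                    int best = classOf[w];
                    double bestGain = 0.0;
                    for (int c = 0; c < numClasses; ++c) {
                        double g = likelihoodGain(w, classOf[w], c);
                        if (g > bestGain) { bestGain = g; best = c; }
                    }
                    if (best != classOf[w]) { classOf[w] = best; changed = true; }
                }
            }
        }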
  35. paper.dvi

    mi.eng.cam.ac.uk/~mjfg/yw293_ASRU11.pdf
    19 Jan 2012: and J_xδ^(m) = ∂g/∂x_tδ evaluated at (μ_xe^(m), μ_l, μ_n). (24). Thus the model parameters are compensated by. ... For example, using the initial noise estimate, RVTSJ performance varied from 27.5% to 31.7%, while the performance of ML-estimated noise only varied from 24.3%
  36. 20 Feb 2018: 6. RELATED WORK. In machine learning in general much research has looked at adaptation of statistical models [21, 22, 23] however research into adaptation of SDS components to new domains [24, 25, 26, ... 24, pp. 562–588, 2010. [16] Milica Gašić and
  37. 20 Feb 2018: F1 ICE. Slot4 95.29% 90.89% 95.72% 93.24% 0.478 89.92% 74.73% 61.56% 67.51% 0.743. ... Task - - - - - 97.12% 83.24% 64.93% 72.95% 0.175.
  38. 20 Feb 2018: scr-10% 2.24 2.03 2.00 1.92. p < 0.05, p < 0.005. Table 2: Human evaluation for utterance quality in two domains.
  39. 11 Mar 2016: In order to solve this problem, recently there has been increasing research interest in deriving efficient parallel training algorithms for RNNLMs [22, 23, 24, 25]. ... 9.1 2.9 117.85. GRNN 5472.1 170.0 24.3 7.9 2.4 117.6.
  40. SSVM_LVCSR_ASRU11.dvi

    mi.eng.cam.ac.uk/~mjfg/sxz20_ASRU11.pdf
    19 Jan 2012: 23)), finds the most violated constraint (Eq. (24)), and adds it to the working set. ... Paralleling the loop for Eq. (24) will lead to a substantial speed-up in the number of threads.
  41. 20 Feb 2018: 24, no. 4, pp. 562–588, Oct. 2010. [7] Jost Schatzmann, Statistical user and error modelling for spoken dialogue systems, Ph.D.
  42. draft21.dvi

    mi.eng.cam.ac.uk/~mjfg/ASRU13.pdf
    7 Nov 2013: procedure described in [24]. Unilingual and multilingual AMs were each built from a flat start. ... [24] J. Park et al., “The Efficient Incorporation of MLP Fea-.
  43. DEVELOPMENT OF THE 2003 CU-HTK CONVERSATIONAL TELEPHONE…

    mi.eng.cam.ac.uk/reports/svr-ftp/evermann_icassp2004.pdf
    27 May 2004: purpose WER. P1 supervision for VTLN 34.2. P2 supervision for MLLR 28.4. P3 lattice generation 24.8. ... System (P4) A B C D. SAT HLDA SPron non-HLDA. 23.0 23.6 23.4 24.8.
  44. paper.dvi

    mi.eng.cam.ac.uk/~mjfg/sxz20_inter11.pdf
    19 Jan 2012: Speech Lang., vol. 24, no. 4, pp. 648–662, 2010. [6] B. Taskar, “Learning structured prediction models: a large margin approach,” Ph.D.
  45. 20 Feb 2018: In this section, a Gaussian process-based reward estimator is described which uses active learning to limit intrusive requests for feedback and a noise model to mitigate the effects of inaccurate feedback [24]. ... 24. Figure 13: The number of times each
  46. lect1.dvi

    mi.eng.cam.ac.uk/~mjfg/local/4F10/lect1.pdf
    10 Nov 2015: for minimum error with generative models. 24 Engineering Part IIB: Module 4F10 Statistical Pattern Processing.
  47. paper.dvi

    mi.eng.cam.ac.uk/reports/svr-ftp/liu_icassp2004.pdf
    29 May 2004: Gauss 24.0 20.7 20.7 17.7 16.0. WER (%) 35.3 35.1 35.2 35.3 35.5. ... The 16 component system was then iteratively split until the number of components was 24.
  48. 20 Feb 2018: |U| ≈ 10^3 and |M| ≈ 10^3. (24). Goals are composed of NC constraints taken from the set of constraints C, and NR requests taken from the set of requests R.
  49. Unsupervised Language Model Adaptation for Mandarin…

    mi.eng.cam.ac.uk/reports/svr-ftp/mrva_icslp06.pdf
    20 Jan 2007: Test set baseline N-gram adapt fixed weights dynamic weights. dev05bcm (BC) 25.6 24.5. eval04 (BN) 14.7 14.8. dev04f (BN) 6.4 6.5. ... P3 27.4 25.6 24.5 24.3 24.3. Table 3: P2, P3 stage dev05bcm CERs.
  50. 3_2_ransac

    mi.eng.cam.ac.uk/~cipolla/lectures/4F12/Slides/4F12-ImageStitching.pdf
    27 Oct 2020: x̃2 = [ R2 | 0 ] X̃ = R2X (A.24).
  51. 20 Feb 2018: The Pietquin model. Train Test. Precision Recall Precision Recall. BIG 19.74 24.11 17.83 21.66. LEV 43.11 35.07 37.98 31.57. PTQ 45.00 36.35 40.16
