Search Funnelback University
Search powered by Funnelback
141–160 of 1,000 search results for KaKaoTalk:ZA31 24 24 |u:mi.eng.cam.ac.uk, where 0 results match all words and 1,000 match some words.
Results that match 1 of 2 words

  2. sigdial11_sdc10-Feb27-V2

    mi.eng.cam.ac.uk/~sjy/papers/bbch11.pdf
    20 Feb 2018: 6% 24.6% 14.7% 9.6%. ...

      | Length (s) | Turns/call | Words/turn
      SYS1 control | 155 | 18.29 | 2.87 (2.84)
      SYS1 live | 111 | 16.24 | 2.15 (1.03)
      SYS2 control | 147 | 17.57 | 1.63
  3. main.dvi

    mi.eng.cam.ac.uk/~sjy/papers/youn07
    20 Feb 2018: Hence, an iterative algorithm can be implemented which repeatedly scans through the vocabulary, testing each word to see if moving it to some other class would increase the likelihood [24]. ... The d_p nuisance dimensions are modelled by a
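The snippet above describes the classic word-exchange algorithm for class-based language models. A minimal sketch, assuming the standard class-bigram objective with word-unigram terms dropped as constants and left-marginal class counts standing in for both marginals; all function and variable names are illustrative, not from the paper:

```python
from collections import Counter
import math

def class_bigram_ll(bigrams, assign):
    """Class-bigram log-likelihood up to constant word-unigram terms:
    sum_{g,h} N(g,h) log N(g,h) - 2 * sum_g N(g) log N(g)."""
    cb, cu = Counter(), Counter()
    for (u, v), n in bigrams.items():
        cb[(assign[u], assign[v])] += n
        cu[assign[u]] += n
    return (sum(n * math.log(n) for n in cb.values())
            - 2 * sum(n * math.log(n) for n in cu.values()))

def exchange(bigrams, init_assign, n_classes, max_sweeps=10):
    """Repeatedly scan the vocabulary, moving each word to whichever
    class most increases the likelihood, until no move helps."""
    assign = dict(init_assign)
    for _ in range(max_sweeps):
        moved = False
        for w in sorted(assign):
            old = assign[w]
            best_c, best_ll = old, class_bigram_ll(bigrams, assign)
            for c in range(n_classes):
                if c == old:
                    continue
                assign[w] = c  # tentatively move w to class c
                ll = class_bigram_ll(bigrams, assign)
                if ll > best_ll + 1e-12:
                    best_c, best_ll = c, ll
            assign[w] = best_c  # keep the best class found
            moved = moved or (best_c != old)
        if not moved:
            break
    return assign
```

Real implementations update the objective incrementally per candidate move rather than recomputing it, which is what makes the full vocabulary sweep tractable.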
  4. 20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [5] B. Thomson and S. Young, “Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems,” Computer Speech and Language, vol. ... 24, no. 4, pp. 562–588, 2010. [6] M. Gašić, C. Breslin, M.
  5. 20 Feb 2018: S. Young et al. / Computer Speech and Language 24 (2010) 150–174 151. ... 152 S. Young et al. / Computer Speech and Language 24 (2010) 150–174.
  6. Uncertainty management for on-line optimisation of a…

    mi.eng.cam.ac.uk/~sjy/papers/dgcg11.pdf
    20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [6] W. Eckert, E. Levin, and R. ... [9] O. Pietquin, M. Geist, S. Chandramohan, and H. Frezza-Buet, “Sample-Efficient Batch Reinforcement Learning for Dialogue Management Optimization,” ACM Transactions on Speech
  7. 29 Sep 2016: 24 of 67. S2S: Generative Models [5, 6]. • Consider two sequences (lengths L, T): input: x_{1:T} = {x_1, x_2, …}, ... 47 of 67. ASR: Sequence Training [24]. • Cross-Entropy using fixed alignment standard criterion (RNN).
  8. 12 Jul 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  9. 20 Feb 2018: 564 B. Thomson, S. Young / Computer Speech and Language 24 (2010) 562–588. ... B. Thomson, S. Young / Computer Speech and Language 24 (2010) 562–588 565.
  10. Improving Cascaded Systems in Spoken Language Processing

    mi.eng.cam.ac.uk/~mjfg/thesis_ytl28.pdf
    5 May 2023: z_t = σ_sigmoid(W_zf x_t + W_zr h_{t−1} + b_z) (2.24). r_t = σ_sigmoid(W_rf x_t + W_rr h_{t−1} + b_r)
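The line above is the update- and reset-gate pair of a GRU. A minimal NumPy sketch of just those two gates, keeping the snippet's W_zf/W_zr/W_rf/W_rr naming; the shapes and the helper name are my assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_gates(x_t, h_prev, W_zf, W_zr, b_z, W_rf, W_rr, b_r):
    """Update gate z_t and reset gate r_t of a GRU, Eq. (2.24)-style.
    x_t: input at time t; h_prev: hidden state h_{t-1}."""
    z_t = sigmoid(W_zf @ x_t + W_zr @ h_prev + b_z)  # update gate
    r_t = sigmoid(W_rf @ x_t + W_rr @ h_prev + b_r)  # reset gate
    return z_t, r_t
```

Both gates squash their pre-activations through the sigmoid, so every component lies in (0, 1) and can act as a soft switch on the hidden state.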
  11. System Combination with Log-linear Models

    mi.eng.cam.ac.uk/~mjfg/icassp16_yang.pdf
    5 Apr 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  12. SSVM_LVCSR_ASRU11.dvi

    mi.eng.cam.ac.uk/~mjfg/sxz20_ASRU11.pdf
    19 Jan 2012: (23)), finds the most violated constraint (Eq. (24)), and adds it to the working set. ... Parallelising the loop for Eq. (24) will lead to a substantial speed-up in the number of threads.
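The snippet describes the cutting-plane loop of structural-SVM training, where "finds the most violated constraint" is loss-augmented inference. A toy illustration of just that selection step, under a simple enumerable label set; all names (`most_violated`, `feat`, `loss`) are hypothetical, not from the paper:

```python
import numpy as np

def most_violated(w, x, y_true, labels, feat, loss):
    """Loss-augmented inference: argmax_y loss(y_true, y) + w . feat(x, y).
    The returned label defines the constraint added to the working set."""
    scores = {y: loss(y_true, y) + w @ feat(x, y) for y in labels}
    return max(scores, key=scores.get)
```

In a full cutting-plane solver this call sits inside a loop that re-solves the quadratic program over the working set after each addition; that per-example loop is the one the snippet says can be parallelised.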
  13. paper.dvi

    mi.eng.cam.ac.uk/~mjfg/yw293_ASRU11.pdf
    19 Jan 2012: and J^{(m)}_{xδ} = ∂g/∂x_{tδ} evaluated at (μ^{(m)}_x, μ_l, μ_n) (24). Thus the model parameters are compensated by ... For example, using the initial noise estimate, RVTSJ performance varied from 27.5% to 31.7%, while the performance of ML estimated noise only varied from 24.3%
  14. acl2010.dvi

    mi.eng.cam.ac.uk/~sjy/papers/gjkm10.pdf
    20 Feb 2018: Computer Speech and Language, 24(2):150–174.
  15. 13 Jun 2013: Better performance could be achieved by gradually increasing C. Equation (16) is also known as the training criterion of the structural SVM [23, 24]. ... ACM, 2004. [24] Shi-Xiong Zhang and Mark Gales, “Structured SVMs for automatic speech
  16. 20 Feb 2018:

      SFR — Name | Reward | Success | #Turns
      best prior | 8.66 ± 0.35 | 85.40 ± 2.19 | 8.32 ± 0.20
      adapted | 9.62 ± 0.30 | 89.60 ± 1.90 | 8.24 ± 0.19

    ... The system was deployed in a telephone-based set-up, with subjects recruited via Amazon MTurk, and a recurrent neural network model was used
  17. 8 Sep 2010: Speech Lang., vol. 24, no. 4, pp. 648–662, 2010. [8] B. Taskar, “Learning structured prediction models: a large margin approach,” Ph.D.
  18. 20 Feb 2018: Computer Speech and Language, 24:562–588. Jason D. Williams and Steve Young. 2007.
  19. Online_ASRU11.dvi

    mi.eng.cam.ac.uk/~sjy/papers/gjty11.pdf
    20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [9] B. Thomson and S. ... 24, no. 4, pp. 562–588, 2010. [10] M. Gašić, S. Keizer, F.
  20. CU-HTK April 2002 Switchboard System Phil Woodland, Gunnar Evermann,…

    mi.eng.cam.ac.uk/reports/svr-ftp/woodland_rt02.pdf
    5 Jun 2002: [WER plot; only axis ticks survived extraction] ... Cambridge University Engineering Department. Rich Transcription Workshop 2002. Woodland, Evermann, Gales, Hain, Liu, Moore, Povey & Wang: CU-HTK April 2002 Switchboard system.
  21. 20 Feb 2018: 24, no. 4, pp. 562–588, 2010. [13] M. Gašić, C. Breslin, M. Henderson, D. Kim, M. Szummer, B. Thomson, P. Tsiakoulis, and S. Young, “POMDP-based dialogue manager adaptation
