151 - 200 of 1,000 search results for katalk:za33 24 |u:mi.eng.cam.ac.uk where 0 match all words and 1,000 match some words.
  1. Results that match 1 of 2 words

  2. acl2010.dvi

    mi.eng.cam.ac.uk/~sjy/papers/gjkm10.pdf
    20 Feb 2018: Computer Speech and Language, 24(2):150–174.
  3. 20 Feb 2018: [figure axis residue: ticks 0.16–0.24, label "Log-likelihood per micro-turn"] ... 24, no. 4, pp. 562–588, 2010. [14] SpaceBook. EC FP7/2011-16, grant number 270019.
  4. 20 Feb 2018: on Man corpora) 91.40 90.17 90.24 90.20; Auto (Google MT) 90.81 90.77 87.72 89.22. ... 24, no. 2, pp. 150–174, April 2010. [8] P. Koehn, H.
  5. tech.dvi

    mi.eng.cam.ac.uk/~sjy/papers/bghk13.pdf
    20 Feb 2018: 24, pp. 562–588, 2010. [3] G. Aist, J. Allen, E. Campana, C. ... 24] M. Henderson, M. Gašić, B. Thomson, P. Tsiakoulis, K. Yu, and S.
  6. LEARNING BETWEEN DIFFERENT TEACHER AND STUDENT MODELS IN ASR ...

    mi.eng.cam.ac.uk/~mjfg/ALTA/ASRU2019_TS.pdf
    20 Dec 2019: The derivatives of the per-frame observation log-likelihoods with respect to the parameters are [24]. ... Work in [24] suggests several methods to improve gradient descent training of a GMM.
  7. 8 Sep 2010: Speech Lang., vol. 24, no. 4, pp. 648–662, 2010. [8] B. Taskar, “Learning structured prediction models: a large margin approach,” Ph.D.
  8. 20 Feb 2018: Computer Speech and Language, 24:562–588. Jason D. Williams and Steve Young. 2007.
  9. sigdial11_sdc10-Feb27-V2

    mi.eng.cam.ac.uk/~sjy/papers/bbch11.pdf
    20 Feb 2018: 6% 24.6% 14.7% 9.6%. ... Length (s) Turns/call Words/turn: SYS1 control 155 18.29 2.87 (2.84); SYS1 live 111 16.24 2.15 (1.03); SYS2 control 147 17.57 1.63
  10. 20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [5] B. Thomson and S. Young, “Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems,” Computer Speech and Language, vol. ... 24, no. 4, pp. 562–588, 2010. [6] M. Gašić, C. Breslin, M.
  11. Improving Cascaded Systems in Spoken Language Processing

    mi.eng.cam.ac.uk/~mjfg/thesis_ytl28.pdf
    5 May 2023: z_t = σ_sigmoid(W_zf x_t + W_zr h_{t−1} + b_z) (2.24). r_t = σ_sigmoid(W_rf x_t + W_rr h_{t−1} + b_
  12. 20 Feb 2018: In most cases, a data-driven approach is followed, either by detecting/annotating emphasized words in existing corpora [23, 10] or by collecting speech corpora specifically designed for emphasis modeling [24]. ... Appointment Booking Task
  13. 5 Apr 2016: Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, April 19-24, 2015, 2015, pp. ... IEEE, 2015, pp. 4315–4319. [24] Mark JF Gales, “Cluster adaptive training of hidden Markov models,” Speech and Audio Processing, IEEE
  14. Uncertainty management for on-line optimisation of a…

    mi.eng.cam.ac.uk/~sjy/papers/dgcg11.pdf
    20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [6] W. Eckert, E. Levin, and R. ... [9] O. Pietquin, M. Geist, S. Chandramohan, and H. Frezza-Buet, “Sample-Efficient Batch Reinforcement Learning for Dialogue Management Optimization,” ACM Transactions on Speech
  15. Learning Domain-Independent Dialogue Policies via…

    mi.eng.cam.ac.uk/~sjy/papers/wsws15.pdf
    20 Feb 2018: Computer Speech and Language, 24(4):562–588. Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the Dialog State Tracking Challenge: On the believability of observed information.
  16. 20 Feb 2018: 24, pp. 562–588, 2010. [20] M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Computer Science Review, vol. ... abs/1412.2306, 2014. [24] G. Mesnil, Y. Dauphin, K. Yao, Y. Bengio, L.
  17. 20 Feb 2018: When errors are correlated belief tracking is less accurate because it tends to over-estimate alternatives in the N-best list [24]. ... 24, no. 4, pp. 562–588, 2010. [17] T. Minka, “Expectation Propagation for Approximate Bayesian Inference,” in
  18. 29 Sep 2016: 24 of 67. S2S: Generative Models [5, 6]. • Consider two sequences (lengths L, T): input: x_{1:T} = {x_1, x_2, …, ... 47 of 67. ASR: Sequence Training [24]. • Cross-Entropy using fixed alignment standard criterion (RNN).
  19. is-05-hvs6_final

    mi.eng.cam.ac.uk/~sjy/papers/seyo05.pdf
    20 Feb 2018: 52 class n-gram 26.3 25.0 24.9 HVS_52 21.7 20.4 20.1. Table 1: Perplexity for models of varied stack depths trained for 250 iterations.
  20. BN-E Experiments in Cambridge Do Yeong Kim, Mark Gales, ...

    mi.eng.cam.ac.uk/research/projects/EARS/pubs/kim_sttmar05.pdf
    12 Apr 2005: 302k 9k+ 16.0 13.9 24.8 –; MLE 415k 9k+ 16.0 13.5 24.3 –; 398k 12k+ 16.1 13.6 24.5 –; 302k 9k+ 13.2 11.2 ... dev04 eval03 dev04f: MLE MPron 16.0 13.6 24.5; SPron 15.6 13.5 24.2; MPE MPron 12.9 11.1 19.1; SPron 12.7 10.8 18.8.
  21. 12 Apr 2022: default: cout << "Trading error: trading system failure." << endl; exit(-1); }} 24.
  22. poyosp08

    mi.eng.cam.ac.uk/~sjy/papers/dpyo08.pdf
    20 Feb 2018: T - R 0.54 0.22 0.24. R - O 0.52 0.31 0.17.
  23. yokou

    mi.eng.cam.ac.uk/~sjy/papers/toyo09.pdf
    20 Feb 2018: where x_q^(d) is given by Eq. (24) [equation garbled in PDF extraction]. 4.3.
  24. main.dvi

    mi.eng.cam.ac.uk/~sjy/papers/youn07
    20 Feb 2018: Hence, an iterative algorithm can be implemented which repeatedly scans through the vocabulary, testing each word to see if moving it to some other class would increase the likelihood [24]. ... The d − p nuisance dimensions are modelled by a
  25. 12 Jul 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  26. 20 Feb 2018: 24, no. 4, pp. 562–588, 2010. [13] M Gašić, C Breslin, M Henderson, D Kim, M Szummer, B Thomson, P Tsiakoulis, and S Young, “POMDP-based dialogue manager adaptation
  27. Online_ASRU11.dvi

    mi.eng.cam.ac.uk/~sjy/papers/gjty11.pdf
    20 Feb 2018: 24, no. 2, pp. 150–174, 2010. [9] B. Thomson and S. ... 24, no. 4, pp. 562–588, 2010. [10] M. Gašić, S. Keizer, F.
  28. JOINT MODELLING OF VOICING LABEL AND CONTINUOUS F0 FOR ...

    mi.eng.cam.ac.uk/~sjy/papers/yuyo11a.pdf
    20 Feb 2018: Mixed excitation using STRAIGHT was employed [12]. The speech features used were 24 Mel-Cepstral spectral coefficients, the logarithm of F0, and aperiodic components in five frequency bands (0 to 1, 1
  29. 20 Feb 2018: S. Young et al. / Computer Speech and Language 24 (2010) 150–174 151. ... 152 S. Young et al. / Computer Speech and Language 24 (2010) 150–174.
  30. 9 Aug 2005: [no readable text; PDF extraction produced only garbled characters]
  31. System Combination with Log-linear Models

    mi.eng.cam.ac.uk/~mjfg/icassp16_yang.pdf
    5 Apr 2016: When the segment level features are used, the log-linear model parameters η̂ could be considered as phone dependent acoustic model scales [24]. ... In joint decoding, 2% relative WER performance gain was achieved over the hybrid system, from 11.24% to
  32. Low-Resource Speech Recognition and Keyword-Spotting

    mi.eng.cam.ac.uk/~mjfg/SPECOM_2017.pdf
    29 Nov 2017: 23/63. Stimulated Systems. /ey/ /em/ /sil/ /sh/ /ow/ /ay/. 24/63. Stimulated Network Training. •
  33. 13 Jun 2013: Better performance could be achieved by gradually increasing C. Equation (16) is also known as the training criterion of the structural SVM [23, 24]. ... ACM, 2004. [24] Shi-Xiong Zhang and Mark Gales, “Structured SVMs for automatic speech
  34. 20 Feb 2018: Sec. 2. the reward [20, 21, 22, 23] by using the PARADISE framework [24]. ... 2356–2361. [24] M. Walker, D. J. Litman, C. A. Kamm, and A.
  35. 20 Feb 2018: 564 B. Thomson, S. Young / Computer Speech and Language 24 (2010) 562–588. ... B. Thomson, S. Young / Computer Speech and Language 24 (2010) 562–588 565.
  36. 20 Feb 2018: The Knowledge Engineering Review, Vol. 00:0, 1–24. c 2006, Cambridge University PressDOI: 10.1017/S000000000000000 Printed in the United Kingdom.
  37. slides.dvi

    mi.eng.cam.ac.uk/~mjfg/Bilbao14/talk.pdf
    25 Jun 2014: Cambridge University Engineering Department, eNTERFACE June 2014, 24. Controllable and Adaptable Statistical Parametric Speech Synthesis Systems. ... Integrated Expressive Speech Training [24]. Training. Expressive State. Prediction. Extraction. Acoustic
  38. paper.dvi

    mi.eng.cam.ac.uk/~mjfg/yw293_ASRU11.pdf
    19 Jan 2012: and J_{xδ}^{(m)} = ∂g/∂x_{tδ} evaluated at (μ_{xe}^{(m)}, μ_l, μ_n) (24). Thus the model parameters are compensated by. ... For example, using the initial noise estimate, RVTSJ performance varied from 27.5% to 31.7%, while the performance of ML estimated noise only varied from 24.3%
  39. Tsiakoulis_Pirros_1293

    mi.eng.cam.ac.uk/~sjy/papers/tghp12.pdf
    20 Feb 2018: Proc. ICASSP, Taipei, Taiwan. Thomson, B. & Young, S. (2010) “Bayesian Update of Dialogue State: A POMDP framework for spoken dialogue systems.” Computer Speech and Language 24(4):562-588.
  40. 20 Feb 2018: 24, no. 4, pp. 562 – 588, 2010. [2] R. Sutton and A.
  41. SSVM_LVCSR_ASRU11.dvi

    mi.eng.cam.ac.uk/~mjfg/sxz20_ASRU11.pdf
    19 Jan 2012: 23)), finds the most violated constraint (Eq. (24)), and adds it to the working set. ... Paralleling the loop for Eq. (24) will lead to a substantial speed-up in the number of threads.
  42. 20 Feb 2018: scr-10% 2.24 2.03 2.00 1.92. p < 0.05, p < 0.005. Table 2: Human evaluation for utterance quality in two domains.
  43. 20 Feb 2018: SFR — Name Reward Success #Turns: best prior 8.66 0.35 85.40 2.19 8.32 0.20; adapted 9.62 0.30 89.60 1.90 8.24 0.19. ... The system was deployed in a telephone-based set-up, with subjects recruited via Amazon MTurk and a recurrent neural network model was used
  44. 20 Feb 2018: 6. RELATED WORK. In machine learning in general much research has looked at adaptation of statistical models [21, 22, 23] however research into adaptation of SDS components to new domains [24, 25, 26, ... 24, pp. 562–588, 2010. [16] Milica Gašić and
  45. CU-HTK April 2002 Switchboard System Phil Woodland, Gunnar Evermann,…

    mi.eng.cam.ac.uk/reports/svr-ftp/woodland_rt02.pdf
    5 Jun 2002: [figure axis residue: ticks 16–28 and 33–36.5] ... Cambridge University Engineering Department. Rich Transcription Workshop 2002, 24. Woodland, Evermann, Gales, Hain, Liu, Moore, Povey & Wang: CU-HTK April 2002 Switchboard system.
  46. 11 Mar 2016: In order to solve this problem, recently there has been increasing research interest in deriving efficient parallel training algorithms for RNNLMs [22, 23, 24, 25]. ... 9.1 2.9 117.85GRNN 5472.1 170.0 24.3 7.9 2.4 117.6.
  47. 20 Feb 2018: F1 ICE. Slot4 95.29% 90.89% 95.72% 93.24% 0.478 89.92% 74.73% 61.56% 67.51% 0.743. ... Task - - - - - 97.12% 83.24% 64.93% 72.95% 0.175.
  48. draft21.dvi

    mi.eng.cam.ac.uk/~mjfg/ASRU13.pdf
    7 Nov 2013: procedure described in [24]. Unilingual and multilingual AMs were. each built from a flat start. ... 24] J. Park et al., “The Efficient Incorporation of MLP Fea-.
  49. 20 Feb 2018: 24, no. 4, pp. 562–588, Oct. 2010. [7] Jost Schatzmann, Statistical user and error modelling for spoken dialogue systems, Ph.D.
  50. paper.dvi

    mi.eng.cam.ac.uk/~mjfg/sxz20_inter11.pdf
    19 Jan 2012: Speech Lang., vol. 24, no. 4, pp. 648–662, 2010. [6] B. Taskar, “Learning structured prediction models: a large margin approach,” Ph.D.
  51. eps.dis.dur.testa.eps

    mi.eng.cam.ac.uk/~mjfg/gales_ASRU09.pdf
    14 Sep 2010: Using 17 pairs, about 24% of the total number of pairs, 92% of the WER improvement using the 1-v-1 system over the VTS baseline was achieved.
