Search powered by Funnelback
31–40 of 1,000 search results for katalk:PC53 24 / |u:mi.eng.cam.ac.uk, where 0 results match all words and 1,000 match some words.
Results that match 1 of 2 words

  1. A HIGH-PERFORMANCE CANTONESE KEYWORD SEARCH SYSTEM

    mi.eng.cam.ac.uk/~mjfg/ICASSP13_ibm2.pdf
    13 Jun 2013: with 24% speaking the Central Guangdong, 20% the Northern Pearl River Delta, 19% the Southern Pearl River Delta, 19% the Guangxi and Western Guangdong, and 18% the Northern Guangdong dialects. ... System combination is performed using an
  2. 3 Jul 2018: shows that the CEDM learns to address a relation in up to 24.5% of all dialogues for r = 1.0. ... Computer Speech & Language, 24(2):150–174. Steve J. Young, Milica Gašić, Blaise Thomson, and Jason D.
  3. 20 Feb 2018: S. Young et al. / Computer Speech and Language 24 (2010) 150–174 151. ... 152 S. Young et al. / Computer Speech and Language 24 (2010) 150–174.
  4. crosseval_diff-reward2b.ps

    mi.eng.cam.ac.uk/~sjy/papers/kgjm10.pdf
    20 Feb 2018: Yu. 2009. The Hidden Information State model: a practical framework for POMDP-based spoken dialogue management. Computer Speech and Language, 24(2):150–174.
  5. 28 Apr 2014: Instead, previous research has been focused on using N-best list rescoring for RNNLM performance evaluation [13, 14, 26, 27, 24]. ... 21, no. 3, pp. 492–518, 2007. [24] Y. Si, Q. Zhang, T.
  6. 20 Feb 2018: To obtain a closed form solution of (24), the policy π must be differentiable with respect to θ. ... 10. To lower the variance of the estimate of the gradient, a constant baseline, B, can be introduced into (24) without introducing any bias [22].
  7. 20 Feb 2018: 24, no. 4, Oct. 2010. [16] S. J. Young, G. Evermann, M.
  8. 20 Feb 2018: Computer Speech & Language, 24(4):562–588, 2010. Y. Tokuda, T. Yoshimura, T. ... Computer Speech and Language, 24(2):150–174, 2010.
  9. 20 Feb 2018: The feature set includes 24 spectral coefficients, log F0 and 5 aperiodic component features.
  10. 20 Feb 2018: An advantage of this sparsification approach is that it enables non-positive definite kernel functions to be used in the approximation, for example see [24]. ... It has already been shown that active learning has the potential to lead to faster learning [24]
