Search Funnelback University
Refined by: Date: 2016
Results 11–19 of 19 for KaKaoTalk:PC53 24 / |u:mi.eng.cam.ac.uk, where 0 results match all words and 19 match some words.
Results that match 1 of 2 words
Investigation of back-off based interpolation between Recurrent…
mi.eng.cam.ac.uk/~mjfg/asru15-chen.pdf — 11 Mar 2016: In order to solve this problem, there has recently been increasing research interest in deriving efficient parallel training algorithms for RNNLMs [22, 23, 24, 25]. …
Log-Linear System Combination Using Structured Support Vector Machines
mi.eng.cam.ac.uk/~mjfg/interspeech16_combSSVM.pdf — 26 Sep 2016: This serves as the basis of structured discriminative models including SSVMs. Classification is performed by solving a semi-Markov inference problem [24]. … 994–1006, 2010. [24] S. Sarawagi and W. W. Cohen, “Semi-Markov conditional random fields for …
STRUCTURED DISCRIMINATIVE MODELS USING DEEP NEURAL-NETWORK FEATURES…
mi.eng.cam.ac.uk/~mjfg/vandalen_ASRU15.pdf — 12 Jul 2016: MPE — 7.15 11.06 14.37 24.54 16.79; CML 6.95 11.00 14.29 24.39 16.68; large-margin 7.02 10.92 14.16 24.28 … Therefore the systems use graphemic lexica generated using an approach which is applicable to all Unicode characters [24].
Structured Discriminative Models Using Deep Neural-Network Features
mi.eng.cam.ac.uk/~mjfg/asru15-vanDalen.pdf — 11 Mar 2016: MPE — 7.15 11.06 14.37 24.54 16.79; CML 6.95 11.00 14.29 24.39 16.68; large-margin 7.02 10.92 14.16 24.28 … Therefore the systems use graphemic lexica generated using an approach which is applicable to all Unicode characters [24].
Deep Learning for Speech Processing - An NST Perspective
mi.eng.cam.ac.uk/~mjfg/NST_2016.pdf — 29 Sep 2016: (slide 24 of 67) S2S: Generative Models [5, 6]. Consider two sequences of lengths L and T: input x1:T = {x1, x2, …}, … (slide 47 of 67) ASR: Sequence Training [24]. Cross-entropy using a fixed alignment is the standard criterion (RNN).
MULTILINGUAL REPRESENTATIONS FOR LOW RESOURCE SPEECH RECOGNITION AND…
mi.eng.cam.ac.uk/~mjfg/asru15_cui.pdf — 23 May 2016: The input features are 24-dimensional log Mel magnitude spectrum filter banks, pitch, probability of voicing, and their derivatives. … 24, no. 3, pp. 433–444, 2010. [36] Nobuyasu Itoh, Tara N Sainath, Dan Ning Jiang, Jie Zhou, and Bhuvana Ramabhadran, …
4F10: Deep Learning
mi.eng.cam.ac.uk/~mjfg/local/4F10/lect6.pdf — 8 Nov 2016: … represents element-wise multiplication between vectors. (24/68) Long-Short Term Memory Networks (reference) [13, 10]. … Ẽ(θ[τ]) = E(θ[τ]) … νw[τ]. (50/68) Dropout [24]. Input xd.
Stimulated Deep Neural Network for Speech Recognition
mi.eng.cam.ac.uk/~mjfg/interspeech16_stimu.pdf — 26 Sep 2016: IEEE, 2011, pp. 24–29. [9] R. Gemello, F. Mana, S. Scanzio, P. …
Structured and Infinite Discriminative Models for Speech Recognition…
mi.eng.cam.ac.uk/~mjfg/thesis_jy308.pdf — 26 Jul 2016: 24. 2.6 The framework of linear transform based adaptive training. 26. … criterion (2.23), and it is defined as follows: [f(x)]₊ = 0 when f(x) < 0, f(x) when f(x) ≥ 0 (2.24). Because of the max{} function, the objective function …