Search Funnelback University
Refined by: Date: 2016
1 - 22 of 22 search results for TALK:PC53 20 |u:mi.eng.cam.ac.uk, where 0 match all words and 22 match some words.

Results that match 1 of 2 words
Engineering Tripos Part IIB FOURTH YEAR Paper 4F10: Statistical ...
mi.eng.cam.ac.uk/~mjfg/local/4F10/examples2.pdf (10 Nov 2016)
ω1: [1 1], [2 2], [2 0]; ω2: [0 0], …
Machine Learning of Level and Progression in Second/Additional…
mi.eng.cam.ac.uk/~kmk/presentations/UBham_May2016_Knill.pdf (12 May 2016)
[Plot: %WER by grade A1, A2, B1, B2, C] … Table (System, HL-dim, Training Data, % Error): KNN SUP 20.8; RNNLM 100 17.5; RNNLM 200 Semi-SUP 9.3.
Log-linear System Combination Using Structured Support Vector…
mi.eng.cam.ac.uk/~ar527/Seg_K6.pdf (24 Jun 2016)
Unfortunately, extracting fixed-dimensional features from variable-length observation sequences and modelling the vast, unstructured, mostly unseen space of possible sentences is non-trivial [20]. … 20, no. 3, pp. 273–297, 1995. [20] P. Nguyen, G. Heigold …
solutions2.dvi
mi.eng.cam.ac.uk/~mjfg/local/4F10/solutions2.pdf (29 Nov 2016)
… (c) There are multiple solutions for α (though a unique decision boundary), as this is an under-specified problem.
template.dvi
mi.eng.cam.ac.uk/~ar527/ragni_is2016.pdf (10 Nov 2016)
… layer parameters. The latter includes augmentation schemes [17, 18, 9, 10, 19, 20, 21, 22, 23, 24, 25, 26]. … Data-based schemes instead make use of data to initialise [27], train [20, 23] or adapt [24] the …
Investigation of multilingual speech-to-text systems for use in…
mi.eng.cam.ac.uk/~kmk/presentations/UEdin_Feb14_Knill.pdf (12 May 2016)
Seminar at Edinburgh University, February 2014 (slide 20). Multilingual STT for Spoken Term Detection.
Stimulated Deep Neural Network for Speech Recognition
mi.eng.cam.ac.uk/~mjfg/interspeech16_stimu.pdf (26 Sep 2016)
In order to set up the stimulated DNNs, the monophone 2D positions were first obtained via t-SNE [17] over the training-set averaged CMLLR [20] frames of the phonemes. … Fall 2004 Rich Transcription Workshop (RT-04), 2004. [20] M. J. Gales, "Maximum …
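The snippet above describes obtaining 2D phone positions via t-SNE over per-phone averaged feature frames. A minimal sketch of that projection step, assuming scikit-learn's `TSNE` and using random stand-in vectors (the paper uses training-set averaged CMLLR frames; the phone labels and dimensions below are illustrative, not from the indexed paper):

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative stand-in: one averaged 39-dim feature vector per phone.
rng = np.random.default_rng(0)
phones = [f"ph{i}" for i in range(40)]
avg_frames = rng.standard_normal((len(phones), 39))

# Project each phone's averaged frame to a 2D position.
pos2d = TSNE(n_components=2, perplexity=5, init="random",
             random_state=0).fit_transform(avg_frames)
```

The resulting `pos2d` gives one (x, y) coordinate per phone, which is what a stimulated DNN can then use to lay out hidden units.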
Knill_CUEDSeminar_20140403.dvi
mi.eng.cam.ac.uk/~kmk/presentations/CUED_Apr14_Knill.pdf (12 May 2016)
Babel Program Seminar at Cambridge University, April 2014 (slide 20). Multilingual STT for Spoken Term Detection.
Combining I-vector Representation and Structured Neural Networks for…
mi.eng.cam.ac.uk/~mjfg/icassp16_wu.pdf (5 Apr 2016)
… [19] introduces a scaling factor on hidden-layer activations and in [20], the differentiable pooling technique is used to obtain the speaker-dependent compensation from a hidden-activation candidate pool. … 20, no. 1, pp. 30–42, 2012. [2] Geoffrey …
slides_part1.dvi
mi.eng.cam.ac.uk/~kmk/presentations/TutorialIC_Sep2015_part1_Knill.pdf (12 May 2016)
… and Language Processing, vol. 20, no. 1, pp. 30–42, 2012. [5] A. …
SYSTEM COMBINATION WITH LOG-LINEAR MODELS J. Yang, C. Zhang, ...
mi.eng.cam.ac.uk/~mjfg/yang_ICASSP16.pdf (12 Jul 2016)
… the Gaussian sufficient statistics [19] and HMM mean and variance statistics [20]. … 1117–1120. [20] Georg Heigold, Ralf Schlüter, and Hermann Ney, "On the equivalence of Gaussian HMM and Gaussian HMM-like hidden conditional random fields," in …
4F10: Deep Learning
mi.eng.cam.ac.uk/~mjfg/local/4F10/lect6.pdf (8 Nov 2016)
The LSTM is then unrolled for 20 timesteps, and thus consumes a larger context of 20 … (34/68) Gradient Descent [20]. …
slides_part2.dvi
mi.eng.cam.ac.uk/~kmk/presentations/TutorialIC_Sep2015_part2_Knill.pdf (12 May 2016)
… further developed by a number of sites [19, 20, 21, 22]. … Corpora," in Proc. HLT-EMNLP, 2005. [20] George Saon, Hagen Soltau, Upendra Chaudhari, Stephen Chu, Brian Kingsbury, Hong- …
System Combination with Log-linear Models
mi.eng.cam.ac.uk/~mjfg/icassp16_yang.pdf (5 Apr 2016)
… the Gaussian sufficient statistics [19] and HMM mean and variance statistics [20]. … 1117–1120. [20] Georg Heigold, Ralf Schlüter, and Hermann Ney, "On the equivalence of Gaussian HMM and Gaussian HMM-like hidden conditional random fields," in …
Deep Learning for Speech Processing - An NST Perspective
mi.eng.cam.ac.uk/~mjfg/NST_2016.pdf (29 Sep 2016)
(19 of 67) Long-Short Term Memory Networks [20, 16]. [LSTM cell diagram: x_t, h_{t-1}, input/forget/output gates, time delay] (20 of 67) Long-Short Term Memory Networks. …
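The lecture snippets here describe an LSTM unrolled over a fixed number of timesteps: the same cell, with the same weights, is applied once per input frame, carrying the hidden and cell states forward. A minimal NumPy sketch of that unrolling, assuming the standard input/forget/output-gate formulation (biases omitted and weights random, purely illustrative, not the slides' model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM cell step; W holds the four gate weight matrices."""
    z = np.concatenate([x, h])   # current input plus previous hidden state
    i = sigmoid(W["i"] @ z)      # input gate
    f = sigmoid(W["f"] @ z)      # forget gate
    o = sigmoid(W["o"] @ z)      # output gate
    g = np.tanh(W["g"] @ z)      # candidate cell update
    c = f * c + i * g            # new cell state
    h = o * np.tanh(c)           # new hidden state
    return h, c

# Unroll the same cell over T = 20 timesteps: each step consumes one input
# frame and the previous state, so the final h has seen all 20 frames.
rng = np.random.default_rng(0)
d_in, d_h, T = 8, 16, 20
W = {k: 0.1 * rng.standard_normal((d_h, d_in + d_h)) for k in "ifog"}
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.standard_normal((T, d_in)):
    h, c = lstm_step(x, h, c, W)
```

Unrolling for 20 steps is exactly this loop with T = 20; gradients in training would then flow back through all 20 applications of the shared weights.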
Log-Linear System Combination Using Structured Support Vector Machines
mi.eng.cam.ac.uk/~mjfg/interspeech16_combSSVM.pdf (26 Sep 2016)
…tic modelling techniques. These individual systems might use different front-ends, segmentations, dictionaries or decision trees [20, 21]. … 20, no. 3, pp. 273–297, 1995. [23] P. Nguyen, G. Heigold, and G. …
Investigation of back-off based interpolation between Recurrent…
mi.eng.cam.ac.uk/~mjfg/asru15-chen.pdf (11 Mar 2016)
For this reason, RNNLMs are usually linearly interpolated with n-gram LMs to obtain both a good context coverage and strong generalisation [1, 3, 17, 18, 19, 20]. … ISCA Interspeech, 2010. [20] Hai-Son Le, Ilya Oparin, Alexandre Allauzen, J. Gauvain, …
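The snippet above refers to linear interpolation of an RNNLM with an n-gram LM: the combined next-word probability is a weighted sum, P(w|h) = λ·P_rnn(w|h) + (1−λ)·P_ngram(w|h). A minimal sketch of that combination (the probability tables and the weight λ = 0.5 below are hypothetical stand-ins, not values from the indexed paper):

```python
def interpolate(p_rnn, p_ngram, lam=0.5):
    """Linearly combine two next-word distributions, weight lam on the RNNLM."""
    vocab = set(p_rnn) | set(p_ngram)
    return {w: lam * p_rnn.get(w, 0.0) + (1 - lam) * p_ngram.get(w, 0.0)
            for w in vocab}

# Illustrative next-word distributions for the same history h.
p_rnn = {"cat": 0.6, "dog": 0.3, "fish": 0.1}
p_ngram = {"cat": 0.4, "dog": 0.4, "fish": 0.2}

p = interpolate(p_rnn, p_ngram, lam=0.5)
# The mixture of two valid distributions is itself a valid distribution.
assert abs(sum(p.values()) - 1.0) < 1e-9
```

Because each component sums to one over the vocabulary, any λ in [0, 1] yields a proper distribution, which is what lets the n-gram LM supply context coverage while the RNNLM supplies generalisation.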
Multi-Language Neural Network Language Models
mi.eng.cam.ac.uk/~mjfg/interspeech16_MLNNLMs.pdf (26 Sep 2016)
… layer parameters. The latter includes augmentation schemes [17, 18, 9, 10, 19, 20, 21, 22, 23, 24, 25, 26]. … Data-based schemes instead make use of data to initialise [27], train [20, 23] or adapt [24] the …
MULTILINGUAL REPRESENTATIONS FOR LOW RESOURCE SPEECH RECOGNITION AND…
mi.eng.cam.ac.uk/~mjfg/asru15_cui.pdf (23 May 2016)
All DNN models used in this paper are hybrid models [20]. The IBM Attila speech recognition toolkit [42] is used for training the models presented in this paper. … [20] Brian Kingsbury, Tara N. Sainath, and Hagen Soltau, "Scalable minimum Bayes risk …
STRUCTURED DISCRIMINATIVE MODELS USING DEEP NEURAL-NETWORK FEATURES…
mi.eng.cam.ac.uk/~mjfg/vandalen_ASRU15.pdf (12 Jul 2016)
5.1. AURORA 4. AURORA 4 is a medium-to-large noise-corrupted speech recognition task [20]. … 6950–6954. [20] N. Parihar and J. Picone, "Aurora working group: DSR front end LVCSR evaluation," Tech. …
Structured Discriminative Models Using Deep Neural-Network Features
mi.eng.cam.ac.uk/~mjfg/asru15-vanDalen.pdf (11 Mar 2016)
5.1. AURORA 4. AURORA 4 is a medium-to-large noise-corrupted speech recognition task [20]. … 6950–6954. [20] N. Parihar and J. Picone, "Aurora working group: DSR front end LVCSR evaluation," Tech. …
Structured and Infinite Discriminative Models for Speech Recognition…
mi.eng.cam.ac.uk/~mjfg/thesis_jy308.pdf (26 Jul 2016)
19. 2.4.1 Maximum a Posteriori (MAP). 20. 2.4.2 Linear Transform Based Adaptation. … criterion (2.20) with 1/0 loss defined in (2.21). • Word-level loss: This loss function is directly related to the expected word error rate.