11 - 20 of 83 search results for "KaKaoTalk:vb20 200 |u:www.mlmi.eng.cam.ac.uk", where 0 results match all words and 83 match some words.
Results that match 1 of 2 words

  11. Sequential Neural Models with Stochastic Layers

    https://www.mlmi.eng.cam.ac.uk/files/d402k_poster_sequential_neural_models_with_stochastic_layers.pdf
    6 Nov 2019: We then used a separate testing set to measure the ELBO of different SRNN architectures, namely for z ∈ R^(2, 10, 25, 50, 100, 200) and for d
  12. Curiosity-Driven Reinforcement Learning for Dialogue Management

    https://www.mlmi.eng.cam.ac.uk/files/paulawesselmann_mlsalt.pdf
    6 Nov 2019: 4.5 Actions the policy has learned to use after training for 200, 400, and 600 dialogues and corresponding curiosity rewards those actions received.
  13. MergedFile

    https://www.mlmi.eng.cam.ac.uk/files/de_jong_thesis.pdf
    6 Nov 2019: Compressing neural networks. Sjoerd Roelof de Jong, Fitzwilliam College. A dissertation submitted to the University of Cambridge in partial fulfilment of the requirements for the degree of Master of Philosophy in Machine Learning, Speech, and
  14. Designing Neural Network Hardware Accelerators Using Deep Gaussian…

    https://www.mlmi.eng.cam.ac.uk/files/havasi_dissertation.pdf
    30 Oct 2019: The test log-likelihood of the GP model was 1.20 ± 0.06, as opposed to 0.61 ± 0.04 for DGPs and 0.48 ± 0.05 for JointDGPs at 300 training points.
  15. thesis

    https://www.mlmi.eng.cam.ac.uk/files/burt_thesis.pdf
    6 Nov 2019: plotted for a synthetic data set with N = 200, x ~ N(0, 5²) and s = 5. ... 1.2 that holds for large N plotted for a synthetic data set with N = 200, x ~ N(0, 5²) and s = 5.
  16. Investigating Inference in Bayesian Neural Networks via Active…

    https://www.mlmi.eng.cam.ac.uk/files/riccardo_barbano_dissertation_mlmi.pdf
    18 Nov 2019: Initially, we train on 200 labelled data-points, and progress in batches of 50 with a budget of 200. ... 200 epochs are used to guarantee convergence. 7 A More Complex Dataset.
  17. Evaluating Benefits of Heterogeneity in Constrained Multi-Agent…

    https://www.mlmi.eng.cam.ac.uk/files/2022_-_2023_dissertations/evaluating_benefits_of_heterogeneity.pdf
    14 Dec 2023: of a rollout for different neural constraints over 200 rollouts with 300 environments each. ... 5.10 Mean reward over 200 rollouts across 300 steps per rollout.
  18. Pathologies of Deep Sparse Gaussian Process Regression

    https://www.mlmi.eng.cam.ac.uk/files/diaz_thesis.pdf
    30 Oct 2019: Pathologies of Deep Sparse Gaussian Process Regression. Sergio Pascual Díaz. Department of Engineering, University of Cambridge. This dissertation is submitted for the degree of Master of Philosophy. Fitzwilliam College, August 2017. Declaration. I,
  19. Overcoming Catastrophic Forgetting in Neural Machine Translation

    https://www.mlmi.eng.cam.ac.uk/files/kell_thesis.pdf
    6 Nov 2019: Overcoming Catastrophic Forgetting in Neural Machine Translation. Gregory Kell. Department of Engineering, University of Cambridge. This dissertation is submitted for the degree of MPhil Machine Learning, Speech and Language Technology. Wolfson
  20. Tradeoffs in Neural Variational Inference

    https://www.mlmi.eng.cam.ac.uk/files/cruz_dissertation.pdf
    30 Oct 2019: The CelebA dataset ([39]) consists of more than 200,000 images of celebrity faces. ... For our work, we consider 200,000 of these, which we split as follows:
