Search Funnelback University
- Refined by: Date: Past fortnight
1-15 of 15 search results for KaKaoTalk:vb20 200 |u:mlg.eng.cam.ac.uk, where 0 match all words and 15 match some words.
Results that match 1 of 2 words
- preferential_fairness_nips_2017.pages
  https://mlg.eng.cam.ac.uk/adrian/preferential_fairness_nips_2017.pdf
  19 Jun 2024: 4. New notions of fairness. ... Benefit: 0% (M), 67% (W). [figure residue: group sizes M (100), W (100), M (200), W (200)]
- Exploring Properties of the Deep Image Prior Andreas…
  https://mlg.eng.cam.ac.uk/adrian/NeurIPS_2019_DIP7.pdf
  19 Jun 2024: This is confirmed by the confidence of the DIP output, which increased to > 0.5 after just 200 iterations. ... (a) 100 iterations, Conf.: 0.004. (b) 200 iterations, Conf.: 0.52. (c) 300 iterations, Conf.: 0.72.
- ML-IRL: Machine Learning in Real Life Workshop at ICLR ...
  https://mlg.eng.cam.ac.uk/adrian/ML_IRL_2020-CLUE.pdf
  19 Jun 2024: [table residue: dataset statistics for MNIST, LSAT, and COMPAS, e.g. "LSAT 2 200 3 300. COMPAS 2 200 3 300"]
- Network Ranking With Bethe Pseudomarginals Kui Tang, Columbia…
  https://mlg.eng.cam.ac.uk/adrian/2013_NeurIPS_DiscML_Network.pdf
  19 Jun 2024: We drew independent node scores from a mixture of Gaussians and a scale-free network (100 nodes, 200 edges) from the Barabási-Albert model [12].
- TibGM: A Transferable and Information-Based Graphical Model Approach…
  https://mlg.eng.cam.ac.uk/adrian/ICML2019-TibGM.pdf
  19 Jun 2024: [figure residue: return vs. 10³ steps curves; panel (a) Swimmer (rllab); axis ticks 0-2000]
- From Parity to Preference-based Notions of Fairness in Classification…
  https://mlg.eng.cam.ac.uk/adrian/NeurIPS17-from-parity-to-preference.pdf
  19 Jun 2024: [figure residue: classifiers f1, f2 with Acc: 0.83, Benefit: 0% (M), 67% (W); Acc: 0.72, Benefit: 22% (M), 22% (W); Acc: 1.00; group sizes M (100), W (100), M (200), W (200)]
- Understanding the Bethe Approximation: When and How can it ...
  https://mlg.eng.cam.ac.uk/adrian/abc.pdf
  19 Jun 2024: Understanding the Bethe Approximation: When and How can it go Wrong? Adrian Weller, Columbia University, New York NY 10027, adrian@cs.columbia.edu. Kui Tang, Columbia University, New York NY 10027, kt2384@cs.columbia.edu. David Sontag, New York University, New…
- Now You See Me (CME): Concept-based Model Extraction
  https://mlg.eng.cam.ac.uk/adrian/AIMLAI20-CME.pdf
  19 Jun 2024: This dataset consists of 11,788 images of 200 bird species with every image annotated using 312 binary concept labels (e.g. ... [28] C. Wah, S. Branson, P. Welinder, P. Perona, S. Belongie, The Caltech-UCSD Birds-200-2011 dataset (2011).
- 2018 Formatting Instructions for Authors Using LaTeX
  https://mlg.eng.cam.ac.uk/adrian/AIES18-crowd_signals.pdf
  19 Jun 2024: We use this set of 200 labeled tweets as our ground truth dataset.
- Beyond Distributive Fairness in Algorithmic Decision Making: Feature…
  https://mlg.eng.cam.ac.uk/adrian/AAAI18-BeyondDistributiveFairness.pdf
  19 Jun 2024: For a given dataset, we gather responses to the above questions from 200 different AMT workers (that is, each feature is judged by 200 different workers).
- Ode to an ODE Krzysztof Choromanski ∗Robotics at Google ...
  https://mlg.eng.cam.ac.uk/adrian/NeurIPS20-ODEtoODE.pdf
  19 Jun 2024: In all experiments we used k = 200 perturbations per iteration [15]. ... mentioned above, in all experiments we used k = 200).
- Blind Justice: Fairness with Encrypted Sensitive Attributes
  https://mlg.eng.cam.ac.uk/adrian/ICML18-BlindJustice.pdf
  19 Jun 2024: Blind Justice: Fairness with Encrypted Sensitive Attributes. Niki Kilbertus 1 2, Adrià Gascón 3 4, Matt Kusner 3 4, Michael Veale 5, Krishna P. Gummadi 6, Adrian Weller 2 3. Abstract: Recent work has explored how to train machine learning models which…
- You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From…
  https://mlg.eng.cam.ac.uk/adrian/ECAI20-You_Shouldn%E2%80%99t_Trust_Me.pdf
  19 Jun 2024: [table residue: numeric results including a SHAP row, e.g. "SHAP 3.7 12.9 9.2 4.499 12.027"]
- Leader Stochastic Gradient Descent for Distributed Training of Deep…
  https://mlg.eng.cam.ac.uk/adrian/NeurIPS2019_LSGD_preprint.pdf
  19 Jun 2024: Leader Stochastic Gradient Descent for Distributed Training of Deep Learning Models. Yunfei Teng,1 yt1208@nyu.edu. Wenbo Gao,2 wg2279@columbia.edu. Francois Chalus, chalusf3@gmail.com. Anna Choromanska, ac5455@nyu.edu. Donald Goldfarb, goldfarb@columbia.edu.
- Methods for Inference in Graphical Models
  https://mlg.eng.cam.ac.uk/adrian/phd_FINAL.pdf
  19 Jun 2024: Methods for Inference in Graphical Models. Adrian Weller. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences. COLUMBIA UNIVERSITY. 2014. © 2014 Adrian Weller.