Recent Publications
- 2022
- GenLabel: Mixup Relabeling using Generative Models
J. Sohn, L. Shang, H. Chen, J. Moon, D. Papailiopoulos, and K. Lee
ICML 2022
- Permutation-Based SGD: Is Random Optimal?
S. Rajput, K. Lee, and D. Papailiopoulos
ICLR 2022
- 2021
- Pure exploration in kernel and neural bandits
Y. Zhu, D. Zhou, R. Jiang, Q. Gu, R. Willett, and R. Nowak
NeurIPS 2021
- Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers
J. Katz-Samuels, B. Mason, K. Jamieson, and R. Nowak
NeurIPS 2021
- An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks
S. Rajput, K. Sreenivasan, D. Papailiopoulos, and A. Karbasi
NeurIPS 2021
- Sample Selection for Fair and Robust Training
Y. Roh, K. Lee, S. Whang, and C. Suh
NeurIPS 2021
- Gradient Inversion with Generative Image Prior
J. Kim, J. Jeon, K. Lee, S. Oh, and J. Ok
NeurIPS 2021
- Fisher-Pitman permutation tests based on nonparametric Poisson mixtures with application to single cell genomics
Z. Miao, W. Kong, R. Korlakai Vinayak, W. Sun, and F. Han
- Coded-InvNet for Resilient Prediction Serving Systems
T. Dinh and K. Lee
ICML 2021
- Discrete-Valued Latent Preference Matrix Estimation with Graph Side Information
C. Jo and K. Lee
ICML 2021
- Pufferfish: Communication-efficient Models At No Extra Cost
H. Wang, S. Agarwal, and D. Papailiopoulos
MLSys 2021
- Tensor Methods for Nonlinear Matrix Completion
G. Ongie, D. Pimentel-Alarcón, L. Balzano, R. Willett, and R. D. Nowak
SIAM Journal on Mathematics of Data Science 3 (1), 253-279
- Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification
S. Agarwal, H. Wang, K. Lee, S. Venkataraman, and D. Papailiopoulos
MLSys 2021
- FairBatch: Batch Selection for Model Fairness
Y. Roh, K. Lee, S. Whang, and C. Suh
ICLR 2021
- Banach space representer theorems for neural networks and ridge splines
R. Parhi and R. D. Nowak
Journal of Machine Learning Research 22 (43), 1-40
- 2020
- Maximin active learning in overparameterized model classes
M. Karzand and R. D. Nowak
IEEE Journal on Selected Areas in Information Theory 1 (1), 167-177
- Concentration inequalities for the empirical distribution of discrete distributions: beyond the method of types
J. Mardia, J. Jiao, E. Tánczos, R. D. Nowak, and T. Weissman
Information and Inference: A Journal of the IMA 9 (4), 813-850
- Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient
A. Pensia, S. Rajput, A. Nagle, H. Vishwakarma, and D. Papailiopoulos
NeurIPS 2020 (spotlight)
- Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J. Sohn, K. Lee, and D. Papailiopoulos
NeurIPS 2020
- Bad Global Minima Exist and SGD Can Reach Them
S. Liu, D. Papailiopoulos, and D. Achlioptas
NeurIPS 2020
- Finding All ϵ-Good Arms in Stochastic Bandits
B. Mason, L. Jain, A. S. Tripathy, and R. Nowak
NeurIPS 2020
- Reprogramming GANs via Input Noise Design
K. Lee, C. Suh, and K. Ramchandran
ECML PKDD 2020
- Popular Imperceptibility Measures in Visual Adversarial Attacks are Far from Human Perception
A. Sen, X. Zhu, E. Marshall, and R. Nowak
International Conference on Decision and Game Theory for Security 2020
- Robust Outlier Arm Identification
Y. Zhu, S. Katariya, and R. Nowak
ICML 2020
- Estimating the number and effect sizes of non-null hypotheses
J. Brennan, R. Korlakai Vinayak, and K. Jamieson
ICML 2020
- The role of neural network activation functions
R. Parhi and R. D. Nowak
IEEE Signal Processing Letters 2020
- Federated Learning with Matched Averaging
H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, and Y. Khazaeni
ICLR 2020 (oral)
- FR-Train: A mutual information-based approach to fair and robust training
Y. Roh, K. Lee, S. Whang, and C. Suh
ICML 2020
- Closing the Convergence Gap of SGD without Replacement
S. Rajput, A. Gupta, and D. Papailiopoulos
ICML 2020
- Linear bandits with feature feedback
U. Oswal, A. Bhargava, and R. Nowak
AAAI 2020
- 2019
- DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation
S. Rajput, H. Wang, Z. Charles, and D. Papailiopoulos
NeurIPS 2019
- Maxgap bandit: Adaptive algorithms for approximate ranking
S. Katariya, A. Tripathy, and R. Nowak
NeurIPS 2019
- Learning nearest neighbor graphs from noisy distance samples
B. Mason, A. Tripathy, and R. Nowak
NeurIPS 2019
- Maximum Likelihood Estimation for Learning Populations of Parameters
R. Korlakai Vinayak, W. Kong, G. Valiant, and S. Kakade
ICML 2019
- Does Data Augmentation Lead to Positive Margin?
S. Rajput, Z. Feng, Z. Charles, P.-L. Loh, and D. Papailiopoulos
ICML 2019
- Bilinear bandits with low-rank structure
K.-S. Jun, R. Willett, S. Wright, and R. Nowak
ICML 2019
- The Illusion of Change: Correcting for Bias when Inferring Changes in Sparse, Societal-Scale Data
G. Cadamuro, R. Korlakai Vinayak, J. Blumenstock, S. Kakade, and J. N. Shapiro
WWW 2019
- Crash to Not Crash: Learn to Identify Dangerous Vehicles using a Simulator
H. Kim*, K. Lee*, G. Hwang, and C. Suh
AAAI 2019 (oral)
- 2018 and earlier
- Binary Rating Estimation with Graph Side Information
K. Ahn, K. Lee, H. Cha, and C. Suh
NeurIPS 2018
- Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings
K. Lee*, H. Kim*, and C. Suh
ICLR 2018