Aditi Raghunathan
Assistant professor, Carnegie Mellon University
Verified email at cmu.edu
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 4331 · 2021
Certified defenses against adversarial examples
A Raghunathan, J Steinhardt, P Liang
arXiv preprint arXiv:1801.09344, 2018
Cited by 1137 · 2018
Unlabeled data improves adversarial robustness
Y Carmon, A Raghunathan, L Schmidt, JC Duchi, PS Liang
Advances in neural information processing systems 32, 2019
Cited by 825 · 2019
Fine-tuning can distort pretrained features and underperform out-of-distribution
A Kumar, A Raghunathan, R Jones, T Ma, P Liang
arXiv preprint arXiv:2202.10054, 2022
Cited by 682 · 2022
An explanation of in-context learning as implicit Bayesian inference
SM Xie, A Raghunathan, P Liang, T Ma
arXiv preprint arXiv:2111.02080, 2021
Cited by 638 · 2021
Just train twice: Improving group robustness without training group information
EZ Liu, B Haghgoo, AS Chen, A Raghunathan, PW Koh, S Sagawa, ...
International Conference on Machine Learning, 6781-6792, 2021
Cited by 527 · 2021
Semidefinite relaxations for certifying robustness to adversarial examples
A Raghunathan, J Steinhardt, PS Liang
Advances in neural information processing systems 31, 2018
Cited by 507 · 2018
An investigation of why overparameterization exacerbates spurious correlations
S Sagawa, A Raghunathan, PW Koh, P Liang
International Conference on Machine Learning, 8346-8356, 2020
Cited by 392 · 2020
The pitfalls of simplicity bias in neural networks
H Shah, K Tamuly, A Raghunathan, P Jain, P Netrapalli
Advances in Neural Information Processing Systems 33, 9573-9585, 2020
Cited by 390 · 2020
Certified robustness to adversarial word substitutions
R Jia, A Raghunathan, K Göksel, P Liang
arXiv preprint arXiv:1909.00986, 2019
Cited by 333 · 2019
Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization
JP Miller, R Taori, A Raghunathan, S Sagawa, PW Koh, V Shankar, ...
International conference on machine learning, 7721-7735, 2021
Cited by 298 · 2021
Adversarial training can hurt generalization
A Raghunathan, SM Xie, F Yang, JC Duchi, P Liang
arXiv preprint arXiv:1906.06032, 2019
Cited by 279 · 2019
Understanding and mitigating the tradeoff between robustness and accuracy
A Raghunathan, SM Xie, F Yang, J Duchi, P Liang
arXiv preprint arXiv:2002.10716, 2020
Cited by 263 · 2020
DROCC: Deep robust one-class classification
S Goyal, A Raghunathan, M Jain, HV Simhadri, P Jain
International conference on machine learning, 3711-3721, 2020
Cited by 194 · 2020
Finetune like you pretrain: Improved finetuning of zero-shot vision models
S Goyal, A Kumar, S Garg, Z Kolter, A Raghunathan
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 130 · 2023
Automatically auditing large language models via discrete optimization
E Jones, A Dragan, A Raghunathan, J Steinhardt
International Conference on Machine Learning, 15307-15329, 2023
Cited by 127 · 2023
Robust encodings: A framework for combating adversarial typos
E Jones, R Jia, A Raghunathan, P Liang
arXiv preprint arXiv:2005.01229, 2020
Cited by 120 · 2020
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming
S Dathathri, K Dvijotham, A Kurakin, A Raghunathan, J Uesato, RR Bunel, ...
Advances in Neural Information Processing Systems 33, 5318-5331, 2020
Cited by 119 · 2020
On the opportunities and risks of foundation models. arXiv 2021
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2023
Cited by 95 · 2023
Test time adaptation via conjugate pseudo-labels
S Goyal, M Sun, A Raghunathan, JZ Kolter
Advances in Neural Information Processing Systems 35, 6204-6218, 2022
Cited by 86 · 2022
Articles 1–20