Tengyu Ma
Title · Cited by · Year
A simple but tough-to-beat baseline for sentence embeddings
S Arora, Y Liang, T Ma
ICLR 2017
Cited by 1204 · 2016
Matrix Completion has No Spurious Local Minimum
R Ge, JD Lee, T Ma
NIPS 2016 (best student paper); arXiv preprint arXiv:1605.07272, 2016
Cited by 606 · 2016
Generalization and Equilibrium in Generative Adversarial Nets (GANs)
S Arora, R Ge, Y Liang, T Ma, Y Zhang
ICML 2017; arXiv preprint arXiv:1703.00573, 2017
Cited by 582 · 2017
Learning imbalanced datasets with label-distribution-aware margin loss
K Cao, C Wei, A Gaidon, N Arechiga, T Ma
NeurIPS 2019; arXiv preprint arXiv:1906.07413, 2019
Cited by 539 · 2019
Provable bounds for learning some deep representations
S Arora, A Bhaskara, R Ge, T Ma
International Conference on Machine Learning (ICML), 584–592, 2014
Cited by 391 · 2014
A latent variable model approach to pmi-based word embeddings
S Arora, Y Li, Y Liang, T Ma, A Risteski
Transactions of the Association for Computational Linguistics 4, 385-399, 2016
Cited by 390* · 2016
Identity Matters in Deep Learning
M Hardt, T Ma
ICLR 2017
Cited by 327 · 2016
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
Cited by 297 · 2021
Finding Approximate Local Minima for Nonconvex Optimization in Linear Time
N Agarwal, Z Allen-Zhu, B Bullins, E Hazan, T Ma
STOC 2017
Cited by 290* · 2016
Mopo: Model-based offline policy optimization
T Yu, G Thomas, L Yu, S Ermon, JY Zou, S Levine, C Finn, T Ma
Advances in Neural Information Processing Systems 33, 14129-14142, 2020
Cited by 261 · 2020
Gradient descent learns linear dynamical systems
M Hardt, T Ma, B Recht
arXiv preprint arXiv:1609.05191, 2016
Cited by 251 · 2016
Learning one-hidden-layer neural networks with landscape design
R Ge, JD Lee, T Ma
ICLR 2018; arXiv preprint arXiv:1711.00501, 2017
Cited by 231 · 2017
Algorithmic Regularization in Over-parameterized Matrix Recovery and Neural Networks with Quadratic Activations
Y Li, T Ma, H Zhang
COLT 2018 (best paper); arXiv preprint arXiv:1712.09203, 2017
Cited by 222* · 2017
Fixup initialization: Residual learning without normalization
H Zhang, YN Dauphin, T Ma
arXiv preprint arXiv:1901.09321, 2019
Cited by 219 · 2019
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
C Wei, JD Lee, Q Liu, T Ma
arXiv preprint arXiv:1810.05369, 2019
Cited by 189* · 2019
Simple, efficient, and neural algorithms for sparse coding
S Arora, R Ge, T Ma, A Moitra
Conference on Learning Theory (COLT) 2015; arXiv preprint arXiv:1503.00778, 2015
Cited by 186 · 2015
Linear algebraic structure of word senses, with applications to polysemy
S Arora, Y Li, Y Liang, T Ma, A Risteski
arXiv preprint arXiv:1601.03764, 2016
Cited by 183 · 2016
Verified uncertainty calibration
A Kumar, PS Liang, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 170 · 2019
Towards explaining the regularization effect of initial large learning rate in training neural networks
Y Li, C Wei, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 166 · 2019
Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees
Y Luo, H Xu, Y Li, Y Tian, T Darrell, T Ma
arXiv preprint arXiv:1807.03858, 2018
Cited by 155 · 2018
Articles 1–20