Zhe Wang
Verified email at osu.edu
Title / Cited by / Year
SpiderBoost and momentum: Faster variance reduction algorithms
Z Wang, K Ji, Y Zhou, Y Liang, V Tarokh
arXiv preprint arXiv:1810.10690, 2018
Cited by 156* · 2018
Improving sample complexity bounds for actor-critic algorithms
T Xu, Z Wang, Y Liang
arXiv preprint arXiv:2004.12956, 2020
Cited by 60* · 2020
Improved zeroth-order variance reduced algorithms and analysis for nonconvex optimization
K Ji, Z Wang, Y Zhou, Y Liang
International conference on machine learning, 3100-3109, 2019
Cited by 42 · 2019
Stochastic variance-reduced cubic regularization for nonconvex optimization
Z Wang, Y Zhou, Y Liang, G Lan
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 39 · 2019
Non-asymptotic convergence analysis of two time-scale (natural) actor-critic algorithms
T Xu, Z Wang, Y Liang
arXiv preprint arXiv:2005.03557, 2020
Cited by 38 · 2020
Reanalysis of variance reduced temporal difference learning
T Xu, Z Wang, Y Zhou, Y Liang
arXiv preprint arXiv:2001.01898, 2020
Cited by 31 · 2020
Cubic regularization with momentum for nonconvex optimization
Z Wang, Y Zhou, Y Liang, G Lan
Uncertainty in Artificial Intelligence, 313-322, 2020
Cited by 20 · 2020
Convergence of cubic regularization for nonconvex optimization under KL property
Y Zhou, Z Wang, Y Liang
Advances in Neural Information Processing Systems 31, 2018
Cited by 14 · 2018
History-gradient aided batch size adaptation for variance reduced algorithms
K Ji, Z Wang, B Weng, Y Zhou, W Zhang, Y Liang
International Conference on Machine Learning, 4762-4772, 2020
Cited by 12* · 2020
Enhanced first and zeroth order variance reduced algorithms for min-max optimization
T Xu, Z Wang, Y Liang, HV Poor
Cited by 12 · 2020
Momentum schemes with stochastic variance reduction for nonconvex composite optimization
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh
arXiv preprint arXiv:1902.02715, 2019
Cited by 12 · 2019
Spectral algorithms for community detection in directed networks
Z Wang, Y Liang, P Ji
Journal of Machine Learning Research, 2020
Cited by 11 · 2020
A note on inexact gradient and Hessian conditions for cubic regularized Newton’s method
Z Wang, Y Zhou, Y Liang, G Lan
Operations Research Letters 47 (2), 146-149, 2019
Cited by 11* · 2019
Proximal gradient algorithm with momentum and flexible parameter restart for nonconvex optimization
Y Zhou, Z Wang, K Ji, Y Liang, V Tarokh
arXiv preprint arXiv:2002.11582, 2020
Cited by 4 · 2020
ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization
X Huang, R Xu, H Zhou, Z Wang, Z Liu, L Li
Proceedings of the AAAI Conference on Artificial Intelligence 35 (9), 7857-7864, 2021
2021
Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs
X Huang, H Zhou, R Xu, Z Wang, L Li
arXiv preprint arXiv:2006.07037, 2020
2020
Articles 1–16