First-order stochastic algorithms for escaping from saddle points in almost linear time. Y Xu, R Jin, T Yang. Advances in Neural Information Processing Systems, 5530-5540, 2018. Cited by 117.
Dash: Semi-supervised learning with dynamic thresholding. Y Xu, L Shang, J Ye, Q Qian, YF Li, B Sun, H Li, R Jin. International Conference on Machine Learning, 11525-11536, 2021. Cited by 85.
Practical and theoretical considerations in study design for detecting gene-gene interactions using MDR and GMDR approaches. GB Chen, Y Xu, HM Xu, MD Li, J Zhu, XY Lou. PLoS ONE 6 (2), e16981, 2011. Cited by 67.
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization. Y Xu, M Liu, Q Lin, T Yang. Advances in Neural Information Processing Systems 30, 2017. Cited by 51.
Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization. Y Yan, Y Xu, Q Lin, W Liu, T Yang. Advances in Neural Information Processing Systems 33, 5789-5800, 2020. Cited by 49*.
On stochastic moving-average estimators for non-convex optimization. Z Guo, Y Xu, W Yin, R Jin, T Yang. arXiv preprint arXiv:2104.14840, 2021. Cited by 45.
Stochastic convex optimization: Faster local growth implies faster global convergence. Y Xu, Q Lin, T Yang. International Conference on Machine Learning, 3821-3830, 2017. Cited by 41.
Stochastic optimization for DC functions and non-smooth non-convex regularizers with non-asymptotic convergence. Y Xu, Q Qi, Q Lin, R Jin, T Yang. International Conference on Machine Learning, 6942-6951, 2019. Cited by 35.
Sadagrad: Strongly adaptive stochastic gradient methods. Z Chen*, Y Xu*, E Chen, T Yang. International Conference on Machine Learning, 913-921, 2018. Cited by 30.
Towards understanding label smoothing. Y Xu, Y Xu, Q Qian, H Li, R Jin. arXiv preprint arXiv:2006.11653, 2020. Cited by 27.
Learning with non-convex truncated losses by SGD. Y Xu, S Zhu, S Yang, C Zhang, R Jin, T Yang. Uncertainty in Artificial Intelligence, 701-711, 2020. Cited by 26.
Homotopy Smoothing for Non-Smooth Problems with Lower Complexity than O(1/ε). Y Xu*, Y Yan*, Q Lin, T Yang. Advances in Neural Information Processing Systems 29, 1208-1216, 2016. Cited by 25.
Stochastic Primal-Dual Algorithms with Faster Convergence than O(1/√T) for Problems without Bilinear Structure. Y Yan, Y Xu, Q Lin, L Zhang, T Yang. arXiv preprint arXiv:1904.10112, 2019. Cited by 22.
Non-asymptotic analysis of stochastic methods for non-smooth non-convex regularized problems. Y Xu, R Jin, T Yang. Advances in Neural Information Processing Systems 32, 2630-2640, 2019. Cited by 22*.
Accelerate stochastic subgradient method by leveraging local error bound. Y Xu, Q Lin, T Yang. CoRR, abs/1607.01027, 2016. Cited by 21*.
Chex: Channel exploration for CNN model compression. Z Hou, M Qin, F Sun, X Ma, K Yuan, Y Xu, YK Chen, R Jin, Y Xie, SY Kung. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 20.
An online method for a class of distributionally robust optimization with non-convex objectives. Q Qi, Z Guo, Y Xu, R Jin, T Yang. Advances in Neural Information Processing Systems 34, 2021. Cited by 17.
Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity. Z Yuan, Z Guo, Y Xu, Y Ying, T Yang. International Conference on Machine Learning, 12219-12229, 2021. Cited by 17.
NEON+: Accelerated gradient methods for extracting negative curvature for non-convex optimization. Y Xu, R Jin, T Yang. arXiv preprint arXiv:1712.01033, 2017. Cited by 16.
A novel convergence analysis for algorithms of the Adam family. Z Guo, Y Xu, W Yin, R Jin, T Yang. arXiv preprint arXiv:2112.03459, 2021. Cited by 15.