Gang Niu
RIKEN Center for Advanced Intelligence Project
Verified email at postman.riken.jp
Title | Cited by | Year
Co-teaching: Robust training of deep neural networks with extremely noisy labels
B Han, Q Yao, X Yu, G Niu, M Xu, W Hu, IW Tsang, M Sugiyama
NeurIPS 2018, 2018
678 | 2018
Positive-unlabeled learning with non-negative risk estimator
R Kiryo, G Niu, MC Plessis, M Sugiyama
NeurIPS 2017 (oral), 2017
241 | 2017
Analysis of learning from positive and unlabeled data
MC du Plessis, G Niu, M Sugiyama
NeurIPS 2014, 2014
236 | 2014
How does disagreement help generalization against label corruption?
X Yu, B Han, J Yao, G Niu, IW Tsang, M Sugiyama
ICML 2019, 2019
225 | 2019
Convex formulation for learning from positive and unlabeled data
MC du Plessis, G Niu, M Sugiyama
ICML 2015, 2015
204 | 2015
Class-prior estimation for learning from positive and unlabeled data
MC du Plessis, G Niu, M Sugiyama
Machine Learning 106 (4), 463–492, 2017
168* | 2017
Analysis and improvement of policy gradient estimation
T Zhao, H Hachiya, G Niu, M Sugiyama
NeurIPS 2011, 2011
134 | 2011
Masking: A new perspective of noisy supervision
B Han, J Yao, G Niu, M Zhou, IW Tsang, Y Zhang, M Sugiyama
NeurIPS 2018, 2018
119 | 2018
Are anchor points really indispensable in label-noise learning?
X Xia, T Liu, N Wang, B Han, C Gong, G Niu, M Sugiyama
NeurIPS 2019, 2019
106 | 2019
Does distributionally robust supervised learning give robust classifiers?
W Hu, G Niu, I Sato, M Sugiyama
ICML 2018, 2018
104 | 2018
Semi-supervised classification based on classification from positive and unlabeled data
T Sakai, MC du Plessis, G Niu, M Sugiyama
ICML 2017, 2017
92 | 2017
Information-theoretic semi-supervised metric learning via entropy regularization
G Niu, B Dai, M Yamada, M Sugiyama
ICML 2012, 2012
92 | 2012
Theoretical comparisons of positive-unlabeled learning against positive-negative learning
G Niu, MC du Plessis, T Sakai, Y Ma, M Sugiyama
NeurIPS 2016, 2016
89 | 2016
Attacks which do not kill training make adversarial learning stronger
J Zhang, X Xu, B Han, G Niu, L Cui, M Sugiyama, M Kankanhalli
ICML 2020, 2020
76 | 2020
Information-maximization clustering based on squared-loss mutual information
M Sugiyama, G Niu, M Yamada, M Kimura, H Hachiya
Neural Computation 26 (1), 84–131, 2014
74* | 2014
Learning from complementary labels
T Ishida, G Niu, W Hu, M Sugiyama
NeurIPS 2017, 2017
70 | 2017
SIGUA: Forgetting may make learning with noisy labels more robust
B Han, G Niu, X Yu, Q Yao, M Xu, IW Tsang, M Sugiyama
ICML 2020, 2020
60* | 2020
Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning
G Niu, W Jitkrittum, B Dai, H Hachiya, M Sugiyama
ICML 2013, 2013
58 | 2013
On the minimal supervision for training any binary classifier from only unlabeled data
N Lu, G Niu, AK Menon, M Sugiyama
ICLR 2019, 2019
54 | 2019
Part-dependent label noise: Towards instance-dependent label noise
X Xia, T Liu, B Han, N Wang, M Gong, H Liu, G Niu, D Tao, M Sugiyama
NeurIPS 2020 (spotlight), 2020
50 | 2020
Articles 1–20