Tianxing He
Verified email at cs.washington.edu
Title
Cited by
Year
Why gradient clipping accelerates training: A theoretical justification for adaptivity
J Zhang, T He, S Sra, A Jadbabaie
International Conference on Learning Representations 2020, 2019
Cited by 377 · 2019
Reshaping deep neural network for fast decoding by node-pruning
T He, Y Fan, Y Qian, T Tan, K Yu
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
Cited by 158 · 2014
Exploiting LSTM structure in deep neural networks for speech recognition
T He, J Droppo
2016 IEEE international conference on acoustics, speech and signal …, 2016
Cited by 49 · 2016
Can language models solve graph problems in natural language?
H Wang, S Feng, T He, Z Tan, X Han, Y Tsvetkov
Advances in Neural Information Processing Systems 36, 2024
Cited by 44 · 2024
Negative training for neural dialogue response generation
T He, J Glass
ACL 2020, 2019
Cited by 43 · 2019
An empirical study of transformer-based neural language model adaptation
K Li, Z Liu, T He, H Huang, F Peng, D Povey, S Khudanpur
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 39 · 2020
Analyzing the forgetting problem in pretrain-finetuning of open-domain dialogue response models
T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng
Proceedings of the 16th Conference of the European Chapter of the …, 2021
Cited by 38 · 2021
Quantifying exposure bias for neural language generation
T He, J Zhang, Z Zhou, J Glass
Cited by 31 · 2019
A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation
M Nadeem, T He, K Cho, J Glass
AACL 2020, 2020
Cited by 26 · 2020
Exposure bias versus self-recovery: Are distortions really incremental for autoregressive text generation?
T He, J Zhang, Z Zhou, J Glass
arXiv preprint arXiv:1905.10617, 2019
Cited by 23 · 2019
Detecting egregious responses in neural sequence-to-sequence models
T He, J Glass
International Conference on Learning Representations 2019, 2018
Cited by 23 · 2018
On training bi-directional neural network language model with noise contrastive estimation
T He, Y Zhang, J Droppo, K Yu
2016 10th International Symposium on Chinese Spoken Language Processing …, 2016
Cited by 19 · 2016
On the blind spots of model-based evaluation metrics for text generation
T He, J Zhang, T Wang, S Kumar, K Cho, J Glass, Y Tsvetkov
arXiv preprint arXiv:2212.10020, 2022
Cited by 16 · 2022
An investigation on DNN-derived bottleneck features for GMM-HMM based robust speech recognition
Y You, Y Qian, T He, K Yu
2015 IEEE China Summit and International Conference on Signal and …, 2015
Cited by 15 · 2015
Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models
T He, B McCann, C Xiong, E Hosseini-Asl
EACL 2021, 2021
Cited by 13 · 2021
Recurrent neural network language model with structured word embeddings for speech recognition
T He, X Xiang, Y Qian, K Yu
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
Cited by 9 · 2015
Why gradient clipping accelerates training: A theoretical justification for adaptivity. arXiv 2019
J Zhang, T He, S Sra, A Jadbabaie
arXiv preprint arXiv:1905.11881, 2019
Cited by 9
On the zero-shot generalization of machine-generated text detectors
X Pu, J Zhang, X Han, Y Tsvetkov, T He
arXiv preprint arXiv:2310.05165, 2023
Cited by 8 · 2023
Semstamp: A semantic watermark with paraphrastic robustness for text generation
AB Hou, J Zhang, T He, Y Wang, YS Chuang, H Wang, L Shen, ...
arXiv preprint arXiv:2310.03991, 2023
Cited by 8 · 2023
Mix-review: Alleviate forgetting in the pretrain-finetune framework for neural language generation models
T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng
Cited by 7 · 2019
Articles 1–20