Lu Hou (侯璐)
Noah's Ark Lab, Huawei
Verified email at huawei.com - Homepage
Title | Cited by | Year
FILIP: Fine-grained Interactive Language-Image Pre-training
L Yao*, R Huang*, L Hou*, G Lu, M Niu, H Xu, X Liang, Z Li, X Jiang, C Xu
10th International Conference on Learning Representations (ICLR-2022), 2022
231 | 2022
Loss-aware Binarization of Deep Networks
L Hou, Q Yao, JT Kwok
5th International Conference on Learning Representations (ICLR-2017), 2016
226 | 2016
DynaBERT: Dynamic BERT with Adaptive Width and Depth
L Hou, Z Huang, L Shang, X Jiang, X Chen, Q Liu
Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS-2020), 2020
199 | 2020
Loss-aware Weight Quantization of Deep Networks
L Hou, JT Kwok
6th International Conference on Learning Representations (ICLR-2018), 2018
145 | 2018
TernaryBERT: Distillation-aware Ultra-low Bit BERT
W Zhang*, L Hou*, Y Yin*, L Shang, X Chen, X Jiang, Q Liu
Conference on Empirical Methods in Natural Language Processing (EMNLP-2020), 2020
132 | 2020
BinaryBERT: Pushing the Limit of BERT Quantization
H Bai, W Zhang, L Hou, L Shang, J Jin, X Jiang, Q Liu, M Lyu, I King
59th Annual Meeting of the Association for Computational Linguistics (ACL-2021), 2021
122 | 2021
Efficient Learning of Timeseries Shapelets
L Hou, JT Kwok, JM Zurada
Thirtieth AAAI Conference on Artificial Intelligence (AAAI-2016), 2016
93 | 2016
Improved OOD Generalization via Adversarial Training and Pre-training
M Yi, L Hou, J Sun, L Shang, X Jiang, Q Liu, ZM Ma
The Thirty-eighth International Conference on Machine Learning (ICML-2021), 2021
44 | 2021
Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark
J Gu, X Meng, G Lu, L Hou, N Minzhe, X Liang, L Yao, R Huang, W Zhang, ...
Advances in Neural Information Processing Systems 35, 26418-26431, 2022
41 | 2022
Normalization Helps Training of Quantized LSTM
L Hou, J Zhu, JT Kwok, F Gao, T Qin, T Liu
Thirty-third Conference on Neural Information Processing Systems (NeurIPS-2019), 2019
40 | 2019
Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation
W Dai, L Hou, L Shang, X Jiang, Q Liu, P Fung
Findings of the Association for Computational Linguistics (ACL-IJCNLP 2022), 2022
36 | 2022
Compression of Generative Pre-trained Language Models via Quantization
C Tao, L Hou, W Zhang, L Shang, X Jiang, Q Liu, P Luo, N Wong
60th Annual Meeting of the Association for Computational Linguistics (ACL-2022), 2022
32 | 2022
Analysis of Quantized Models
L Hou, R Zhang, JT Kwok
7th International Conference on Learning Representations (ICLR-2019), 2019
30 | 2019
GhostBERT: Generate More Features with Cheap Operations for BERT
Z Huang, L Hou, L Shang, X Jiang, X Chen, Q Liu
59th Annual Meeting of the Association for Computational Linguistics (ACL …, 2021
20 | 2021
Reweighting Augmented Samples by Minimizing the Maximal Expected Loss
M Yi, L Hou, L Shang, X Jiang, Q Liu, ZM Ma
9th International Conference on Learning Representations (ICLR-2021), 2021
16 | 2021
Towards efficient post-training quantization of pre-trained language models
H Bai, L Hou, L Shang, X Jiang, I King, MR Lyu
Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS-2022), 2021
14 | 2021
Power Law in Deep Neural Networks: Sparse Network Generation and Continual Learning With Preferential Attachment
F Feng, L Hou, Q She, RHM Chan, JT Kwok
IEEE Transactions on Neural Networks and Learning Systems, 2022
6* | 2022
LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling
D Chen, C Tao, L Hou, L Shang, X Jiang, Q Liu
Conference on Empirical Methods in Natural Language Processing (EMNLP-2022), 2022
4 | 2022
Wukong-Reader: Multi-modal Pre-training for Fine-grained Visual Document Understanding
H Bai, Z Liu, X Meng, W Li, S Liu, N Xie, R Zheng, L Wang, L Hou, J Wei, ...
arXiv preprint arXiv:2212.09621, 2022
3 | 2022
CTRL: Connect Tabular and Language Model for CTR Prediction
X Li, B Chen, L Hou, R Tang
arXiv preprint arXiv:2306.02841, 2023
2 | 2023
Articles 1–20