OPT: Open pre-trained transformer language models S Zhang, S Roller, N Goyal, M Artetxe, M Chen, S Chen, C Dewan, ... arXiv preprint arXiv:2205.01068, 2022 | 995 | 2022 |
Men also like shopping: Reducing gender bias amplification using corpus-level constraints J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang Proceedings of the 2017 Conference on Empirical Methods in Natural Language …, 2017 | 954 | 2017 |
Gender bias in coreference resolution: Evaluation and debiasing methods J Zhao, T Wang, M Yatskar, V Ordonez, KW Chang Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, 15–20, 2018 | 703 | 2018 |
Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations T Wang, J Zhao, M Yatskar, KW Chang, V Ordonez Proceedings of the IEEE/CVF international conference on computer vision …, 2019 | 371 | 2019 |
Gender bias in contextualized word embeddings J Zhao, T Wang, M Yatskar, R Cotterell, V Ordonez, KW Chang Proceedings of the 2019 Conference of the North American Chapter of the …, 2019 | 341 | 2019 |
General multi-label image classification with transformers J Lanchantin, T Wang, V Ordonez, Y Qi Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021 | 215 | 2021 |
Few-shot Learning with Multilingual Generative Language Models XV Lin, T Mihaylov, M Artetxe, T Wang, S Chen, D Simig, M Ott, N Goyal, ... Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022 | 86* | 2022 |
Selective annotation makes language models better few-shot learners H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin, R Zhang, M Ostendorf, ... The Eleventh International Conference on Learning Representations, 2023 | 72 | 2023 |
CAT-Gen: Improving robustness in NLP models via controlled adversarial text generation T Wang, X Wang, Y Qin, B Packer, K Li, J Chen, A Beutel, E Chi Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020 | 68 | 2020 |
Visual news: Benchmark and challenges in news image captioning F Liu, Y Wang, T Wang, V Ordonez Proceedings of the 2021 Conference on Empirical Methods in Natural Language …, 2021 | 52 | 2021 |
OPT-IML: Scaling language model instruction meta learning through the lens of generalization S Iyer, XV Lin, R Pasunuru, T Mihaylov, D Simig, P Yu, K Shuster, T Wang, ... arXiv preprint arXiv:2212.12017, 2022 | 45 | 2022 |
Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation T Wang, XV Lin, NF Rajani, B McCann, V Ordonez, C Xiong Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020 | 45* | 2020 |
Name tagging for low-resource incident languages based on expectation-driven learning B Zhang, X Pan, T Wang, A Vaswani, H Ji, K Knight, D Marcu Proceedings of the 2016 conference of the North American chapter of the …, 2016 | 43 | 2016 |
Identifying and mitigating spurious correlations for improving robustness in NLP models T Wang, R Sridhar, D Yang, X Wang Findings of the Association for Computational Linguistics: NAACL 2022, 2022 | 39 | 2022 |
Scaling autoregressive multi-modal models: Pretraining and instruction tuning L Yu, B Shi, R Pasunuru, B Muller, O Golovneva, T Wang, A Babu, B Tang, ... arXiv preprint arXiv:2309.02591, 2023 | 19 | 2023 |
Feedback-prop: Convolutional neural network inference under partial evidence T Wang, K Yamaguchi, V Ordonez Proceedings of the IEEE Conference on Computer Vision and Pattern …, 2018 | 11 | 2018 |
ALERT: Adapting language models to reasoning tasks P Yu, T Wang, O Golovneva, B Alkhamissy, G Ghosh, M Diab, ... arXiv preprint arXiv:2212.08286, 2022 | 10 | 2022 |
Shepherd: A critic for language model generation T Wang, P Yu, XE Tan, S O'Brien, R Pasunuru, J Dwivedi-Yu, ... arXiv preprint arXiv:2308.04592, 2023 | 8 | 2023 |