Ankur Bapna
Software Engineer, Google Deepmind
Cited by
GPipe: Efficient training of giant neural networks using pipeline parallelism
Y Huang, Y Cheng, A Bapna, O Firat, D Chen, M Chen, HJ Lee, J Ngiam, ...
Advances in neural information processing systems 32, 2019
The best of both worlds: Combining recent advances in neural machine translation
MX Chen, O Firat, A Bapna, M Johnson, W Macherey, G Foster, L Jones, ...
arXiv preprint arXiv:1804.09849, 2018
Massively multilingual neural machine translation in the wild: Findings and challenges
N Arivazhagan, A Bapna, O Firat, D Lepikhin, M Johnson, M Krikun, ...
arXiv preprint arXiv:1907.05019, 2019
Simple, scalable adaptation for neural machine translation
A Bapna, N Arivazhagan, O Firat
arXiv preprint arXiv:1909.08478, 2019
Building a conversational agent overnight with dialogue self-play
P Shah, D Hakkani-Tür, G Tür, A Rastogi, A Bapna, N Nayak, L Heck
arXiv preprint arXiv:1801.04871, 2018
Lingvo: a modular and scalable framework for sequence-to-sequence modeling
J Shen, P Nguyen, Y Wu, Z Chen, MX Chen, Y Jia, A Kannan, T Sainath, ...
arXiv preprint arXiv:1902.08295, 2019
Large-scale multilingual speech recognition with a streaming end-to-end model
A Kannan, A Datta, TN Sainath, E Weinstein, B Ramabhadran, Y Wu, ...
arXiv preprint arXiv:1909.05330, 2019
Towards zero-shot frame semantic parsing for domain scaling
A Bapna, G Tur, D Hakkani-Tur, L Heck
arXiv preprint arXiv:1707.02363, 2017
Revisiting character-based neural machine translation with capacity and compression
C Cherry, G Foster, A Bapna, O Firat, W Macherey
arXiv preprint arXiv:1808.09943, 2018
Training deeper neural machine translation models with transparent attention
A Bapna, MX Chen, O Firat, Y Cao, Y Wu
arXiv preprint arXiv:1808.07561, 2018
Investigating multilingual NMT representations at scale
SR Kudugunta, A Bapna, I Caswell, N Arivazhagan, O Firat
arXiv preprint arXiv:1909.02197, 2019
The missing ingredient in zero-shot neural machine translation
N Arivazhagan, A Bapna, O Firat, R Aharoni, M Johnson, W Macherey
arXiv preprint arXiv:1903.07091, 2019
Quality at a glance: An audit of web-crawled multilingual datasets
J Kreutzer, I Caswell, L Wang, A Wahab, D van Esch, N Ulzii-Orshikh, ...
Transactions of the Association for Computational Linguistics 10, 50-72, 2022
mSLAM: Massively multilingual joint pre-training for speech and text
A Bapna, C Cherry, Y Zhang, Y Jia, M Johnson, Y Cheng, S Khanuja, ...
arXiv preprint arXiv:2202.01374, 2022
Leveraging monolingual data with self-supervision for multilingual neural machine translation
A Siddhant, A Bapna, Y Cao, O Firat, M Chen, S Kudugunta, ...
arXiv preprint arXiv:2005.04816, 2020
Non-parametric adaptation for neural machine translation
A Bapna, O Firat
arXiv preprint arXiv:1903.00058, 2019
Sequential dialogue context modeling for spoken language understanding
A Bapna, G Tur, D Hakkani-Tur, L Heck
arXiv preprint arXiv:1705.03455, 2017
SLAM: A unified encoder for speech and language modeling via speech-text joint pre-training
A Bapna, Y Chung, N Wu, A Gulati, Y Jia, JH Clark, M Johnson, J Riesa, ...
arXiv preprint arXiv:2110.10329, 2021
Evaluating the cross-lingual effectiveness of massively multilingual neural machine translation
A Siddhant, M Johnson, H Tsai, N Ari, J Riesa, A Bapna, O Firat, K Raman
Proceedings of the AAAI conference on artificial intelligence 34 (05), 8854-8861, 2020
MAESTRO: Matched speech text representations through modality matching
Z Chen, Y Zhang, A Rosenberg, B Ramabhadran, P Moreno, A Bapna, ...
arXiv preprint arXiv:2204.03409, 2022