Language models are few-shot learners TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... arXiv preprint arXiv:2005.14165, 2020 | 22122* | 2020 |
Efficient non-parametric estimation of multiple embeddings per word in vector space A Neelakantan, J Shankar, A Passos, A McCallum Conference on Empirical Methods in Natural Language Processing, 2014, 2015 | 575 | 2015 |
Adding gradient noise improves learning for very deep networks A Neelakantan, L Vilnis, QV Le, I Sutskever, L Kaiser, K Kurach, J Martens International Conference on Learning Representations Workshop (ICLR Workshop …, 2015 | 544 | 2015 |
Compositional vector space models for knowledge base inference A Neelakantan, B Roth, A McCallum 2015 AAAI Spring Symposium Series, 2015 | 413* | 2015 |
Chains of reasoning over entities, relations, and text using recurrent neural networks R Das, A Neelakantan, D Belanger, A McCallum European Chapter of the Association for Computational Linguistics (EACL) 2017, 2016 | 311 | 2016 |
Neural programmer: Inducing latent programs with gradient descent A Neelakantan, QV Le, I Sutskever International Conference on Learning Representations (ICLR), 2016, 2015 | 281 | 2015 |
GPT-4 technical report OpenAI arXiv preprint arXiv:2303.08774, 2023 | 258 | 2023 |
Taskmaster-1: Toward a realistic and diverse dialog dataset B Byrne, K Krishnamoorthi, C Sankar, A Neelakantan, D Duckworth, ... arXiv preprint arXiv:1909.05358, 2019 | 192 | 2019 |
Learning a natural language interface with neural programmer A Neelakantan, QV Le, M Abadi, A McCallum, D Amodei International Conference on Learning Representations (ICLR) 2017, 2016 | 132 | 2016 |
Text and code embeddings by contrastive pre-training A Neelakantan, T Xu, R Puri, A Radford, JM Han, J Tworek, Q Yuan, ... arXiv preprint arXiv:2201.10005, 2022 | 123 | 2022 |
Language models are few-shot learners. arXiv TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... Computer Science, Computation and Language, 2020 | 105 | 2020 |
Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods A Neelakantan, MW Chang The North American Chapter of the Association for Computational Linguistics …, 2015 | 86 | 2015 |
Language models are few-shot learners T Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... Advances in Neural Information Processing Systems 33, 1877-1901, 2020 | 74 | 2020 |
Theory and experiments on vector quantized autoencoders A Roy, A Vaswani, A Neelakantan, N Parmar arXiv preprint arXiv:1805.11063, 2018 | 69 | 2018 |
Predicting the impact of scientific concepts using full‐text features K McKeown, H Daume III, S Chaturvedi, J Paparrizos, K Thadani, P Barrio, ... Journal of the Association for Information Science and Technology 67 (11 …, 2016 | 69 | 2016 |
Trading off diversity and quality in natural language generation H Zhang, D Duckworth, D Ippolito, A Neelakantan arXiv preprint arXiv:2004.10450, 2020 | 60 | 2020 |
Learning Dictionaries for Named Entity Recognition using Minimal Supervision A Neelakantan, M Collins European Chapter of the Association for Computational Linguistics, 2014 | 55 | 2014 |
Generalizing to unseen entities and entity pairs with row-less universal schema P Verga, A Neelakantan, A McCallum European Chapter of the Association for Computational Linguistics (EACL) 2017, 2016 | 49 | 2016 |
Language models are few-shot learners. CoRR abs/2005.14165 (2020) TB Brown, B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, ... URL: https://arxiv.org/abs/2005.14165, 2020 | 47 | 2020 |
RelNet: End-to-end Modeling of Entities & Relations T Bansal, A Neelakantan, A McCallum arXiv preprint arXiv:1706.07179, 2017 | 34 | 2017 |