Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks Y Wang, S Mishra, P Alipoormolabashi, Y Kordi, A Mirzaei, A Naik, ... Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022 | 488 | 2022 |
Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks Y Wang, S Mishra, P Alipoormolabashi, Y Kordi, A Mirzaei, A Arunkumar, ... arXiv preprint arXiv:2204.07705, 2022 | 128 | 2022 |
CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images SK Sampat, A Kumar, Y Yang, C Baral arXiv preprint arXiv:2104.05981, 2021 | 29 | 2021 |
Visuo-Linguistic Question Answering (VLQA) Challenge SK Sampat, Y Yang, C Baral arXiv preprint arXiv:2005.00330, 2020 | 19 | 2020 |
Visualization of election data: Using interaction design and visual discovery for communicating complex insights K Gupta, S Sampat, M Sharma, V Rajamanickam JeDEM-eJournal of eDemocracy and Open Government 8 (2), 59-86, 2016 | 15 | 2016 |
‘Just because you are right, doesn’t mean I am wrong’: Overcoming a bottleneck in development and evaluation of Open-Ended VQA tasks M Luo, SK Sampat, R Tallman, Y Zeng, M Vancha, A Sajja, C Baral Proceedings of the 16th Conference of the European Chapter of the …, 2021 | 13 | 2021 |
Reasoning about Actions over Visual and Linguistic Modalities: A Survey SK Sampat, M Patel, S Das, Y Yang, C Baral arXiv preprint arXiv:2207.07568, 2022 | 11 | 2022 |
Blocksworld Revisited: Learning and Reasoning to Generate Event-Sequences from Image Pairs T Gokhale, S Sampat, Z Fang, Y Yang, C Baral arXiv preprint arXiv:1905.12042, 2019 | 8 | 2019 |
'Just because you are right, doesn't mean I am wrong': Overcoming a Bottleneck in the Development and Evaluation of Open-Ended Visual Question Answering (VQA) Tasks M Luo, SK Sampat, R Tallman, Y Zeng, M Vancha, A Sajja, C Baral arXiv preprint arXiv:2103.15022, 2021 | 6 | 2021 |
Cooking with blocks: A recipe for visual reasoning on image-pairs T Gokhale, S Sampat, Z Fang, Y Yang, C Baral Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019 | 6 | 2019 |
A Model-Based Approach to Visual Reasoning on CNLVR Dataset S Sampat, J Lee Sixteenth International Conference on Principles of Knowledge Representation …, 2018 | 4 | 2018 |
Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task SK Sampat, P Banerjee, Y Yang, C Baral arXiv preprint arXiv:2212.03866, 2022 | 2 | 2022 |
Activity Classification using Myo Gesture Control Armband data through Machine Learning KK Pal, P Banerjee, S Choudhuri, S Sampat | 1 | 2019 |
AutoDW: Automatic Data Wrangling Leveraging Large Language Models L Liu, S Hasegawa, SK Sampat, M Xenochristou, WP Chen, T Kato, ... Proceedings of the 39th IEEE/ACM International Conference on Automated …, 2024 | | 2024 |
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? SK Sampat, M Patel, Y Yang, C Baral arXiv preprint arXiv:2410.13651, 2024 | | 2024 |
VL-GLUE: A Suite of Fundamental yet Challenging Visuo-Linguistic Reasoning Tasks SK Sampat, M Nakamura, S Kailas, K Aggarwal, M Zhou, Y Yang, C Baral arXiv preprint arXiv:2410.13666, 2024 | | 2024 |
ActionCOMET: A Zero-shot Approach to Learn Image-specific Commonsense Concepts about Actions SK Sampat, Y Yang, C Baral arXiv preprint arXiv:2410.13662, 2024 | | 2024 |
Learning Action-Effect Dynamics from Pairs of Scene-graphs SK Sampat, P Banerjee, Y Yang, C Baral arXiv preprint arXiv:2212.03433, 2022 | | 2022 |
Visuo-Linguistic Question Answering (VLQA) Challenge SK Sampat, Y Yang, C Baral Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020 | | 2020 |