Shailaja Sampat
Title
Cited by
Year
Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks
Y Wang, S Mishra, P Alipoormolabashi, Y Kordi, A Mirzaei, A Naik, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
488, 2022
Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks
Y Wang, S Mishra, P Alipoormolabashi, Y Kordi, A Mirzaei, A Naik, ...
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
190, 2022
Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks
Y Wang, S Mishra, P Alipoormolabashi, Y Kordi, A Mirzaei, A Arunkumar, ...
arXiv preprint arXiv:2204.07705, 2022
128, 2022
CLEVR_HYP: A Challenge Dataset and Baselines for Visual Question Answering with Hypothetical Actions over Images
SK Sampat, A Kumar, Y Yang, C Baral
arXiv preprint arXiv:2104.05981, 2021
29, 2021
Visuo-Linguistic Question Answering (VLQA) Challenge
SK Sampat, Y Yang, C Baral
arXiv preprint arXiv:2005.00330, 2020
19, 2020
Visualization of election data: Using interaction design and visual discovery for communicating complex insights
K Gupta, S Sampat, M Sharma, V Rajamanickam
JeDEM-eJournal of eDemocracy and Open Government 8 (2), 59-86, 2016
15, 2016
‘Just because you are right, doesn’t mean I am wrong’: Overcoming a bottleneck in the development and evaluation of open-ended VQA tasks
M Luo, SK Sampat, R Tallman, Y Zeng, M Vancha, A Sajja, C Baral
Proceedings of the 16th Conference of the European Chapter of the …, 2021
13, 2021
Reasoning about Actions over Visual and Linguistic Modalities: A Survey
SK Sampat, M Patel, S Das, Y Yang, C Baral
arXiv preprint arXiv:2207.07568, 2022
11, 2022
Blocksworld Revisited: Learning and Reasoning to Generate Event-Sequences from Image Pairs
T Gokhale, S Sampat, Z Fang, Y Yang, C Baral
arXiv preprint arXiv:1905.12042, 2019
8, 2019
'Just because you are right, doesn't mean I am wrong': Overcoming a Bottleneck in the Development and Evaluation of Open-Ended Visual Question Answering (VQA) Tasks
M Luo, SK Sampat, R Tallman, Y Zeng, M Vancha, A Sajja, C Baral
arXiv preprint arXiv:2103.15022, 2021
6, 2021
Cooking with blocks: A recipe for visual reasoning on image-pairs
T Gokhale, S Sampat, Z Fang, Y Yang, C Baral
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2019
6, 2019
A Model-Based Approach to Visual Reasoning on CNLVR Dataset
S Sampat, J Lee
Sixteenth International Conference on Principles of Knowledge Representation …, 2018
4, 2018
Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task
SK Sampat, P Banerjee, Y Yang, C Baral
arXiv preprint arXiv:2212.03866, 2022
2, 2022
Activity classification using Myo Gesture Control Armband data through machine learning
KK Pal, P Banerjee, S Choudhuri, S Sampat
1, 2019
AutoDW: Automatic Data Wrangling Leveraging Large Language Models
L Liu, S Hasegawa, SK Sampat, M Xenochristou, WP Chen, T Kato, ...
Proceedings of the 39th IEEE/ACM International Conference on Automated …, 2024
2024
Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts?
SK Sampat, M Patel, Y Yang, C Baral
arXiv preprint arXiv:2410.13651, 2024
2024
VL-GLUE: A Suite of Fundamental yet Challenging Visuo-Linguistic Reasoning Tasks
SK Sampat, M Nakamura, S Kailas, K Aggarwal, M Zhou, Y Yang, C Baral
arXiv preprint arXiv:2410.13666, 2024
2024
ActionCOMET: A Zero-shot Approach to Learn Image-specific Commonsense Concepts about Actions
SK Sampat, Y Yang, C Baral
arXiv preprint arXiv:2410.13662, 2024
2024
Learning Action-Effect Dynamics from Pairs of Scene-graphs
SK Sampat, P Banerjee, Y Yang, C Baral
arXiv preprint arXiv:2212.03433, 2022
2022
Visuo-Linguistic Question Answering (VLQA) Challenge
SK Sampat, Y Yang, C Baral
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
2020