Jack Hessel
CLIPScore: A reference-free evaluation metric for image captioning
J Hessel, A Holtzman, M Forbes, RL Bras, Y Choi
EMNLP, 2021
MERLOT: Multimodal neural script knowledge models
R Zellers, X Lu, J Hessel, Y Yu, JS Park, J Cao, A Farhadi, Y Choi
NeurIPS, 2021
OpenFlamingo: An open-source framework for training large autoregressive vision-language models
A Awadalla, I Gao, J Gardner, J Hessel, Y Hanafy, W Zhu, K Marathe, ...
arXiv preprint arXiv:2308.01390, 2023
Symbolic knowledge distillation: from general language models to commonsense models
P West, C Bhagavatula, J Hessel, JD Hwang, L Jiang, RL Bras, X Lu, ...
NAACL, 2022
MERLOT Reserve: Neural script knowledge through vision and language and sound
R Zellers, J Lu, X Lu, Y Yu, Y Zhao, M Salehi, A Kusupati, J Hessel, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Is reinforcement learning (not) for natural language processing: Benchmarks, baselines, and building blocks for natural language policy optimization
R Ramamurthy, P Ammanabrolu, K Brantley, J Hessel, R Sifa, ...
arXiv preprint arXiv:2210.01241, 2022
How far can camels go? exploring the state of instruction tuning on open resources
Y Wang, H Ivison, P Dasigi, J Hessel, T Khot, K Chandu, D Wadden, ...
Advances in Neural Information Processing Systems 36, 74764-74786, 2023
Quark: Controllable Text Generation with Reinforced Unlearning
X Lu, S Welleck, J Hessel, L Jiang, L Qin, P West, P Ammanabrolu, Y Choi
NeurIPS, 2022
Reframing Human-AI Collaboration for Generating Free-Text Explanations
S Wiegreffe, J Hessel, S Swayamdipta, M Riedl, Y Choi
NAACL, 2022
Multimodal c4: An open, billion-scale corpus of images interleaved with text
W Zhu, J Hessel, A Awadalla, SY Gadre, J Dodge, A Fang, Y Yu, ...
Advances in Neural Information Processing Systems 36, 2024
Soda: Million-scale dialogue distillation with social commonsense contextualization
H Kim, J Hessel, L Jiang, P West, X Lu, Y Yu, P Zhou, RL Bras, M Alikhani, ...
arXiv preprint arXiv:2212.10465, 2022
Something's Brewing! Early Prediction of Controversy-causing Posts from Discussion Features
J Hessel, L Lee
NAACL, 2019
Science, AskScience, and BadScience: On the Coexistence of Highly Related Communities
J Hessel, C Tan, L Lee
The 10th International AAAI Conference on Web and Social Media, 2016
Does my multimodal model learn cross-modal interactions? It's harder to tell than you might think!
J Hessel, L Lee
EMNLP, 2020
Symbolic chain-of-thought distillation: Small models can also "think" step-by-step
LH Li, J Hessel, Y Yu, X Ren, KW Chang, Y Choi
arXiv preprint arXiv:2306.14050, 2023
Personalized soups: Personalized large language model alignment via post-hoc parameter merging
J Jang, S Kim, BY Lin, Y Wang, J Hessel, L Zettlemoyer, H Hajishirzi, ...
arXiv preprint arXiv:2310.11564, 2023
A Case Study on Combining ASR and Visual Features for Generating Instructional Video Captions
J Hessel, B Pang, Z Zhu, R Soricut
CoNLL, 2019
Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest
J Hessel, A Marasović, JD Hwang, L Lee, J Da, R Zellers, R Mankoff, ...
arXiv preprint arXiv:2209.06293, 2022
Fusing pre-trained language models with multimodal prompts through reinforcement learning
Y Yu, J Chung, H Yun, J Hessel, JS Park, X Lu, R Zellers, P Ammanabrolu, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cats and Captions vs. Creators and the Clock: Comparing Multimodal Content to Context in Predicting Relative Popularity
J Hessel, L Lee, D Mimno
Proceedings of the 26th International Conference on World Wide Web, 2017