Yonatan Bitton
Research Scientist, Google
Verified email at google.com - Homepage
Title · Cited by · Year
Openflamingo: An open-source framework for training large autoregressive vision-language models
A Awadalla, I Gao, J Gardner, J Hessel, Y Hanafy, W Zhu, K Marathe, ...
arXiv preprint arXiv:2308.01390, 2023
481 · 2023
Datacomp: In search of the next generation of multimodal datasets
SY Gadre, G Ilharco, A Fang, J Hayase, G Smyrnis, T Nguyen, R Marten, ...
Advances in Neural Information Processing Systems 36, 27092-27112, 2023
365 · 2023
What you see is what you read? improving text-image alignment evaluation
M Yarom, Y Bitton, S Changpinyo, R Aharoni, J Herzig, O Lang, E Ofek, ...
Advances in Neural Information Processing Systems 36, 2024
66 · 2024
Breaking common sense: Whoops! a vision-and-language benchmark of synthetic and compositional images
N Bitton-Guetta, Y Bitton, J Hessel, L Schmidt, Y Elovici, G Stanovsky, ...
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
63 · 2023
Visit-bench: A benchmark for vision-language instruction following inspired by real-world use
Y Bitton, H Bansal, J Hessel, R Shao, W Zhu, A Awadalla, J Gardner, ...
arXiv preprint arXiv:2308.06595, 2023
55 · 2023
Openflamingo
A Awadalla, I Gao, J Gardner, J Hessel, Y Hanafy, W Zhu, K Marathe, ...
Zenodo, March, 2023
48* · 2023
Datacomp-lm: In search of the next generation of training sets for language models
J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ...
arXiv preprint arXiv:2406.11794, 2024
43 · 2024
Automatic generation of contrast sets from scene graphs: Probing the compositional consistency of GQA
Y Bitton, G Stanovsky, R Schwartz, M Elhadad
NAACL 2021, 2021
33 · 2021
Docci: Descriptions of connected and contrasting images
Y Onoe, S Rane, Z Berger, Y Bitton, J Cho, R Garg, A Ku, Z Parekh, ...
European Conference on Computer Vision, 291-309, 2024
32 · 2024
Datacomp-lm: In search of the next generation of training sets for language models
J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ...
…, 2024
29 · 2024
WinoGAViL: Gamified association benchmark to challenge vision-and-language models
Y Bitton, NB Guetta, R Yosef, Y Elovici, M Bansal, G Stanovsky, ...
NeurIPS 2022, Oral, Datasets and Benchmarks, 2022
29 · 2022
Data efficient masked language modeling for vision and language
Y Bitton, G Stanovsky, M Elhadad, R Schwartz
EMNLP 2021, Findings, 2021
24 · 2021
VASR: Visual Analogies of Situation Recognition
Y Bitton, R Yosef, E Strugo, D Shahaf, R Schwartz, G Stanovsky
AAAI 2023 (Oral), 2022
22 · 2022
ImageInWords: Unlocking Hyper-Detailed Image Descriptions
R Garg, A Burns, BK Ayan, Y Bitton, C Montgomery, Y Onoe, A Bunner, ...
arXiv preprint arXiv:2405.02793, 2024
17 · 2024
VideoPhy: Evaluating Physical Commonsense for Video Generation
H Bansal, Z Lin, T Xie, Z Zong, M Yarom, Y Bitton, C Jiang, Y Sun, ...
arXiv preprint arXiv:2406.03520, 2024
15 · 2024
Irfl: Image recognition of figurative language
R Yosef, Y Bitton, D Shahaf
arXiv preprint arXiv:2303.15445, 2023
15 · 2023
A chain-of-thought is as strong as its weakest link: A benchmark for verifiers of reasoning chains
A Jacovi, Y Bitton, B Bohnet, J Herzig, O Honovich, M Tseng, M Collins, ...
arXiv preprint arXiv:2402.00559, 2024
10 · 2024
Cross-lingual Unified Medical Language System entity linking in online health communities
Y Bitton, R Cohen, T Schifter, E Bachmat, M Elhadad, N Elhadad
Journal of the American Medical Informatics Association 27 (10), 1585-1592, 2020
9 · 2020
ParallelPARC: A Scalable Pipeline for Generating Natural-Language Analogies
O Sultan, Y Bitton, R Yosef, D Shahaf
arXiv preprint arXiv:2403.01139, 2024
7 · 2024
Videocon: Robust video-language alignment via contrast captions
H Bansal, Y Bitton, I Szpektor, KW Chang, A Grover
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
7 · 2024