Jay Mohta
Amazon
Verified email at ncsu.edu
Title · Cited by · Year

Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning
H Liu, D Tam, M Muqeeth, J Mohta, T Huang, M Bansal, CA Raffel
Advances in Neural Information Processing Systems 35, 1950-1965, 2022
Cited by: 395 · Year: 2022

The impact of domain shift on the calibration of fine-tuned models
J Mohta, C Raffel
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
Cited by: 1 · Year: 2021

Are large language models good annotators?
J Mohta, KE Ak, Y Xu, M Shen
NeurIPS 2023 Workshop on I Can’t Believe It’s Not Better (ICBINB), 2023
Year: 2023

Prompting language models improves performance in imbalanced setting
J Mohta
Proceedings of The Fourth Workshop …, Association for Computational Linguistics, 2023
Year: 2023
Articles 1–4