Nathan Scales
Cited by
Large language models encode clinical knowledge
K Singhal, S Azizi, T Tu, SS Mahdavi, J Wei, HW Chung, N Scales, ...
Nature 620 (7972), 172-180, 2023
Least-to-most prompting enables complex reasoning in large language models
D Zhou, N Schärli, L Hou, J Wei, N Scales, X Wang, D Schuurmans, C Cui, ...
arXiv preprint arXiv:2205.10625, 2022
Measuring compositional generalization: A comprehensive method on realistic data
D Keysers, N Schärli, N Scales, H Buisman, D Furrer, S Kashubin, ...
arXiv preprint arXiv:1912.09713, 2019
Challenging BIG-Bench tasks and whether chain-of-thought can solve them
M Suzgun, N Scales, N Schärli, S Gehrmann, Y Tay, HW Chung, ...
arXiv preprint arXiv:2210.09261, 2022
Large language models can be easily distracted by irrelevant context
F Shi, X Chen, K Misra, N Scales, D Dohan, EH Chi, N Schärli, D Zhou
International Conference on Machine Learning, 31210-31227, 2023
Compositional generalization in semantic parsing: Pre-training vs. specialized architectures
D Furrer, M van Zee, N Scales, N Schärli
arXiv preprint arXiv:2007.08970, 2020
Compositional semantic parsing with large language models
A Drozdov, N Schärli, E Akyürek, N Scales, X Song, X Chen, O Bousquet, ...
arXiv preprint arXiv:2209.15003, 2022
*-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task
D Tsarkov, T Tihon, N Scales, N Momchev, D Sinopalnikov, N Schärli
Proceedings of the AAAI Conference on Artificial Intelligence 35 (11), 9949-9957, 2021
Prompting Machine-Learned Models Using Chains of Thought
JW Wei, D Zhou, DE Schuurmans, QV Le, MP Bosma, EHH Chi, ...
US Patent App. 17/881,746, 2023
Using Chains of Thought to Prompt Machine-Learned Models Pre-Trained on Diversified Objectives
JW Wei, D Zhou, X Wang, DE Schuurmans, QV Le, MP Bosma, EHH Chi, ...
US Patent App. 18/160,776, 2023
Conceptual SCAN: Learning With and About Rules
N Scales, N Schärli, A Babiker, YH Liu, M Dehghani, O Bousquet