Asa Cooper Stickland
Postdoctoral Researcher, New York University
Verified email at ed.ac.uk - Homepage
Title
Cited by
Year
BERT and PALs: Projected attention layers for efficient adaptation in multi-task learning
AC Stickland, I Murray
International Conference on Machine Learning, 5986-5995, 2019
277  2019
The reversal curse: LLMs trained on "A is B" fail to learn "B is A"
L Berglund, M Tong, M Kaufmann, M Balesni, AC Stickland, T Korbak, ...
arXiv preprint arXiv:2309.12288, 2023
72  2023
GPQA: A graduate-level Google-proof Q&A benchmark
D Rein, BL Hou, AC Stickland, J Petty, RY Pang, J Dirani, J Michael, ...
arXiv preprint arXiv:2311.12022, 2023
46  2023
Recipes for adapting pre-trained monolingual and multilingual models to machine translation
AC Stickland, X Li, M Ghazvininejad
arXiv preprint arXiv:2004.14911, 2020
38  2020
Multilingual domain adaptation for NMT: Decoupling language and domain information with adapters
AC Stickland, A Berard, V Nikoulina
arXiv preprint arXiv:2110.09574, 2021
26  2021
Deep transformers with latent depth
X Li, A Cooper Stickland, Y Tang, X Kong
Advances in Neural Information Processing Systems 33, 1736-1746, 2020
22  2020
Diverse ensembles improve calibration
AC Stickland, I Murray
arXiv preprint arXiv:2007.04206, 2020
21  2020
Taken out of context: On measuring situational awareness in LLMs
L Berglund, AC Stickland, M Balesni, M Kaufmann, M Tong, T Korbak, D Kokotajlo, O Evans
arXiv preprint arXiv:2309.00667, 2023
15  2023
Taken out of context: On measuring situational awareness in LLMs
L Berglund, AC Stickland, M Balesni, M Kaufmann, M Tong, T Korbak, ...
arXiv preprint arXiv:2309.00667, 2023
9  2023
When does Parameter-Efficient Transfer Learning Work for Machine Translation?
A Üstün, AC Stickland
arXiv preprint arXiv:2205.11277, 2022
7  2022
Taken out of context: On measuring situational awareness in LLMs
L Berglund, AC Stickland, M Balesni, M Kaufmann, M Tong, T Korbak, D Kokotajlo, O Evans
arXiv preprint arXiv:2309.00667, 2023
6  2023
Robustification of multilingual language models to real-world noise in crosslingual zero-shot settings with robust contrastive pretraining
AC Stickland, S Sengupta, J Krone, S Mansour, H He
arXiv preprint arXiv:2210.04782, 2022
6  2022
Robustification of Multilingual Language Models to Real-world Noise with Robust Contrastive Pretraining.
AC Stickland, S Sengupta, J Krone, S Mansour, H He
arXiv preprint arXiv:2210.04782, 2022
1  2022
Regularising Fisher Information Improves Cross-lingual Generalisation
AC Stickland, I Murray
Proceedings of the 1st Workshop on Multilingual Representation Learning, 238-241, 2021
1  2021
Targeted Latent Adversarial Training Improves Robustness to Persistent Harmful Behaviors in LLMs
A Sheshadri, A Ewart, P Guo, A Lynch, C Wu, V Hebbar, H Sleight, ...
arXiv preprint arXiv:2407.15549, 2024
2024
Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs
S Price, A Panickssery, S Bowman, AC Stickland
arXiv preprint arXiv:2407.04108, 2024
2024
Steering Without Side Effects: Improving Post-Deployment Control of Language Models
AC Stickland, A Lyzhov, J Pfau, S Mahdi, SR Bowman
arXiv preprint arXiv:2406.15518, 2024
2024
BERT and PALs: Projected Attention Layers
AC Stickland, I Murray