Nathan Hoyen Ng
fairseq: A fast, extensible toolkit for sequence modeling
M Ott, S Edunov, A Baevski, A Fan, S Gross, N Ng, D Grangier, M Auli
arXiv preprint arXiv:1904.01038, 2019
Facebook FAIR's WMT19 news translation task submission
N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov
arXiv preprint arXiv:1907.06616, 2019
SSMBA: Self-supervised manifold based data augmentation for improving out-of-domain robustness
N Ng, K Cho, M Ghassemi
arXiv preprint arXiv:2009.10195, 2020
Simple and effective noisy channel modeling for neural machine translation
K Yee, N Ng, YN Dauphin, M Auli
arXiv preprint arXiv:1908.05731, 2019
If influence functions are the answer, then what is the question?
J Bae, N Ng, A Lo, M Ghassemi, RB Grosse
Advances in Neural Information Processing Systems 35, 17953-17967, 2022
Predicting surgery duration with neural heteroscedastic regression
NH Ng, RA Gabriel, J McAuley, C Elkan, ZC Lipton
Machine Learning for Healthcare Conference, 100-111, 2017
Embryo staging with weakly-supervised region selection and dynamically-decoded predictions
T Lau, N Ng, J Gingold, N Desai, J McAuley, ZC Lipton
Machine Learning for Healthcare Conference, 663-679, 2019
Predicting Out-of-Domain Generalization with Neighborhood Invariance
N Ng, N Hulkund, K Cho, M Ghassemi
arXiv preprint arXiv:2207.02093, 2022
Improving dialogue breakdown detection with semi-supervised learning
N Ng, M Ghassemi, N Thangarajan, J Pan, Q Guo
arXiv preprint arXiv:2011.00136, 2020
Improving Black-box Robustness with In-Context Rewriting
K O'Brien, N Ng, I Puri, J Mendez, H Palangi, Y Kim, M Ghassemi, ...
arXiv preprint arXiv:2402.08225, 2024
Blind Biological Sequence Denoising with Self-Supervised Set Learning
NH Ng, JW Park, JH Lee, RL Kelly, S Ra, K Cho
Transactions on Machine Learning Research, 2023