Frederick Liu
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Gemini: a family of highly capable multimodal models
Gemini Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
The creation and analysis of a website privacy policy corpus
S Wilson, F Schaub, AA Dara, F Liu, S Cherivirala, PG Leon, ...
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
Estimating training data influence by tracing gradient descent
G Pruthi, F Liu, M Sundararajan, S Kale
Advances in Neural Information Processing Systems 33, 2020
Attention-based multimodal neural machine translation
PY Huang, F Liu, SR Shiang, J Oh, C Dyer
Proceedings of the First Conference on Machine Translation: Volume 2, Shared …, 2016
Crowdsourcing annotations for websites' privacy policies: Can it really work?
S Wilson, F Schaub, R Ramanath, N Sadeh, F Liu, NA Smith, F Liu
Proceedings of the 25th International Conference on World Wide Web, 133-143, 2016
Incorporating Priors with Feature Attribution on Text Classification
F Liu, B Avci
ACL 2019, 2019
TensorFlow model garden
H Yu, C Chen, X Du, Y Li, A Rashwan, L Hou, P Jin, F Yang, F Liu, J Kim, ...
Model Garden for TensorFlow, 2020
Learning character-level compositionality with visual features
F Liu, H Lu, C Lo, G Neubig
ACL 2017, 2017
Handling homographs in neural machine translation
F Liu, H Lu, G Neubig
NAACL 2018, 2017
Analyzing privacy policies at scale: From crowdsourcing to automated annotations
S Wilson, F Schaub, F Liu, KM Sathyendra, D Smullen, S Zimmeck, ...
ACM Transactions on the Web (TWEB) 13 (1), 1-29, 2018
Detecting errors and estimating accuracy on unlabeled data with self-training ensembles
J Chen, F Liu, B Avci, X Wu, Y Liang, S Jha
Advances in Neural Information Processing Systems 34, 14980-14992, 2021
Differentially-private "draw and discard" machine learning
V Pihur, A Korolova, F Liu, S Sankuratripati, M Yung, D Huang, R Zeng
arXiv preprint arXiv:1807.04369, 2018
Towards tracing factual knowledge in language models back to the training data
E Akyürek, T Bolukbasi, F Liu, B Xiong, I Tenney, J Andreas, K Guu
arXiv preprint arXiv:2205.11482, 2022
EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models
F Liu, T Huang, S Lyu, S Shakeri, H Yu, J Li
arXiv preprint arXiv:2110.08426, 2021
Leveraging redundancy in attention with reuse transformers
S Bhojanapalli, A Chakrabarti, A Veit, M Lukasik, H Jain, F Liu, YW Chang, ...
arXiv preprint arXiv:2110.06821, 2021
First is Better Than Last for Language Data Influence
CK Yeh, A Taly, M Sundararajan, F Liu, PK Ravikumar
Advances in Neural Information Processing Systems, 2022
Chefs' random tables: Non-trigonometric random features
V Likhosherstov, KM Choromanski, KA Dubey, F Liu, T Sarlos, A Weller
Advances in Neural Information Processing Systems 35, 34559-34573, 2022
Threading the needle of on and off-manifold value functions for shapley explanations
CK Yeh, KY Lee, F Liu, P Ravikumar
International Conference on Artificial Intelligence and Statistics, 1485-1502, 2022
Augmentation with projection: Towards an effective and efficient data augmentation paradigm for distillation
Z Wang, Y Wu, F Liu, D Liu, L Hou, H Yu, J Li, H Ji
arXiv preprint arXiv:2210.11768, 2022