| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Self-refine: Iterative refinement with self-feedback | A Madaan, N Tandon, P Gupta, S Hallinan, L Gao, S Wiegreffe, U Alon, ... | Advances in Neural Information Processing Systems 36, 46534-46594 | 1343 | 2023 |
| PAL: Program-aided Language Models | L Gao, A Madaan, S Zhou, U Alon, P Liu, Y Yang, J Callan, G Neubig | ICML 2023 | 789 | 2022 |
| Active retrieval augmented generation | Z Jiang, FF Xu, L Gao, Z Sun, Q Liu, J Dwivedi-Yu, Y Yang, J Callan, ... | EMNLP 2023 | 472 | 2023 |
| Unsupervised corpus aware language model pre-training for dense passage retrieval | L Gao, J Callan | ACL 2022 | 344 | 2021 |
| RARR: Researching and revising what language models say, using language models | L Gao, Z Dai, P Pasupat, A Chen, AT Chaganty, Y Fan, VY Zhao, N Lao, ... | arXiv preprint arXiv:2210.08726 | 274 | 2022 |
| Precise Zero-Shot Dense Retrieval without Relevance Labels | L Gao, X Ma, J Lin, J Callan | ACL 2023 | 270 | 2022 |
| Condenser: a Pre-training Architecture for Dense Retrieval | L Gao, J Callan | EMNLP 2021 | 266 | 2021 |
| COIL: Revisit Exact Lexical Match in Information Retrieval with Contextualized Inverted List | L Gao, Z Dai, J Callan | NAACL 2021 | 248 | 2021 |
| Complementing lexical retrieval with semantic residual embedding | L Gao, Z Dai, T Chen, Z Fan, B Van Durme, J Callan | ECIR 2021 | 198* | 2020 |
| Rethink training of BERT rerankers in multi-stage retrieval pipeline | L Gao, Z Dai, J Callan | ECIR 2021 | 134 | 2021 |
| Scaling deep contrastive learning batch size under memory limited setup | L Gao, Y Zhang, J Han, J Callan | arXiv preprint arXiv:2101.06983 | 96 | 2021 |
| Tevatron: An efficient and flexible toolkit for dense retrieval | L Gao, X Ma, J Lin, J Callan | arXiv preprint arXiv:2203.05765 | 91* | 2022 |
| Rapid and accurate determination of nanopore ionic current using a steric exclusion model | J Wilson, K Sarthak, W Si, L Gao, A Aksimentiev | ACS Sensors 4 (3), 634-644 | 79 | 2019 |
| Modularized transformer-based ranking framework | L Gao, Z Dai, J Callan | EMNLP 2020 | 64* | 2020 |
| Understanding BERT rankers under distillation | L Gao, Z Dai, J Callan | ICTIR 2020 | 52 | 2020 |
| Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer | Z Jiang, L Gao, J Araki, H Ding, Z Wang, J Callan, G Neubig | EMNLP 2022 | 36 | 2022 |
| In-context principle learning from mistakes | T Zhang, A Madaan, L Gao, S Zheng, S Mishra, Y Yang, N Tandon, ... | arXiv preprint arXiv:2402.05403 | 26 | 2024 |
| FLAME: Factuality-aware alignment for large language models | SC Lin, L Gao, B Oguz, W Xiong, J Lin, S Yih, X Chen | Advances in Neural Information Processing Systems 37, 115588-115614 | 22 | 2024 |
| DataFinder: Scientific dataset recommendation from natural language descriptions | V Viswanathan, L Gao, T Wu, P Liu, G Neubig | arXiv preprint arXiv:2305.16636 | 15 | 2023 |