Classifying relations via long short term memory networks along shortest dependency paths. Y Xu, L Mou, G Li, Y Chen, H Peng, Z Jin. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015. Cited by 792.
A convolutional attention network for extreme summarization of source code. M Allamanis, H Peng, C Sutton. International Conference on Machine Learning (ICML), pp. 2091-2100, 2016. Cited by 742.
Random feature attention. H Peng, N Pappas, D Yogatama, R Schwartz, NA Smith, L Kong. arXiv preprint arXiv:2103.02143, 2021. Cited by 347.
Complexity-based prompting for multi-step reasoning. Y Fu, H Peng, A Sabharwal, P Clark, T Khot. The Eleventh International Conference on Learning Representations (ICLR), 2022. Cited by 313.
Contextualized perturbation for textual adversarial attack. D Li, Y Zhang, H Peng, L Chen, C Brockett, MT Sun, B Dolan. arXiv preprint arXiv:2009.07502, 2020. Cited by 244.
Specializing smaller language models towards multi-step reasoning. Y Fu, H Peng, L Ou, A Sabharwal, T Khot. International Conference on Machine Learning (ICML), pp. 10421-10430, 2023. Cited by 181.
Deep encoder, shallow decoder: reevaluating non-autoregressive machine translation. J Kasai, N Pappas, H Peng, J Cross, NA Smith. arXiv preprint arXiv:2006.10369, 2020. Cited by 175.
Building program vector representations for deep learning. H Peng, L Mou, G Li, Y Liu, L Zhang, Z Jin. Knowledge Science, Engineering and Management: 8th International Conference (KSEM), 2015. Cited by 173.
Discriminative neural sentence modeling by tree-based convolution. L Mou, H Peng, G Li, Y Xu, L Zhang, Z Jin. arXiv preprint arXiv:1504.01106, 2015. Cited by 159.
Deep multitask learning for semantic dependency parsing. H Peng, S Thomson, NA Smith. arXiv preprint arXiv:1704.06855, 2017. Cited by 150.
Improving language model negotiation with self-play and in-context learning from AI feedback. Y Fu, H Peng, T Khot, M Lapata. arXiv preprint arXiv:2305.10142, 2023. Cited by 121.
LM-Infinite: simple on-the-fly length generalization for large language models. C Han, Q Wang, W Xiong, Y Chen, H Ji, S Wang. arXiv preprint arXiv:2308.16137, 2023. Cited by 95.
MINT: evaluating LLMs in multi-turn interaction with tools and language feedback. X Wang, Z Wang, J Liu, Y Chen, L Yuan, H Peng, H Ji. arXiv preprint arXiv:2309.10691, 2023. Cited by 86.
Tailor: generating and perturbing text with semantic controls. A Ross, T Wu, H Peng, ME Peters, M Gardner. arXiv preprint arXiv:2107.07150, 2021. Cited by 80.
Classifying relations via long short term memory networks along shortest dependency paths. Y Xu, L Mou, G Li, Y Chen, H Peng, Z Jin. arXiv preprint arXiv:1508.03720, 2015. Cited by 75.
Learning joint semantic parsers from disjoint data. H Peng, S Thomson, S Swayamdipta, NA Smith. arXiv preprint arXiv:1804.05990, 2018. Cited by 71.
Text generation with exemplar-based adaptive decoding. H Peng, AP Parikh, M Faruqui, B Dhingra, D Das. arXiv preprint arXiv:1904.04428, 2019. Cited by 67.
Executable code actions elicit better LLM agents. X Wang, Y Chen, L Yuan, Y Zhang, Y Li, H Peng, H Ji. arXiv preprint arXiv:2402.01030, 2024. Cited by 61.
How does GPT obtain its ability? Tracing emergent abilities of language models to their sources. Y Fu, H Peng, T Khot. Yao Fu's Notion, 2022. Cited by 56.
Data engineering for scaling language models to 128K context. Y Fu, R Panda, X Niu, X Yue, H Hajishirzi, Y Kim, H Peng. arXiv preprint arXiv:2402.10171, 2024. Cited by 51.