Mark Chen
Research Scientist, OpenAI
Verified email at openai.com
Title | Cited by | Year
Language models are few-shot learners
T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ...
Advances in neural information processing systems 33, 1877-1901, 2020
Cited by 21919, 2020
Hierarchical text-conditional image generation with clip latents
A Ramesh, P Dhariwal, A Nichol, C Chu, M Chen
arXiv preprint arXiv:2204.06125, 2022
Cited by 3740, 2022
Zero-shot text-to-image generation
A Ramesh, M Pavlov, G Goh, S Gray, C Voss, A Radford, M Chen, ...
International Conference on Machine Learning, 8821-8831, 2021
Cited by 3276, 2021
Evaluating large language models trained on code
M Chen, J Tworek, H Jun, Q Yuan, HPO Pinto, J Kaplan, H Edwards, ...
arXiv preprint arXiv:2107.03374, 2021
Cited by 1815*, 2021
Glide: Towards photorealistic image generation and editing with text-guided diffusion models
A Nichol, P Dhariwal, A Ramesh, P Shyam, P Mishkin, B McGrew, ...
arXiv preprint arXiv:2112.10741, 2021
Cited by 1784, 2021
Generative pretraining from pixels
M Chen, A Radford, R Child, J Wu, H Jun, D Luan, I Sutskever
International conference on machine learning, 1691-1703, 2020
Cited by 1363, 2020
Training verifiers to solve math word problems
K Cobbe, V Kosaraju, M Bavarian, M Chen, H Jun, L Kaiser, M Plappert, ...
arXiv preprint arXiv:2110.14168, 2021
Cited by 887, 2021
Gpt-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
Cited by 291, 2023
Scaling laws for autoregressive generative modeling
T Henighan, J Kaplan, M Katz, M Chen, C Hesse, J Jackson, H Jun, ...
arXiv preprint arXiv:2010.14701, 2020
Cited by 214, 2020
Point-e: A system for generating 3d point clouds from complex prompts
A Nichol, H Jun, P Dhariwal, P Mishkin, M Chen
arXiv preprint arXiv:2212.08751, 2022
Cited by 209, 2022
Consistency models
Y Song, P Dhariwal, M Chen, I Sutskever
Cited by 198, 2023
DALL·E: Creating images from text
A Ramesh, M Pavlov, G Goh, S Gray, M Chen, R Child, V Misra, P Mishkin, ...
OpenAI blog, https://openai.com/blog/dall-e, 2021
Cited by 76, 2021
Efficient training of language models to fill in the middle
M Bavarian, H Jun, N Tezak, J Schulman, C McLeavey, J Tworek, M Chen
arXiv preprint arXiv:2207.14255, 2022
Cited by 73, 2022
Hierarchical text-conditional image generation with clip latents. arXiv 2022
A Ramesh, P Dhariwal, A Nichol, C Chu, M Chen
arXiv preprint arXiv:2204.06125, 2022
Cited by 67, 2022
Hierarchical text-conditional image generation with CLIP latents. arXiv
A Ramesh, P Dhariwal, A Nichol, C Chu, M Chen
arXiv preprint arXiv:2204.06125, 2022
Cited by 57, 2022
Distribution augmentation for generative modeling
H Jun, R Child, M Chen, J Schulman, A Ramesh, A Radford, I Sutskever
International Conference on Machine Learning, 5006-5019, 2020
Cited by 51, 2020
Language models are few-shot learners
B Mann, N Ryder, M Subbiah, J Kaplan, P Dhariwal, A Neelakantan, ...
arXiv preprint arXiv:2005.14165, 2020
Cited by 36, 2020
Using temporal correlations and full distributions to separate intrinsic and extrinsic fluctuations in biological systems
A Hilfinger, M Chen, J Paulsson
Physical review letters 109 (24), 248104, 2012
Cited by 20, 2012
Systems and methods for generating natural language using language models trained on computer code
M Chen, J Tworek, I Sutskever, W Zaremba, H Jun, HPO Pinto
US Patent App. 18/321,921, 2024
2024
Systems and methods for generating code using language models trained on computer code
M Chen, J Tworek, I Sutskever, W Zaremba, H Jun, HPO Pinto
US Patent App. 18/321,852, 2024
2024
Articles 1–20