Benjamin Eysenbach
CMU, Google
Verified email at google.com - Homepage
Title
Cited by
Year
Diversity is all you need: Learning skills without a reward function
B Eysenbach, A Gupta, J Ibarz, S Levine
International Conference on Learning Representations, 2019
Cited by 320, 2019
Clustervision: Visual supervision of unsupervised clustering
BC Kwon, B Eysenbach, J Verma, K Ng, C De Filippi, WF Stewart, A Perer
IEEE Transactions on Visualization and Computer Graphics 24 (1), 142-151, 2017
Cited by 86, 2017
Self-consistent trajectory autoencoder: Hierarchical reinforcement learning with trajectory embeddings
JD Co-Reyes, YX Liu, A Gupta, B Eysenbach, P Abbeel, S Levine
International Conference on Machine Learning, 2018
Cited by 77, 2018
Search on the replay buffer: Bridging planning and reinforcement learning
B Eysenbach, RR Salakhutdinov, S Levine
Advances in Neural Information Processing Systems, 15246-15257, 2019
Cited by 69, 2019
Efficient exploration via state marginal matching
L Lee, B Eysenbach, E Parisotto, E Xing, S Levine, R Salakhutdinov
arXiv preprint arXiv:1906.05274, 2019
Cited by 66, 2019
Unsupervised meta-learning for reinforcement learning
A Gupta, B Eysenbach, C Finn, S Levine
arXiv preprint arXiv:1806.04640, 2018
Cited by 62, 2018
Leave No Trace: Learning to reset for safe and autonomous reinforcement learning
B Eysenbach, S Gu, J Ibarz, S Levine
International Conference on Learning Representations, 2018
Cited by 62, 2018
Unsupervised curricula for visual meta-reinforcement learning
A Jabri, K Hsu, A Gupta, B Eysenbach, S Levine, C Finn
Advances in Neural Information Processing Systems, 2019
Cited by 28, 2019
If MaxEnt RL is the Answer, What is the Question?
B Eysenbach, S Levine
arXiv preprint arXiv:1910.01913, 2019
Cited by 18, 2019
Learning to reach goals without reinforcement learning
D Ghosh, A Gupta, J Fu, A Reddy, C Devin, B Eysenbach, S Levine
arXiv preprint arXiv:1912.06088, 2019
Cited by 17*, 2019
Rewriting history with inverse RL: Hindsight inference for policy improvement
B Eysenbach, X Geng, S Levine, R Salakhutdinov
arXiv preprint arXiv:2002.11089, 2020
Cited by 10, 2020
Who is mistaken?
B Eysenbach, C Vondrick, A Torralba
arXiv preprint arXiv:1612.01175, 2016
Cited by 7, 2016
Learning to be Safe: Deep RL with a Safety Critic
K Srinivasan, B Eysenbach, S Ha, J Tan, C Finn
arXiv preprint arXiv:2010.14603, 2020
Cited by 5, 2020
Maximum entropy RL (provably) solves some robust RL problems
B Eysenbach, S Levine
arXiv preprint arXiv:2103.06257, 2021
Cited by 4, 2021
C-Learning: Learning to Achieve Goals via Recursive Classification
B Eysenbach, R Salakhutdinov, S Levine
arXiv preprint arXiv:2011.08909, 2020
Cited by 4, 2020
F-IRL: Inverse reinforcement learning via state marginal matching
T Ni, H Sikchi, Y Wang, T Gupta, L Lee, B Eysenbach
arXiv preprint arXiv:2011.04709, 2020
Cited by 3, 2020
Weakly-supervised reinforcement learning for controllable behavior
L Lee, B Eysenbach, R Salakhutdinov, SS Gu, C Finn
arXiv preprint arXiv:2004.02860, 2020
Cited by 3, 2020
Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
B Eysenbach, S Levine, R Salakhutdinov
arXiv preprint arXiv:2103.12656, 2021
Cited by 1, 2021
ViNG: Learning Open-World Navigation with Visual Goals
D Shah, B Eysenbach, G Kahn, N Rhinehart, S Levine
arXiv preprint arXiv:2012.09812, 2020
Cited by 1, 2020
Interactive Visualization for Debugging RL
S Deshpande, B Eysenbach, J Schneider
arXiv preprint arXiv:2008.07331, 2020
Cited by 1, 2020
Articles 1–20