Amir-massoud Farahmand
Vector Institute, University of Toronto
Verified email at vectorinstitute.ai
Title · Cited by · Year
Error propagation for approximate policy and value iteration
A Farahmand, C Szepesvári, R Munos
Advances in Neural Information Processing Systems (NeurIPS), 568-576, 2010
Cited by 167 · 2010
Regularized Policy Iteration
A Farahmand, M Ghavamzadeh, S Mannor, C Szepesvári
Advances in Neural Information Processing Systems 21 (NeurIPS 2008), 441-448, 2009
Cited by 162 · 2009
Learning from Limited Demonstrations
B Kim, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 2859-2867, 2013
Cited by 93 · 2013
Manifold-adaptive dimension estimation
A Farahmand, C Szepesvári, JY Audibert
Proceedings of the 24th International Conference on Machine Learning (ICML …, 2007
Cited by 93 · 2007
Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
American Control Conference (ACC), 725-730, 2009
Cited by 89* · 2009
Robust jacobian estimation for uncalibrated visual servoing
A Shademan, A Farahmand, M Jägersand
IEEE International Conference on Robotics and Automation (ICRA), 5564-5569, 2010
Cited by 68 · 2010
Regularized policy iteration with nonparametric function spaces
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Journal of Machine Learning Research (JMLR) 17 (1), 4809-4874, 2016
Cited by 63 · 2016
Global visual-motor estimation for uncalibrated visual servoing
A Farahmand, A Shademan, M Jagersand
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS …, 2007
Cited by 52* · 2007
Model Selection in Reinforcement Learning
AM Farahmand, C Szepesvári
Machine Learning 85 (3), 299-332, 2011
Cited by 50 · 2011
Value-aware loss function for model-based reinforcement learning
A Farahmand, A Barreto, D Nikovski
Artificial Intelligence and Statistics (AISTATS), 1486-1494, 2017
Cited by 44 · 2017
Action-Gap Phenomenon in Reinforcement Learning
AM Farahmand
Neural Information Processing Systems (NeurIPS), 2011
Cited by 37 · 2011
Regularization in Reinforcement Learning
AM Farahmand
Department of Computing Science, University of Alberta, 2011
Cited by 36 · 2011
Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions
DA Huang, AM Farahmand, KM Kitani, JA Bagnell
AAAI Conference on Artificial Intelligence (AAAI), 2015
Cited by 31 · 2015
Model-based and model-free reinforcement learning for visual servoing
A Farahmand, A Shademan, M Jagersand, C Szepesvári
IEEE International Conference on Robotics and Automation (ICRA), 2917-2924, 2009
Cited by 30* · 2009
Deep reinforcement learning for partial differential equation control
A Farahmand, S Nabi, DN Nikovski
American Control Conference (ACC), 3120-3127, 2017
Cited by 29 · 2017
Attentional network for visual object detection
K Hara, MY Liu, O Tuzel, A Farahmand
arXiv preprint arXiv:1702.01478, 2017
Cited by 26 · 2017
Regularized Fitted Q-iteration: Application to Bounded Resource Planning
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Cited by 26* · 2008
Regularized fitted Q-iteration: Application to planning
AM Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Recent Advances in Reinforcement Learning, 55-68, 2008
Cited by 26* · 2008
Iterative Value-Aware Model Learning
A Farahmand
Advances in Neural Information Processing Systems (NeurIPS), 9072-9083, 2018
Cited by 24 · 2018
Interaction of Culture-based Learning and Cooperative Co-evolution and its Application to Automatic Behavior-based System Design
AM Farahmand, MN Ahmadabadi, C Lucas, BN Araabi
IEEE Transactions on Evolutionary Computation 14 (1), 23-57, 2010
Cited by 24 · 2010