What are the benefits and drawbacks of using curiosity-driven exploration in model-free RL?
Curiosity-driven exploration is a way of improving how model-free reinforcement learning (RL) agents explore their environment. In model-free RL, the agent has no model of the environment's dynamics and learns purely from trial and error. Curiosity-driven exploration adds an intrinsic reward that pushes the agent toward novel and informative states and actions, instead of relying only on the predefined extrinsic reward, which is especially helpful when that reward is sparse or delayed. In this article, you will learn about the benefits and drawbacks of using curiosity-driven exploration in model-free RL.
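To make the idea concrete, here is a minimal sketch of one common form of curiosity bonus: the prediction error of a forward dynamics model, added on top of the extrinsic reward. The function name `curiosity_bonus`, the fixed linear forward model, and the `scale` parameter are illustrative assumptions, not part of any specific library; a real agent would learn the forward model alongside its policy.

```python
import numpy as np

def curiosity_bonus(forward_model, state, action, next_state, scale=0.5):
    # Intrinsic reward: the worse the agent predicts the next state,
    # the more "surprising" (novel) the transition is assumed to be.
    predicted = forward_model(state, action)
    return scale * float(np.mean((predicted - next_state) ** 2))

# Toy setup (illustrative only): a fixed linear forward model.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
forward_model = lambda s, a: s @ W + a

state = rng.normal(size=4)
action = rng.normal(size=4)
next_state = rng.normal(size=4)

extrinsic_reward = 0.0  # e.g. a sparse task reward that is usually zero
intrinsic_reward = curiosity_bonus(forward_model, state, action, next_state)
total_reward = extrinsic_reward + intrinsic_reward

print(f"intrinsic bonus: {intrinsic_reward:.3f}, total reward: {total_reward:.3f}")
```

In practice, the agent trains on `total_reward`, so even when the task reward is zero it still receives a learning signal that steers it toward transitions its forward model cannot yet predict.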