Solving LunarLander-v2 and CartPole-v1 Environments using Actor-Critic

Goal:

  • The goal is to understand policy gradient algorithms, develop the TD actor-critic algorithm, and apply it to solve OpenAI Gym environments.

Part 1:

  • Implemented the TD actor-critic algorithm from scratch.
    • Defined three separate networks: actor, critic, and policy.
    • The actor takes actions based on the evaluation provided by the critic network, while the policy network predicts the action probabilities the actor samples from (a minimal sketch follows this list).
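
A minimal sketch of the one-step TD actor-critic update. The framework (PyTorch), network sizes, hyperparameters, and the classic Gym API are assumptions for illustration, not the repository's exact implementation; the sketch also collapses the actor and policy into a single policy network for brevity.

```python
import gym
import torch
import torch.nn as nn

# Assumed setup: CartPole-v1 and the classic Gym API where env.step
# returns (obs, reward, done, info); hyperparameters are illustrative.
env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# Policy/actor network: outputs action probabilities.
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                       nn.Linear(64, n_actions), nn.Softmax(dim=-1))
# Critic network: estimates the state value V(s).
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))

policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    state, done = env.reset(), False
    while not done:
        s = torch.as_tensor(state, dtype=torch.float32)
        action = torch.distributions.Categorical(policy(s)).sample()

        next_state, reward, done, _ = env.step(action.item())
        s_next = torch.as_tensor(next_state, dtype=torch.float32)

        # One-step TD error: delta = r + gamma * V(s') - V(s).
        with torch.no_grad():
            target = reward + gamma * critic(s_next) * (1 - done)
        delta = target - critic(s)

        # Critic minimizes the squared TD error.
        critic_loss = delta.pow(2).mean()
        critic_opt.zero_grad()
        critic_loss.backward()
        critic_opt.step()

        # Actor follows the policy gradient, weighted by the TD error.
        log_prob = torch.log(policy(s)[action])
        actor_loss = -(log_prob * delta.detach()).mean()
        policy_opt.zero_grad()
        actor_loss.backward()
        policy_opt.step()

        state = next_state
```

Using the TD error as the advantage estimate is what distinguishes this one-step TD actor-critic from Monte-Carlo policy gradient methods such as REINFORCE.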

Part 2:

  • Solved the Grid-world, OpenAI Gym 'LunarLander-v2', and 'CartPole-v1' environments (refer to the report for results); a minimal evaluation-loop sketch follows.
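
For reference, a minimal evaluation loop for one of these environments using the classic Gym API; `select_action` is a hypothetical placeholder for the trained actor, and LunarLander-v2 additionally requires the Box2D dependency.

```python
import gym

env = gym.make("LunarLander-v2")

def select_action(state):
    # Hypothetical stand-in for the trained actor/policy network;
    # here it just samples a random action.
    return env.action_space.sample()

for episode in range(5):
    state, done, total_reward = env.reset(), False, 0.0
    while not done:
        action = select_action(state)
        state, reward, done, _ = env.step(action)
        total_reward += reward
    print(f"Episode {episode}: return = {total_reward:.1f}")
```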

Insights:

  • The actor-critic algorithm performed well on Grid-world and converged quickly. For Lunar Lander, I tried different setups with varying numbers of hidden nodes and optimizer learning rates; smaller learning rates gave better results.

  • On CartPole, the actor-critic took around 450 episodes to converge, whereas DQN converged in fewer than 100 episodes. The actor-critic therefore spends a lot of time exploring and converges more slowly than value-based function-approximation methods.
