---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 370.25 +/- 8.94
      name: mean_reward
      verified: true
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## **Metrics**

> The trained PPO agent achieves a mean reward of *370.25* ± *8.94* on **LunarLander-v2**. This means it consistently lands successfully, demonstrating both high performance and stability across multiple evaluation episodes.

## Usage (with Stable-baselines3)

Check **lunarlanding.ipynb** for the full code.

```python
from huggingface_sb3 import load_from_hub, package_to_hub
from huggingface_hub import notebook_login
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
import gymnasium as gym

# First, we create our environment called LunarLander-v2
env = gym.make("LunarLander-v2")

# Then we reset this environment
observation, info = env.reset()

for _ in range(20):
    # Take a random action
    action = env.action_space.sample()
    print("Action taken:", action)

    # Do this action in the environment and get
    # next_state, reward, terminated, truncated and info
    observation, reward, terminated, truncated, info = env.step(action)

    # If the episode is terminated (in our case: we landed or crashed) or truncated (timeout)
    if terminated or truncated:
        # Reset the environment
        print("Environment is reset")
        observation, info = env.reset()

env.close()

# We create our environment with gym.make()
env = gym.make("LunarLander-v2")
env.reset()
print("_____OBSERVATION SPACE_____ \n")
print("Observation Space Shape", env.observation_space.shape)
print("Sample observation", env.observation_space.sample())  # Get a random observation

print("\n _____ACTION SPACE_____ \n")
print("Action Space Shape", env.action_space.n)
print("Action Space Sample", env.action_space.sample())  # Take a random action

# Create a vectorized environment (16 parallel environments) for training
env = make_vec_env("LunarLander-v2", n_envs=16)

# Instantiate the agent with an MLP policy
model = PPO(
    policy="MlpPolicy",
    env=env,
    n_steps=1024,
    batch_size=64,
    n_epochs=4,
    gamma=0.999,
    gae_lambda=0.98,
    ent_coef=0.01,
    verbose=1,
)

# Train it for 1,000,000 timesteps
model.learn(total_timesteps=1000000)

# Save the model
model_name = "ppo-LunarLander-v2"
model.save(model_name)

# Evaluate the trained agent on a fresh, monitored environment
eval_env = Monitor(gym.make("LunarLander-v2", render_mode="rgb_array"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```

> Trained as part of the "LunarLander" task from the Hugging Face Deep Reinforcement Learning Course.
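## Load the model from the Hub

The script above imports `load_from_hub`, `package_to_hub`, and `notebook_login` but only trains and saves the model locally. If the checkpoint is pushed to the Hub (for example with `package_to_hub` from `huggingface_sb3`), it can be reloaded and re-evaluated as sketched below. This is a minimal sketch, not part of the original notebook: the repo id `your-username/ppo-LunarLander-v2` is a placeholder, and the `custom_objects` workaround is only needed when the checkpoint was saved with a different Python/SB3 version.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# Download the checkpoint from the Hub (placeholder repo id; replace with your own)
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)

# Optional workaround for pickle incompatibilities between SB3/Python versions
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}

model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)

# Evaluate the reloaded agent
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```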