PPO Huggy Training 🐶
My GitHub Repo: PardhuSreeRushiVarma20060119/HuggingFace-Training/ppo-HuggyTraining
This repository contains a trained Proximal Policy Optimization (PPO) agent playing the Huggy environment, built using the Unity ML-Agents library and integrated with the Hugging Face Hub.
The goal of this project was to explore reinforcement learning (RL) with Unity environments and make the trained agent accessible and interactive through Hugging Face.
📖 Usage
If you're new to ML-Agents, check out the official ML-Agents Documentation for setup, installation, and training details.
You can also dive into Hugging Face's Deep RL Course for step-by-step guidance on how to train agents and upload them to the Hub.
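If you train your own run, one lightweight way to share it is the `huggingface_hub` Python client. The sketch below is illustrative only: the output folder `./results/Huggy` and the repo id `VarmaHF/ppo-HuggyTraining` are assumptions, and the Deep RL Course also documents dedicated ML-Agents tooling for pushing runs to the Hub.

```python
# Minimal sketch: publishing a finished ML-Agents run to the Hugging Face Hub
# with the plain `huggingface_hub` client. The folder path and repo id below
# are placeholders (assumptions), not the exact ones used for this repository.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login` / HF_TOKEN

# Create the model repository if it does not exist yet.
api.create_repo(repo_id="VarmaHF/ppo-HuggyTraining", repo_type="model", exist_ok=True)

# Upload the local results folder produced by training
# (ONNX checkpoints, config, TensorBoard logs).
api.upload_folder(
    folder_path="./results/Huggy",        # assumed ML-Agents output directory
    repo_id="VarmaHF/ppo-HuggyTraining",  # assumed Hub repo id
    repo_type="model",
    commit_message="Upload trained PPO Huggy agent",
)
```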
🎮 Play with your Huggy 🐶
This is the fun part! You can interactively play with your trained Huggy agent directly in your browser:
👉 Open the Huggy game here: Huggy on Hugging Face Spaces
- Click "Play with my Huggy model"
- In Step 1, enter your exact Hugging Face username (case-sensitive). Example:
VarmaHF
- In Step 2, select the repository:
ppo-HuggyTraining
- In Step 3, choose the model checkpoint you want to replay.
💡 During training, multiple model checkpoints were saved (e.g., every 200,000 timesteps).
You can try different versions to observe how Huggy improves over time.
For example, the most recent model file is: Huggy.onnx
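If you prefer to grab a checkpoint locally instead of replaying it in the browser, a minimal sketch with `huggingface_hub` and `onnxruntime` looks like this. The repo id is assembled from the username and repository shown in Steps 1 and 2 above and is an assumption; replaying the policy inside the Huggy environment itself still requires Unity ML-Agents.

```python
# Minimal sketch: download the Huggy.onnx checkpoint from the Hub and inspect
# its input/output signature. The repo id is an assumption built from the
# "<username>/<repository>" values entered in Steps 1 and 2.
from huggingface_hub import hf_hub_download
import onnxruntime as ort

model_path = hf_hub_download(
    repo_id="VarmaHF/ppo-HuggyTraining",  # assumed "<username>/<repository>"
    filename="Huggy.onnx",                # the checkpoint named above
)

session = ort.InferenceSession(model_path)
for tensor in session.get_inputs():
    print("input :", tensor.name, tensor.shape)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape)
```

This only confirms what the exported policy expects as observations and produces as actions; the interactive replay happens in the Unity build behind the Space.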
⚙️ Training Setup
- Environment: Huggy (Unity ML-Agents)
- Algorithm: PPO (Proximal Policy Optimization); an illustrative trainer configuration is sketched just after this list
- Frameworks: Unity ML-Agents + Hugging Face Hub
- Integration: Model packaged and uploaded to Hugging Face for sharing and deployment
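For context, ML-Agents training runs are driven by a YAML trainer configuration. The sketch below builds a typical PPO configuration for Huggy in Python and writes it to `Huggy.yaml`; every hyperparameter value is illustrative rather than the exact setting used for this run, with the checkpoint interval chosen to match the 200,000-step note above.

```python
# Illustrative sketch of an ML-Agents PPO trainer config for Huggy, written out
# as the YAML file the trainer expects. All hyperparameter values here are
# typical/illustrative, not the exact settings used for this repository.
import yaml  # pip install pyyaml

huggy_config = {
    "behaviors": {
        "Huggy": {
            "trainer_type": "ppo",
            "hyperparameters": {
                "batch_size": 2048,
                "buffer_size": 20480,
                "learning_rate": 3.0e-4,
                "beta": 5.0e-3,
                "epsilon": 0.2,
                "lambd": 0.95,
                "num_epoch": 3,
                "learning_rate_schedule": "linear",
            },
            "network_settings": {
                "normalize": True,
                "hidden_units": 512,
                "num_layers": 3,
            },
            "reward_signals": {
                "extrinsic": {"gamma": 0.995, "strength": 1.0},
            },
            "checkpoint_interval": 200000,  # matches the checkpoint note above
            "max_steps": 2_000_000,
            "time_horizon": 1000,
            "summary_freq": 50000,
        }
    }
}

# Write the config that `mlagents-learn Huggy.yaml --run-id=HuggyTraining` would consume.
with open("Huggy.yaml", "w") as f:
    yaml.safe_dump(huggy_config, f, sort_keys=False)
```

The `checkpoint_interval` key is what controls how often the intermediate `.onnx` checkpoints mentioned above are written.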
📊 Results
The PPO agent was trained successfully and learned to play in the Huggy environment.
Thanks to the Hugging Face integration, you can:
- Preview the trained agent
- Replay and test different checkpoints
- Interactively compare performance improvements over training
🔗 References
- Unity ML-Agents Documentation
- Hugging Face Deep RL Course
✨ Enjoy training, exploring, and playing with Huggy! 🐶