Gyaneshere committed
Commit 0fb9f0f · verified · 1 Parent(s): d438ef5

Added ReadMe

Added ReadMe file

Files changed (1)
  1. README.md +29 -3
README.md CHANGED
@@ -1,3 +1,29 @@
- ---
- license: apache-2.0
- ---
+ ---
+ tags:
+ - unity-ml-agents
+ - ml-agents
+ - deep-reinforcement-learning
+ - reinforcement-learning
+ - ML-Agents-SoccerTwos
+ library_name: ml-agents
+ ---
+
+ # **poca** Agent playing **SoccerTwos**
+ This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
+
+ ## Usage (with ML-Agents)
+ The Documentation: https://github.com/huggingface/ml-agents#get-started
+ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
+
+ ### Resume the training
+ ```
+ mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
+ ```
+ ### Watch your Agent play
+ You can watch your agent **playing directly in your browser**:
+
+ 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
+ 2. Write your model_id: caioiglesias/ML-Agents-SoccerTwos
+ 3. Select your *.nn / *.onnx file
+ 4. Click on Watch the agent play 👀
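
The get-started documentation linked in the Usage section covers installation. As a minimal sketch, the commands below follow the upstream Unity ML-Agents install instructions; the clone URL and editable-install layout are assumptions, and the Hugging Face fork's exact steps may differ, so treat the linked docs as authoritative:

```
# Minimal local setup sketch (assumed layout; see the get-started docs for the authoritative steps)
git clone https://github.com/Unity-Technologies/ml-agents.git
cd ml-agents
pip install -e ./ml-agents-envs
pip install -e ./ml-agents
```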
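As a concrete illustration of the "Resume the training" command, here is a sketch that assumes the `config/poca/SoccerTwos.yaml` configuration shipped with the ML-Agents repository and a hypothetical run id `SoccerTwos`; substitute the configuration file and run id you actually trained with:

```
# Resume an interrupted run (config path and run id are assumptions; adjust to your setup)
mlagents-learn ./config/poca/SoccerTwos.yaml --run-id=SoccerTwos --resume
```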
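If you want the trained `.onnx` file locally (for example, to select it in the Space listed under "Watch your Agent play"), the Hugging Face ML-Agents integration provides a download command; a sketch, where the local directory name is an arbitrary choice:

```
# Download this repo's trained model files from the Hub into a local folder
mlagents-load-from-hf --repo-id="caioiglesias/ML-Agents-SoccerTwos" --local-dir="./downloads/SoccerTwos"
```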