
Procgen Benchmark

This dataset contains expert trajectories generated by a PPO reinforcement learning agent trained on each of the 16 procedurally-generated gym environments from the Procgen Benchmark. The environments were created on distribution_mode=easy and with unlimited levels.

Disclaimer: This is not an official repository from OpenAI.

Dataset Usage

Regular usage (for environment bigfish):

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```

Usage with PyTorch (for environment bossfight):

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```
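Once formatted for PyTorch, the dataset can be batched with a standard `DataLoader`. The sketch below uses a small in-memory stand-in with the same per-step shapes as this dataset (64×64×3 uint8 observations, int32 actions in [0, 14]) so it runs without downloading anything; the real `train_dataset` would be passed to `DataLoader` the same way:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in mimicking 100 Procgen steps: 64x64 RGB observations
# (uint8) and discrete actions in [0, 14] (int32).
obs = torch.randint(0, 256, (100, 64, 64, 3), dtype=torch.uint8)
act = torch.randint(0, 15, (100,), dtype=torch.int32)

loader = DataLoader(TensorDataset(obs, act), batch_size=32, shuffle=True)
batch_obs, batch_act = next(iter(loader))
print(batch_obs.shape)  # torch.Size([32, 64, 64, 3])
```

The default collate function stacks the per-step tensors along a new batch dimension, which is usually what an imitation-learning training loop expects.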

Agent Performance

The PPO RL agent was trained for 25M steps on each environment and obtained the following final performance metrics on the evaluation environment. These values attain or surpass the performance described in "Easy Difficulty Baseline Results" in Appendix I of the Procgen paper.

| Environment | Steps (Train) | Steps (Test) | Return |
|-------------|---------------|--------------|--------|
| bigfish     | 9,000,000     | 1,000,000    | 33.79  |
| bossfight   | 9,000,000     | 1,000,000    | 11.47  |
| caveflyer   | 9,000,000     | 1,000,000    | 9.42   |
| chaser      | 9,000,000     | 1,000,000    | 10.55  |
| climber     | 9,000,000     | 1,000,000    | 11.30  |
| coinrun     | 9,000,000     | 1,000,000    | 9.02   |
| dodgeball   | 9,000,000     | 1,000,000    | 13.90  |
| fruitbot    | 9,000,000     | 1,000,000    | 31.58  |
| heist       | 9,000,000     | 1,000,000    | 8.32   |
| jumper      | 9,000,000     | 1,000,000    | 8.10   |
| leaper      | 9,000,000     | 1,000,000    | 6.32   |
| maze        | 9,000,000     | 1,000,000    | 9.95   |
| miner       | 9,000,000     | 1,000,000    | 12.02  |
| ninja       | 9,000,000     | 1,000,000    | 9.32   |
| plunder     | 9,000,000     | 1,000,000    | 24.18  |
| starpilot   | 9,000,000     | 1,000,000    | 49.79  |

Dataset Structure

Data Instances

Each data instance represents a single environment step and is a tuple of the form (observation, action, reward, terminated, truncated) = (o_t, a_t, r_t, terminated_t, truncated_t).

```python
{'action': 1,
 'observation': [[[0, 166, 253],
                  [0, 174, 255],
                  [0, 170, 251],
                  [0, 191, 255],
                  [0, 191, 255],
                  [0, 221, 255],
                  [0, 243, 255],
                  [0, 248, 255],
                  [0, 243, 255],
                  [10, 239, 255],
                  [25, 255, 255],
                  [0, 241, 255],
                  [0, 235, 255],
                  [17, 240, 255],
                  [10, 243, 255],
                  [27, 253, 255],
                  [39, 255, 255],
                  [58, 255, 255],
                  [85, 255, 255],
                  [111, 255, 255],
                  [135, 255, 255],
                  [151, 255, 255],
                  [173, 255, 255],
...
                  [0, 0, 37],
                  [0, 0, 39]]],
 'reward': 0.0,
 'terminated': False,
 'truncated': False}
```
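The (o_t, a_t, r_t, terminated_t, truncated_t) tuple can be modeled as a small container type when processing the data; `Transition` is a hypothetical name for illustration, not part of the dataset API:

```python
from typing import List, NamedTuple

class Transition(NamedTuple):
    """One environment step, mirroring the dataset's fields."""
    observation: List  # 64x64 RGB image as nested [R, G, B] lists
    action: int        # discrete action in [0, 14]
    reward: float
    terminated: bool
    truncated: bool

# Built from (a truncated version of) the instance shown above.
step = Transition(observation=[[[0, 166, 253]]], action=1, reward=0.0,
                  terminated=False, truncated=False)
print(step.action)  # 1
```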

Data Fields

  • observation: The current 64x64 RGB observation from the environment.
  • action: The discrete action (int32) predicted by the agent for the current observation.
  • reward: The reward received for the current observation.
  • terminated: Whether the episode terminated at the current step.
  • truncated: Whether the episode was truncated at the current step.

Data Splits

The dataset is divided into a train (90%) and test (10%) split. Each environment dataset contains 10M steps (data points) in total.
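The per-environment step counts reported in the performance table follow directly from this split; a quick sanity check with the numbers from the card:

```python
total_steps = 10_000_000                # steps per environment
train_steps = int(total_steps * 0.9)    # 90% train split
test_steps = total_steps - train_steps  # 10% test split
print(train_steps, test_steps)  # 9000000 1000000
```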

Dataset Creation

The dataset was created by training an RL agent with PPO for 25M steps in each environment. The trajectories were generated by sampling from the predicted action distribution at each step (not by taking the argmax). The environments were created with distribution_mode=easy and unlimited levels.
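The difference between sampling and argmax action selection can be sketched as follows; the probabilities here are illustrative, not the agent's actual policy output:

```python
import random

# Illustrative action probabilities from a policy over 3 actions.
probs = [0.05, 0.60, 0.35]

# Greedy (argmax) selection always picks the most likely action.
greedy_action = max(range(len(probs)), key=probs.__getitem__)

# Sampling, as used to generate these trajectories, draws from the
# full distribution, so less likely actions still occur sometimes.
sampled_action = random.choices(range(len(probs)), weights=probs, k=1)[0]
print(greedy_action)  # 1
```

Sampling keeps the trajectories stochastic, which yields more diverse state coverage than replaying the single greedy action sequence.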

Procgen Benchmark

The Procgen Benchmark, released by OpenAI, consists of 16 procedurally-generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience, high diversity within and across environments, and is ideal for evaluating both sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft.
