---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: full
  data_files:
  - split: train
    path: full/train-*
tags:
- chess
pretty_name: Lichess Elite Database in UCI format
dataset_info:
  config_name: full
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 11624616189
    num_examples: 27272283
  download_size: 6905406421
  dataset_size: 11624616189
---

# Lichess.org Elite Database in UCI format


This dataset was created from the [Lichess Elite Database](https://database.nikonoel.fr/).
It includes all games up to December 2024.
The full list of source files is recorded in `lichess_file_list.txt`.
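
To use the processed games directly, the dataset can be loaded from the Hub with the `datasets` library. The sketch below assumes the repository ID from the citation at the bottom of this card; the `default` config is presumably the balanced mixture described below, while the `full` config contains every deduplicated game.

```python
from datasets import load_dataset

# Default config (presumably the balanced mixture described below).
ds = load_dataset("austindavis/lichess-elite-uci", "default", split="train")

# Full config: every deduplicated game (27,272,283 examples).
ds_full = load_dataset("austindavis/lichess-elite-uci", "full", split="train")

# Each example is a single game stored as a space-separated UCI move string.
print(ds[0]["text"])
```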


After downloading the zip files, they were processed with the script below.
The goal was to create three UCI-encoded datasets from the Lichess Elite Database, with games deduplicated across all three:
  1. `has_promote`: non-checkmate games that include at least one pawn promotion
  2. `no_promote`: non-checkmate games without pawn promotions
  3. `checkmates`: games that end in checkmate
```sh
#!/usr/bin/env fish

# Deduplicate all Lichess elite database files
pgn-extract --notags --nocomments --nonags --novars -w10000 --noduplicates -o all_deduplicated.san -f lichess_file_list.txt 2>dedupe_output.txt

echo "Deduplication complete. Output saved to all_deduplicated.san."

# Partition games into checkmates and non-checkmates
pgn-extract --notags --nocomments --nonags --novars -w100000 --checkmate -o checkmates.san -n others.san all_deduplicated.san  2>/dev/null

echo "Games partitioned: checkmates.san and others.san created."

# Further partition non-checkmate games based on pawn promotions
grep -B1 "=" others.san > has_promote.san
grep -B1 -v "=" others.san > no_promote.san

echo "Non-checkmate games split into has_promote.san and no_promote.san."

# Convert each SAN file to UCI format
pgn-extract -Wlalg --noresults --nochecks --nomovenumbers --notags --nocomments --nonags --novars -w100000 -o has_promote.uci has_promote.san 2>/dev/null
pgn-extract -Wlalg --noresults --nochecks --nomovenumbers --notags --nocomments --nonags --novars -w100000 -o no_promote.uci no_promote.san 2>/dev/null
pgn-extract -Wlalg --noresults --nochecks --nomovenumbers --notags --nocomments --nonags --novars -w100000 -o checkmates.uci checkmates.san 2>/dev/null

echo "SAN files converted to UCI format."

# Add "#" to the end of each move sequence in checkmates.uc
sed -i '/./s/$/#/' checkmates.uci

echo "Checkmate sequences updated with '#' as EOS token."

# Remove all blank lines
sed -i '/^$/d' has_promote.uci
sed -i '/^$/d' no_promote.uci
sed -i '/^$/d' checkmates.uci

echo "Blank lines removed. Finished."
```

Once the raw files were processed, the UCI-encoded files contained the following number of games:

- checkmates.uci:  3,708,644 games that end in checkmate
- no_promote.uci: 23,201,987 games that do not end in checkmate and have no pawn promotions
- has_promote.uci:   361,652 games that do not end in checkmate and have at least one pawn promotion
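
Together these three files account for all 27,272,283 deduplicated games (3,708,644 + 23,201,987 + 361,652), which is what the `full` configuration of this dataset contains. Since the processing script appends a `#` token to checkmate sequences and UCI promotion moves carry a trailing piece letter (e.g. `e7e8q`), a loaded game can be re-categorized with a small check. The helper below is an illustrative sketch, not part of the dataset; the promotion-suffix check assumes standard UCI notation.

```python
def categorize(game: str) -> str:
    """Classify a UCI game string from this dataset (illustrative helper)."""
    if game.endswith("#"):  # "#" is appended to checkmate games by the script above
        return "checkmate"
    # Assumption: promotion moves end with the promoted piece letter, e.g. "e7e8q".
    if any(len(m) > 4 and m[-1].lower() in "qrbn" for m in game.split()):
        return "has_promote"
    return "no_promote"

print(categorize("e2e4 e7e5 d1h5 b8c6 f1c4 g8f6 h5f7#"))  # "checkmate" (Scholar's mate)
```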

I wanted to balance the number of games from each category, with a bias toward games that end in checkmate (so an LLM learns to finish the game rather than simply carry on playing without a goal).
To accomplish this, I selected all games from `checkmates.uci` and `has_promote.uci`, and sampled games from `no_promote.uci` at a 5:1 ratio to the number of games in `has_promote.uci`, i.e., 5 × 361,652 = 1,808,260 games.
A simple Python script for this is:

```python
from datasets import load_dataset, concatenate_datasets

checkmates = load_dataset("text", data_files="lichess-elite/checkmates.uci")["train"]
no_promote = load_dataset("text", data_files="lichess-elite/no_promote.uci")["train"]
has_promote = load_dataset("text", data_files="lichess-elite/has_promote.uci")["train"]

# shuffle to ensure we're selecting across the entire dataset
no_promote_subset = no_promote.shuffle(seed=42).select(range(5 * len(has_promote)))

# shuffle entire dataset
ds = concatenate_datasets([checkmates, has_promote, no_promote_subset]).shuffle(seed=42)
```
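
The balanced mixture contains 3,708,644 + 361,652 + 1,808,260 = 5,878,556 games. As a hedged sketch (the exact publishing step is not recorded here), the result could then be pushed to the Hub as the `default` configuration:

```python
# Assumption: the balanced mixture above is what backs the `default` config of this dataset.
ds.push_to_hub("austindavis/lichess-elite-uci", config_name="default")
```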

## Special thanks
Special thanks to [nikonoel](https://database.nikonoel.fr) and the curators of the Lichess Elite Database.

## Citation Information

If you use this dataset, please cite it as follows:

```
@misc{lichess_uci,
  author = {Davis, Austin},
  title = {Lichess.org Elite Database in UCI format},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/austindavis/lichess-elite-uci}},
}
```