---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 51355991596
    num_examples: 15930958
  download_size: 29126915011
  dataset_size: 51355991596
language:
- ja
- en
- code
size_categories:
- 10M<n<100M
license: apache-2.0
---
# JetCopper-10B
## Description
JetCopper-10B was created by cleaning, filtering, and deduplicating the following datasets and then extracting a portion of the resulting data (a minimal sketch of the deduplication step follows this list).

* The Japanese subset of [C4](https://huggingface.co/datasets/allenai/c4)
* The Japanese subset of [CC-100](https://data.statmt.org/cc-100)
* The Japanese subset of [OSCAR-2301](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301)
* The Japanese subset of [HPLT Datasets v1.2](https://hplt-project.org/datasets/v1.2)
* [wiki40b-ja](https://huggingface.co/datasets/range3/wiki40b-ja)
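The card does not describe the exact pipeline. As an illustration only (not the method actually used), exact deduplication by hashing each record's `text` field could look like this:

```python
# Minimal sketch of exact deduplication over the "text" field.
# Illustrative assumption only; the actual JetCopper-10B pipeline
# (cleaning, filtering, deduplication) is not specified on this card.
import hashlib

def deduplicate(records):
    seen = set()
    for record in records:
        digest = hashlib.sha256(record["text"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield record

# Example: the duplicate "hello" record is dropped.
unique = list(deduplicate([{"text": "hello"}, {"text": "hello"}, {"text": "world"}]))
assert len(unique) == 2
```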
This dataset was used to pre-train [Contrail-200m-64k](https://huggingface.co/sudy-super/Contrail-200m-64k) when we participated in [LOCAL AI HACKATHON #000](https://imminent-land-e64.notion.site/000-2024-04-01-8b9b0ce5c2454002ac8ecdc6311e3a49).
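For reference, loading the dataset with the `datasets` library might look like the sketch below. The repo id `sudy-super/JetCopper-10B` is an assumption (it is not stated in this card's text); each record exposes a single `text` field per the metadata above.

```python
# Hedged loading sketch; assumes the dataset lives at "sudy-super/JetCopper-10B".
from datasets import load_dataset

# streaming=True avoids downloading the full ~29 GB archive up front.
ds = load_dataset("sudy-super/JetCopper-10B", split="train", streaming=True)
for example in ds:
    print(example["text"][:80])  # the only feature is "text" (string)
    break
```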
## Number of tokens (using the [calm2-chat](https://huggingface.co/cyberagent/calm2-7b-chat) tokenizer)
| Language | Tokens |
| --- | --- |
| Japanese | 4.7B |
| English | 5B |
| Code | 0.9B |
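The counting script is not included on this card; a minimal sketch of how such counts could be reproduced with the calm2-chat tokenizer (assuming the `transformers` library) is:

```python
# Sketch: count tokens with the calm2-chat tokenizer.
# How each document was assigned to Japanese / English / code is not
# specified here; only the counting itself is shown.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat")

def count_tokens(texts):
    return sum(len(tokenizer(t, add_special_tokens=False)["input_ids"]) for t in texts)

print(count_tokens(["こんにちは、世界", "Hello, world"]))
```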
## NOTE
This dataset has not undergone sentence-end boundary detection or perplexity filtering, so there is room for improvement in quality.
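As a hedged sketch of what a perplexity-filtering pass could look like (again, not a step that was applied to this dataset), one common approach scores each document with a KenLM language model and drops high-perplexity outliers. The model path and threshold below are hypothetical:

```python
# Hypothetical perplexity filter using the `kenlm` package.
# "ja.arpa" and the threshold 1000.0 are placeholder assumptions.
import kenlm

model = kenlm.Model("ja.arpa")

def keep(text, max_ppl=1000.0):
    # Keep documents whose language-model perplexity is at or below the threshold.
    return model.perplexity(text) <= max_ppl
```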