
Dataset Card for CLIPCJK_Gen

Dataset summary

The CLIPCJK_Gen dataset is a Japanese-language dataset of images generated from 17 Japanese fonts, intended for training a CLIP-like model for Japanese character recognition. It contains 2414 classes, covering mostly Japanese characters but also punctuation, the Latin alphabet, etc.

How to use

This dataset is stored in the WebDataset format. To use it, you need to install the webdataset and huggingface_hub libraries.

pip install webdataset huggingface_hub

You can then stream the data directly from the Hub like this:

import webdataset as wds
from huggingface_hub import HfApi, hf_hub_url

repo_id = "amaurygau/ClipCJK_Gen"
split = "train"  # or "test"

# List the .tar shards of the chosen split
files = HfApi().list_repo_files(repo_id, repo_type="dataset")
shard_urls = [
    hf_hub_url(repo_id, f, repo_type="dataset")
    for f in files
    if f.startswith(f"{split}/") and f.endswith(".tar")
]

# Create the webdataset pipeline
dataset = (
    wds.WebDataset(shard_urls)
    .shuffle(1000)
    .decode("pil")
    .to_tuple("png", "json")
)

# Iterate over the dataset
first_sample = next(iter(dataset))
print(first_sample)

## Example Output 
# (<PIL.Image.Image image mode=RGB size=224x224>,
#  {'filepath': '剛/剛_5.png', 'title': '剛:⿰岡刂'})
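
From there, the pipeline can feed a standard PyTorch DataLoader. A minimal sketch, building on the dataset object above (the transform and batch size are illustrative choices, not the settings used to train the original model):

import torch
from torchvision import transforms

# Turn each (image, metadata) pair into a (tensor, label) pair;
# the title string serves as the label
to_tensor = transforms.ToTensor()

def preprocess(sample):
    image, meta = sample
    return to_tensor(image), meta["title"]

loader = torch.utils.data.DataLoader(
    dataset.map(preprocess),  # `dataset` is the pipeline built above
    batch_size=32,
)

images, titles = next(iter(loader))
print(images.shape)  # torch.Size([32, 3, 224, 224])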

Dataset structure

Data instances

A data point comprises an image together with its metadata, including its label (which we call title). A sample looks like this:

{'__key__': 'sample_00089001',
 '__url__': 'train/train-000089.tar',
 'json': {'filepath': '剛/剛_5.png', 'title': '剛:⿰岡刂'},
 '__local_path__': 'train/train-000089.tar',
 'png': <PIL.Image.Image image mode=RGB size=224x224>}

Data fields

  • __key__: the unique identifier of the sample inside the shard.
  • __url__: the path to the shard (.tar file) containing the sample.
  • json: a dictionary of metadata. The title field holds the character's label, including its IDS decomposition (see the parsing sketch after this list).
  • __local_path__: the local path to the shard file once it has been downloaded or cached.
  • png: the image data. It is raw bytes by default and becomes a PIL.Image object once .decode("pil") is applied, as in the snippet above.
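
Because title packs the character and its IDS decomposition into a single Char:IDS string, a small helper can pull the two apart. A minimal sketch (split_title is our illustrative name, not part of the dataset):

def split_title(title: str) -> tuple[str, str]:
    """Split a 'Char:IDS' label into (character, decomposition)."""
    char, _, ids = title.partition(":")
    return char, ids

char, ids = split_title("剛:⿰岡刂")
print(char, ids)  # 剛 ⿰岡刂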

Data Splits

             Train    Test
Classes       2414    2414
Image Files  76392   15614
Labels      305565   35629

Note: the number of labels is higher than the number of images because labels are duplicated so that every character matches the one with the highest number of IDS patterns, as explained in the 'Label Biases' section.

Dataset Creation

ClipCJK_Gen was built to provide enough data for an OpenCLIP VLM to learn to predict and recognize Japanese characters.

Source Data

Initial Data collection

The dataset was built by generating 224×224 images of characters from the 17 Japanese fonts listed here.
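
This kind of rendering can be reproduced with Pillow. A minimal sketch, assuming a local font file (the font path, size, and placement are illustrative, not the exact settings used to generate the dataset):

from PIL import Image, ImageDraw, ImageFont

# Illustrative font file and size; swap in any of the 17 Japanese fonts
font = ImageFont.truetype("NotoSansJP-Regular.otf", size=180)

img = Image.new("RGB", (224, 224), "white")
draw = ImageDraw.Draw(img)
# Center the glyph with the "mm" (middle-middle) anchor
draw.text((112, 112), "剛", font=font, fill="black", anchor="mm")
img.save("剛_0.png")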

Data Annotation

Each image is annotated with the character it depicts, taken from its Unicode code point. To this we add labels constructed from the IDS (Ideographic Description Sequence) decompositions of the CHISE project. A label follows the rule Char:Char or Char:IDS, giving labels such as:

  • 剛:剛
  • 剛:⿰岡刂
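
A minimal sketch of how such labels can be assembled, assuming an IDS lookup table (the ids_table dictionary below is a stand-in for the CHISE data, and make_labels is our illustrative name):

# Hypothetical excerpt of an IDS table; the real decompositions come from CHISE
ids_table = {"剛": ["⿰岡刂"]}

def make_labels(char: str) -> list[str]:
    """Build the 'Char:Char' label plus one 'Char:IDS' label per decomposition."""
    return [f"{char}:{char}"] + [f"{char}:{ids}" for ids in ids_table.get(char, [])]

print(make_labels("剛"))  # ['剛:剛', '剛:⿰岡刂']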

Annotation Process

The code for annotation can be found here.

Considerations for using the Data

Label biases

Characters do not all have the same number of possible IDS patterns: some have three different patterns, others none. To ensure a model sees a given image the same number of times, we copy the first label as many times as necessary, as sketched below.
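
A minimal sketch of that padding scheme (our reconstruction of the rule described above, not the dataset's actual generation code):

def pad_labels(labels: list[str], max_count: int) -> list[str]:
    """Duplicate the first label until the list reaches max_count entries."""
    return labels + [labels[0]] * (max_count - len(labels))

# A character with two IDS labels, padded to match one with four
print(pad_labels(["剛:剛", "剛:⿰岡刂"], 4))
# ['剛:剛', '剛:⿰岡刂', '剛:剛', '剛:剛']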

Cropped images

Some of the characters were not fully generated, so only part of the glyph is visible. You can think of it as a forced data augmentation for VLM models =D

Licensing Information

The licensing status of the dataset hinges on the licenses of the individual fonts, which should be free for non-commercial use. So please, cite their designers.
