Dataset Viewer
Auto-converted to Parquet

| Column | Type | Min | Max |
|---|---|---|---|
| datasetId | large_string (length) | 6 | 116 |
| author | large_string (length) | 2 | 42 |
| last_modified | large_string (date) | 2021-04-29 15:34:29 | 2025-06-25 02:40:10 |
| downloads | int64 | 0 | 3.97M |
| likes | int64 | 0 | 7.74k |
| tags | large_list (length) | 1 | 7.92k |
| task_categories | large_list (length) | 0 | 48 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 | 2025-06-25 00:32:52 |
| trending_score | float64 | 0 | 64 |
| card | large_string (length) | 31 | 1.01M |
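Each preview row below is one record with the ten fields in this schema. As a minimal sketch of working with such records in plain Python, here are two made-up rows (the repo id of the dataset itself is not shown in this preview, so real values are not used):

```python
# Hypothetical records mirroring the schema above; the actual values
# appear in the preview rows further down.
rows = [
    {"datasetId": "example/ds-a", "author": "example", "downloads": 120,
     "likes": 3, "tags": ["format:parquet"], "trending_score": 0.0},
    {"datasetId": "example/ds-b", "author": "example", "downloads": 5,
     "likes": 0, "tags": [], "trending_score": 0.0},
]

# Rank records by download count, most-downloaded first
top = sorted(rows, key=lambda r: r["downloads"], reverse=True)
print([r["datasetId"] for r in top])
```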
datasetId: 1231czx/llama3_star_ep2_lr2e6_tmp0
author: 1231czx
last_modified: 2024-12-22T02:50:26Z
downloads: 17
likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2024-12-21T13:56:31Z
trending_score: 0
--- dataset_info: features: - name: idx dtype: int64 - name: gt dtype: string - name: prompt dtype: string - name: level dtype: string - name: type dtype: string - name: solution dtype: string - name: my_solu sequence: string - name: pred sequence: string - name: rewards sequence: bool splits: - name: train num_bytes: 19763488 num_examples: 5000 download_size: 5513455 dataset_size: 19763488 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: vlinhd11/viVoice-v1-5-desc_tokened
author: vlinhd11
last_modified: 2025-04-09T03:47:44Z
downloads: 15
likes: 0
tags: [ "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-04-09T03:45:33Z
trending_score: 0
--- dataset_info: features: - name: channel dtype: string - name: text dtype: string - name: id dtype: string - name: description dtype: string - name: codes_list sequence: sequence: int64 - name: length dtype: int64 - name: length_codes_list dtype: int64 splits: - name: train num_bytes: 1881879799 num_examples: 600000 download_size: 399288915 dataset_size: 1881879799 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: MadvaAparna/KannadaVibhaktiSamples
author: MadvaAparna
last_modified: 2025-02-06T10:32:25Z
downloads: 14
likes: 0
tags: [ "task_categories:token-classification", "language:kn", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "token-classification" ]
createdAt: 2025-02-06T10:19:05Z
trending_score: 0
--- task_categories: - token-classification language: - kn --- # Dataset Card for Dataset Name This is a dataset for the Named Entity Recognition (NER) task in the Kannada language. Each instance represents one of the vibhakti cases. ## Dataset Details ### Dataset Description The Kannada language has eight grammatical cases [see https://kannadakalike.org/grammar/cases]. These cases are called vibhakti, and the corresponding suffixes, pratyaya. Nouns are inflected with the suffix for one of these cases to form another grammatically meaningful word. In this dataset, 12 sentences are composed for each case, resulting in a total of 96 sentences. Each sentence contains at least one named-entity word inflected with the suffix for the corresponding case. We include examples for the PER, LOC, and ORG entity classes. For the PER class, we also include entities that span multiple words, such as caMdragupta maurya. This vibhakti dataset can be used to analyse how the presence of case-specific suffixes in a named entity affects the model's predictions. - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses The dataset can be used for the NER task. ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ### Source Data Manually created #### Who are the annotators? 
<!-- This section describes the people or systems who created the annotations. --> [More Information Needed]
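The card above describes named entities inflected with case suffixes. As a purely hypothetical illustration of what one token-classification instance might look like in BIO format (the dataset's real field names, label set, and inflected forms are not visible in this preview), consider:

```python
# Hypothetical BIO-tagged instance: the multi-word PER entity
# "caMdragupta maurya" from the card, with an assumed case suffix on
# the final token. Field names and tags here are illustrative only.
example = {
    "tokens": ["caMdragupta", "mauryanannu", "nODidenu"],
    "ner_tags": ["B-PER", "I-PER", "O"],
}

# Token-classification data must keep tokens and tags aligned
assert len(example["tokens"]) == len(example["ner_tags"])
```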
datasetId: 1231czx/llama3_star_ep2_lr2e6_tmp10
author: 1231czx
last_modified: 2024-12-22T02:25:20Z
downloads: 44
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2024-12-21T13:29:51Z
trending_score: 0
--- dataset_info: features: - name: idx dtype: int64 - name: gt dtype: string - name: prompt dtype: string - name: level dtype: string - name: type dtype: string - name: solution dtype: string - name: my_solu sequence: string - name: pred sequence: string - name: rewards sequence: bool splits: - name: train num_bytes: 52675019 num_examples: 15000 download_size: 18962519 dataset_size: 52675019 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: YanAdjeNole/Document_ranking6_test
author: YanAdjeNole
last_modified: 2025-02-11T21:45:10Z
downloads: 37
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-02-11T21:44:49Z
trending_score: 0
--- dataset_info: features: - name: id dtype: string - name: query dtype: string - name: answer dtype: int64 splits: - name: train num_bytes: 98068 num_examples: 10 - name: valid num_bytes: 98068 num_examples: 10 - name: test num_bytes: 1233512376 num_examples: 79300 download_size: 617793640 dataset_size: 1233708512 configs: - config_name: default data_files: - split: train path: data/train-* - split: valid path: data/valid-* - split: test path: data/test-* ---
datasetId: Luongdzung/2280_math_exams_dataset_seed_42
author: Luongdzung
last_modified: 2024-11-13T10:06:26Z
downloads: 19
likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2024-11-13T10:06:25Z
trending_score: 0
--- dataset_info: features: - name: question dtype: string - name: id dtype: string - name: choices struct: - name: label sequence: string - name: text sequence: string - name: answerKey dtype: string - name: metadata struct: - name: grade dtype: string - name: subject dtype: string splits: - name: test num_bytes: 847141.6 num_examples: 2280 download_size: 383517 dataset_size: 847141.6 configs: - config_name: default data_files: - split: test path: data/test-* ---
datasetId: noma1999/pad-gemma-2-n4
author: noma1999
last_modified: 2025-03-28T09:36:12Z
downloads: 15
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-03-28T09:35:26Z
trending_score: 0
--- dataset_info: features: - name: prompt dtype: string - name: responses sequence: string - name: scores_mcq sequence: float64 - name: input sequence: string - name: scores_logp sequence: float64 - name: scores_avglogp sequence: float64 splits: - name: train num_bytes: 720561360 num_examples: 55321 - name: test num_bytes: 14946797 num_examples: 1130 download_size: 310024522 dataset_size: 735508157 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
datasetId: guanghao/openr1_math_220k_qwen_cot
author: guanghao
last_modified: 2025-02-20T23:10:09Z
downloads: 17
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-02-20T23:10:05Z
trending_score: 0
--- dataset_info: features: - name: problem dtype: string - name: answer dtype: string - name: input dtype: string splits: - name: train num_bytes: 67414809 num_examples: 93733 download_size: 33028053 dataset_size: 67414809 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: shoyimobloqulov/law
author: shoyimobloqulov
last_modified: 2025-03-11T20:03:26Z
downloads: 13
likes: 0
tags: [ "license:apache-2.0", "region:us" ]
task_categories: []
createdAt: 2025-03-11T20:03:26Z
trending_score: 0
--- license: apache-2.0 ---
datasetId: fawazahmed0/quran-data
author: fawazahmed0
last_modified: 2024-10-30T12:45:44Z
downloads: 44
likes: 0
tags: [ "license:cc0-1.0", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2024-10-30T11:12:08Z
trending_score: 0
--- license: cc0-1.0 source: https://github.com/fawazahmed0/quran-api dataset_info: features: - name: chapter dtype: int64 - name: verse dtype: int64 - name: text dtype: string - name: name dtype: string - name: author dtype: string - name: language dtype: string - name: direction dtype: string - name: source dtype: string - name: comments dtype: string - name: link dtype: string - name: linkmin dtype: string splits: - name: train num_bytes: 1560189035 num_examples: 3055640 download_size: 310821206 dataset_size: 1560189035 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: MusYW/test3
author: MusYW
last_modified: 2025-06-10T08:34:57Z
downloads: 0
likes: 0
tags: [ "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-06-10T08:34:48Z
trending_score: 0
--- dataset_info: features: - name: id dtype: int32 - name: text dtype: string - name: source dtype: string - name: similarity dtype: float32 splits: - name: train num_bytes: 64178642 num_examples: 100000 download_size: 37138825 dataset_size: 64178642 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: Ki1n/urillm_vanilla
author: Ki1n
last_modified: 2025-01-13T07:30:09Z
downloads: 17
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-01-10T12:21:10Z
trending_score: 0
--- dataset_info: features: - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 220223800 num_examples: 50766 - name: test num_bytes: 6787067 num_examples: 1571 download_size: 121107888 dataset_size: 227010867 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
datasetId: math-extraction-comp/microsoft__Phi-3-small-128k-instruct
author: math-extraction-comp
last_modified: 2025-01-12T21:14:58Z
downloads: 18
likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-01-11T12:24:34Z
trending_score: 0
--- dataset_info: features: - name: question dtype: string - name: gold dtype: string - name: target dtype: string - name: prediction dtype: string - name: subset dtype: string - name: lighteval-c24870ea_extracted_answer dtype: string - name: lighteval-c24870ea_score dtype: float64 - name: qwen_score dtype: float64 - name: lighteval-0f21c935_score dtype: float64 - name: harness_extracted_answer dtype: string - name: harness_score dtype: float64 - name: qwen_extracted_answer dtype: string - name: lighteval-0f21c935_extracted_answer dtype: string splits: - name: train num_bytes: 2217782 num_examples: 1150 download_size: 1030675 dataset_size: 2217782 configs: - config_name: default data_files: - split: train path: data/train-* ---
datasetId: leo-step/arxiv-papers-10k
author: leo-step
last_modified: 2025-04-15T04:07:03Z
downloads: 26
likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-04-15T03:54:32Z
trending_score: 0
--- dataset_info: features: - name: title dtype: string - name: abstract dtype: string - name: url dtype: string - name: text dtype: string splits: - name: train num_bytes: 578155326 num_examples: 9776 download_size: 297175654 dataset_size: 578155326 configs: - config_name: default data_files: - split: train path: data/train-* --- This dataset consists of 9,776 cs.AI papers retrieved from arXiv on April 14th, 2025. Each sample contains the title, abstract, URL, and full text of the paper. The text was extracted with PyMuPDF and preprocessed by removing lines with fewer than 9 characters and lines that did not start with an alphanumeric character. The remaining lines were then joined together without newline characters or word breaks. Useful for training small-scale language models, embeddings, and autocomplete systems. If you find this dataset useful in your work, feel free to cite the following: ``` @misc{stepanewk2025arxivpapers10k, author = {Leo Stepanewk}, title = {ArXiv Papers 10k Dataset}, year = {2025}, howpublished = {\url{https://huggingface.co/datasets/leo-step/arxiv-papers-10k}}, } ```
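The line-filtering step the card describes (drop lines shorter than 9 characters or not starting with an alphanumeric character, then join) can be sketched as follows. `clean_text` is a hypothetical helper, not part of the dataset's release, and this sketch joins with spaces rather than repairing hyphenated word breaks:

```python
def clean_text(raw: str) -> str:
    """Filter extracted PDF lines as the card describes: drop lines with
    fewer than 9 characters or whose first character is not alphanumeric,
    then join the survivors into one string."""
    kept = [
        line.strip()
        for line in raw.splitlines()
        if len(line.strip()) >= 9 and line.strip()[:1].isalnum()
    ]
    return " ".join(kept)
```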
datasetId: takara-ai/MovieStills_Captioned_SmolVLM
author: takara-ai
last_modified: 2025-02-25T15:22:12Z
downloads: 29
likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: []
createdAt: 2025-01-28T15:49:27Z
trending_score: 0
--- dataset_info: features: - name: image dtype: image - name: caption dtype: string splits: - name: train num_bytes: 17074868595.151 num_examples: 74891 download_size: 17062510943 dataset_size: 17074868595.151 configs: - config_name: default data_files: - split: train path: data/train-* --- <img src="https://takara.ai/images/logo-24/TakaraAi.svg" width="200" alt="Takara.ai Logo" /> From the Frontier Research Team at **Takara.ai** we present **MovieStills_Captioned_SmolVLM**, a dataset of 75,000 movie stills with high-quality synthetic captions generated using SmolVLM. --- ## Dataset Description This dataset contains 75,000 movie stills, each paired with a high-quality synthetic caption. It was generated using the **HuggingFaceTB/SmolVLM-256M-Instruct** model, designed for instruction-tuned multimodal tasks. The dataset aims to support image captioning tasks, particularly for machine learning research and application development in the domain of movie scenes and visual storytelling. **Languages:** The dataset captions are in English (ISO 639-1: `en`). **Domain:** Movie stills with general, descriptive captions for each image. ## Dataset Structure ### Data Fields Each dataset instance consists of: - **image:** A PIL image object representing a single movie still. - **caption:** A descriptive caption for the corresponding image. ### Example Instance ```json { "image": "<PIL.Image.Image image mode=RGB size=640x360>", "caption": "A man standing on a rainy street looking at a distant figure." } ``` ### Data Splits The dataset currently has no predefined splits (train/test/validation). Users can create custom splits as needed. ## Dataset Creation ### Process The dataset captions were generated using the **HuggingFaceTB/SmolVLM-256M-Instruct** model. The process involved: 1. Processing 75,000 movie stills with the ONNX Runtime (ONNXRT) for efficient inference. 2. Running inference on an **RTX 2080 Ti** GPU, which took approximately **25 hours** to complete. 
### Source Data - **Source:** The dataset uses stills from the `killah-t-cell/movie_stills_captioned_dataset_local` dataset. ### Preprocessing - Images were provided in their original formats and converted into PIL objects. - Captions were generated using an instruction-tuned multimodal model to enhance descriptive quality. ## Considerations for Using the Data ### Potential Biases The dataset captions may reflect biases present in the source model (HuggingFaceTB/SmolVLM-256M-Instruct). As synthetic captions are generated from a single model, there may be limitations in diversity and linguistic nuance. ### Ethical Considerations This dataset is intended for research purposes. Users should be aware that captions might not fully reflect context or cultural sensitivities present in the movie stills. ### Limitations - No human verification was performed for caption accuracy. - The dataset is limited to English captions and may not generalise well to other languages or contexts. ## Additional Information **License:** The dataset is licensed under [Creative Commons BY 4.0](https://creativecommons.org/licenses/by/4.0/). **Citation:** Please cite the dataset using its Hugging Face repository citation format. ## Sample Usage Here's an example code snippet to load and use the dataset: ```python from datasets import load_dataset from PIL import Image # Load the dataset dataset = load_dataset("takara-ai/MovieStills_Captioned_SmolVLM") # Display a sample sample = dataset["train"][0] image = sample["image"] caption = sample["caption"] # Show the image and caption image.show() print(f"Caption: {caption}") ``` --- For research inquiries and press, please reach out to [email protected] > 人類を変革する
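The card above notes the dataset ships without predefined splits; with the `datasets` library one would typically call `train_test_split(test_size=..., seed=...)` on the loaded train split. The same idea in plain Python, as a sketch over hypothetical example indices:

```python
import random

def make_split(n_examples: int, test_fraction: float = 0.1, seed: int = 42):
    """Deterministically shuffle example indices and carve off a test set,
    mirroring what datasets' train_test_split does for a split-less dataset."""
    indices = list(range(n_examples))
    random.Random(seed).shuffle(indices)
    n_test = int(n_examples * test_fraction)
    return indices[n_test:], indices[:n_test]

# 74,891 is the train example count reported in the card's dataset_info
train_idx, test_idx = make_split(74891)
```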
Downloads last month: 359