datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
armanc/scientific_papers | armanc | 2024-01-18T11:15:30Z | 2,713 | 163 | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"arxiv:1804.05685",
"region:us",
"abstractive-summarization"
] | [
"summarization"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: ScientificPapers
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: null
tags:
- abstractive-summarization
dataset_info:
- config_name: arxiv
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: section_names
dtype: string
splits:
- name: train
num_bytes: 7148341992
num_examples: 203037
- name: validation
num_bytes: 217125524
num_examples: 6436
- name: test
num_bytes: 217514961
num_examples: 6440
download_size: 4504646347
dataset_size: 7582982477
- config_name: pubmed
features:
- name: article
dtype: string
- name: abstract
dtype: string
- name: section_names
dtype: string
splits:
- name: train
num_bytes: 2252027383
num_examples: 119924
- name: validation
num_bytes: 127403398
num_examples: 6633
- name: test
num_bytes: 127184448
num_examples: 6658
download_size: 4504646347
dataset_size: 2506615229
---
# Dataset Card for "scientific_papers"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/armancohan/long-summarization
- **Paper:** [A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents](https://arxiv.org/abs/1804.05685)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 9.01 GB
- **Size of the generated dataset:** 10.09 GB
- **Total amount of disk used:** 19.10 GB
### Dataset Summary
The scientific_papers dataset contains two sets of long and structured documents.
The documents are obtained from the ArXiv and PubMed OpenAccess repositories.
Both the "arxiv" and "pubmed" configurations have three features (see the loading sketch after this list):
- article: the body of the document, with paragraphs separated by "\n".
- abstract: the abstract of the document, with paragraphs separated by "\n".
- section_names: the titles of the sections, separated by "\n".
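A minimal loading sketch (assumptions: the Hugging Face `datasets` library is installed and the dataset id `scientific_papers` resolves; recent `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Load the arXiv configuration; "pubmed" works the same way.
arxiv = load_dataset("scientific_papers", "arxiv", split="train")

example = arxiv[0]
body_paragraphs = example["article"].split("\n")       # body paragraphs
abstract_paragraphs = example["abstract"].split("\n")  # abstract paragraphs
section_titles = example["section_names"].split("\n")  # section titles
```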
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### arxiv
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 7.58 GB
- **Total amount of disk used:** 12.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" we have studied the leptonic decay @xmath0 , via the decay channel @xmath1 , using a sample of tagged @xmath2 decays collected...",
"article": "\"the leptonic decays of a charged pseudoscalar meson @xmath7 are processes of the type @xmath8 , where @xmath9 , @xmath10 , or @...",
"section_names": "[sec:introduction]introduction\n[sec:detector]data and the cleo- detector\n[sec:analysys]analysis method\n[sec:conclusion]summary"
}
```
#### pubmed
- **Size of downloaded dataset files:** 4.50 GB
- **Size of the generated dataset:** 2.51 GB
- **Total amount of disk used:** 7.01 GB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"abstract": "\" background and aim : there is lack of substantial indian data on venous thromboembolism ( vte ) . \\n the aim of this study was...",
"article": "\"approximately , one - third of patients with symptomatic vte manifests pe , whereas two - thirds manifest dvt alone .\\nboth dvt...",
"section_names": "\"Introduction\\nSubjects and Methods\\nResults\\nDemographics and characteristics of venous thromboembolism patients\\nRisk factors ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### arxiv
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
#### pubmed
- `article`: a `string` feature.
- `abstract`: a `string` feature.
- `section_names`: a `string` feature.
### Data Splits
| name |train |validation|test|
|------|-----:|---------:|---:|
|arxiv |203037| 6436|6440|
|pubmed|119924| 6633|6658|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
zhoubolei/scene_parse_150 | zhoubolei | 2024-01-18T11:15:25Z | 1,272 | 29 | [
"task_categories:image-segmentation",
"task_ids:instance-segmentation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|ade20k",
"language:en",
"license:bsd-3-clause",
"size_categories:10K<n<100K",
"arxiv:1608.05442",
"region:us",
"scene-parsing"
] | [
"image-segmentation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- en
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|ade20k
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
paperswithcode_id: ade20k
pretty_name: MIT Scene Parsing Benchmark
tags:
- scene-parsing
dataset_info:
- config_name: scene_parsing
features:
- name: image
dtype: image
- name: annotation
dtype: image
- name: scene_category
dtype:
class_label:
names:
'0': airport_terminal
'1': art_gallery
'2': badlands
'3': ball_pit
'4': bathroom
'5': beach
'6': bedroom
'7': booth_indoor
'8': botanical_garden
'9': bridge
'10': bullring
'11': bus_interior
'12': butte
'13': canyon
'14': casino_outdoor
'15': castle
'16': church_outdoor
'17': closet
'18': coast
'19': conference_room
'20': construction_site
'21': corral
'22': corridor
'23': crosswalk
'24': day_care_center
'25': sand
'26': elevator_interior
'27': escalator_indoor
'28': forest_road
'29': gangplank
'30': gas_station
'31': golf_course
'32': gymnasium_indoor
'33': harbor
'34': hayfield
'35': heath
'36': hoodoo
'37': house
'38': hunting_lodge_outdoor
'39': ice_shelf
'40': joss_house
'41': kiosk_indoor
'42': kitchen
'43': landfill
'44': library_indoor
'45': lido_deck_outdoor
'46': living_room
'47': locker_room
'48': market_outdoor
'49': mountain_snowy
'50': office
'51': orchard
'52': arbor
'53': bookshelf
'54': mews
'55': nook
'56': preserve
'57': traffic_island
'58': palace
'59': palace_hall
'60': pantry
'61': patio
'62': phone_booth
'63': establishment
'64': poolroom_home
'65': quonset_hut_outdoor
'66': rice_paddy
'67': sandbox
'68': shopfront
'69': skyscraper
'70': stone_circle
'71': subway_interior
'72': platform
'73': supermarket
'74': swimming_pool_outdoor
'75': television_studio
'76': indoor_procenium
'77': train_railway
'78': coral_reef
'79': viaduct
'80': wave
'81': wind_farm
'82': bottle_storage
'83': abbey
'84': access_road
'85': air_base
'86': airfield
'87': airlock
'88': airplane_cabin
'89': airport
'90': entrance
'91': airport_ticket_counter
'92': alcove
'93': alley
'94': amphitheater
'95': amusement_arcade
'96': amusement_park
'97': anechoic_chamber
'98': apartment_building_outdoor
'99': apse_indoor
'100': apse_outdoor
'101': aquarium
'102': aquatic_theater
'103': aqueduct
'104': arcade
'105': arch
'106': archaelogical_excavation
'107': archive
'108': basketball
'109': football
'110': hockey
'111': performance
'112': rodeo
'113': soccer
'114': armory
'115': army_base
'116': arrival_gate_indoor
'117': arrival_gate_outdoor
'118': art_school
'119': art_studio
'120': artists_loft
'121': assembly_line
'122': athletic_field_indoor
'123': athletic_field_outdoor
'124': atrium_home
'125': atrium_public
'126': attic
'127': auditorium
'128': auto_factory
'129': auto_mechanics_indoor
'130': auto_mechanics_outdoor
'131': auto_racing_paddock
'132': auto_showroom
'133': backstage
'134': backstairs
'135': badminton_court_indoor
'136': badminton_court_outdoor
'137': baggage_claim
'138': shop
'139': exterior
'140': balcony_interior
'141': ballroom
'142': bamboo_forest
'143': bank_indoor
'144': bank_outdoor
'145': bank_vault
'146': banquet_hall
'147': baptistry_indoor
'148': baptistry_outdoor
'149': bar
'150': barbershop
'151': barn
'152': barndoor
'153': barnyard
'154': barrack
'155': baseball_field
'156': basement
'157': basilica
'158': basketball_court_indoor
'159': basketball_court_outdoor
'160': bathhouse
'161': batters_box
'162': batting_cage_indoor
'163': batting_cage_outdoor
'164': battlement
'165': bayou
'166': bazaar_indoor
'167': bazaar_outdoor
'168': beach_house
'169': beauty_salon
'170': bedchamber
'171': beer_garden
'172': beer_hall
'173': belfry
'174': bell_foundry
'175': berth
'176': berth_deck
'177': betting_shop
'178': bicycle_racks
'179': bindery
'180': biology_laboratory
'181': bistro_indoor
'182': bistro_outdoor
'183': bleachers_indoor
'184': bleachers_outdoor
'185': boardwalk
'186': boat_deck
'187': boathouse
'188': bog
'189': bomb_shelter_indoor
'190': bookbindery
'191': bookstore
'192': bow_window_indoor
'193': bow_window_outdoor
'194': bowling_alley
'195': box_seat
'196': boxing_ring
'197': breakroom
'198': brewery_indoor
'199': brewery_outdoor
'200': brickyard_indoor
'201': brickyard_outdoor
'202': building_complex
'203': building_facade
'204': bullpen
'205': burial_chamber
'206': bus_depot_indoor
'207': bus_depot_outdoor
'208': bus_shelter
'209': bus_station_indoor
'210': bus_station_outdoor
'211': butchers_shop
'212': cabana
'213': cabin_indoor
'214': cabin_outdoor
'215': cafeteria
'216': call_center
'217': campsite
'218': campus
'219': natural
'220': urban
'221': candy_store
'222': canteen
'223': car_dealership
'224': backseat
'225': frontseat
'226': caravansary
'227': cardroom
'228': cargo_container_interior
'229': airplane
'230': boat
'231': freestanding
'232': carport_indoor
'233': carport_outdoor
'234': carrousel
'235': casino_indoor
'236': catacomb
'237': cathedral_indoor
'238': cathedral_outdoor
'239': catwalk
'240': cavern_indoor
'241': cavern_outdoor
'242': cemetery
'243': chalet
'244': chaparral
'245': chapel
'246': checkout_counter
'247': cheese_factory
'248': chemical_plant
'249': chemistry_lab
'250': chicken_coop_indoor
'251': chicken_coop_outdoor
'252': chicken_farm_indoor
'253': chicken_farm_outdoor
'254': childs_room
'255': choir_loft_interior
'256': church_indoor
'257': circus_tent_indoor
'258': circus_tent_outdoor
'259': city
'260': classroom
'261': clean_room
'262': cliff
'263': booth
'264': room
'265': clock_tower_indoor
'266': cloister_indoor
'267': cloister_outdoor
'268': clothing_store
'269': coast_road
'270': cockpit
'271': coffee_shop
'272': computer_room
'273': conference_center
'274': conference_hall
'275': confessional
'276': control_room
'277': control_tower_indoor
'278': control_tower_outdoor
'279': convenience_store_indoor
'280': convenience_store_outdoor
'281': corn_field
'282': cottage
'283': cottage_garden
'284': courthouse
'285': courtroom
'286': courtyard
'287': covered_bridge_interior
'288': crawl_space
'289': creek
'290': crevasse
'291': library
'292': cybercafe
'293': dacha
'294': dairy_indoor
'295': dairy_outdoor
'296': dam
'297': dance_school
'298': darkroom
'299': delicatessen
'300': dentists_office
'301': department_store
'302': departure_lounge
'303': vegetation
'304': desert_road
'305': diner_indoor
'306': diner_outdoor
'307': dinette_home
'308': vehicle
'309': dining_car
'310': dining_hall
'311': dining_room
'312': dirt_track
'313': discotheque
'314': distillery
'315': ditch
'316': dock
'317': dolmen
'318': donjon
'319': doorway_indoor
'320': doorway_outdoor
'321': dorm_room
'322': downtown
'323': drainage_ditch
'324': dress_shop
'325': dressing_room
'326': drill_rig
'327': driveway
'328': driving_range_indoor
'329': driving_range_outdoor
'330': drugstore
'331': dry_dock
'332': dugout
'333': earth_fissure
'334': editing_room
'335': electrical_substation
'336': elevated_catwalk
'337': door
'338': freight_elevator
'339': elevator_lobby
'340': elevator_shaft
'341': embankment
'342': embassy
'343': engine_room
'344': entrance_hall
'345': escalator_outdoor
'346': escarpment
'347': estuary
'348': excavation
'349': exhibition_hall
'350': fabric_store
'351': factory_indoor
'352': factory_outdoor
'353': fairway
'354': farm
'355': fastfood_restaurant
'356': fence
'357': cargo_deck
'358': ferryboat_indoor
'359': passenger_deck
'360': cultivated
'361': wild
'362': field_road
'363': fire_escape
'364': fire_station
'365': firing_range_indoor
'366': firing_range_outdoor
'367': fish_farm
'368': fishmarket
'369': fishpond
'370': fitting_room_interior
'371': fjord
'372': flea_market_indoor
'373': flea_market_outdoor
'374': floating_dry_dock
'375': flood
'376': florist_shop_indoor
'377': florist_shop_outdoor
'378': fly_bridge
'379': food_court
'380': football_field
'381': broadleaf
'382': needleleaf
'383': forest_fire
'384': forest_path
'385': formal_garden
'386': fort
'387': fortress
'388': foundry_indoor
'389': foundry_outdoor
'390': fountain
'391': freeway
'392': funeral_chapel
'393': funeral_home
'394': furnace_room
'395': galley
'396': game_room
'397': garage_indoor
'398': garage_outdoor
'399': garbage_dump
'400': gasworks
'401': gate
'402': gatehouse
'403': gazebo_interior
'404': general_store_indoor
'405': general_store_outdoor
'406': geodesic_dome_indoor
'407': geodesic_dome_outdoor
'408': ghost_town
'409': gift_shop
'410': glacier
'411': glade
'412': gorge
'413': granary
'414': great_hall
'415': greengrocery
'416': greenhouse_indoor
'417': greenhouse_outdoor
'418': grotto
'419': guardhouse
'420': gulch
'421': gun_deck_indoor
'422': gun_deck_outdoor
'423': gun_store
'424': hacienda
'425': hallway
'426': handball_court
'427': hangar_indoor
'428': hangar_outdoor
'429': hardware_store
'430': hat_shop
'431': hatchery
'432': hayloft
'433': hearth
'434': hedge_maze
'435': hedgerow
'436': heliport
'437': herb_garden
'438': highway
'439': hill
'440': home_office
'441': home_theater
'442': hospital
'443': hospital_room
'444': hot_spring
'445': hot_tub_indoor
'446': hot_tub_outdoor
'447': hotel_outdoor
'448': hotel_breakfast_area
'449': hotel_room
'450': hunting_lodge_indoor
'451': hut
'452': ice_cream_parlor
'453': ice_floe
'454': ice_skating_rink_indoor
'455': ice_skating_rink_outdoor
'456': iceberg
'457': igloo
'458': imaret
'459': incinerator_indoor
'460': incinerator_outdoor
'461': industrial_area
'462': industrial_park
'463': inn_indoor
'464': inn_outdoor
'465': irrigation_ditch
'466': islet
'467': jacuzzi_indoor
'468': jacuzzi_outdoor
'469': jail_indoor
'470': jail_outdoor
'471': jail_cell
'472': japanese_garden
'473': jetty
'474': jewelry_shop
'475': junk_pile
'476': junkyard
'477': jury_box
'478': kasbah
'479': kennel_indoor
'480': kennel_outdoor
'481': kindergarden_classroom
'482': kiosk_outdoor
'483': kitchenette
'484': lab_classroom
'485': labyrinth_indoor
'486': labyrinth_outdoor
'487': lagoon
'488': artificial
'489': landing
'490': landing_deck
'491': laundromat
'492': lava_flow
'493': lavatory
'494': lawn
'495': lean-to
'496': lecture_room
'497': legislative_chamber
'498': levee
'499': library_outdoor
'500': lido_deck_indoor
'501': lift_bridge
'502': lighthouse
'503': limousine_interior
'504': liquor_store_indoor
'505': liquor_store_outdoor
'506': loading_dock
'507': lobby
'508': lock_chamber
'509': loft
'510': lookout_station_indoor
'511': lookout_station_outdoor
'512': lumberyard_indoor
'513': lumberyard_outdoor
'514': machine_shop
'515': manhole
'516': mansion
'517': manufactured_home
'518': market_indoor
'519': marsh
'520': martial_arts_gym
'521': mastaba
'522': maternity_ward
'523': mausoleum
'524': medina
'525': menhir
'526': mesa
'527': mess_hall
'528': mezzanine
'529': military_hospital
'530': military_hut
'531': military_tent
'532': mine
'533': mineshaft
'534': mini_golf_course_indoor
'535': mini_golf_course_outdoor
'536': mission
'537': dry
'538': water
'539': mobile_home
'540': monastery_indoor
'541': monastery_outdoor
'542': moon_bounce
'543': moor
'544': morgue
'545': mosque_indoor
'546': mosque_outdoor
'547': motel
'548': mountain
'549': mountain_path
'550': mountain_road
'551': movie_theater_indoor
'552': movie_theater_outdoor
'553': mudflat
'554': museum_indoor
'555': museum_outdoor
'556': music_store
'557': music_studio
'558': misc
'559': natural_history_museum
'560': naval_base
'561': newsroom
'562': newsstand_indoor
'563': newsstand_outdoor
'564': nightclub
'565': nuclear_power_plant_indoor
'566': nuclear_power_plant_outdoor
'567': nunnery
'568': nursery
'569': nursing_home
'570': oasis
'571': oast_house
'572': observatory_indoor
'573': observatory_outdoor
'574': observatory_post
'575': ocean
'576': office_building
'577': office_cubicles
'578': oil_refinery_indoor
'579': oil_refinery_outdoor
'580': oilrig
'581': operating_room
'582': optician
'583': organ_loft_interior
'584': orlop_deck
'585': ossuary
'586': outcropping
'587': outhouse_indoor
'588': outhouse_outdoor
'589': overpass
'590': oyster_bar
'591': oyster_farm
'592': acropolis
'593': aircraft_carrier_object
'594': amphitheater_indoor
'595': archipelago
'596': questionable
'597': assembly_hall
'598': assembly_plant
'599': awning_deck
'600': back_porch
'601': backdrop
'602': backroom
'603': backstage_outdoor
'604': backstairs_indoor
'605': backwoods
'606': ballet
'607': balustrade
'608': barbeque
'609': basin_outdoor
'610': bath_indoor
'611': bath_outdoor
'612': bathhouse_outdoor
'613': battlefield
'614': bay
'615': booth_outdoor
'616': bottomland
'617': breakfast_table
'618': bric-a-brac
'619': brooklet
'620': bubble_chamber
'621': buffet
'622': bulkhead
'623': bunk_bed
'624': bypass
'625': byroad
'626': cabin_cruiser
'627': cargo_helicopter
'628': cellar
'629': chair_lift
'630': cocktail_lounge
'631': corner
'632': country_house
'633': country_road
'634': customhouse
'635': dance_floor
'636': deck-house_boat_deck_house
'637': deck-house_deck_house
'638': dining_area
'639': diving_board
'640': embrasure
'641': entranceway_indoor
'642': entranceway_outdoor
'643': entryway_outdoor
'644': estaminet
'645': farm_building
'646': farmhouse
'647': feed_bunk
'648': field_house
'649': field_tent_indoor
'650': field_tent_outdoor
'651': fire_trench
'652': fireplace
'653': flashflood
'654': flatlet
'655': floating_dock
'656': flood_plain
'657': flowerbed
'658': flume_indoor
'659': flying_buttress
'660': foothill
'661': forecourt
'662': foreshore
'663': front_porch
'664': garden
'665': gas_well
'666': glen
'667': grape_arbor
'668': grove
'669': guardroom
'670': guesthouse
'671': gymnasium_outdoor
'672': head_shop
'673': hen_yard
'674': hillock
'675': housing_estate
'676': housing_project
'677': howdah
'678': inlet
'679': insane_asylum
'680': outside
'681': juke_joint
'682': jungle
'683': kraal
'684': laboratorywet
'685': landing_strip
'686': layby
'687': lean-to_tent
'688': loge
'689': loggia_outdoor
'690': lower_deck
'691': luggage_van
'692': mansard
'693': meadow
'694': meat_house
'695': megalith
'696': mens_store_outdoor
'697': mental_institution_indoor
'698': mental_institution_outdoor
'699': military_headquarters
'700': millpond
'701': millrace
'702': natural_spring
'703': nursing_home_outdoor
'704': observation_station
'705': open-hearth_furnace
'706': operating_table
'707': outbuilding
'708': palestra
'709': parkway
'710': patio_indoor
'711': pavement
'712': pawnshop_outdoor
'713': pinetum
'714': piste_road
'715': pizzeria_outdoor
'716': powder_room
'717': pumping_station
'718': reception_room
'719': rest_stop
'720': retaining_wall
'721': rift_valley
'722': road
'723': rock_garden
'724': rotisserie
'725': safari_park
'726': salon
'727': saloon
'728': sanatorium
'729': science_laboratory
'730': scrubland
'731': scullery
'732': seaside
'733': semidesert
'734': shelter
'735': shelter_deck
'736': shelter_tent
'737': shore
'738': shrubbery
'739': sidewalk
'740': snack_bar
'741': snowbank
'742': stage_set
'743': stall
'744': stateroom
'745': store
'746': streetcar_track
'747': student_center
'748': study_hall
'749': sugar_refinery
'750': sunroom
'751': supply_chamber
'752': t-bar_lift
'753': tannery
'754': teahouse
'755': threshing_floor
'756': ticket_window_indoor
'757': tidal_basin
'758': tidal_river
'759': tiltyard
'760': tollgate
'761': tomb
'762': tract_housing
'763': trellis
'764': truck_stop
'765': upper_balcony
'766': vestibule
'767': vinery
'768': walkway
'769': war_room
'770': washroom
'771': water_fountain
'772': water_gate
'773': waterscape
'774': waterway
'775': wetland
'776': widows_walk_indoor
'777': windstorm
'778': packaging_plant
'779': pagoda
'780': paper_mill
'781': park
'782': parking_garage_indoor
'783': parking_garage_outdoor
'784': parking_lot
'785': parlor
'786': particle_accelerator
'787': party_tent_indoor
'788': party_tent_outdoor
'789': pasture
'790': pavilion
'791': pawnshop
'792': pedestrian_overpass_indoor
'793': penalty_box
'794': pet_shop
'795': pharmacy
'796': physics_laboratory
'797': piano_store
'798': picnic_area
'799': pier
'800': pig_farm
'801': pilothouse_indoor
'802': pilothouse_outdoor
'803': pitchers_mound
'804': pizzeria
'805': planetarium_indoor
'806': planetarium_outdoor
'807': plantation_house
'808': playground
'809': playroom
'810': plaza
'811': podium_indoor
'812': podium_outdoor
'813': police_station
'814': pond
'815': pontoon_bridge
'816': poop_deck
'817': porch
'818': portico
'819': portrait_studio
'820': postern
'821': power_plant_outdoor
'822': print_shop
'823': priory
'824': promenade
'825': promenade_deck
'826': pub_indoor
'827': pub_outdoor
'828': pulpit
'829': putting_green
'830': quadrangle
'831': quicksand
'832': quonset_hut_indoor
'833': racecourse
'834': raceway
'835': raft
'836': railroad_track
'837': railway_yard
'838': rainforest
'839': ramp
'840': ranch
'841': ranch_house
'842': reading_room
'843': reception
'844': recreation_room
'845': rectory
'846': recycling_plant_indoor
'847': refectory
'848': repair_shop
'849': residential_neighborhood
'850': resort
'851': rest_area
'852': restaurant
'853': restaurant_kitchen
'854': restaurant_patio
'855': restroom_indoor
'856': restroom_outdoor
'857': revolving_door
'858': riding_arena
'859': river
'860': road_cut
'861': rock_arch
'862': roller_skating_rink_indoor
'863': roller_skating_rink_outdoor
'864': rolling_mill
'865': roof
'866': roof_garden
'867': root_cellar
'868': rope_bridge
'869': roundabout
'870': roundhouse
'871': rubble
'872': ruin
'873': runway
'874': sacristy
'875': salt_plain
'876': sand_trap
'877': sandbar
'878': sauna
'879': savanna
'880': sawmill
'881': schoolhouse
'882': schoolyard
'883': science_museum
'884': scriptorium
'885': sea_cliff
'886': seawall
'887': security_check_point
'888': server_room
'889': sewer
'890': sewing_room
'891': shed
'892': shipping_room
'893': shipyard_outdoor
'894': shoe_shop
'895': shopping_mall_indoor
'896': shopping_mall_outdoor
'897': shower
'898': shower_room
'899': shrine
'900': signal_box
'901': sinkhole
'902': ski_jump
'903': ski_lodge
'904': ski_resort
'905': ski_slope
'906': sky
'907': skywalk_indoor
'908': skywalk_outdoor
'909': slum
'910': snowfield
'911': massage_room
'912': mineral_bath
'913': spillway
'914': sporting_goods_store
'915': squash_court
'916': stable
'917': baseball
'918': stadium_outdoor
'919': stage_indoor
'920': stage_outdoor
'921': staircase
'922': starting_gate
'923': steam_plant_outdoor
'924': steel_mill_indoor
'925': storage_room
'926': storm_cellar
'927': street
'928': strip_mall
'929': strip_mine
'930': student_residence
'931': submarine_interior
'932': sun_deck
'933': sushi_bar
'934': swamp
'935': swimming_hole
'936': swimming_pool_indoor
'937': synagogue_indoor
'938': synagogue_outdoor
'939': taxistand
'940': taxiway
'941': tea_garden
'942': tearoom
'943': teashop
'944': television_room
'945': east_asia
'946': mesoamerican
'947': south_asia
'948': western
'949': tennis_court_indoor
'950': tennis_court_outdoor
'951': tent_outdoor
'952': terrace_farm
'953': indoor_round
'954': indoor_seats
'955': theater_outdoor
'956': thriftshop
'957': throne_room
'958': ticket_booth
'959': tobacco_shop_indoor
'960': toll_plaza
'961': tollbooth
'962': topiary_garden
'963': tower
'964': town_house
'965': toyshop
'966': track_outdoor
'967': trading_floor
'968': trailer_park
'969': train_interior
'970': train_station_outdoor
'971': station
'972': tree_farm
'973': tree_house
'974': trench
'975': trestle_bridge
'976': tundra
'977': rail_indoor
'978': rail_outdoor
'979': road_indoor
'980': road_outdoor
'981': turkish_bath
'982': ocean_deep
'983': ocean_shallow
'984': utility_room
'985': valley
'986': van_interior
'987': vegetable_garden
'988': velodrome_indoor
'989': velodrome_outdoor
'990': ventilation_shaft
'991': veranda
'992': vestry
'993': veterinarians_office
'994': videostore
'995': village
'996': vineyard
'997': volcano
'998': volleyball_court_indoor
'999': volleyball_court_outdoor
'1000': voting_booth
'1001': waiting_room
'1002': walk_in_freezer
'1003': warehouse_indoor
'1004': warehouse_outdoor
'1005': washhouse_indoor
'1006': washhouse_outdoor
'1007': watchtower
'1008': water_mill
'1009': water_park
'1010': water_tower
'1011': water_treatment_plant_indoor
'1012': water_treatment_plant_outdoor
'1013': block
'1014': cascade
'1015': cataract
'1016': fan
'1017': plunge
'1018': watering_hole
'1019': weighbridge
'1020': wet_bar
'1021': wharf
'1022': wheat_field
'1023': whispering_gallery
'1024': widows_walk_interior
'1025': windmill
'1026': window_seat
'1027': barrel_storage
'1028': winery
'1029': witness_stand
'1030': woodland
'1031': workroom
'1032': workshop
'1033': wrestling_ring_indoor
'1034': wrestling_ring_outdoor
'1035': yard
'1036': youth_hostel
'1037': zen_garden
'1038': ziggurat
'1039': zoo
'1040': forklift
'1041': hollow
'1042': hutment
'1043': pueblo
'1044': vat
'1045': perfume_shop
'1046': steel_mill_outdoor
'1047': orchestra_pit
'1048': bridle_path
'1049': lyceum
'1050': one-way_street
'1051': parade_ground
'1052': pump_room
'1053': recycling_plant_outdoor
'1054': chuck_wagon
splits:
- name: train
num_bytes: 8468086
num_examples: 20210
- name: test
num_bytes: 744607
num_examples: 3352
- name: validation
num_bytes: 838032
num_examples: 2000
download_size: 1179202534
dataset_size: 10050725
- config_name: instance_segmentation
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 862611544
num_examples: 20210
- name: test
num_bytes: 212493928
num_examples: 3352
- name: validation
num_bytes: 87502294
num_examples: 2000
download_size: 1197393920
dataset_size: 1162607766
---
# Dataset Card for MIT Scene Parsing Benchmark
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MIT Scene Parsing Benchmark homepage](http://sceneparsing.csail.mit.edu/)
- **Repository:** [Scene Parsing repository (Caffe/Torch7)](https://github.com/CSAILVision/sceneparsing), [Scene Parsing repository (PyTorch)](https://github.com/CSAILVision/semantic-segmentation-pytorch) and [Instance Segmentation repository](https://github.com/CSAILVision/placeschallenge/tree/master/instancesegmentation)
- **Paper:** [Scene Parsing through ADE20K Dataset](http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf) and [Semantic Understanding of Scenes through ADE20K Dataset](https://arxiv.org/abs/1608.05442)
- **Leaderboard:** [MIT Scene Parsing Benchmark leaderboard](http://sceneparsing.csail.mit.edu/#:~:text=twice%20per%20week.-,leaderboard,-Organizers)
- **Point of Contact:** [Bolei Zhou](mailto:[email protected])
### Dataset Summary
Scene parsing is the task of segmenting and parsing an image into different image regions associated with semantic categories, such as sky, road, person, and bed. The MIT Scene Parsing Benchmark (SceneParse150) provides a standard training and evaluation platform for scene parsing algorithms. The data for this benchmark comes from the ADE20K dataset, which contains more than 20K scene-centric images exhaustively annotated with objects and object parts. Specifically, the benchmark is divided into 20K images for training, 2K images for validation, and another batch of held-out images for testing. There are in total 150 semantic categories included for evaluation, covering stuff categories such as sky, road, and grass, as well as discrete objects such as person, car, and bed. Note that the distribution of objects occurring in the images is non-uniform, mimicking a more natural object occurrence in daily scenes.
The goal of this benchmark is to segment and parse an image into different image regions associated with semantic categories. It is similar to the semantic segmentation tasks in the COCO and Pascal datasets, but its data is more scene-centric and covers a more diverse range of object categories.
### Supported Tasks and Leaderboards
- `scene-parsing`: The goal of this task is to segment the whole image densely into semantic classes (image regions), where each pixel is assigned a class label such as the region of *tree* and the region of *building*.
[The leaderboard](http://sceneparsing.csail.mit.edu/#:~:text=twice%20per%20week.-,leaderboard,-Organizers) for this task ranks the models by the mean of pixel-wise accuracy and class-wise IoU as the final score. Pixel-wise accuracy is the ratio of correctly predicted pixels, while class-wise IoU is the Intersection over Union of pixels averaged over all 150 semantic categories. Refer to the [Development Kit](https://github.com/CSAILVision/sceneparsing) for details; a minimal score sketch follows this list.
- `instance-segmentation`: The goal of this task is to detect the object instances inside an image and further generate precise segmentation masks for them. It differs from scene parsing in that scene parsing has no instance concept for the segmented regions, whereas in instance segmentation, if there are three persons in the scene, the network is required to segment each of the person regions separately. This task doesn't have an active leaderboard. The performance of instance segmentation algorithms is evaluated by Average Precision (AP, or mAP), following the COCO evaluation metrics. For each image, at most 255 top-scoring instance masks are taken across all categories. Each instance mask prediction is only considered if its IoU with the ground truth is above a certain threshold. There are 10 IoU thresholds, from 0.50 to 0.95 in steps of 0.05, for evaluation. The final AP is averaged across the 10 IoU thresholds and the 100 categories. You can refer to the COCO evaluation page for more explanation: http://mscoco.org/dataset/#detections-eval
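A minimal sketch of the scene-parsing score described in the first task above, assuming `pred` and `gt` are integer label maps of the same shape and that label 0 ("other objects") is ignored, as in the official evaluation:
```python
import numpy as np

def scene_parsing_score(pred: np.ndarray, gt: np.ndarray, num_classes: int = 150) -> float:
    """Mean of pixel-wise accuracy and class-wise mean IoU (illustrative sketch)."""
    valid = gt > 0  # pixels labeled 0 ("other objects") are excluded
    pixel_acc = float((pred[valid] == gt[valid]).mean())

    ious = []
    for c in range(1, num_classes + 1):
        pred_c = (pred == c) & valid
        gt_c = (gt == c) & valid
        union = (pred_c | gt_c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append((pred_c & gt_c).sum() / union)
    mean_iou = float(np.mean(ious))

    return (pixel_acc + mean_iou) / 2
```
The official development kit accumulates these quantities over the whole evaluation set rather than per image; this sketch only illustrates the two components of the score.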
### Languages
English.
## Dataset Structure
### Data Instances
A data point comprises an image and its annotation mask, which is `None` in the testing set. The `scene_parsing` configuration has an additional `scene_category` field.
#### `scene_parsing`
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=683x512 at 0x1FF32A3EDA0>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=L size=683x512 at 0x1FF32E5B978>,
'scene_category': 0
}
```
#### `instance_segmentation`
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=256x256 at 0x20B51B5C400>,
'annotation': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=256x256 at 0x20B57051B38>
}
```
### Data Fields
#### `scene_parsing`
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `annotation`: A `PIL.Image.Image` object containing the annotation mask.
- `scene_category`: A scene category for the image (e.g. `airport_terminal`, `canyon`, `mobile_home`).
> **Note**: annotation masks contain labels ranging from 0 to 150, where 0 refers to "other objects". Those pixels are not considered in the official evaluation. Refer to [this file](https://github.com/CSAILVision/sceneparsing/blob/master/objectInfo150.csv) for the information about the labels of the 150 semantic categories, including indices, pixel ratios and names.
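As a small illustration of the `scene_category` field above (a sketch; it assumes the `scene_parse_150` dataset id and the `datasets` `ClassLabel` API, possibly with `trust_remote_code=True` on recent versions):
```python
from datasets import load_dataset

ds = load_dataset("scene_parse_150", "scene_parsing", split="train")

# scene_category is stored as an integer ClassLabel; map it back to its name
category_feature = ds.features["scene_category"]
print(category_feature.int2str(ds[0]["scene_category"]))  # e.g. "airport_terminal"
```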
#### `instance_segmentation`
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `annotation`: A `PIL.Image.Image` object containing the annotation mask.
> **Note**: in the instance annotation masks, the R(ed) channel encodes category ID, and the G(reen) channel encodes instance ID. Each object instance has a unique instance ID regardless of its category ID. In the dataset, all images have <256 object instances. Refer to [this file (train split)](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/instanceInfo100_train.txt) and to [this file (validation split)](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/instanceInfo100_val.txt) for the information about the labels of the 100 semantic categories. To find the mapping between the semantic categories for `instance_segmentation` and `scene_parsing`, refer to [this file](https://github.com/CSAILVision/placeschallenge/blob/master/instancesegmentation/categoryMapping.txt).
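A hedged sketch of decoding such an instance mask with NumPy, following the channel convention above (R channel = category ID, G channel = instance ID); the `scene_parse_150` dataset id is an assumption:
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("scene_parse_150", "instance_segmentation", split="train")

mask = np.array(ds[0]["annotation"])  # H x W x 3 uint8 array (RGB PNG)
category_ids = mask[:, :, 0]          # red channel: per-pixel category ID
instance_ids = mask[:, :, 1]          # green channel: per-pixel instance ID

# Binary mask of one particular instance (instance IDs are unique per image)
instance_mask = instance_ids == 1
print(np.unique(category_ids[instance_mask]))  # category of that instance
```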
### Data Splits
The data is split into training, test, and validation sets. The training data contains 20,210 images, the test data contains 3,352 images, and the validation data contains 2,000 images.
## Dataset Creation
### Curation Rationale
The rationale from the paper for the ADE20K dataset from which this benchmark originates:
> Semantic understanding of visual scenes is one of the holy grails of computer vision. Despite efforts of the community in data collection, there are still few image datasets covering a wide range of scenes and object categories with pixel-wise annotations for scene understanding. In this work, we present a densely annotated dataset ADE20K, which spans diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts.
>
> The motivation of this work is to collect a dataset that has densely annotated images (every pixel has a semantic label) with a large and an unrestricted open vocabulary. The images in our dataset are manually segmented in great detail, covering a diverse set of scenes, object and object part categories. The challenge for collecting such annotations is finding reliable annotators, as well as the fact that labeling is difficult if the class list is not defined in advance. On the other hand, open vocabulary naming also suffers from naming inconsistencies across different annotators. In contrast, our dataset was annotated by a single expert annotator, providing extremely detailed and exhaustive image annotations. On average, our annotator labeled 29 annotation segments per image, compared to the 16 segments per image labeled by external annotators (like workers from Amazon Mechanical Turk). Furthermore, the data consistency and quality are much higher than that of external annotators.
### Source Data
#### Initial Data Collection and Normalization
Images come from the LabelMe, SUN, and Places datasets and were selected to cover the 900 scene categories defined in the SUN database.
This benchmark was built by selecting the top 150 objects ranked by their total pixel ratios from the ADE20K dataset. As the original images in the ADE20K dataset have various sizes, for simplicity the large-sized images were rescaled so that their minimum height or width is 512 pixels. Among the 150 objects, there are 35 stuff classes (e.g., wall, sky, road) and 115 discrete objects (e.g., car, person, table). The annotated pixels of the 150 objects occupy 92.75% of all the pixels in the dataset, with the stuff classes occupying 60.92% and the discrete objects 31.83%.
#### Who are the source language producers?
The same as in the LabelMe, SUN, and Places datasets.
### Annotations
#### Annotation process
Annotation process for the ADE20K dataset:
> **Image Annotation.** For our dataset, we are interested in having a diverse set of scenes with dense annotations of all the objects present. Images come from the LabelMe, SUN datasets, and Places and were selected to cover the 900 scene categories defined in the SUN database. Images were annotated by a single expert worker using the LabelMe interface. Fig. 2 shows a snapshot of the annotation interface and one fully segmented image. The worker provided three types of annotations: object segments with names, object parts, and attributes. All object instances are segmented independently so that the dataset could be used to train and evaluate detection or segmentation algorithms. Datasets such as COCO, Pascal or Cityscapes start by defining a set of object categories of interest. However, when labeling all the objects in a scene, working with a predefined list of objects is not possible as new categories appear frequently (see fig. 5.d). Here, the annotator created a dictionary of visual concepts where new classes were added constantly to ensure consistency in object naming. Object parts are associated with object instances. Note that parts can have parts too, and we label these associations as well. For example, the ‘rim’ is a part of a ‘wheel’, which in turn is part of a ‘car’. A ‘knob’ is a part of a ‘door’ that can be part of a ‘cabinet’. The total part hierarchy has a depth of 3. The object and part hierarchy is in the supplementary materials.
> **Annotation Consistency.** Defining a labeling protocol is relatively easy when the labeling task is restricted to a fixed list of object classes, however it becomes challenging when the class list is open-ended. As the goal is to label all the objects within each image, the list of classes grows unbounded. Many object classes appear only a few times across the entire collection of images. However, those rare object classes cannot be ignored as they might be important elements for the interpretation of the scene. Labeling in these conditions becomes difficult because we need to keep a growing list of all the object classes in order to have a consistent naming across the entire dataset. Despite the annotator’s best effort, the process is not free of noise. To analyze the annotation consistency we took a subset of 61 randomly chosen images from the validation set, then asked our annotator to annotate them again (there is a time difference of six months). One expects that there are some differences between the two annotations. A few examples are shown in Fig 3. On average, 82.4% of the pixels got the same label. The remaining 17.6% of pixels had some errors, which we grouped into three error types as follows:
>
> • Segmentation quality: Variations in the quality of segmentation and outlining of the object boundary. One typical source of error arises when segmenting complex objects such as buildings and trees, which can be segmented with different degrees of precision. 5.7% of the pixels had this type of error.
>
> • Object naming: Differences in object naming (due to ambiguity or similarity between concepts), for instance calling a big car a ‘car’ in one segmentation and a ‘truck’ in the other, or a ‘palm tree’ a ‘tree’. 6.0% of the pixels had naming issues. These errors can be reduced by defining a very precise terminology, but this becomes much harder with a large growing vocabulary.
>
> • Segmentation quantity: Missing objects in one of the two segmentations. There is a very large number of objects in each image and some images might be annotated more thoroughly than others. For example, in the third column of Fig 3 the annotator missed some small objects in different annotations. 5.9% of the pixels are due to missing labels. A similar issue existed in segmentation datasets such as the Berkeley Image segmentation dataset.
>
> The median error values for the three error types are: 4.8%, 0.3% and 2.6% showing that the mean value is dominated by a few images, and that the most common type of error is segmentation quality.
> To further compare the annotation done by our single expert annotator and the AMT-like annotators, 20 images from the validation set are annotated by two invited external annotators, both with prior experience in image labeling. The first external annotator had 58.5% of inconsistent pixels compared to the segmentation provided by our annotator, and the second external annotator had 75% of the inconsistent pixels. Many of these inconsistencies are due to the poor quality of the segmentations provided by external annotators (as has been observed with AMT, which requires multiple verification steps for quality control). For the best external annotator (the first one), 7.9% of pixels have inconsistent segmentations (just slightly worse than our annotator), 14.9% have inconsistent object naming and 35.8% of the pixels correspond to missing objects, which is due to the much smaller number of objects annotated by the external annotator in comparison with the ones annotated by our expert annotator. The external annotators labeled on average 16 segments per image while our annotator provided 29 segments per image.
#### Who are the annotators?
Three expert annotators and the AMT-like annotators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Refer to the `Annotation Consistency` subsection of `Annotation Process`.
## Additional Information
### Dataset Curators
Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso and Antonio Torralba.
### Licensing Information
The MIT Scene Parsing Benchmark dataset is licensed under a [BSD 3-Clause License](https://github.com/CSAILVision/sceneparsing/blob/master/LICENSE).
### Citation Information
```bibtex
@inproceedings{zhou2017scene,
title={Scene Parsing through ADE20K Dataset},
author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
year={2017}
}
@article{zhou2016semantic,
title={Semantic understanding of scenes through the ade20k dataset},
author={Zhou, Bolei and Zhao, Hang and Puig, Xavier and Fidler, Sanja and Barriuso, Adela and Torralba, Antonio},
journal={arXiv preprint arXiv:1608.05442},
year={2016}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
scan-tasks/scan-tasks | scan-tasks | 2024-01-18T11:15:22Z | 1,750 | 6 | [
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:bsd",
"size_categories:10K<n<100K",
"arxiv:1711.00350",
"region:us",
"multi-turn"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- bsd
multilinguality:
- monolingual
pretty_name: SCAN
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: scan
tags:
- multi-turn
dataset_info:
- config_name: simple
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3217770
num_examples: 16728
- name: test
num_bytes: 799912
num_examples: 4182
download_size: 4080388
dataset_size: 4017682
- config_name: addprim_jump
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2535625
num_examples: 14670
- name: test
num_bytes: 1508445
num_examples: 7706
download_size: 4111174
dataset_size: 4044070
- config_name: addprim_turn_left
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3908891
num_examples: 21890
- name: test
num_bytes: 170063
num_examples: 1208
download_size: 4148216
dataset_size: 4078954
- config_name: filler_num0
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2513034
num_examples: 15225
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 2892291
dataset_size: 2843121
- config_name: filler_num1
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2802865
num_examples: 16290
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3185317
dataset_size: 3132952
- config_name: filler_num2
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3106220
num_examples: 17391
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3491975
dataset_size: 3436307
- config_name: filler_num3
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3412704
num_examples: 18528
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3801870
dataset_size: 3742791
- config_name: length
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2672464
num_examples: 16990
- name: test
num_bytes: 1345218
num_examples: 3920
download_size: 4080388
dataset_size: 4017682
- config_name: template_around_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2513034
num_examples: 15225
- name: test
num_bytes: 1229757
num_examples: 4476
download_size: 3801870
dataset_size: 3742791
- config_name: template_jump_around_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3412704
num_examples: 18528
- name: test
num_bytes: 330087
num_examples: 1173
download_size: 3801870
dataset_size: 3742791
- config_name: template_opposite_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 2944398
num_examples: 15225
- name: test
num_bytes: 857943
num_examples: 4476
download_size: 3861420
dataset_size: 3802341
- config_name: template_right
features:
- name: commands
dtype: string
- name: actions
dtype: string
splits:
- name: train
num_bytes: 3127623
num_examples: 15225
- name: test
num_bytes: 716403
num_examples: 4476
download_size: 3903105
dataset_size: 3844026
config_names:
- addprim_jump
- addprim_turn_left
- filler_num0
- filler_num1
- filler_num2
- filler_num3
- length
- simple
- template_around_right
- template_jump_around_right
- template_opposite_right
- template_right
---
# Dataset Card for "scan"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/brendenlake/SCAN](https://github.com/brendenlake/SCAN)
- **Repository:** [https://github.com/brendenlake/SCAN](https://github.com/brendenlake/SCAN)
- **Paper:** [Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks](https://arxiv.org/abs/1711.00350)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 224.18 MB
- **Size of the generated dataset:** 44.53 MB
- **Total amount of disk used:** 268.71 MB
### Dataset Summary
SCAN tasks with various splits.
SCAN is a set of simple language-driven navigation tasks for studying
compositional learning and zero-shot generalization.
See https://github.com/brendenlake/SCAN for a description of the splits.
Example usage:
data = datasets.load_dataset('scan', 'length')
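Expanding on the usage line above, a hedged sketch of reading one example (field names taken from the YAML metadata; recent `datasets` versions may require `trust_remote_code=True` for script-based datasets):
```python
import datasets

# Load the "length" generalization split configuration
scan_length = datasets.load_dataset('scan', 'length')

sample = scan_length['train'][0]
print(sample['commands'])  # a natural-language command
print(sample['actions'])   # the corresponding target action sequence
```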
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### addprim_jump
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 4.05 MB
- **Total amount of disk used:** 22.73 MB
An example of 'train' looks as follows (illustrative command/action pair; in this split, composed "jump" commands are held out for testing):
```
{
    "commands": "jump",
    "actions": "I_JUMP"
}
```
#### addprim_turn_left
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 4.09 MB
- **Total amount of disk used:** 22.76 MB
An example of 'train' looks as follows.
```
```
#### filler_num0
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 2.85 MB
- **Total amount of disk used:** 21.53 MB
An example of 'train' looks as follows.
```
```
#### filler_num1
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 3.14 MB
- **Total amount of disk used:** 21.82 MB
An example of 'train' looks as follows.
```
```
#### filler_num2
- **Size of downloaded dataset files:** 18.69 MB
- **Size of the generated dataset:** 3.44 MB
- **Total amount of disk used:** 22.12 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### addprim_jump
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### addprim_turn_left
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### filler_num0
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### filler_num1
- `commands`: a `string` feature.
- `actions`: a `string` feature.
#### filler_num2
- `commands`: a `string` feature.
- `actions`: a `string` feature.
### Data Splits
| name |train|test|
|-----------------|----:|---:|
|addprim_jump |14670|7706|
|addprim_turn_left|21890|1208|
|filler_num0 |15225|1173|
|filler_num1 |16290|1173|
|filler_num2 |17391|1173|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{Lake2018GeneralizationWS,
title={Generalization without Systematicity: On the Compositional Skills of
Sequence-to-Sequence Recurrent Networks},
author={Brenden M. Lake and Marco Baroni},
booktitle={ICML},
year={2018},
url={https://arxiv.org/pdf/1711.00350.pdf},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
inparallel/saudinewsnet | inparallel | 2024-01-18T11:15:20Z | 111 | 8 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ar",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: saudinewsnet
dataset_info:
features:
- name: source
dtype: string
- name: url
dtype: string
- name: date_extracted
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 103654105
num_examples: 31030
download_size: 29014166
dataset_size: 103654105
---
# Dataset Card for "saudinewsnet"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SaudiNewsNet](https://github.com/parallelfold/SaudiNewsNet)
- **Repository:** [Website](https://github.com/parallelfold/SaudiNewsNet)
- **Paper:** [More Information Needed]
- **Point of Contact:** [Mazen Abdulaziz](mailto:[email protected])
- **Size of downloaded dataset files:** 29.01 MB
- **Size of the generated dataset:** 103.65 MB
- **Total amount of disk used:** 132.67 MB
### Dataset Summary
The dataset contains a set of 31,030 Arabic newspaper articles along with metadata, extracted from various online Saudi newspapers and written in Modern Standard Arabic (MSA).
The dataset currently contains **31,030** Arabic articles (with a total number of **8,758,976 words**). The articles were extracted from the following Saudi newspapers (sorted by number of articles):
- [Al-Riyadh](http://www.alriyadh.com/) (4,852 articles)
- [Al-Jazirah](http://al-jazirah.com/) (3,690 articles)
- [Al-Yaum](http://alyaum.com/) (3,065 articles)
- [Al-Eqtisadiya](http://aleqt.com/) (2,964 articles)
- [Al-Sharq Al-Awsat](http://aawsat.com/) (2,947 articles)
- [Okaz](http://www.okaz.com.sa/) (2,846 articles)
- [Al-Watan](http://alwatan.com.sa/) (2,279 articles)
- [Al-Madina](http://www.al-madina.com/) (2,252 articles)
- [Al-Weeam](http://alweeam.com.sa/) (2,090 articles)
- [Ain Alyoum](http://3alyoum.com/) (2,080 articles)
- [Sabq](http://sabq.org/) (1,411 articles)
- [Saudi Press Agency](http://www.spa.gov.sa) (369 articles)
- [Arreyadi](http://www.arreyadi.com.sa/) (133 articles)
- [Arreyadiyah](http://www.arreyadiyah.com/) (52 articles)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 29.01 MB
- **Size of the generated dataset:** 103.65 MB
- **Total amount of disk used:** 132.67 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"author": "الرياض: محمد الحميدي",
"content": "\"في وقت تتهيأ فيه السعودية لإطلاق الإصدار الثاني من العملات المعدنية، لا تزال التداول بمبالغ النقود المصنوعة من المعدن مستقرة عن...",
"date_extracted": "2015-07-22 01:18:37",
"source": "aawsat",
"title": "\"«العملة المعدنية» السعودية تسجل انحسارًا تاريخيًا وسط تهيؤ لإطلاق الإصدار الثاني\"...",
"url": "\"http://aawsat.com/home/article/411671/«العملة-المعدنية»-السعودية-تسجل-انحسارًا-تاريخيًا-وسط-تهيؤ-لإطلاق-الإصدار-الثاني\"..."
}
```
### Data Fields
The data fields are the same among all splits.
- **`source`** (str): The source newspaper.
- **`url`** (str): The full URL from which the article was extracted.
- **`date_extracted`** (str): The timestamp of the date on which the article was extracted, in the format `YYYY-MM-DD hh:mm:ss`. Note that this field does not necessarily represent the date on which the article was authored (or made available online); however, for articles stamped with a date of extraction after August 1, 2015, it most probably represents the date of authoring.
- **`title`** (str): The title of the article. Contains missing values that were replaced with an empty string.
- **`author`** (str): The author of the article. Contains missing values that were replaced with an empty string.
- **`content`** (str): The content of the article.
### Data Splits
| name |train|
|-------|----:|
|default|31030|
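Example usage (a minimal sketch; the dataset id is assumed to be `saudinewsnet`, or `inparallel/saudinewsnet` depending on the Hub namespace):
```python
from datasets import load_dataset

ds = load_dataset("saudinewsnet", split="train")
article = ds[0]
print(article["source"], article["date_extracted"])
print(article["title"])
```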
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
| String Identifier | Newspaper |
| ------------------ | --------- |
| aawsat | [Al-Sharq Al-Awsat](http://aawsat.com/) |
| aleqtisadiya | [Al-Eqtisadiya](http://aleqt.com/) |
| aljazirah | [Al-Jazirah](http://al-jazirah.com/) |
| almadina | [Al-Madina](http://www.al-madina.com/) |
| alriyadh | [Al-Riyadh](http://www.alriyadh.com/) |
| alwatan | [Al-Watan](http://alwatan.com.sa/) |
| alweeam | [Al-Weeam](http://alweeam.com.sa/) |
| alyaum | [Al-Yaum](http://alyaum.com/) |
| arreyadi | [Arreyadi](http://www.arreyadi.com.sa/) |
| arreyadiyah | [Arreyadiyah](http://www.arreyadiyah.com/) |
| okaz | [Okaz](http://www.okaz.com.sa/) |
| sabq | [Sabq](http://sabq.org/) |
| was | [Saudi Press Agency](http://www.spa.gov.sa/) |
| 3alyoum | [Ain Alyoum](http://3alyoum.com/) |
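Because these identifiers appear verbatim in the `source` field, a per-newspaper subset can be selected with a simple filter (a sketch, using the same assumed dataset id as above):
```python
from datasets import load_dataset

ds = load_dataset("saudinewsnet", split="train")
# Keep only Saudi Press Agency articles ("was" in the identifier table above).
spa = ds.filter(lambda example: example["source"] == "was")
print(len(spa))  # 369, per the per-source counts listed earlier
```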
#### Initial Data Collection and Normalization
The texts are Modern Standard Arabic articles crawled from the Internet.
#### Who are the source language producers?
Newspaper Websites.
### Annotations
The dataset does not contain any additional annotations.
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
### Citation Information
```
@misc{hagrima2015,
author = "M. Alhagri",
title = "Saudi Newspapers Arabic Corpus (SaudiNewsNet)",
year = 2015,
url = "http://github.com/ParallelMazen/SaudiNewsNet"
}
```
### Contributions
Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset. |
INK-USC/riddle_sense | INK-USC | 2024-01-18T11:14:43Z | 100 | 24 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:1K<n<10K",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: RiddleSense
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
dataset_info:
features:
- name: answerKey
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 720715
num_examples: 3510
- name: validation
num_bytes: 208276
num_examples: 1021
- name: test
num_bytes: 212790
num_examples: 1184
download_size: 2083122
dataset_size: 1141781
---
# Dataset Card for RiddleSense
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://inklab.usc.edu/RiddleSense/
- **Repository:** https://github.com/INK-USC/RiddleSense/
- **Paper:** https://inklab.usc.edu/RiddleSense/riddlesense_acl21_paper.pdf
- **Leaderboard:** https://inklab.usc.edu/RiddleSense/#leaderboard
- **Point of Contact:** [Yuchen Lin](mailto:[email protected])
### Dataset Summary
Answering a riddle-style question is a challenging cognitive process, in that it requires
complex commonsense reasoning abilities, an understanding of figurative language, and counterfactual reasoning
skills, which are all important abilities for advanced natural language understanding (NLU). However,
there are currently no dedicated datasets aiming to test these abilities. Herein, we present RiddleSense,
a new multiple-choice question answering task, which comes with the first large dataset (5.7k examples) for answering
riddle-style commonsense questions. We systematically evaluate a wide range of models on the challenge,
and point out that there is a large gap between the best supervised model and human performance, suggesting
intriguing future research in the direction of higher-order commonsense reasoning and linguistic creativity towards
building advanced NLU systems.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"answerKey": "E",
"choices": {
"label": ["A", "B", "C", "D", "E"],
"text": ["throw", "bit", "gallow", "mouse", "hole"]
},
"question": "A man is incarcerated in prison, and as his punishment he has to carry a one tonne bag of sand backwards and forwards across a field the size of a football pitch. What is the one thing he can put in it to make it lighter?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `answerKey`: a string feature.
- `question`: a string feature.
- `choices`: a dictionary feature containing:
- `label`: a string feature.
- `text`: a string feature.
### Data Splits
|name| train| validation| test|
|---|---|---|---|
|default| 3510| 1021| 1184|
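Given these fields, the gold answer text can be recovered by matching `answerKey` against the parallel `choices` lists (a sketch; the dataset id is assumed to be `riddle_sense`):
```python
from datasets import load_dataset

ds = load_dataset("riddle_sense", split="train")
ex = ds[0]
# "choices" is a dict of parallel lists, as in the instance shown above.
answer_idx = ex["choices"]["label"].index(ex["answerKey"])
print(ex["question"])
print("Answer:", ex["choices"]["text"][answer_idx])
```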
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The copyright of the RiddleSense dataset is consistent with the terms of use of the fan websites and the intellectual property and privacy rights of the original sources. All of our riddles and answers are from fan websites that can be accessed freely. The website owners state that you may print and download material from the sites solely for non-commercial use, provided that you agree not to change or delete any copyright or proprietary notices from the materials. Dataset users must agree that they will only use the dataset for research purposes before they can access both the riddles and our annotations. We do not vouch for potential bias or fairness issues that might exist within the riddles. You do not have the right to redistribute them. Again, you must not use this dataset for any commercial purposes.
### Citation Information
```
@InProceedings{lin-etal-2021-riddlesense,
title={RiddleSense: Reasoning about Riddle Questions Featuring Linguistic Creativity and Commonsense Knowledge},
author={Lin, Bill Yuchen and Wu, Ziyi and Yang, Yichi and Lee, Dong-Ho and Ren, Xiang},
journal={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP 2021): Findings},
year={2021}
}
```
### Contributions
Thanks to [@ziyiwu9494](https://github.com/ziyiwu9494) for adding this dataset. |
kdexd/red_caps | kdexd | 2024-01-18T11:14:38Z | 740,731 | 59 | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"arxiv:2111.11431",
"region:us"
] | [
"image-to-text"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- image-to-text
task_ids:
- image-captioning
paperswithcode_id: redcaps
pretty_name: RedCaps
dataset_info:
features:
- name: image_id
dtype: string
- name: author
dtype: string
- name: image_url
dtype: string
- name: raw_caption
dtype: string
- name: caption
dtype: string
- name: subreddit
dtype:
class_label:
names:
'0': abandonedporn
'1': abandoned
'2': absoluteunits
'3': airplants
'4': alltheanimals
'5': amateurphotography
'6': amateurroomporn
'7': animalporn
'8': antiques
'9': antkeeping
'10': ants
'11': aquariums
'12': architectureporn
'13': artefactporn
'14': astronomy
'15': astrophotography
'16': australiancattledog
'17': australianshepherd
'18': autumnporn
'19': averagebattlestations
'20': awwducational
'21': awwnverts
'22': axolotls
'23': backpacking
'24': backyardchickens
'25': baking
'26': ballpython
'27': barista
'28': bassfishing
'29': battlestations
'30': bbq
'31': beagle
'32': beardeddragons
'33': beekeeping
'34': beerandpizza
'35': beerporn
'36': beerwithaview
'37': beginnerwoodworking
'38': bengalcats
'39': bento
'40': bernesemountaindogs
'41': berries
'42': bettafish
'43': bicycling
'44': bikecommuting
'45': birding
'46': birdphotography
'47': birdpics
'48': birdsofprey
'49': birds
'50': blackcats
'51': blacksmith
'52': bladesmith
'53': boatporn
'54': bonsai
'55': bookporn
'56': bookshelf
'57': bordercollie
'58': bostonterrier
'59': botanicalporn
'60': breadit
'61': breakfastfood
'62': breakfast
'63': bridgeporn
'64': brochet
'65': budgetfood
'66': budgies
'67': bulldogs
'68': burgers
'69': butterflies
'70': cabinporn
'71': cactus
'72': cakedecorating
'73': cakewin
'74': cameras
'75': campingandhiking
'76': camping
'77': carnivorousplants
'78': carpentry
'79': carporn
'80': cassetteculture
'81': castiron
'82': castles
'83': casualknitting
'84': catpictures
'85': cats
'86': ceramics
'87': chameleons
'88': charcuterie
'89': cheesemaking
'90': cheese
'91': chefit
'92': chefknives
'93': chickens
'94': chihuahua
'95': chinchilla
'96': chinesefood
'97': churchporn
'98': cider
'99': cityporn
'100': classiccars
'101': cockatiel
'102': cocktails
'103': coffeestations
'104': coins
'105': cookiedecorating
'106': corgi
'107': cornsnakes
'108': cozyplaces
'109': crafts
'110': crestedgecko
'111': crochet
'112': crossstitch
'113': crows
'114': crystals
'115': cupcakes
'116': dachshund
'117': damnthatsinteresting
'118': desertporn
'119': designmyroom
'120': desksetup
'121': dessertporn
'122': dessert
'123': diy
'124': dobermanpinscher
'125': doggos
'126': dogpictures
'127': drunkencookery
'128': duck
'129': dumpsterdiving
'130': earthporn
'131': eatsandwiches
'132': embroidery
'133': entomology
'134': equestrian
'135': espresso
'136': exposureporn
'137': eyebleach
'138': f1porn
'139': farming
'140': femalelivingspace
'141': fermentation
'142': ferrets
'143': fireporn
'144': fishing
'145': fish
'146': flowers
'147': flyfishing
'148': foodporn
'149': food
'150': foraging
'151': fossilporn
'152': fountainpens
'153': foxes
'154': frenchbulldogs
'155': frogs
'156': gardening
'157': gardenwild
'158': geckos
'159': gemstones
'160': geologyporn
'161': germanshepherds
'162': glutenfree
'163': goldenretrievers
'164': goldfish
'165': gold
'166': greatpyrenees
'167': grilledcheese
'168': grilling
'169': guineapigs
'170': gunporn
'171': guns
'172': hamsters
'173': handtools
'174': healthyfood
'175': hedgehog
'176': helicopters
'177': herpetology
'178': hiking
'179': homestead
'180': horses
'181': hotpeppers
'182': houseplants
'183': houseporn
'184': husky
'185': icecreamery
'186': indoorgarden
'187': infrastructureporn
'188': insects
'189': instantpot
'190': interestingasfuck
'191': interiordesign
'192': itookapicture
'193': jellyfish
'194': jewelry
'195': kayakfishing
'196': kayaking
'197': ketorecipes
'198': knifeporn
'199': knives
'200': labrador
'201': leathercraft
'202': leopardgeckos
'203': lizards
'204': lookatmydog
'205': macarons
'206': machineporn
'207': macroporn
'208': malelivingspace
'209': mead
'210': mealprepsunday
'211': mechanicalkeyboards
'212': mechanicalpencils
'213': melts
'214': metalworking
'215': microgreens
'216': microporn
'217': mildlyinteresting
'218': mineralporn
'219': monitors
'220': monstera
'221': mostbeautiful
'222': motorcycleporn
'223': muglife
'224': mushroomgrowers
'225': mushroomporn
'226': mushrooms
'227': mycology
'228': natureisfuckinglit
'229': natureporn
'230': nebelung
'231': orchids
'232': otters
'233': outdoors
'234': owls
'235': parrots
'236': pelletgrills
'237': pens
'238': perfectfit
'239': permaculture
'240': photocritique
'241': photographs
'242': pics
'243': pitbulls
'244': pizza
'245': plantbaseddiet
'246': plantedtank
'247': plantsandpots
'248': plants
'249': pomeranians
'250': pottery
'251': pourpainting
'252': proplifting
'253': pugs
'254': pug
'255': quilting
'256': rabbits
'257': ramen
'258': rarepuppers
'259': reeftank
'260': reptiles
'261': resincasting
'262': roomporn
'263': roses
'264': rottweiler
'265': ruralporn
'266': sailing
'267': salsasnobs
'268': samoyeds
'269': savagegarden
'270': scotch
'271': seaporn
'272': seriouseats
'273': sewing
'274': sharks
'275': shiba
'276': shihtzu
'277': shrimptank
'278': siamesecats
'279': siberiancats
'280': silverbugs
'281': skyporn
'282': sloths
'283': smoking
'284': snails
'285': snakes
'286': sneakers
'287': sneks
'288': somethingimade
'289': soup
'290': sourdough
'291': sousvide
'292': spaceporn
'293': spicy
'294': spiderbro
'295': spiders
'296': squirrels
'297': steak
'298': streetphotography
'299': succulents
'300': superbowl
'301': supermodelcats
'302': sushi
'303': tacos
'304': tarantulas
'305': tastyfood
'306': teaporn
'307': tea
'308': tequila
'309': terrariums
'310': thedepthsbelow
'311': thriftstorehauls
'312': tinyanimalsonfingers
'313': tonightsdinner
'314': toolporn
'315': tools
'316': torties
'317': tortoise
'318': tractors
'319': trailrunning
'320': trains
'321': trucks
'322': turtle
'323': underwaterphotography
'324': upcycling
'325': urbanexploration
'326': urbanhell
'327': veganfoodporn
'328': veganrecipes
'329': vegetablegardening
'330': vegetarian
'331': villageporn
'332': vintageaudio
'333': vintage
'334': vinyl
'335': volumeeating
'336': watches
'337': waterporn
'338': weatherporn
'339': wewantplates
'340': wildernessbackpacking
'341': wildlifephotography
'342': wine
'343': winterporn
'344': woodcarving
'345': woodworking
'346': workbenches
'347': workspaces
'348': yarnaddicts
'349': zerowaste
- name: score
dtype: int32
- name: created_utc
dtype: timestamp[s, tz=UTC]
- name: permalink
dtype: string
- name: crosspost_parents
sequence: string
config_name: all
splits:
- name: train
num_bytes: 3378544525
num_examples: 12011121
download_size: 1061908181
dataset_size: 3378544525
---
# Dataset Card for RedCaps
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [RedCaps homepage](https://redcaps.xyz/)
- **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader)
- **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431)
- **Leaderboard:**
- **Point of Contact:** [Karan Desai](mailto:[email protected])
### Dataset Summary
RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
Images and captions from Reddit depict and describe a wide variety of objects and scenes.
The data is collected from a manually curated set of subreddits (350 total),
which give coarse image labels and allow steering of the dataset composition
without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and
fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image
labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually
unrelated images through a common semantic meaning (r/perfectfit).
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
    # Try the download up to `retries + 1` times; on success, decode the bytes
    # into a PIL image. If every attempt fails, return None for this URL.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Download all images in the batch concurrently and store them in a new "image" column.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch
num_threads = 20
dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
Some image links point to more than one image. You can process and download those as follows:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import os
import re
import urllib
import PIL.Image
import datasets
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"]))
return batch
def process_image_urls(batch):
processed_batch_image_urls = []
for image_url in batch["image_url"]:
processed_example_image_urls = []
image_url_splits = re.findall(r"http\S+", image_url)
for image_url_split in image_url_splits:
if "imgur" in image_url_split and "," in image_url_split:
for image_url_part in image_url_split.split(","):
if not image_url_part:
continue
image_url_part = image_url_part.strip()
root, ext = os.path.splitext(image_url_part)
if not root.startswith("http"):
root = "http://i.imgur.com/" + root
root = root.split("#")[0]
if not ext:
ext = ".jpg"
ext = re.split(r"[?%]", ext)[0]
image_url_part = root + ext
processed_example_image_urls.append(image_url_part)
else:
processed_example_image_urls.append(image_url_split)
processed_batch_image_urls.append(processed_example_image_urls)
batch["image_url"] = processed_batch_image_urls
return batch
dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(process_image_urls, batched=True, num_proc=4)
features = dset["train"].features.copy()
features["image"] = datasets.Sequence(datasets.Image())
num_threads = 20
dset = dset.map(fetch_images, batched=True, batch_size=100, features=features, fn_kwargs={"num_threads": num_threads})
```
Note that in the above code, we use the `datasets.Sequence` feature to represent a list of images for the multi-image links.
### Supported Tasks and Leaderboards
From the paper:
> We have used our dataset to train deep neural networks that perform image captioning, and
that learn transferable visual representations for a variety of downstream visual recognition tasks
(image classification, object detection, instance segmentation).
> We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks,
such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subreddits in RedCaps use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in RedCaps represents a single Reddit image post:
```
{
'image_id': 'bpzj7r',
'author': 'djasz1',
'image_url': 'https://i.redd.it/ho0wntksivy21.jpg',
'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.',
'caption': "found on a friend's property in the keys fl. she is now happily living in my house.",
'subreddit': 3,
'score': 72,
'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41),
'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/',
'crosspost_parents': None
}
```
### Data Fields
- `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit).
- `author`: Reddit username of the image post author.
- `image_url`: Static URL for downloading the image associated with the post.
- `raw_caption`: Textual description of the image, written by the post author.
- `caption`: Cleaned version of "raw_caption" by us (see Q35).
- `subreddit`: Name of subreddit where the post was submitted.
- `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost.
- `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit.
- `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>).
- `crosspost_parents`: List of parent posts. This field is optional.
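Since `subreddit` is stored as an integer class label, it can be decoded back to a subreddit name through the dataset features (a sketch, using the `all` config from this card's metadata):
```python
from datasets import load_dataset

dset = load_dataset("red_caps", "all", split="train")
subreddit_feature = dset.features["subreddit"]
print(subreddit_feature.int2str(3))  # "airplants", matching the instance shown above
```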
### Data Splits
All the data is contained in the training set, which has nearly 12M (12,011,121) instances.
From the paper:
> We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while
the validation split is derived from downstream task(s). If users require a validation split, we
recommend sampling it such that it follows the same subreddit distribution as the entire dataset.
## Dataset Creation
### Curation Rationale
From the paper:
> Large datasets of image-text pairs are widely used for pre-training generic representations
that transfer to a variety of downstream vision and vision-and-language tasks. Existing public
datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML
alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex
data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is
inefficient and diversity is artificially suppressed. We argue that the quality of data depends on
its source, and the human intent behind its creation. In this work, we explore Reddit – a social
media platform, for curating high quality data. We introduce RedCaps – a large dataset of
12M image-text pairs from Reddit. While we expect the use-cases of RedCaps to be similar to
existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection,
better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> **Data Collection Pipeline**
Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task
involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning.
**Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits
have their own rules, community norms, and moderators so curating subreddits allows us to steer the
dataset’s composition without annotating individual instances. We select subreddits with a high volume of images posts, where images tend to be photographs (rather than memes, drawings, screenshots,
etc) and post titles tend to describe image content (rather than making jokes, political commentary,
etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the
number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or
comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on
general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund),
plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food
(r/steak, r/macarons), scenery (r/cityporn, r/desertporn), or activities (r/carpentry, r/kayaking).
In total we collect data from 350 subreddits; the full list can be found in Appendix A.
**Step 2**. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image
posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months
after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains:
Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain
multiple images (gallery posts) – in this case we only collect the first image and associate it with
the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts
marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content.
**Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale
sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase
captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following
[29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets
((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc],
image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram:
@user], and other references (link in comments). Finally, like [31] we replace social media
handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy.
Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them,
as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard
captions without nouns or that don’t overlap image tags, we do not discard any instances in this step.
Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is
less resource-intensive than existing datasets – we do not require webpage crawlers, search engines,
or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more
subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate
user privacy risks and harmful stereotypes in RedCaps, resulting in a final size of 12M instances.
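As an illustration of the Step 3 caption cleaning described above, a minimal sketch (an approximation written for this card, not the authors' released code; the exact ftfy usage is an assumption):
```python
import re
import ftfy

def clean_caption(raw_caption: str) -> str:
    # Normalize unicode and lowercase (the paper additionally strips accents,
    # emojis, and non-latin characters via ftfy-based processing).
    caption = ftfy.fix_text(raw_caption).lower()
    # Discard sub-strings enclosed in round or square brackets, e.g. "[oc]", "(800x600 px)".
    caption = re.sub(r"\(.*?\)|\[.*?\]", "", caption)
    # Replace social media handles (words starting with "@") with a [USR] token.
    caption = re.sub(r"@\w+", "[USR]", caption)
    return " ".join(caption.split())
```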
#### Who are the source language producers?
Reddit is the singular data source for RedCaps.
### Annotations
#### Annotation process
The dataset is built using fully automatic data collection pipeline which doesn't require any human annotators.
#### Who are the annotators?
The annotation process doesn't require any human annotators.
### Personal and Sensitive Information
From the paper:
> **Does the dataset relate to people?**
The dataset pertains to people in that people wrote the captions and posted images to Reddit
that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid
large quantities of images containing people:
(a) We collect data from manually curated subreddits, most of which primarily pertain
to animals, objects, places, or activities. We exclude all subreddits whose primary purpose
is to share and describe images of people (such as celebrity photos or user selfies).
(b) We use an off-the-shelf face detector to find and remove images with potential presence of
human faces. We manually checked 50K random images in RedCaps (Q16) and found 79
images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images
with identifiable people. Refer Section 2.2 in the main paper.
> **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in
combination with other data) from the dataset?**
Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be
used to look up the Reddit user profile, and some Reddit users may have identifying information
in their profiles. Some images may contain human faces which could be identified by
appearance. However, note that all this information is already public on Reddit, and searching it
in RedCaps is no easier than searching directly on Reddit.
> **Were the individuals in question notified about the data collection?**
No. Reddit users are anonymous by default, and are not required to share their personal contact
information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps
image posts is by sending them private messages on Reddit. This is practically difficult to do
manually, and a templated message sent programmatically to millions of users would be
classified as spam and blocked by Reddit.
> **Did the individuals in question consent to the collection and use of their data?**
Users did not explicitly consent to the use of their data in our dataset. However, by uploading
their data on Reddit, they consent that it would appear on the Reddit platform and will be
accessible via the official Reddit API (which we use to collect RedCaps).
> **If consent was obtained, were the consenting individuals provided with a mechanism to
revoke their consent in the future or for certain uses?**
Users have full control over the presence of their data in our dataset. If users wish to revoke
their consent, they can delete the underlying Reddit post – it will be automatically removed
from RedCaps, since we distribute images as URLs. Moreover, we provide an opt-out request
form on our dataset website for anybody to request removal of an individual instance if it is
potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.).
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?**
No.
### Discussion of Biases
From the paper:
> **Harmful Stereotypes**: Another concern with
Reddit data is that images or language may represent harmful stereotypes about gender, race, or other
characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation
for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35]
whose training data includes at least 63K documents from banned or quarantined subreddits which
may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways:
> * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low
precision (∼1%) – most detections are non-NSFW images with pink and beige hues.
> * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels.
> **Reddit demographics**: Reddit’s user demographics are not representative of the population at large.
Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs
22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users
are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United
States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together,
these demographic biases likely also bias the types of objects and places that appear in images on
Reddit, and the language used to describe these images. We do not offer explicit countermeasures to
these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51].
Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or
gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet
data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G.
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**
The scale of RedCaps means that we are unable to verify the contents of all images and
captions. However we have tried to minimize the possibility that RedCaps contains data that
might be offensive, insulting, threatening, or might cause anxiety via the following mitigations:
(a) We manually curate the set of subreddits from which to collect data; we only chose
subreddits that are not marked NSFW and which generally contain non-offensive content.
(b) Within our curated subreddits, we did not include any posts marked NSFW.
(c) We removed all instances whose captions contained any of the 400 potentially offensive
words or phrases. Refer Section 2.2 in the main paper.
(d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector.
We manually checked 50K random images in RedCaps and found one image containing
nudity (exposed buttocks; no identifiable face). Refer Section 2.2 in the main paper
> **Does the dataset identify any subpopulations (e.g., by age, gender)?**
RedCaps does not explicitly identify any subpopulations. Since some images contain people
and captions are free-form natural language written by Reddit users, it is possible that some
captions may identify people appearing in individual images as part of a subpopulation.
> **Were any ethical review processes conducted (e.g., by an institutional review board)?**
We did not conduct a formal ethical review process via institutional review boards. However,
as described in Section 2.2 of the main paper and Q16 we employed several filtering mechanisms
to try and remove instances that could be problematic.
### Other Known Limitations
From the paper:
> **Are there any errors, sources of noise, or redundancies in the dataset?**
RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured.
Some instances may also have duplicate images and captions – Reddit users may have shared
the same image post in multiple subreddits. Such redundancies constitute a very small fraction
of the dataset, and should have almost no effect in training large-scale models.
> **Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?**
No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps.
## Additional Information
### Dataset Curators
From the paper:
> Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps:
Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson.
### Licensing Information
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/api-terms), and users must comply with the Reddit User Agreement, Content Policy, and Privacy Policy – all accessible at https://www.redditinc.com/policies.
From the paper:
> RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Citation Information
```bibtex
@misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
deepmind/pg19 | deepmind | 2024-01-18T11:12:51Z | 2,140 | 54 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"arxiv:1911.05507",
"region:us"
] | [
"text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: pg-19
pretty_name: PG-19
dataset_info:
features:
- name: short_book_title
dtype: string
- name: publication_date
dtype: int32
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11453688452
num_examples: 28602
- name: validation
num_bytes: 17402295
num_examples: 50
- name: test
num_bytes: 40482852
num_examples: 100
download_size: 11740397875
dataset_size: 11511573599
---
# Dataset Card for "pg19"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/pg19](https://github.com/deepmind/pg19)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 11.74 GB
- **Size of the generated dataset:** 11.51 GB
- **Total amount of disk used:** 23.25 GB
### Dataset Summary
This repository contains the PG-19 language modeling benchmark.
It includes a set of books extracted from the Project Gutenberg books library that were published before 1919.
It also contains metadata of book titles and publication dates.
PG-19 is over double the size of the Billion Word benchmark and contains documents that are 20X longer, on average, than the WikiText long-range language modelling benchmark.
Books are partitioned into a train, validation, and test set. Book metadata is stored in metadata.csv which contains (book_id, short_book_title, publication_date).
Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing of the text that has been applied is the removal of boilerplate license text, and the mapping of offensive discriminatory words as specified by Ofcom to placeholder tokens. Users are free to model the data at the character-level, subword-level, or via any mechanism that can model an arbitrary string of text.
To compare models we propose to continue measuring the word-level perplexity, by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table.
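Concretely, that conversion can be sketched as follows (an illustration written for this card; the per-split word counts referenced above are given in the paper, not reproduced here):
```python
import math

def word_level_perplexity(total_log_likelihood: float, num_words: int) -> float:
    # total_log_likelihood: sum of log-probabilities (in nats) over the split,
    # computed under any chosen subword or character-based scheme.
    # num_words: the fixed word-level token count for that split.
    return math.exp(-total_log_likelihood / num_words)
```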
One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 11.74 GB
- **Size of the generated dataset:** 11.51 GB
- **Total amount of disk used:** 23.25 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"publication_date": 1907,
"short_book_title": "La Fiammetta by Giovanni Boccaccio",
"text": "\"\\n\\n\\n\\nProduced by Ted Garvin, Dave Morgan and PG Distributed Proofreaders\\n\\n\\n\\n\\nLA FIAMMETTA\\n\\nBY\\n\\nGIOVANNI BOCCACCIO\\n...",
"url": "http://www.gutenberg.org/ebooks/10006"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `short_book_title`: a `string` feature.
- `publication_date`: a `int32` feature.
- `url`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|28602| 50| 100|
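Because the generated dataset is ~11.5 GB, streaming is a practical way to inspect it without a full download (a sketch; the dataset id is assumed to be `pg19`):
```python
from datasets import load_dataset

pg19_val = load_dataset("pg19", split="validation", streaming=True)
book = next(iter(pg19_val))
print(book["short_book_title"], book["publication_date"], len(book["text"]))
```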
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
### Citation Information
```
@article{raecompressive2019,
author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and
Hillier, Chloe and Lillicrap, Timothy P},
title = {Compressive Transformers for Long-Range Sequence Modelling},
journal = {arXiv preprint},
url = {https://arxiv.org/abs/1911.05507},
year = {2019},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
peoples-daily-ner/peoples_daily_ner | peoples-daily-ner | 2024-01-18T11:12:44Z | 464 | 12 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: People's Daily NER
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
config_name: peoples_daily_ner
splits:
- name: train
num_bytes: 14972456
num_examples: 20865
- name: validation
num_bytes: 1676741
num_examples: 2319
- name: test
num_bytes: 3346975
num_examples: 4637
download_size: 8385672
dataset_size: 19996172
---
# Dataset Card for People's Daily NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/OYE93/Chinese-NLP-Corpus/tree/master/NER/People's%20Daily)
- **Repository:** [Github](https://github.com/OYE93/Chinese-NLP-Corpus/)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
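While the card itself is still a stub, the YAML metadata above fully specifies the schema (`id`, `tokens`, and `ner_tags` with a BIO label set over PER/ORG/LOC). A minimal loading sketch, assuming the standard `datasets` API and the hub id shown in this row:
```
from datasets import load_dataset

ds = load_dataset("peoples-daily-ner/peoples_daily_ner", split="train")

# Label names come from the class_label declaration in the YAML metadata:
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
label_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(token, label_names[tag_id])
```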
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
No citation available for this dataset.
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
peixiang/pec | peixiang | 2024-01-18T11:12:41Z | 38 | 7 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_ids:dialogue-modeling",
"task_ids:utterance-retrieval",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"text-generation",
"fill-mask",
"text-retrieval"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-retrieval
task_ids:
- dialogue-modeling
- utterance-retrieval
paperswithcode_id: pec
pretty_name: Persona-Based Empathetic Conversational
dataset_info:
- config_name: happy
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 643196978
num_examples: 157195
- name: test
num_bytes: 92003042
num_examples: 22730
- name: validation
num_bytes: 81132088
num_examples: 19829
download_size: 252434681
dataset_size: 816332108
- config_name: offmychest
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 518616402
num_examples: 123968
- name: test
num_bytes: 64173390
num_examples: 15324
- name: validation
num_bytes: 66675909
num_examples: 16004
download_size: 252434681
dataset_size: 649465701
- config_name: all
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 1162655628
num_examples: 281163
- name: test
num_bytes: 156310498
num_examples: 38054
- name: validation
num_bytes: 147940164
num_examples: 35833
download_size: 252434681
dataset_size: 1466906290
config_names:
- all
- happy
- offmychest
---
# Dataset Card for PEC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [PEC repository](https://github.com/zhongpeixiang/PEC)
- **Paper:** [Towards Persona-Based Empathetic Conversational Models](https://www.aclweb.org/anthology/2020.emnlp-main.531/)
- **Point of Contact:** [Peixiang Zhong](mailto:[email protected])
### Dataset Summary
The PEC dataset is an English-language dataset of open-domain conversations gathered from two subreddits on Reddit, i.e., happy and offmychest. PEC has around 350K persona-based empathetic conversations. Each utterance is associated with a speaker, and each speaker has a persona of multiple persona sentences. The conversations in PEC are more empathetic than casual conversations. The conversations in the happy domain are mostly positive, whereas the conversations in the offmychest domain are mostly negative.
### Supported Tasks and Leaderboards
- `dialogue-modeling`, `utterance-retrieval`: this dataset can be used to train a generative or retrieval-based conversational model.
### Languages
English
## Dataset Structure
### Data Instances
A typical data example comprises a list of context utterances, a list of context speakers, a response to the context, the response speaker and the persona of the response speaker.
An example from PEC looks as follows:
```
{'context': ['found out this morning i got a job promotion ! ! !'],
'context_speakers': ['HeWentToJared91'],
'personas': [
"i ca n't stand working in the ugli .",
'i ’ve always liked my eyes except for the fact that they ca n’t shoot lasers',
'i feel really bad about myself as a person right now , and i could really use a hand .',
'i drank a coffee , and it just made me feel even more exhausted .',
'i want a natsuki t shirt',
"i 've dealt with depression in the past .",
'i love red dead 2'],
'response': "you look like a nice person ! we 're proud of you , and i bet you earned that promotion !",
'response_speaker': 'tylock'}
```
### Data Fields
- `context`: a list of strings, each string denotes a context utterance.
- `context_speakers`: a list of strings, each string denotes a speaker.
- `response`: a string denoting the response to the `context`.
- `response_speaker`: a string denoting the speaker of `response`.
- `personas`: a list of strings, each string denotes a persona sentence of `response_speaker`.
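As a usage illustration (not from the original card), the three configs declared in the YAML metadata (`happy`, `offmychest`, `all`) can be loaded and inspected with the standard `datasets` API:
```
from datasets import load_dataset

# "happy", "offmychest" and "all" are the configs declared above;
# "all" is the concatenation of the two subreddit domains.
pec = load_dataset("peixiang/pec", "happy")

sample = pec["train"][0]
for speaker, utterance in zip(sample["context_speakers"], sample["context"]):
    print(f"{speaker}: {utterance}")
print(f"{sample['response_speaker']} (responder): {sample['response']}")
print("responder personas:", sample["personas"][:3])
```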
### Data Splits
The data is split into a training, validation and test set for each of the three domains. Note that the *all* domain is the concatenation of the *happy* and *offmychest* domains.
| domain | train | validation | test |
|------------|-------:|-----------:|------:|
| happy | 157195 | 19829 | 22730 |
| offmychest | 123968 | 16004 | 15324 |
| all | 281163 | 35833 | 38054 |
## Dataset Creation
### Curation Rationale
PEC was built to provide a testbed for machines to learn persona-based empathetic responding. In our empirical analysis, we found that different personas have different styles of empathetic responding. This dataset can also be used to investigate the link between persona and empathy in human conversations. According to our human assessment, the conversations on the happy and offmychest subreddits are significantly more empathetic than casual conversations.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained via the [pushshift API](https://pushshift.io/using-bigquery-with-reddit-data/) via Google BigQuery.
#### Who are the source language producers?
The language producers are users of the [r/happy](https://www.reddit.com/r/happy/), and [r/offmychest](https://www.reddit.com/r/offmychest/) subreddits between 2012 and 2020. No further demographic information was available from the data source.
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset includes the speaker IDs of users on *happy* and *offmychest* subreddits.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop more personalised and empathetic conversational systems, which is an important milestone towards truly human-like conversational agents.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
A small portion of the dataset has the issues of sexism, hate, and harassment. The persona sentences are noisy.
## Additional Information
### Dataset Curators
The dataset was initially created by Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao, jointly done at Nanyang Technological University and Alibaba Group.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.
### Citation Information
```
@inproceedings{zhong-etal-2020-towards,
title = "Towards Persona-Based Empathetic Conversational Models",
author = "Zhong, Peixiang and
Zhang, Chen and
Wang, Hao and
Liu, Yong and
Miao, Chunyan",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.531",
pages = "6556--6566"
}
```
### Contributions
Thanks to [@zhongpeixiang](https://github.com/zhongpeixiang) for adding this dataset. |
Helsinki-NLP/open_subtitles | Helsinki-NLP | 2024-01-18T11:11:17Z | 815 | 68 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:af",
"language:ar",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:ko",
"language:lt",
"language:lv",
"language:mk",
"language:ml",
"language:ms",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:zh",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- bg
- bn
- br
- bs
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- ko
- lt
- lv
- mk
- ml
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- zh
language_bcp47:
- pt-BR
- ze-EN
- ze-ZH
- zh-CN
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: opensubtitles
pretty_name: OpenSubtitles
dataset_info:
- config_name: bs-eo
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: bs
dtype: uint32
- name: eo
dtype: uint32
- name: sentenceIds
struct:
- name: bs
sequence: uint32
- name: eo
sequence: uint32
- name: translation
dtype:
translation:
languages:
- bs
- eo
splits:
- name: train
num_bytes: 1204266
num_examples: 10989
download_size: 333050
dataset_size: 1204266
- config_name: fr-hy
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: fr
dtype: uint32
- name: hy
dtype: uint32
- name: sentenceIds
struct:
- name: fr
sequence: uint32
- name: hy
sequence: uint32
- name: translation
dtype:
translation:
languages:
- fr
- hy
splits:
- name: train
num_bytes: 132450
num_examples: 668
download_size: 41861
dataset_size: 132450
- config_name: da-ru
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: da
dtype: uint32
- name: ru
dtype: uint32
- name: sentenceIds
struct:
- name: da
sequence: uint32
- name: ru
sequence: uint32
- name: translation
dtype:
translation:
languages:
- da
- ru
splits:
- name: train
num_bytes: 1082649105
num_examples: 7543012
download_size: 267995167
dataset_size: 1082649105
- config_name: en-hi
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: en
dtype: uint32
- name: hi
dtype: uint32
- name: sentenceIds
struct:
- name: en
sequence: uint32
- name: hi
sequence: uint32
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 13845544
num_examples: 93016
download_size: 2967295
dataset_size: 13845544
- config_name: bn-is
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: bn
dtype: uint32
- name: is
dtype: uint32
- name: sentenceIds
struct:
- name: bn
sequence: uint32
- name: is
sequence: uint32
- name: translation
dtype:
translation:
languages:
- bn
- is
splits:
- name: train
num_bytes: 6371251
num_examples: 38272
download_size: 1411625
dataset_size: 6371251
config_names:
- bn-is
- bs-eo
- da-ru
- en-hi
- fr-hy
---
# Dataset Card for OpenSubtitles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't among the preconfigured ones, simply specify the two language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/OpenSubtitles.php
E.g.
`dataset = load_dataset("open_subtitles", lang1="fi", lang2="hi")`
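A slightly fuller sketch of the same call (an illustration assuming the standard `datasets` API; the field layout follows the features declared in the YAML metadata above):
```
from datasets import load_dataset

# Request any valid OPUS pair via the lang1/lang2 keyword arguments.
dataset = load_dataset("open_subtitles", lang1="fi", lang2="hi", split="train")

example = dataset[0]
print(example["meta"]["year"], example["meta"]["imdbId"])  # alignment metadata
pair = example["translation"]                              # {"fi": ..., "hi": ...}
print(pair["fi"])
print(pair["hi"])
```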
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- af
- ar
- bg
- bn
- br
- bs
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- ko
- lt
- lv
- mk
- ml
- ms
- nl
- no
- pl
- pt
- pt_br: Portuguese (Brazil) (pt-BR)
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- ze_en: English constituent of Bilingual Chinese-English (subtitles displaying two languages at once, one per line)
- ze_zh: Chinese constituent of Bilingual Chinese-English (subtitles displaying two languages at once, one per line)
- zh_cn: Simplified Chinese (zh-CN, `zh-Hans`)
- zh_tw: Traditional Chinese (zh-TW, `zh-Hant`)
## Dataset Structure
### Data Instances
Each instance combines the aligned parallel subtitle texts with alignment metadata; the layout is described under Data Fields below.
### Data Fields
The fields follow the features declared in the YAML metadata above:
- `id`: a `string` identifier for the aligned sentence pair.
- `meta`: alignment metadata containing `year`, `imdbId`, and per-language `subtitleId` and `sentenceIds`.
- `translation`: a dictionary mapping each language code of the pair to its subtitle text.
### Data Splits
Each configuration ships a single `train` split. Sizes for the preconfigured pairs (from the metadata above):
| config | train |
|--------|------:|
| bn-is | 38272 |
| bs-eo | 10989 |
| da-ru | 7543012 |
| en-hi | 93016 |
| fr-hy | 668 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
Maluuba/newsqa | Maluuba | 2024-01-18T11:10:32Z | 142 | 15 | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: newsqa
pretty_name: NewsQA
dataset_info:
- config_name: combined-csv
features:
- name: story_id
dtype: string
- name: story_text
dtype: string
- name: question
dtype: string
- name: answer_char_ranges
dtype: string
splits:
- name: train
num_bytes: 465942194
num_examples: 119633
download_size: 0
dataset_size: 465942194
- config_name: combined-json
features:
- name: storyId
dtype: string
- name: text
dtype: string
- name: type
dtype: string
- name: questions
sequence:
- name: q
dtype: string
- name: isAnswerAbsent
dtype: int32
- name: isQuestionBad
dtype: int32
- name: consensus
struct:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: answers
sequence:
- name: sourcerAnswers
sequence:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: validated_answers
sequence:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: count
dtype: int32
splits:
- name: train
num_bytes: 68667276
num_examples: 12744
download_size: 0
dataset_size: 68667276
- config_name: split
features:
- name: story_id
dtype: string
- name: story_text
dtype: string
- name: question
dtype: string
- name: answer_token_ranges
dtype: string
splits:
- name: train
num_bytes: 362031288
num_examples: 92549
- name: test
num_bytes: 19763673
num_examples: 5126
- name: validation
num_bytes: 19862778
num_examples: 5166
download_size: 0
dataset_size: 401657739
config_names:
- combined-csv
- combined-json
- split
---
# Dataset Card for NewsQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.microsoft.com/en-us/research/project/newsqa-dataset/
- **Repository:** https://github.com/Maluuba/newsqa
- **Paper:** https://www.aclweb.org/anthology/W17-2623/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs.
Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{'storyId': './cnn/stories/42d01e187213e86f5fe617fe32e716ff7fa3afc4.story',
'text': 'NEW DELHI, India (CNN) -- A high court in northern India on Friday acquitted a wealthy businessman facing the death sentence for the killing of a teen in a case dubbed "the house of horrors."\n\n\n\nMoninder Singh Pandher was sentenced to death by a lower court in February.\n\n\n\nThe teen was one of 19 victims -- children and young women -- in one of the most gruesome serial killings in India in recent years.\n\n\n\nThe Allahabad high court has acquitted Moninder Singh Pandher, his lawyer Sikandar B. Kochar told CNN.\n\n\n\nPandher and his domestic employee Surinder Koli were sentenced to death in February by a lower court for the rape and murder of the 14-year-old.\n\n\n\nThe high court upheld Koli\'s death sentence, Kochar said.\n\n\n\nThe two were arrested two years ago after body parts packed in plastic bags were found near their home in Noida, a New Delhi suburb. Their home was later dubbed a "house of horrors" by the Indian media.\n\n\n\nPandher was not named a main suspect by investigators initially, but was summoned as co-accused during the trial, Kochar said.\n\n\n\nKochar said his client was in Australia when the teen was raped and killed.\n\n\n\nPandher faces trial in the remaining 18 killings and could remain in custody, the attorney said.',
'type': 'train',
'questions': {'q': ['What was the amount of children murdered?',
'When was Pandher sentenced to death?',
'The court aquitted Moninder Singh Pandher of what crime?',
'who was acquitted',
'who was sentenced',
'What was Moninder Singh Pandher acquitted for?',
'Who was sentenced to death in February?',
'how many people died',
'How many children and young women were murdered?'],
'isAnswerAbsent': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'isQuestionBad': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'consensus': [{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
{'s': 261, 'e': 271, 'badQuestion': False, 'noAnswer': False},
{'s': 624, 'e': 640, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 129, 'e': 151, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False}],
'answers': [{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
{'sourcerAnswers': [{'s': [261],
'e': [271],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [258], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]},
{'s': [261], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [26],
'e': [33],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
{'s': [624], 'e': [640], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [0],
'e': [0],
'badQuestion': [False],
'noAnswer': [True]},
{'s': [195, 232],
'e': [218, 271],
'badQuestion': [False, False],
'noAnswer': [False, False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
{'sourcerAnswers': [{'s': [129],
'e': [192],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [129], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]},
{'s': [133], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]}],
'validated_answers': [{'s': [0, 294],
'e': [0, 297],
'badQuestion': [False, False],
'noAnswer': [True, False],
'count': [1, 2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [624],
'e': [640],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [129],
'e': [151],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []}]}}
```
### Data Fields
Configuration: combined-csv
- 'story_id': An identifier of the story.
- 'story_text': Text of the story.
- 'question': A question about the story.
- 'answer_char_ranges': The raw data collected for character based indices to answers in story_text. E.g. 196:228|196:202,217:228|None. Answers from different crowdsourcers are separated by `|`; within those, multiple selections from the same crowdsourcer are separated by `,`. `None` means the crowdsourcer thought there was no answer to the question in the story. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
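A small parsing sketch for this format (an illustration, not part of the official NewsQA tooling):
```
def parse_answer_char_ranges(raw):
    """Parse `answer_char_ranges` into one entry per crowdsourcer: either None
    (the sourcer saw no answer) or a list of (start, end) character spans,
    start inclusive and end exclusive."""
    answers = []
    for sourcer in raw.split("|"):
        if sourcer == "None":
            answers.append(None)
        else:
            answers.append(
                [tuple(int(i) for i in span.split(":")) for span in sourcer.split(",")]
            )
    return answers

print(parse_answer_char_ranges("196:228|196:202,217:228|None"))
# [[(196, 228)], [(196, 202), (217, 228)], None]
```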
Configuration: combined-json
- 'storyId': An identifier of the story.
- 'text': Text of the story.
- 'type': Split type. Will be "train", "validation" or "test".
- 'questions': A list containing the following:
- 'q': A question about the story.
- 'isAnswerAbsent': Proportion of crowdsourcers that said there was no answer to the question in the story.
- 'isQuestionBad': Proportion of crowdsourcers that said the question does not make sense.
- 'consensus': The consensus answer. Use this field to pick the best continuous answer span from the text. If you want to know about a question having multiple answers in the text then you can use the more detailed "answers" and "validated_answers". The object can have start and end positions like in the example above or can be {"badQuestion": true} or {"noAnswer": true}. Note that there is only one consensus answer since it's based on the majority agreement of the crowdsourcers.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The validator said that the question did not make sense.
- 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
- 'answers': The answers from various crowdsourcers.
- 'sourcerAnswers': The answer provided from one crowdsourcer.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The crowdsourcer said that the question did not make sense.
- 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
- 'validated_answers': The answers from the validators.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The validator said that the question did not make sense.
- 'noAnswer': The validator said that there was no answer to the question in the text.
- 'count': The number of validators that agreed with this answer.
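Following the span convention above (character start inclusive, end exclusive), a hedged helper for materializing the consensus answer of a combined-json record:
```
def consensus_answer(story_text, consensus):
    """Return the consensus answer text, or a marker for bad/unanswerable questions."""
    if consensus.get("badQuestion"):
        return "<bad question>"
    if consensus.get("noAnswer"):
        return "<no answer>"
    return story_text[consensus["s"]:consensus["e"]]

# Toy illustration of the span convention:
story = "The quick brown fox jumps over the lazy dog."
span = {"s": 4, "e": 9, "badQuestion": False, "noAnswer": False}
print(consensus_answer(story, span))  # quick
```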
Configuration: split
- 'story_id': An identifier of the story
- 'story_text': text of the story
- 'question': A question about the story.
- 'answer_token_ranges': Word based indices to answers in story_text. E.g. 196:202,217:228. Multiple selections from the same answer are separated by `,`. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
### Data Splits
| name | train | validation | test |
|---------------|-----------:|-----------:|--------:|
| combined-csv | 119633 | | |
| combined-json | 12744 | | |
| split | 92549 | 5166 | 5126 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
NewsQA Code
Copyright (c) Microsoft Corporation
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
### Citation Information
@inproceedings{trischler2017newsqa,
title={NewsQA: A Machine Comprehension Dataset},
author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},
booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},
pages={191--200},
year={2017}
}
### Contributions
Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset. |
IWSLT/mt_eng_vietnamese | IWSLT | 2024-01-18T11:09:37Z | 296 | 26 | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"language:vi",
"license:unknown",
"size_categories:100K<n<1M",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
multilinguality:
- multilingual
language:
- en
- vi
license:
- unknown
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: MtEngVietnamese
dataset_info:
- config_name: iwslt2015-vi-en
features:
- name: translation
dtype:
translation:
languages:
- vi
- en
splits:
- name: train
num_bytes: 32478282
num_examples: 133318
- name: validation
num_bytes: 323743
num_examples: 1269
- name: test
num_bytes: 323743
num_examples: 1269
download_size: 32323025
dataset_size: 33125768
- config_name: iwslt2015-en-vi
features:
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: train
num_bytes: 32478282
num_examples: 133318
- name: validation
num_bytes: 323743
num_examples: 1269
- name: test
num_bytes: 323743
num_examples: 1269
download_size: 32323025
dataset_size: 33125768
---
# Dataset Card for mt_eng_vietnamese
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
English, Vietnamese
## Dataset Structure
### Data Instances
An example from the dataset:
```
{
'translation': {
'en': 'In 4 minutes , atmospheric chemist Rachel Pike provides a glimpse of the massive scientific effort behind the bold headlines on climate change , with her team -- one of thousands who contributed -- taking a risky flight over the rainforest in pursuit of data on a key molecule .',
'vi': 'Trong 4 phút , chuyên gia hoá học khí quyển Rachel Pike giới thiệu sơ lược về những nỗ lực khoa học miệt mài đằng sau những tiêu đề táo bạo về biến đổi khí hậu , cùng với đoàn nghiên cứu của mình -- hàng ngàn người đã cống hiến cho dự án này -- một chuyến bay mạo hiểm qua rừng già để tìm kiếm thông tin về một phân tử then chốt .'
}
}
```
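A minimal loading sketch (assuming the standard `datasets` API; the config names come from the YAML metadata above):
```
from datasets import load_dataset

ds = load_dataset("IWSLT/mt_eng_vietnamese", "iwslt2015-en-vi", split="train")

pair = ds[0]["translation"]
print(pair["en"])
print(pair["vi"])
```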
### Data Fields
- `translation`:
  - `en`: text in English.
  - `vi`: text in Vietnamese.
### Data Splits
| train | validation | test |
|------:|-----------:|-----:|
| 133318 | 1269 | 1269 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{Luong-Manning:iwslt15,
Address = {Da Nang, Vietnam},
Author = {Luong, Minh-Thang and Manning, Christopher D.},
Booktitle = {International Workshop on Spoken Language Translation},
Title = {Stanford Neural Machine Translation Systems for Spoken Language Domain},
Year = {2015}}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset. |
UdS-LSV/menyo20k_mt | UdS-LSV | 2024-01-18T11:08:52Z | 111 | 3 | [
"task_categories:translation",
"annotations_creators:expert-generated",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"source_datasets:original",
"language:en",
"language:yo",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"arxiv:2103.08647",
"region:us"
] | [
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
- found
language_creators:
- found
language:
- en
- yo
license:
- cc-by-nc-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: menyo-20k
pretty_name: MENYO-20k
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
config_name: menyo20k_mt
splits:
- name: train
num_bytes: 2551345
num_examples: 10070
- name: validation
num_bytes: 870011
num_examples: 3397
- name: test
num_bytes: 1905432
num_examples: 6633
download_size: 5206234
dataset_size: 5326788
---
# Dataset Card for MENYO-20k
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/uds-lsv/menyo-20k_MT/
- **Paper:** [The Effect of Domain and Diacritics in Yorùbá-English Neural Machine Translation](https://arxiv.org/abs/2103.08647)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MENYO-20k is a multi-domain parallel dataset with texts obtained from news articles, ted talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and professional translators. The dataset has 20,100 parallel sentences split into 10,070 training sentences, 3,397 development sentences, and 6,633 test sentences (3,419 multi-domain, 1,714 news domain, and 1,500 ted talks speech transcript domain).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Languages are English and Yoruba.
## Dataset Structure
### Data Instances
An instance example:
```
{'translation':
{'en': 'Unit 1: What is Creative Commons?',
'yo': 'Ìdá 1: Kín ni Creative Commons?'
}
}
```
### Data Fields
- `translation`:
- `en`: English sentence.
- `yo`: Yoruba sentence.
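A minimal loading sketch, assuming the standard `datasets` API (the hub id is shown in this row, and the single config declared in the YAML metadata is `menyo20k_mt`):
```
from datasets import load_dataset

ds = load_dataset("UdS-LSV/menyo20k_mt", split="train")

pair = ds[0]["translation"]
print(pair["en"], "->", pair["yo"])
```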
### Data Splits
The data comes with predefined training, validation and test splits:
| train | validation | test |
|------:|-----------:|-----:|
| 10070 | 3397 | 6633 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is open but for non-commercial use because some data sources like Ted talks and JW news require permission for commercial use.
The dataset is licensed under Creative Commons [Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) License: https://github.com/uds-lsv/menyo-20k_MT/blob/master/LICENSE
### Citation Information
If you use this dataset, please cite this paper:
```
@inproceedings{adelani-etal-2021-effect,
title = "The Effect of Domain and Diacritics in {Y}oruba{--}{E}nglish Neural Machine Translation",
author = "Adelani, David and
Ruiter, Dana and
Alabi, Jesujoba and
Adebonojo, Damilola and
Ayeni, Adesina and
Adeyemi, Mofe and
Awokoya, Ayodele Esther and
Espa{\~n}a-Bonet, Cristina",
booktitle = "Proceedings of the 18th Biennial Machine Translation Summit (Volume 1: Research Track)",
month = aug,
year = "2021",
address = "Virtual",
publisher = "Association for Machine Translation in the Americas",
url = "https://aclanthology.org/2021.mtsummit-research.6",
pages = "61--75",
abstract = "Massively multilingual machine translation (MT) has shown impressive capabilities and including zero and few-shot translation between low-resource language pairs. However and these models are often evaluated on high-resource languages with the assumption that they generalize to low-resource ones. The difficulty of evaluating MT models on low-resource pairs is often due to lack of standardized evaluation datasets. In this paper and we present MENYO-20k and the first multi-domain parallel corpus with a especially curated orthography for Yoruba{--}English with standardized train-test splits for benchmarking. We provide several neural MT benchmarks and compare them to the performance of popular pre-trained (massively multilingual) MT models both for the heterogeneous test set and its subdomains. Since these pre-trained models use huge amounts of data with uncertain quality and we also analyze the effect of diacritics and a major characteristic of Yoruba and in the training data. We investigate how and when this training condition affects the final quality of a translation and its understandability.Our models outperform massively multilingual models such as Google ($+8.7$ BLEU) and Facebook M2M ($+9.1$) when translating to Yoruba and setting a high quality benchmark for future research.",
}
```
### Contributions
Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
|
allenai/math_qa | allenai | 2024-01-18T11:08:38Z | 9,619 | 103 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:extended|aqua_rat",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: MathQA
size_categories:
- 10K<n<100K
source_datasets:
- extended|aqua_rat
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mathqa
dataset_info:
features:
- name: Problem
dtype: string
- name: Rationale
dtype: string
- name: options
dtype: string
- name: correct
dtype: string
- name: annotated_formula
dtype: string
- name: linear_formula
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 1844184
num_examples: 2985
- name: train
num_bytes: 18368826
num_examples: 29837
- name: validation
num_bytes: 2752969
num_examples: 4475
download_size: 7302821
dataset_size: 22965979
---
# Dataset Card for MathQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://math-qa.github.io/math-QA/](https://math-qa.github.io/math-QA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms](https://aclanthology.org/N19-1245/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.30 MB
- **Size of the generated dataset:** 22.96 MB
- **Total amount of disk used:** 30.27 MB
### Dataset Summary
We introduce a large-scale dataset of math word problems.
Our dataset is gathered by using a new representation language to annotate over the AQuA-RAT dataset with fully-specified operational programs.
AQuA-RAT has provided the questions, options, rationale, and the correct options.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 7.30 MB
- **Size of the generated dataset:** 22.96 MB
- **Total amount of disk used:** 30.27 MB
An example of 'train' looks as follows.
```
{
"Problem": "a multiple choice test consists of 4 questions , and each question has 5 answer choices . in how many r ways can the test be completed if every question is unanswered ?",
"Rationale": "\"5 choices for each of the 4 questions , thus total r of 5 * 5 * 5 * 5 = 5 ^ 4 = 625 ways to answer all of them . answer : c .\"",
"annotated_formula": "power(5, 4)",
"category": "general",
"correct": "c",
"linear_formula": "power(n1,n0)|",
"options": "a ) 24 , b ) 120 , c ) 625 , d ) 720 , e ) 1024"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `Problem`: a `string` feature.
- `Rationale`: a `string` feature.
- `options`: a `string` feature.
- `correct`: a `string` feature.
- `annotated_formula`: a `string` feature.
- `linear_formula`: a `string` feature.
- `category`: a `string` feature.
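Since `options` is a single tokenized string and `correct` is a letter, resolving the chosen answer takes a small parsing step; a hedged sketch (the format is inferred from the example above):
```
import re

def parse_options(options):
    """Split the tokenized options string into a {letter: value} mapping."""
    return {
        letter: value.strip()
        for letter, value in re.findall(r"([a-e])\s*\)\s*([^,]+)", options)
    }

options = "a ) 24 , b ) 120 , c ) 625 , d ) 720 , e ) 1024"
print(parse_options(options)["c"])  # 625
```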
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|29837| 4475|2985|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@inproceedings{amini-etal-2019-mathqa,
title = "{M}ath{QA}: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms",
author = "Amini, Aida and
Gabriel, Saadia and
Lin, Shanchuan and
Koncel-Kedziorski, Rik and
Choi, Yejin and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1245",
doi = "10.18653/v1/N19-1245",
pages = "2357--2367",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
deepmind/math_dataset | deepmind | 2024-01-18T11:08:35Z | 7,471 | 123 | [
"language:en",
"arxiv:1904.01557",
"region:us"
] | [] | 2022-03-02T23:29:22Z | null | ---
pretty_name: Mathematics Dataset
language:
- en
paperswithcode_id: mathematics
dataset_info:
- config_name: algebra__linear_1d
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 516405
num_examples: 10000
- name: train
num_bytes: 92086245
num_examples: 1999998
download_size: 2333082954
dataset_size: 92602650
- config_name: algebra__linear_1d_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1018090
num_examples: 10000
- name: train
num_bytes: 199566926
num_examples: 1999998
download_size: 2333082954
dataset_size: 200585016
- config_name: algebra__linear_2d
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 666095
num_examples: 10000
- name: train
num_bytes: 126743526
num_examples: 1999998
download_size: 2333082954
dataset_size: 127409621
- config_name: algebra__linear_2d_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1184664
num_examples: 10000
- name: train
num_bytes: 234405885
num_examples: 1999998
download_size: 2333082954
dataset_size: 235590549
- config_name: algebra__polynomial_roots
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 868630
num_examples: 10000
- name: train
num_bytes: 163134199
num_examples: 1999998
download_size: 2333082954
dataset_size: 164002829
- config_name: algebra__polynomial_roots_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1281321
num_examples: 10000
- name: train
num_bytes: 251435312
num_examples: 1999998
download_size: 2333082954
dataset_size: 252716633
- config_name: algebra__sequence_next_term
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 752459
num_examples: 10000
- name: train
num_bytes: 138735194
num_examples: 1999998
download_size: 2333082954
dataset_size: 139487653
- config_name: algebra__sequence_nth_term
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 947764
num_examples: 10000
- name: train
num_bytes: 175945643
num_examples: 1999998
download_size: 2333082954
dataset_size: 176893407
- config_name: arithmetic__add_or_sub
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 483725
num_examples: 10000
- name: train
num_bytes: 89690356
num_examples: 1999998
download_size: 2333082954
dataset_size: 90174081
- config_name: arithmetic__add_or_sub_in_base
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 502221
num_examples: 10000
- name: train
num_bytes: 93779137
num_examples: 1999998
download_size: 2333082954
dataset_size: 94281358
- config_name: arithmetic__add_sub_multiple
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 498421
num_examples: 10000
- name: train
num_bytes: 90962782
num_examples: 1999998
download_size: 2333082954
dataset_size: 91461203
- config_name: arithmetic__div
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 421520
num_examples: 10000
- name: train
num_bytes: 78417908
num_examples: 1999998
download_size: 2333082954
dataset_size: 78839428
- config_name: arithmetic__mixed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 513364
num_examples: 10000
- name: train
num_bytes: 93989009
num_examples: 1999998
download_size: 2333082954
dataset_size: 94502373
- config_name: arithmetic__mul
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 394004
num_examples: 10000
- name: train
num_bytes: 73499093
num_examples: 1999998
download_size: 2333082954
dataset_size: 73893097
- config_name: arithmetic__mul_div_multiple
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 497308
num_examples: 10000
- name: train
num_bytes: 91406689
num_examples: 1999998
download_size: 2333082954
dataset_size: 91903997
- config_name: arithmetic__nearest_integer_root
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 705630
num_examples: 10000
- name: train
num_bytes: 137771237
num_examples: 1999998
download_size: 2333082954
dataset_size: 138476867
- config_name: arithmetic__simplify_surd
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1261753
num_examples: 10000
- name: train
num_bytes: 207753790
num_examples: 1999998
download_size: 2333082954
dataset_size: 209015543
- config_name: calculus__differentiate
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1025947
num_examples: 10000
- name: train
num_bytes: 199013993
num_examples: 1999998
download_size: 2333082954
dataset_size: 200039940
- config_name: calculus__differentiate_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1343416
num_examples: 10000
- name: train
num_bytes: 263757570
num_examples: 1999998
download_size: 2333082954
dataset_size: 265100986
- config_name: comparison__closest
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 681229
num_examples: 10000
- name: train
num_bytes: 132274822
num_examples: 1999998
download_size: 2333082954
dataset_size: 132956051
- config_name: comparison__closest_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1071089
num_examples: 10000
- name: train
num_bytes: 210658152
num_examples: 1999998
download_size: 2333082954
dataset_size: 211729241
- config_name: comparison__kth_biggest
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 797185
num_examples: 10000
- name: train
num_bytes: 149077463
num_examples: 1999998
download_size: 2333082954
dataset_size: 149874648
- config_name: comparison__kth_biggest_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1144556
num_examples: 10000
- name: train
num_bytes: 221547532
num_examples: 1999998
download_size: 2333082954
dataset_size: 222692088
- config_name: comparison__pair
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 452528
num_examples: 10000
- name: train
num_bytes: 85707543
num_examples: 1999998
download_size: 2333082954
dataset_size: 86160071
- config_name: comparison__pair_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 946187
num_examples: 10000
- name: train
num_bytes: 184702998
num_examples: 1999998
download_size: 2333082954
dataset_size: 185649185
- config_name: comparison__sort
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 712498
num_examples: 10000
- name: train
num_bytes: 131752705
num_examples: 1999998
download_size: 2333082954
dataset_size: 132465203
- config_name: comparison__sort_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1114257
num_examples: 10000
- name: train
num_bytes: 213871896
num_examples: 1999998
download_size: 2333082954
dataset_size: 214986153
- config_name: measurement__conversion
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 592904
num_examples: 10000
- name: train
num_bytes: 118650852
num_examples: 1999998
download_size: 2333082954
dataset_size: 119243756
- config_name: measurement__time
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 584278
num_examples: 10000
- name: train
num_bytes: 116962599
num_examples: 1999998
download_size: 2333082954
dataset_size: 117546877
- config_name: numbers__base_conversion
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 490881
num_examples: 10000
- name: train
num_bytes: 90363333
num_examples: 1999998
download_size: 2333082954
dataset_size: 90854214
- config_name: numbers__div_remainder
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 644523
num_examples: 10000
- name: train
num_bytes: 125046212
num_examples: 1999998
download_size: 2333082954
dataset_size: 125690735
- config_name: numbers__div_remainder_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1151347
num_examples: 10000
- name: train
num_bytes: 226341870
num_examples: 1999998
download_size: 2333082954
dataset_size: 227493217
- config_name: numbers__gcd
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 659492
num_examples: 10000
- name: train
num_bytes: 127914889
num_examples: 1999998
download_size: 2333082954
dataset_size: 128574381
- config_name: numbers__gcd_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1206805
num_examples: 10000
- name: train
num_bytes: 237534189
num_examples: 1999998
download_size: 2333082954
dataset_size: 238740994
- config_name: numbers__is_factor
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 396129
num_examples: 10000
- name: train
num_bytes: 75875988
num_examples: 1999998
download_size: 2333082954
dataset_size: 76272117
- config_name: numbers__is_factor_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 949828
num_examples: 10000
- name: train
num_bytes: 185369842
num_examples: 1999998
download_size: 2333082954
dataset_size: 186319670
- config_name: numbers__is_prime
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 385749
num_examples: 10000
- name: train
num_bytes: 73983639
num_examples: 1999998
download_size: 2333082954
dataset_size: 74369388
- config_name: numbers__is_prime_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 947888
num_examples: 10000
- name: train
num_bytes: 184808483
num_examples: 1999998
download_size: 2333082954
dataset_size: 185756371
- config_name: numbers__lcm
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 717978
num_examples: 10000
- name: train
num_bytes: 136826050
num_examples: 1999998
download_size: 2333082954
dataset_size: 137544028
- config_name: numbers__lcm_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1127744
num_examples: 10000
- name: train
num_bytes: 221148668
num_examples: 1999998
download_size: 2333082954
dataset_size: 222276412
- config_name: numbers__list_prime_factors
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 585749
num_examples: 10000
- name: train
num_bytes: 109982816
num_examples: 1999998
download_size: 2333082954
dataset_size: 110568565
- config_name: numbers__list_prime_factors_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1053510
num_examples: 10000
- name: train
num_bytes: 205379513
num_examples: 1999998
download_size: 2333082954
dataset_size: 206433023
- config_name: numbers__place_value
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 496977
num_examples: 10000
- name: train
num_bytes: 95180091
num_examples: 1999998
download_size: 2333082954
dataset_size: 95677068
- config_name: numbers__place_value_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1011130
num_examples: 10000
- name: train
num_bytes: 197187918
num_examples: 1999998
download_size: 2333082954
dataset_size: 198199048
- config_name: numbers__round_number
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 570636
num_examples: 10000
- name: train
num_bytes: 111472483
num_examples: 1999998
download_size: 2333082954
dataset_size: 112043119
- config_name: numbers__round_number_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1016754
num_examples: 10000
- name: train
num_bytes: 201057283
num_examples: 1999998
download_size: 2333082954
dataset_size: 202074037
- config_name: polynomials__add
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1308455
num_examples: 10000
- name: train
num_bytes: 257576092
num_examples: 1999998
download_size: 2333082954
dataset_size: 258884547
- config_name: polynomials__coefficient_named
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1137226
num_examples: 10000
- name: train
num_bytes: 219716251
num_examples: 1999998
download_size: 2333082954
dataset_size: 220853477
- config_name: polynomials__collect
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 774709
num_examples: 10000
- name: train
num_bytes: 143743260
num_examples: 1999998
download_size: 2333082954
dataset_size: 144517969
- config_name: polynomials__compose
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1209763
num_examples: 10000
- name: train
num_bytes: 233651887
num_examples: 1999998
download_size: 2333082954
dataset_size: 234861650
- config_name: polynomials__evaluate
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 599446
num_examples: 10000
- name: train
num_bytes: 114538250
num_examples: 1999998
download_size: 2333082954
dataset_size: 115137696
- config_name: polynomials__evaluate_composed
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1148362
num_examples: 10000
- name: train
num_bytes: 226022455
num_examples: 1999998
download_size: 2333082954
dataset_size: 227170817
- config_name: polynomials__expand
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1057353
num_examples: 10000
- name: train
num_bytes: 202338235
num_examples: 1999998
download_size: 2333082954
dataset_size: 203395588
- config_name: polynomials__simplify_power
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1248040
num_examples: 10000
- name: train
num_bytes: 216407582
num_examples: 1999998
download_size: 2333082954
dataset_size: 217655622
- config_name: probability__swr_p_level_set
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1159050
num_examples: 10000
- name: train
num_bytes: 227540179
num_examples: 1999998
download_size: 2333082954
dataset_size: 228699229
- config_name: probability__swr_p_sequence
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1097442
num_examples: 10000
- name: train
num_bytes: 215865725
num_examples: 1999998
download_size: 2333082954
dataset_size: 216963167
---
# Dataset Card for "math_dataset"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/mathematics_dataset](https://github.com/deepmind/mathematics_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 130.65 GB
- **Size of the generated dataset:** 9.08 GB
- **Total amount of disk used:** 139.73 GB
### Dataset Summary
Mathematics database.
This dataset code generates mathematical question and answer pairs,
from a range of question types at roughly school-level difficulty.
This is designed to test the mathematical learning and algebraic
reasoning skills of learning models.
Original paper: Analysing Mathematical Reasoning Abilities of Neural Models
(Saxton, Grefenstette, Hill, Kohli).
Example usage:
```
import datasets

train_examples, val_examples = datasets.load_dataset(
    'math_dataset', 'arithmetic__mul',
    split=['train', 'test'])
```
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### algebra__linear_1d
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 92.60 MB
- **Total amount of disk used:** 2.43 GB
An example of 'train' looks as follows.
```
```
#### algebra__linear_1d_composed
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 200.58 MB
- **Total amount of disk used:** 2.53 GB
An example of 'train' looks as follows.
```
```
#### algebra__linear_2d
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 127.41 MB
- **Total amount of disk used:** 2.46 GB
An example of 'train' looks as follows.
```
```
#### algebra__linear_2d_composed
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 235.59 MB
- **Total amount of disk used:** 2.57 GB
An example of 'train' looks as follows.
```
```
#### algebra__polynomial_roots
- **Size of downloaded dataset files:** 2.33 GB
- **Size of the generated dataset:** 164.01 MB
- **Total amount of disk used:** 2.50 GB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### algebra__linear_1d
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__linear_1d_composed
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__linear_2d
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__linear_2d_composed
- `question`: a `string` feature.
- `answer`: a `string` feature.
#### algebra__polynomial_roots
- `question`: a `string` feature.
- `answer`: a `string` feature.
### Data Splits
| name | train |test |
|---------------------------|------:|----:|
|algebra__linear_1d |1999998|10000|
|algebra__linear_1d_composed|1999998|10000|
|algebra__linear_2d |1999998|10000|
|algebra__linear_2d_composed|1999998|10000|
|algebra__polynomial_roots |1999998|10000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{2019arXiv,
author = {Saxton, Grefenstette, Hill, Kohli},
title = {Analysing Mathematical Reasoning Abilities of Neural Models},
year = {2019},
journal = {arXiv:1904.01557}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
lst-nectec/lst20 | lst-nectec | 2024-01-18T11:08:24Z | 239 | 5 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:th",
"license:other",
"size_categories:10K<n<100K",
"region:us",
"word-segmentation",
"clause-segmentation",
"sentence-segmentation"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- th
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
pretty_name: LST20
tags:
- word-segmentation
- clause-segmentation
- sentence-segmentation
dataset_info:
features:
- name: id
dtype: string
- name: fname
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NN
'1': VV
'2': PU
'3': CC
'4': PS
'5': AX
'6': AV
'7': FX
'8': NU
'9': AJ
'10': CL
'11': PR
'12': NG
'13': PA
'14': XX
'15': IJ
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B_BRN
'2': B_DES
'3': B_DTM
'4': B_LOC
'5': B_MEA
'6': B_NUM
'7': B_ORG
'8': B_PER
'9': B_TRM
'10': B_TTL
'11': I_BRN
'12': I_DES
'13': I_DTM
'14': I_LOC
'15': I_MEA
'16': I_NUM
'17': I_ORG
'18': I_PER
'19': I_TRM
'20': I_TTL
'21': E_BRN
'22': E_DES
'23': E_DTM
'24': E_LOC
'25': E_MEA
'26': E_NUM
'27': E_ORG
'28': E_PER
'29': E_TRM
'30': E_TTL
- name: clause_tags
sequence:
class_label:
names:
'0': O
'1': B_CLS
'2': I_CLS
'3': E_CLS
config_name: lst20
splits:
- name: train
num_bytes: 107725145
num_examples: 63310
- name: validation
num_bytes: 9646167
num_examples: 5620
- name: test
num_bytes: 8217425
num_examples: 5250
download_size: 0
dataset_size: 125588737
---
# Dataset Card for LST20
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://aiforthai.in.th/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [email]([email protected])
### Dataset Summary
LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.
It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.
In total, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, annotated with
16 distinct POS tags. All 3,745 documents are also labeled with one of 15 news genres. At this scale, the dataset is
large enough for developing joint neural models for NLP.
The corpus must be downloaded manually from https://aiforthai.in.th/corpus.php.
See `LST20 Annotation Guideline.pdf` and `LST20 Brief Specification.pdf` within the downloaded `AIFORTHAI-LST20Corpus.tar.gz` for more details.
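After the manual download, a minimal loading sketch (the local path below is hypothetical) could look like this:
```
from datasets import load_dataset

# Hypothetical path to the manually downloaded, extracted corpus.
lst20 = load_dataset("lst20", data_dir="path/to/AIFORTHAI-LST20Corpus")

sample = lst20["train"][0]
# Sequence features store class-label integers; map them back to tag names.
pos_names = lst20["train"].features["pos_tags"].feature.names
print([(tok, pos_names[tag]) for tok, tag in zip(sample["tokens"], sample["pos_tags"])])
```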
### Supported Tasks and Leaderboards
- POS tagging
- NER tagging
- clause segmentation
- sentence segmentation
- word tokenization
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '0', 'ner_tags': [8, 0, 0, 0, 0, 0, 0, 0, 25], 'pos_tags': [0, 0, 0, 1, 0, 8, 8, 8, 0], 'tokens': ['ธรรมนูญ', 'แชมป์', 'สิงห์คลาสสิก', 'กวาด', 'รางวัล', 'แสน', 'สี่', 'หมื่น', 'บาท']}
{'clause_tags': [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3], 'fname': 'T11964.txt', 'id': '1', 'ner_tags': [8, 18, 28, 0, 0, 0, 0, 6, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 15, 25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 6], 'pos_tags': [0, 2, 0, 2, 1, 1, 2, 8, 2, 10, 2, 8, 2, 1, 0, 1, 0, 4, 7, 1, 0, 2, 8, 2, 10, 1, 10, 4, 2, 8, 2, 4, 0, 4, 0, 2, 8, 2, 10, 2, 8], 'tokens': ['ธรรมนูญ', '_', 'ศรีโรจน์', '_', 'เก็บ', 'เพิ่ม', '_', '4', '_', 'อันเดอร์พาร์', '_', '68', '_', 'เข้า', 'ป้าย', 'รับ', 'แชมป์', 'ใน', 'การ', 'เล่น', 'อาชีพ', '_', '19', '_', 'ปี', 'เป็น', 'ครั้ง', 'ที่', '_', '8', '_', 'ใน', 'ชีวิต', 'ด้วย', 'สกอร์', '_', '18', '_', 'อันเดอร์พาร์', '_', '270']}
```
### Data Fields
- `id`: nth sentence in each set, starting at 0
- `fname`: text file from which the sentence comes from
- `tokens`: word tokens
- `pos_tags`: POS tags
- `ner_tags`: NER tags
- `clause_tags`: clause tags
### Data Splits
| | train | eval | test | all |
|----------------------|-----------|-------------|-------------|-----------|
| words | 2,714,848 | 240,891 | 207,295 | 3,163,034 |
| named entities | 246,529 | 23,176 | 18,315 | 288,020 |
| clauses              | 214,645   | 17,486      | 16,050      | 248,181   |
| sentences | 63,310 | 5,620 | 5,250 | 74,180 |
| distinct words | 42,091 | (oov) 2,595 | (oov) 2,006 | 46,692 |
| breaking spaces※ | 63,310 | 5,620 | 5,250 | 74,180 |
| non-breaking spaces※※| 402,380 | 39,920 | 32,204 | 475,504 |
※ Breaking space = space that is used as a sentence boundary marker
※※ Non-breaking space = space that is not used as a sentence boundary marker
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Respective authors of the news articles
### Annotations
#### Annotation process
Detailed annotation guideline can be found in `LST20 Annotation Guideline.pdf`.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
All texts are from public news. No personal and sensitive information is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
- Large-scale Thai NER & POS tagging, clause & sentence segmentation, word tokenization
### Discussion of Biases
- All 3,745 texts are from news domain:
- politics: 841
- crime and accident: 592
- economics: 512
- entertainment: 472
- sports: 402
- international: 279
- science, technology and education: 216
- health: 92
- general: 75
- royal: 54
- disaster: 52
- development: 45
- environment: 40
- culture: 40
- weather forecast: 33
- Word tokenization is done according to the InterBEST 2009 Guideline.
### Other Known Limitations
- Some NER tags do not conform to the stated label scheme (`B`, `I`, and so on)
## Additional Information
### Dataset Curators
[NECTEC](https://www.nectec.or.th/en/)
### Licensing Information
1. Non-commercial use, research, and open source
Any non-commercial use of the dataset for research and open-sourced projects is encouraged and free of charge. Please cite our technical report for reference.
If you want to perpetuate your models trained on our dataset and share them to the research community in Thailand, please send your models, code, and APIs to the AI for Thai Project. Please contact Dr. Thepchai Supnithi via [email protected] for more information.
Note that modification and redistribution of the dataset by any means are strictly prohibited unless authorized by the corpus authors.
2. Commercial use
In any commercial use of the dataset, there are two options.
- Option 1 (in kind): Contributing a dataset of 50,000 words completely annotated with our annotation scheme within 1 year. Your data will also be shared and recognized as a dataset co-creator in the research community in Thailand.
- Option 2 (in cash): Purchasing a lifetime license for the entire dataset is required. The purchased rights of use cover only this dataset.
In both options, please contact Dr. Thepchai Supnithi via [email protected] for more information.
### Citation Information
```
@article{boonkwan2020annotation,
title={The Annotation Guideline of LST20 Corpus},
author={Boonkwan, Prachya and Luantangsrisuk, Vorapon and Phaholphinyo, Sitthaa and Kriengket, Kanyanat and Leenoi, Dhanon and Phrombut, Charun and Boriboon, Monthika and Kosawat, Krit and Supnithi, Thepchai},
journal={arXiv preprint arXiv:2008.05055},
year={2020}
}
```
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. |
ucsbnlp/liar | ucsbnlp | 2024-01-18T11:08:08Z | 2,007 | 24 | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"arxiv:1705.00648",
"region:us",
"fake-news-detection"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: liar
pretty_name: LIAR
tags:
- fake-news-detection
dataset_info:
features:
- name: id
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': half-true
'2': mostly-true
'3': 'true'
'4': barely-true
'5': pants-fire
- name: statement
dtype: string
- name: subject
dtype: string
- name: speaker
dtype: string
- name: job_title
dtype: string
- name: state_info
dtype: string
- name: party_affiliation
dtype: string
- name: barely_true_counts
dtype: float32
- name: false_counts
dtype: float32
- name: half_true_counts
dtype: float32
- name: mostly_true_counts
dtype: float32
- name: pants_on_fire_counts
dtype: float32
- name: context
dtype: string
splits:
- name: train
num_bytes: 2730651
num_examples: 10269
- name: test
num_bytes: 341414
num_examples: 1283
- name: validation
num_bytes: 341592
num_examples: 1284
download_size: 1013571
dataset_size: 3413657
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
statement: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for LIAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.cs.ucsb.edu/~william/
- **Repository:**
- **Paper:** https://arxiv.org/abs/1705.00648
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LIAR is a dataset for fake news detection with 12.8K human labeled short statements from politifact.com's API, and each statement is evaluated by a politifact.com editor for its truthfulness. The distribution of labels in the LIAR dataset is relatively well-balanced: except for 1,050 pants-fire cases, the instances for all other labels range from 2,063 to 2,638. In each case, the labeler provides a lengthy analysis report to ground each judgment.
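A minimal sketch of loading the data and mapping label ids back to their names (assuming the standard `datasets` API):
```
from datasets import load_dataset

liar = load_dataset("liar")
label_names = liar["train"].features["label"].names
# ['false', 'half-true', 'mostly-true', 'true', 'barely-true', 'pants-fire']

example = liar["train"][0]
print(example["statement"], "->", label_names[example["label"]])
```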
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
peluz/lener_br | peluz | 2024-01-18T11:07:59Z | 373 | 35 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:unknown",
"size_categories:10K<n<100K",
"region:us",
"legal"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: lener-br
pretty_name: leNER-br
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ORGANIZACAO
'2': I-ORGANIZACAO
'3': B-PESSOA
'4': I-PESSOA
'5': B-TEMPO
'6': I-TEMPO
'7': B-LOCAL
'8': I-LOCAL
'9': B-LEGISLACAO
'10': I-LEGISLACAO
'11': B-JURISPRUDENCIA
'12': I-JURISPRUDENCIA
config_name: lener_br
splits:
- name: train
num_bytes: 3984189
num_examples: 7828
- name: validation
num_bytes: 719433
num_examples: 1177
- name: test
num_bytes: 823708
num_examples: 1390
download_size: 2983137
dataset_size: 5527330
tags:
- legal
---
# Dataset Card for leNER-br
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [leNER-BR homepage](https://cic.unb.br/~teodecampos/LeNER-Br/)
- **Repository:** [leNER-BR repository](https://github.com/peluz/lener-br)
- **Paper:** [LeNER-Br: a Dataset for Named Entity Recognition in Brazilian Legal Text](https://cic.unb.br/~teodecampos/LeNER-Br/luz_etal_propor2018.pdf)
- **Point of Contact:** [Pedro H. Luz de Araujo](mailto:[email protected])
### Dataset Summary
LeNER-Br is a Portuguese language dataset for named entity recognition
applied to legal documents. LeNER-Br consists entirely of manually annotated
legislation and legal cases texts and contains tags for persons, locations,
time entities, organizations, legislation and legal cases.
To compose the dataset, 66 legal documents from several Brazilian Courts were
collected. Courts of superior and state levels were considered, such as Supremo
Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
Gerais and Tribunal de Contas da União. In addition, four legislation documents
were collected, such as "Lei Maria da Penha", giving a total of 70 documents.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Portuguese.
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0],
"tokens": [
"EMENTA", ":", "APELAÇÃO", "CÍVEL", "-", "AÇÃO", "DE", "INDENIZAÇÃO", "POR", "DANOS", "MORAIS", "-", "PRELIMINAR", "-", "ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM", "GRAU", "RECURSAL"]
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA", "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word.
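As an illustration (not part of the official tooling), once the integer `ner_tags` have been mapped to strings via `features["ner_tags"].feature.names`, tag sequences in this format can be decoded into entity spans with a single pass over the tokens:
```
def iob_to_spans(tokens, tags):
    """Collect (entity_type, surface_text) spans from an IOB tag sequence."""
    spans, cur_type, cur_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):  # a new entity always starts here
            if cur_tokens:
                spans.append((cur_type, " ".join(cur_tokens)))
            cur_type, cur_tokens = tag[2:], [token]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            cur_tokens.append(token)  # continue the current entity
        else:  # "O" or an inconsistent I- tag closes any open entity
            if cur_tokens:
                spans.append((cur_type, " ".join(cur_tokens)))
            cur_type, cur_tokens = None, []
    if cur_tokens:
        spans.append((cur_type, " ".join(cur_tokens)))
    return spans

# iob_to_spans(["MINISTÉRIO", "PÚBLICO", "EM"],
#              ["B-ORGANIZACAO", "I-ORGANIZACAO", "O"])
# -> [('ORGANIZACAO', 'MINISTÉRIO PÚBLICO')]
```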
### Data Splits
The data is split into train, validation and test set. The split sizes are as follow:
| Train | Val | Test |
| ------ | ----- | ---- |
| 7828 | 1177 | 1390 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{luz_etal_propor2018,
author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
Renato R. R. {de Oliveira} and Matheus Stauffer and
Samuel Couto and Paulo Bermejo},
title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
publisher = {Springer},
series = {Lecture Notes on Computer Science ({LNCS})},
pages = {313--323},
year = {2018},
month = {September 24-26},
address = {Canela, RS, Brazil},
doi = {10.1007/978-3-319-99722-3_32},
url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
}
```
### Contributions
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset. |
google/jigsaw_toxicity_pred | google | 2024-01-18T11:06:53Z | 768 | 24 | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- other
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
pretty_name: JigsawToxicityPred
dataset_info:
features:
- name: comment_text
dtype: string
- name: toxic
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: severe_toxic
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: obscene
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: threat
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: insult
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
- name: identity_hate
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 71282358
num_examples: 159571
- name: test
num_bytes: 28241991
num_examples: 63978
download_size: 0
dataset_size: 99524349
train-eval-index:
- config: default
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
comment_text: text
toxic: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Jigsaw Toxicity Prediction
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Jigsaw Comment Toxicity Classification Kaggle Competition](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Discussing things you care about can be difficult. The threat of abuse and harassment online means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to effectively facilitate conversations, leading many communities to limit or completely shut down user comments. This dataset consists of a large number of Wikipedia comments which have been labeled by human raters for toxic behavior.
### Supported Tasks and Leaderboards
The dataset supports multi-label classification.
### Languages
The comments are in English.
## Dataset Structure
### Data Instances
A data point consists of a comment followed by multiple labels that can be associated with it.
```
{'id': '02141412314',
 'comment_text': 'Sample comment text',
 'toxic': 0,
 'severe_toxic': 0,
 'obscene': 0,
 'threat': 0,
 'insult': 0,
 'identity_hate': 1,
}
```
### Data Fields
- `id`: id of the comment
- `comment_text`: the text of the comment
- `toxic`: value of 0 (non-toxic) or 1 (toxic) classifying the comment
- `severe_toxic`: value of 0 (non-severe-toxic) or 1 (severe-toxic) classifying the comment
- `obscene`: value of 0 (non-obscene) or 1 (obscene) classifying the comment
- `threat`: value of 0 (non-threat) or 1 (threat) classifying the comment
- `insult`: value of 0 (non-insult) or 1 (insult) classifying the comment
- `identity_hate`: value of 0 (non-identity-hate) or 1 (identity-hate) classifying the comment
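For multi-label training it is convenient to pack the six binary columns into one label vector; a sketch assuming the standard `datasets` API (the local path is hypothetical, since the Kaggle files must be downloaded manually):
```
from datasets import load_dataset

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical path to the manually downloaded Kaggle competition files.
ds = load_dataset("jigsaw_toxicity_pred", data_dir="path/to/jigsaw-files")

def to_multi_hot(example):
    # One float per label, in a fixed order, e.g. for a BCE-style loss.
    example["labels"] = [float(example[name]) for name in LABELS]
    return example

ds = ds.map(to_multi_hot)
```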
### Data Splits
The data is split into a training and testing set.
## Dataset Creation
### Curation Rationale
The dataset was created to help in efforts to identify and curb instances of toxicity online.
### Source Data
#### Initial Data Collection and Normalization
The dataset is a collection of Wikipedia comments.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
If words that are associated with swearing, insults or profanity are present in a comment, it is likely that it will be classified as toxic, regardless of the tone or the intent of the author e.g. humorous/self-deprecating. This could present some biases towards already vulnerable minority groups.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The "Toxic Comment Classification" dataset is released under [CC0], with the underlying comment text being governed by Wikipedia\'s [CC-SA-3.0].
### Citation Information
No citation information.
### Contributions
Thanks to [@Tigrex161](https://github.com/Tigrex161) for adding this dataset. |
takala/financial_phrasebank | takala | 2024-01-18T11:03:40Z | 6,292 | 220 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-3.0",
"size_categories:1K<n<10K",
"arxiv:1307.5336",
"region:us",
"finance"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
pretty_name: FinancialPhrasebank
dataset_info:
- config_name: sentences_allagree
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 303371
num_examples: 2264
download_size: 681890
dataset_size: 303371
- config_name: sentences_75agree
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 472703
num_examples: 3453
download_size: 681890
dataset_size: 472703
- config_name: sentences_66agree
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 587152
num_examples: 4217
download_size: 681890
dataset_size: 587152
- config_name: sentences_50agree
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 679240
num_examples: 4846
download_size: 681890
dataset_size: 679240
tags:
- finance
---
# Dataset Card for financial_phrasebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news) [ResearchGate](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1307.5336)
- **Leaderboard:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news/code) [PapersWithCode](https://paperswithcode.com/sota/sentiment-analysis-on-financial-phrasebank)
- **Point of Contact:** [Pekka Malo](mailto:[email protected]) [Ankur Sinha](mailto:[email protected])
### Dataset Summary
Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English language financial news categorised by sentiment. The dataset is divided by agreement rate of 5-8 annotators.
### Supported Tasks and Leaderboards
Sentiment Classification
### Languages
English
## Dataset Structure
### Data Instances
```
{ "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
}
```
### Data Fields
- sentence: a tokenized line from the dataset
- label: a label corresponding to the class as a string: 'positive', 'negative' or 'neutral'
### Data Splits
There's no train/validation/test split.
However the dataset is available in four possible configurations depending on the percentage of agreement of annotators:
- `sentences_50agree`: Number of instances with >=50% annotator agreement: 4846
- `sentences_66agree`: Number of instances with >=66% annotator agreement: 4217
- `sentences_75agree`: Number of instances with >=75% annotator agreement: 3453
- `sentences_allagree`: Number of instances with 100% annotator agreement: 2264
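A configuration is selected by name when loading; for example, to keep only the sentences with full annotator agreement (a minimal sketch):
```
from datasets import load_dataset

phrasebank = load_dataset("financial_phrasebank", "sentences_allagree")
print(phrasebank["train"].features["label"].names)  # ['negative', 'neutral', 'positive']
print(phrasebank["train"][0]["sentence"])
```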
## Dataset Creation
### Curation Rationale
The key arguments for the low utilization of statistical techniques in
financial sentiment analysis have been the difficulty of implementation for
practical applications and the lack of high quality training data for building
such models. Especially in the case of finance and economic texts, annotated
collections are a scarce resource and many are reserved for proprietary use
only. To resolve the missing training data problem, we present a collection of
∼ 5000 sentences to establish human-annotated standards for benchmarking
alternative modeling techniques.
The objective of the phrase level annotation task was to classify each example
sentence into a positive, negative or neutral category by considering only the
information explicitly available in the given sentence. Since the study is
focused only on financial and economic domains, the annotators were asked to
consider the sentences from the view point of an investor only; i.e. whether
the news may have positive, negative or neutral influence on the stock price.
As a result, sentences which have a sentiment that is not relevant from an
economic or financial perspective are considered neutral.
### Source Data
#### Initial Data Collection and Normalization
The corpus used in this paper is made out of English news on all listed
companies in OMX Helsinki. The news has been downloaded from the LexisNexis
database using an automated web scraper. Out of this news database, a random
subset of 10,000 articles was selected to obtain good coverage across small and
large companies, companies in different industries, as well as different news
sources. Following the approach taken by Maks and Vossen (2010), we excluded
all sentences which did not contain any of the lexicon entities. This reduced
the overall sample to 53,400 sentences, where each has at least one or more
recognized lexicon entity. The sentences were then classified according to the
types of entity sequences detected. Finally, a random sample of ∼5000 sentences
was chosen to represent the overall news database.
#### Who are the source language producers?
The source data was written by various financial journalists.
### Annotations
#### Annotation process
This release of the financial phrase bank covers a collection of 4840
sentences. The selected collection of phrases was annotated by 16 people with
adequate background knowledge on financial markets.
Given the large number of overlapping annotations (5 to 8 annotations per
sentence), there are several ways to define a majority vote based gold
standard. To provide an objective comparison, we have formed 4 alternative
reference datasets based on the strength of majority agreement; these are the
four configurations listed under [Data Splits](#data-splits).
#### Who are the annotators?
Three of the annotators were researchers and the remaining 13 annotators were
master's students at Aalto University School of Business with majors primarily
in finance, accounting, and economics.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
All annotators were from the same institution and so interannotator agreement
should be understood with this taken into account.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/.
If you are interested in commercial use of the data, please contact the following authors for an appropriate license:
- [Pekka Malo](mailto:[email protected])
- [Ankur Sinha](mailto:[email protected])
### Citation Information
```
@article{Malo2014GoodDO,
title={Good debt or bad debt: Detecting semantic orientations in economic texts},
author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. Takala},
journal={Journal of the Association for Information Science and Technology},
year={2014},
volume={65}
}
```
### Contributions
Thanks to [@frankier](https://github.com/frankier) for adding this dataset. |
thunlp/few_rel | thunlp | 2024-01-18T11:03:39Z | 140 | 6 | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"arxiv:1810.10147",
"arxiv:1910.07124",
"region:us",
"relation-extraction"
] | [
"other"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- n<1K
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: fewrel
pretty_name: Few-Shot Relation Classification Dataset
tags:
- relation-extraction
dataset_info:
- config_name: default
features:
- name: relation
dtype: string
- name: tokens
sequence: string
- name: head
struct:
- name: text
dtype: string
- name: type
dtype: string
- name: indices
sequence:
sequence: int64
- name: tail
struct:
- name: text
dtype: string
- name: type
dtype: string
- name: indices
sequence:
sequence: int64
- name: names
sequence: string
splits:
- name: train_wiki
num_bytes: 19923155
num_examples: 44800
- name: val_nyt
num_bytes: 1385642
num_examples: 2500
- name: val_pubmed
num_bytes: 488502
num_examples: 1000
- name: val_semeval
num_bytes: 2646249
num_examples: 8851
- name: val_wiki
num_bytes: 5147348
num_examples: 11200
- name: pubmed_unsupervised
num_bytes: 1117703
num_examples: 2500
download_size: 22674323
dataset_size: 30708599
- config_name: pid2name
features:
- name: relation
dtype: string
- name: names
sequence: string
splits:
- name: pid2name
num_bytes: 81607
num_examples: 744
download_size: 22674323
dataset_size: 81607
config_names:
- default
- pid2name
---
# Dataset Card for few_rel
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub Page](https://thunlp.github.io/)
- **Repository:** [GitHub](https://github.com/thunlp/FewRel)
- **Paper:** [FewRel](https://arxiv.org/abs/1810.10147), [FewRel 2.0](https://arxiv.org/abs/1910.07124)
- **Leaderboard:** [GitHub Leaderboard](https://thunlp.github.io/fewrel.html)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
FewRel is a large-scale few-shot relation extraction dataset, which contains more than one hundred relations and tens of thousands of annotated instances across different domains.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
The dataset contains English text, as used by writers on Wikipedia, and crowdsourced English annotations.
## Dataset Structure
### Data Instances
An instance from `train_wiki` split:
```
{'head': {'indices': [[16]], 'text': 'tjq', 'type': 'Q1331049'}, 'names': ['place served by transport hub', 'territorial entity or entities served by this transport hub (airport, train station, etc.)'], 'relation': 'P931', 'tail': {'indices': [[13, 14]], 'text': 'tanjung pandan', 'type': 'Q3056359'}, 'tokens': ['Merpati', 'flight', '106', 'departed', 'Jakarta', '(', 'CGK', ')', 'on', 'a', 'domestic', 'flight', 'to', 'Tanjung', 'Pandan', '(', 'TJQ', ')', '.']}
```
### Data Fields
For `default`:
- `relation`: a `string` feature containing the PID of the relation.
- `tokens`: a `list` of `string` features containing the tokens of the text.
- `head`: a dictionary containing:
  - `text`: a `string` feature representing the head entity.
  - `type`: a `string` feature representing the type of the head entity.
  - `indices`: a `list` of `list`s of token indices.
- `tail`: a dictionary containing:
  - `text`: a `string` feature representing the tail entity.
  - `type`: a `string` feature representing the type of the tail entity.
  - `indices`: a `list` of `list`s of token indices.
- `names`: a `list` of `string` features containing relation names. For the `pubmed_unsupervised` split, this is set to a `list` with an empty `string`. For the `val_semeval` and `val_pubmed` splits, this is set to a `list` with the `string` from the `relation` field.
### Data Splits
| train_wiki | val_nyt | val_pubmed | val_semeval | val_wiki | pubmed_unsupervised |
|-----------:|--------:|-----------:|------------:|---------:|--------------------:|
|      44800 |    2500 |       1000 |        8851 |    11200 |                2500 |
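A minimal loading sketch with the 🤗 `datasets` library, joining the `default` configuration with the `pid2name` mapping; the configuration and split names are the ones listed on this card, and the snippet assumes a `datasets` version that can load this dataset:

```python
from datasets import load_dataset

# Load the main configuration and the PID-to-name mapping.
fewrel = load_dataset("thunlp/few_rel", "default")
pid2name = load_dataset("thunlp/few_rel", "pid2name")["pid2name"]

# Build a lookup from a relation PID (e.g. "P931") to its human-readable names.
relation_names = {row["relation"]: row["names"] for row in pid2name}

example = fewrel["train_wiki"][0]
print(example["relation"], "->", relation_names.get(example["relation"]))
```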
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
For FewRel:
Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong
For FewRel 2.0:
Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie
### Licensing Information
```
MIT License
Copyright (c) 2018 THUNLP
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@inproceedings{han-etal-2018-fewrel,
title = "{F}ew{R}el: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation",
author = "Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1514",
doi = "10.18653/v1/D18-1514",
pages = "4803--4809"
}
```
```
@inproceedings{gao-etal-2019-fewrel,
title = "{F}ew{R}el 2.0: Towards More Challenging Few-Shot Relation Classification",
author = "Gao, Tianyu and Han, Xu and Zhu, Hao and Liu, Zhiyuan and Li, Peng and Sun, Maosong and Zhou, Jie",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1649",
doi = "10.18653/v1/D19-1649",
pages = "6251--6256"
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
fever/fever | fever | 2024-01-18T11:03:38Z | 1,004 | 30 | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-sa-3.0",
"license:gpl-3.0",
"size_categories:100K<n<1M",
"region:us",
"knowledge-verification"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
paperswithcode_id: fever
annotations_creators:
- crowdsourced
language_creators:
- found
license:
- cc-by-sa-3.0
- gpl-3.0
multilinguality:
- monolingual
pretty_name: FEVER
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
tags:
- knowledge-verification
dataset_info:
- config_name: v1.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: train
num_bytes: 29591412
num_examples: 311431
- name: labelled_dev
num_bytes: 3643157
num_examples: 37566
- name: unlabelled_dev
num_bytes: 1548965
num_examples: 19998
- name: unlabelled_test
num_bytes: 1617002
num_examples: 19998
- name: paper_dev
num_bytes: 1821489
num_examples: 18999
- name: paper_test
num_bytes: 1821668
num_examples: 18567
download_size: 44853972
dataset_size: 40043693
- config_name: v2.0
features:
- name: id
dtype: int32
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_annotation_id
dtype: int32
- name: evidence_id
dtype: int32
- name: evidence_wiki_url
dtype: string
- name: evidence_sentence_id
dtype: int32
splits:
- name: validation
num_bytes: 306243
num_examples: 2384
download_size: 392466
dataset_size: 306243
- config_name: wiki_pages
features:
- name: id
dtype: string
- name: text
dtype: string
- name: lines
dtype: string
splits:
- name: wikipedia_pages
num_bytes: 7254115038
num_examples: 5416537
download_size: 1713485474
dataset_size: 7254115038
---
# Dataset Card for "fever"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://fever.ai/](https://fever.ai/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work on verifiable knowledge extraction and for stimulating progress in this direction.
- FEVER Dataset: FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences
extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims
are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the
sentence(s) forming the necessary evidence for their judgment.
- FEVER 2.0 Adversarial Attacks Dataset: The FEVER 2.0 Dataset consists of 1174 claims created by the submissions of
participants in the Breaker phase of the 2019 shared task. Participants (Breakers) were tasked with generating
adversarial examples that induce classification errors for the existing systems. Breakers submitted a dataset of up to
1000 instances with an equal number of instances for each of the three classes (Supported, Refuted, NotEnoughInfo). Only
novel claims (i.e. not contained in the original FEVER dataset) were considered valid entries to the shared task.
The submissions were then manually evaluated for correctness (grammatical, appropriately labeled, and meeting the
requirements of the FEVER annotation guidelines).
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in those tasks the
passage used to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
#### v1.0
- **Size of downloaded dataset files:** 44.86 MB
- **Size of the generated dataset:** 40.05 MB
- **Total amount of disk used:** 84.89 MB
An example of 'train' looks as follows.
```
{'claim': 'Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.',
'evidence_wiki_url': 'Nikolaj_Coster-Waldau',
'label': 'SUPPORTS',
'id': 75397,
'evidence_id': 104971,
'evidence_sentence_id': 7,
'evidence_annotation_id': 92206}
```
#### v2.0
- **Size of downloaded dataset files:** 0.39 MB
- **Size of the generated dataset:** 0.30 MB
- **Total amount of disk used:** 0.70 MB
An example of 'validation' looks as follows.
```
{'claim': "There is a convicted statutory rapist called Chinatown's writer.",
'evidence_wiki_url': '',
'label': 'NOT ENOUGH INFO',
'id': 500000,
'evidence_id': -1,
'evidence_sentence_id': -1,
'evidence_annotation_id': 269158}
```
#### wiki_pages
- **Size of downloaded dataset files:** 1.71 GB
- **Size of the generated dataset:** 7.25 GB
- **Total amount of disk used:** 8.97 GB
An example of 'wikipedia_pages' looks as follows.
```
{'text': 'The following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world . ',
'lines': '0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t',
'id': '1928_in_association_football'}
```
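The `lines` field packs a page's sentences into a single string: each sentence sits on its own line, prefixed by its sentence id and a tab (additional tab-separated annotations may follow the text on some pages). A minimal parsing sketch, based only on the structure visible in the example above; the sentence ids are what the `evidence_sentence_id` field of the `v1.0` configuration points into:

```python
def parse_lines(lines_field):
    """Split the `lines` field into {sentence_id: sentence_text}."""
    sentences = {}
    for line in lines_field.split("\n"):
        parts = line.split("\t", 1)
        if len(parts) == 2 and parts[0].isdigit():
            sentences[int(parts[0])] = parts[1]
    return sentences

lines = "0\tThe following are the football -LRB- soccer -RRB- events of the year 1928 throughout the world .\n1\t"
print(parse_lines(lines))  # {0: 'The following are ...', 1: ''}
```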
### Data Fields
The data fields are the same among all splits.
#### v1.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### v2.0
- `id`: a `int32` feature.
- `label`: a `string` feature.
- `claim`: a `string` feature.
- `evidence_annotation_id`: a `int32` feature.
- `evidence_id`: a `int32` feature.
- `evidence_wiki_url`: a `string` feature.
- `evidence_sentence_id`: a `int32` feature.
#### wiki_pages
- `id`: a `string` feature.
- `text`: a `string` feature.
- `lines`: a `string` feature.
### Data Splits
#### v1.0
| | train | unlabelled_dev | labelled_dev | paper_dev | unlabelled_test | paper_test |
|------|-------:|---------------:|-------------:|----------:|----------------:|-----------:|
| v1.0 | 311431 | 19998 | 37566 | 18999 | 19998 | 18567 |
#### v2.0
| | validation |
|------|-----------:|
| v2.0 | 2384 |
#### wiki_pages
| | wikipedia_pages |
|------------|----------------:|
| wiki_pages | 5416537 |
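A quick-start sketch, assuming a `datasets` version that can load this dataset (script-based datasets may additionally require `trust_remote_code=True`); the repository id and configuration names are the ones from this card:

```python
from datasets import load_dataset

# Configurations: "v1.0", "v2.0", "wiki_pages".
fever = load_dataset("fever/fever", "v1.0")

example = fever["train"][0]
print(example["claim"])
print(example["label"], "->", example["evidence_wiki_url"])
```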
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
FEVER license:
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the "License Terms"). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use "FEVER Dataset", please cite:
```bibtex
@inproceedings{Thorne18Fever,
author = {Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Mittal, Arpit},
title = {{FEVER}: a Large-scale Dataset for Fact Extraction and {VERification}},
booktitle = {NAACL-HLT},
year = {2018}
}
```
If you use "FEVER 2.0 Adversarial Attacks Dataset", please cite:
```bibtex
@inproceedings{Thorne19FEVER2,
author = {Thorne, James and Vlachos, Andreas and Cocarascu, Oana and Christodoulopoulos, Christos and Mittal, Arpit},
title = {The {FEVER2.0} Shared Task},
booktitle = {Proceedings of the Second Workshop on {Fact Extraction and VERification (FEVER)}},
year = {2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq),
[@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun),
[@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
li2017dailydialog/daily_dialog | li2017dailydialog | 2024-01-18T11:02:28Z | 8,414 | 145 | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"region:us",
"emotion-classification",
"dialog-act-classification"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: dailydialog
pretty_name: DailyDialog
tags:
- emotion-classification
- dialog-act-classification
dataset_info:
features:
- name: dialog
sequence: string
- name: act
sequence:
class_label:
names:
'0': __dummy__
'1': inform
'2': question
'3': directive
'4': commissive
- name: emotion
sequence:
class_label:
names:
'0': no emotion
'1': anger
'2': disgust
'3': fear
'4': happiness
'5': sadness
'6': surprise
splits:
- name: train
num_bytes: 7296715
num_examples: 11118
- name: test
num_bytes: 655844
num_examples: 1000
- name: validation
num_bytes: 673943
num_examples: 1000
download_size: 4475921
dataset_size: 8626502
---
# Dataset Card for "daily_dialog"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://yanran.li/dailydialog](http://yanran.li/dailydialog)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.48 MB
- **Size of the generated dataset:** 8.63 MB
- **Total amount of disk used:** 13.11 MB
### Dataset Summary
We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.
The language is human-written and less noisy. The dialogues in the dataset reflect the way we communicate in
daily life and cover various everyday topics. We also manually label the dataset with communication
intention and emotion information. Then, we evaluate existing approaches on the DailyDialog dataset and hope it
benefits the research field of dialog systems.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 4.48 MB
- **Size of the generated dataset:** 8.63 MB
- **Total amount of disk used:** 13.11 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"act": [2, 1, 1, 1, 1, 2, 3, 2, 3, 4],
"dialog": "[\"Good afternoon . This is Michelle Li speaking , calling on behalf of IBA . Is Mr Meng available at all ? \", \" This is Mr Meng ...",
"emotion": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `dialog`: a `list` of `string` features.
- `act`: a `list` of classification labels, with possible values including `__dummy__` (0), `inform` (1), `question` (2), `directive` (3) and `commissive` (4).
- `emotion`: a `list` of classification labels, with possible values including `no emotion` (0), `anger` (1), `disgust` (2), `fear` (3), `happiness` (4), `sadness` (5) and `surprise` (6).
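Since `act` and `emotion` are stored as integer class ids, the feature metadata can recover their string names. A minimal sketch with the `datasets` library, assuming a version that can load this dataset:

```python
from datasets import load_dataset

dd = load_dataset("li2017dailydialog/daily_dialog", split="validation")

# `act` and `emotion` are Sequence(ClassLabel) features; `.feature.int2str`
# maps the stored integer ids back to the label names listed above.
act_name = dd.features["act"].feature.int2str
emotion_name = dd.features["emotion"].feature.int2str

example = dd[0]
for turn, act, emo in zip(example["dialog"], example["act"], example["emotion"]):
    print(f"[{act_name(act)} | {emotion_name(emo)}] {turn}")
```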
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11118| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
DailyDialog dataset is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@InProceedings{li2017dailydialog,
author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)},
year = {2017}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@julien-c](https://github.com/julien-c) for adding this dataset. |
arxiv-community/arxiv_dataset | arxiv-community | 2024-01-18T11:01:52Z | 3,323 | 114 | [
"task_categories:translation",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:fact-checking-retrieval",
"task_ids:text-simplification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"arxiv:1905.00075",
"region:us"
] | [
"translation",
"summarization",
"text-retrieval"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
- summarization
- text-retrieval
task_ids:
- document-retrieval
- entity-linking-retrieval
- explanation-generation
- fact-checking-retrieval
- text-simplification
paperswithcode_id: null
pretty_name: arXiv Dataset
dataset_info:
features:
- name: id
dtype: string
- name: submitter
dtype: string
- name: authors
dtype: string
- name: title
dtype: string
- name: comments
dtype: string
- name: journal-ref
dtype: string
- name: doi
dtype: string
- name: report-no
dtype: string
- name: categories
dtype: string
- name: license
dtype: string
- name: abstract
dtype: string
- name: update_date
dtype: string
splits:
- name: train
num_bytes: 3056873071
num_examples: 2349354
download_size: 0
dataset_size: 3056873071
---
# Dataset Card for arXiv Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:**
- **Paper:** [On the Use of ArXiv as a Dataset](https://arxiv.org/abs/1905.00075)
- **Leaderboard:**
- **Point of Contact:** [Matt Bierbaum](mailto:[email protected])
### Dataset Summary
A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is English
## Dataset Structure
### Data Instances
This dataset is a mirror of the original arXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in JSON format. An example is given below:
```
{'id': '0704.0002',
'submitter': 'Louis Theran',
'authors': 'Ileana Streinu and Louis Theran',
'title': 'Sparsity-certifying Graph Decompositions',
'comments': 'To appear in Graphs and Combinatorics',
'journal-ref': None,
'doi': None,
'report-no': None,
'categories': 'math.CO cs.CG',
'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/',
'abstract': ' We describe a new algorithm, the $(k,\\ell)$-pebble game with colors, and use\nit obtain a characterization of the family of $(k,\\ell)$-sparse graphs and\nalgorithmic solutions to a family of problems concerning tree decompositions of\ngraphs. Special instances of sparse graphs appear in rigidity theory and have\nreceived increased attention in recent years. In particular, our colored\npebbles generalize and strengthen the previous results of Lee and Streinu and\ngive a new proof of the Tutte-Nash-Williams characterization of arboricity. We\nalso present a new decomposition that certifies sparsity based on the\n$(k,\\ell)$-pebble game with colors. Our work also exposes connections between\npebble game algorithms and previous sparse graph algorithms by Gabow, Gabow and\nWestermann and Hendrickson.\n',
'update_date': '2008-12-13'}
```
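A minimal sketch for working with the raw metadata file directly. The file name below is an assumption (the Kaggle snapshot ships as one JSON object per line); adjust the path to wherever you downloaded it:

```python
import json

def iter_papers(path="arxiv-metadata-oai-snapshot.json", category="cs.IR"):
    """Yield (id, title) for papers tagged with the given arXiv category."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            paper = json.loads(line)
            if category in paper["categories"].split():
                yield paper["id"], paper["title"].strip()

for arxiv_id, title in iter_papers():
    print(arxiv_id, title)
    break  # just show the first match
```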
### Data Fields
- `id`: ArXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report Number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the ArXiv system
- `license`: License under which the paper was submitted to arXiv
- `update_date`: Date of the last update of the metadata entry
### Data Splits
The data is not split; all examples are provided in a single `train` split.
## Dataset Creation
### Curation Rationale
For nearly 30 years, arXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming, depth. In these times of unique global challenges, efficient extraction of insights from data is essential. To help make arXiv more accessible, a free, open pipeline to the machine-readable arXiv dataset is provided on Kaggle: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full-text PDFs, and more. It is intended to empower new use cases and the exploration of richer machine learning techniques that combine multi-modal features, towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction, and semantic search interfaces.
### Source Data
This data is based on arXiv papers.
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
This dataset contains no annotations.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The original data is maintained by [ArXiv](https://arxiv.org/)
### Licensing Information
The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@misc{clement2019arxiv,
title={On the Use of ArXiv as a Dataset},
author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
year={2019},
eprint={1905.00075},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset. |
abuelkhair-corpus/arabic_billion_words | abuelkhair-corpus | 2024-01-18T11:01:47Z | 441 | 29 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:ar",
"license:unknown",
"size_categories:100K<n<1M",
"arxiv:1611.04033",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Arabic Billion Words
dataset_info:
- config_name: Alittihad
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1601790302
num_examples: 349342
download_size: 348259999
dataset_size: 1601790302
- config_name: Almasryalyoum
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1056197870
num_examples: 291723
download_size: 242604438
dataset_size: 1056197870
- config_name: Almustaqbal
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1545659336
num_examples: 446873
download_size: 350826797
dataset_size: 1545659336
- config_name: Alqabas
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2631729746
num_examples: 817274
download_size: 595274646
dataset_size: 2631729746
- config_name: Echoroukonline
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 464386206
num_examples: 139732
download_size: 108184378
dataset_size: 464386206
- config_name: Ryiadh
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3101294859
num_examples: 858188
download_size: 691264971
dataset_size: 3101294859
- config_name: Sabanews
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 198019614
num_examples: 92149
download_size: 38214558
dataset_size: 198019614
- config_name: SaudiYoum
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2723291416
num_examples: 888068
download_size: 605537923
dataset_size: 2723291416
- config_name: Techreen
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1103458209
num_examples: 314597
download_size: 252976781
dataset_size: 1103458209
- config_name: Youm7
features:
- name: url
dtype: string
- name: head_line
dtype: string
- name: date
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3004689464
num_examples: 1172136
download_size: 617708074
dataset_size: 3004689464
config_names:
- Alittihad
- Almasryalyoum
- Almustaqbal
- Alqabas
- Echoroukonline
- Ryiadh
- Sabanews
- SaudiYoum
- Techreen
- Youm7
---
# Dataset Card for Arabic Billion Words Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus
- **Repository:**
- **Paper:** https://arxiv.org/pdf/1611.04033
- **Leaderboard:**
- **Point of Contact:**[Ibrahim Abu El-Khair]([email protected])
### Dataset Summary
The Abu El-Khair Corpus is an Arabic text corpus that includes more than five million newspaper articles.
It contains over a billion and a half words in total, of which about three million are unique.
The corpus is provided in two encodings, UTF-8 and Windows CP-1256, and is marked up with two markup
languages, SGML and XML.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Arabic
## Dataset Structure
### Data Instances
This is an example of the "Almasryalyoum" configuration subset:
```python
{
"url": "http://today.almasryalyoum.com/printerfriendly.aspx?ArticleID=61300",
"head_line": "رئيس وزراء المجر: عنصرية جماهير أوجبيست جلبت العار للبلاد",
"date": "19/5/2007",
"text": """قال متحدث باسم الحكومة المجرية: إن رئيس الوزراء فيرنك جيوركساني رحب بقرار اتحاد كرة القدم المجري بخصم ثلاث نقاط من نادي أوجبيست بسبب السلوك العنصري الذي صدر من جماهيره.
وعاقب الاتحاد المجري فريق أوجبيست بعد أن سخرت جماهيره من إبراهيم سيديبي مهاجم فريق ديبرينسين الأسود أثناء مباراة الفريقين أوائل مايو الجاري.
يذكر أن الاتحاد فرض أيضا غرامة مالية قدرها 20 ألف دولار علي أوجبيست في عام 2005 بعد أن رددت جماهيره شعارات معادية للسامية خلال مباراة بالدوري المجري.
وأوضح جيوركساني في خطاب إلي إيستفان كيستليكي رئيس الاتحاد المجري لكرة القدم، أن هذا السلوك العنصري من الجماهير «جلب العار لكرة القدم وللمجر». يذكر أن المجر بها مجموعة من مشجعي كرة القدم المشاغبين «الهوليجانز»، وشارك الكثير منهم في أعمال شغب معادية للحكومة في العام الماضي.""",
}
```
### Data Fields
The data fields are:
- `url`: string, the original URL of the article
- `head_line`: string, the headline of the article
- `date`: string, the date of the article
- `text`: string, the text content of the article
### Data Splits
There is only one "training" split for all configuration subsets, containing the following number of examples:
| | Number of examples |
|:---------------|-------------------:|
| Alittihad | 349342 |
| Almasryalyoum | 291723 |
| Almustaqbal | 446873 |
| Alqabas | 817274 |
| Echoroukonline | 139732 |
| Ryiadh | 858188 |
| Sabanews | 92149 |
| SaudiYoum | 888068 |
| Techreen | 314597 |
| Youm7 | 1172136 |
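A minimal loading sketch, assuming a `datasets` version that can load this dataset and using the configuration names from the table above (`Sabanews` is one of the smaller subsets):

```python
from datasets import load_dataset

sabanews = load_dataset("abuelkhair-corpus/arabic_billion_words", "Sabanews", split="train")

article = sabanews[0]
print(article["head_line"])
print(article["date"], article["url"])
```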
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{el20161,
title={1.5 billion words arabic corpus},
author={El-Khair, Ibrahim Abu},
journal={arXiv preprint arXiv:1611.04033},
year={2016}
}
```
### Contributions
Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) and [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
facebook/curiosity_dialogs | facebook | 2024-01-18T09:51:48Z | 122 | 12 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:10K<n<100K",
"region:us",
"conversational-curiosity"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
paperswithcode_id: curiosity
pretty_name: Curiosity Dataset
tags:
- conversational-curiosity
dataset_info:
features:
- name: messages
sequence:
- name: message
dtype: string
- name: liked
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: sender
dtype:
class_label:
names:
'0': user
'1': assistant
- name: facts
sequence:
- name: fid
dtype: int32
- name: used
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: source
dtype:
class_label:
names:
'0': section
'1': known
'2': random
- name: message_id
dtype: string
- name: dialog_acts
sequence: string
- name: known_entities
sequence: string
- name: focus_entity
dtype: string
- name: dialog_id
dtype: int32
- name: inferred_steps
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: created_time
dtype: int64
- name: aspects
sequence: string
- name: first_aspect
dtype: string
- name: second_aspect
dtype: string
- name: shuffle_facts
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: related_entities
sequence: string
- name: tag
dtype: string
- name: user_id
dtype: int32
- name: assistant_id
dtype: int32
- name: is_annotated
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: user_dialog_rating
dtype: int32
- name: user_other_agent_rating
dtype: int32
- name: assistant_dialog_rating
dtype: int32
- name: assistant_other_agent_rating
dtype: int32
- name: reported
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
- name: annotated
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
config_name: curiosity_dialogs
splits:
- name: train
num_bytes: 37198297
num_examples: 10287
- name: val
num_bytes: 4914487
num_examples: 1287
- name: test
num_bytes: 4915613
num_examples: 1287
- name: test_zero
num_bytes: 4333191
num_examples: 1187
download_size: 92169165
dataset_size: 51361588
---
# Dataset Card for Curiosity Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Curiosity Dataset Homepage](https://www.pedro.ai/curiosity)
- **Repository:** [Curiosity Dataset Repository](https://github.com/facebookresearch/curiosity)
- **Paper:** [ACL Anthology](https://www.aclweb.org/anthology/2020.emnlp-main.655/)
- **Point of Contact:** [Pedro Rodriguez](https://mailhide.io/e/wbfjM)
### Dataset Summary
The Curiosity dataset consists of 14K English dialogs (181K utterances) in which users and assistants converse about geographic topics like geopolitical entities and locations. The dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.
### Supported Tasks and Leaderboards
* `text-generation-other-conversational-curiosity`: The dataset can be used to train a model for Conversational Curiosity, which consists in testing the hypothesis that engagement increases when users are presented with facts related to what they already know. Success on this task is typically measured by achieving a *high* [Accuracy](https://huggingface.co/metrics/accuracy) and [F1 Score](https://huggingface.co/metrics/f1).
### Languages
The text in the dataset is in English, collected by crowdsourcing. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point consists of a dialog between a user and an assistant, followed by the attributes of that particular dialog.
An example from the Curiosity Dataset train set looks as follows:
```
{'annotated': 1,
'aspects': ['Media', 'Politics and government'],
'assistant_dialog_rating': 5,
'assistant_id': 341,
'assistant_other_agent_rating': 5,
'created_time': 1571783665,
'dialog_id': 21922,
'first_aspect': 'Media',
'focus_entity': 'Namibia',
'inferred_steps': 1,
'is_annotated': 0,
'known_entities': ['South Africa', 'United Kingdom', 'Portugal'],
'messages': {'dialog_acts': [['request_topic'],
['inform_response'],
['request_aspect'],
['inform_response'],
['request_followup'],
['inform_response'],
['request_aspect', 'feedback_positive'],
['inform_response'],
['request_followup'],
['inform_response'],
[],
[]],
'facts': [{'fid': [], 'source': [], 'used': []},
{'fid': [77870, 77676, 77816, 77814, 77775, 77659, 77877, 77785, 77867],
'source': [0, 1, 2, 2, 0, 2, 0, 1, 1],
'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
{'fid': [], 'source': [], 'used': []},
{'fid': [77725, 77870, 77676, 77863, 77814, 77775, 77659, 77877, 77867],
'source': [2, 0, 1, 1, 2, 0, 2, 0, 1],
'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
{'fid': [], 'source': [], 'used': []},
{'fid': [77694, 77661, 77863, 77780, 77671, 77704, 77869, 77693, 77877],
'source': [1, 2, 1, 0, 2, 2, 0, 1, 0],
'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]},
{'fid': [], 'source': [], 'used': []},
{'fid': [77816, 77814, 77864, 77659, 77877, 77803, 77738, 77784, 77789],
'source': [2, 2, 0, 2, 0, 1, 1, 0, 1],
'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
{'fid': [], 'source': [], 'used': []},
{'fid': [77694, 77776, 77780, 77696, 77707, 77693, 77778, 77702, 77743],
'source': [1, 0, 0, 2, 1, 1, 0, 2, 2],
'used': [0, 0, 0, 0, 0, 0, 0, 0, 0]},
{'fid': [], 'source': [], 'used': []},
{'fid': [77662, 77779, 77742, 77734, 77663, 77777, 77702, 77731, 77778],
'source': [1, 0, 2, 1, 2, 0, 2, 1, 0],
'used': [0, 0, 0, 0, 0, 0, 0, 0, 1]}],
'liked': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'message': ['Hi. I want information about Namibia.',
'Nmbia is a country in southern Africa.',
'Do you have information about the media there?',
'A mentional amount of foriegn',
'What about it?',
"Media and journalists in Namibia are represented by the Namibia chapter of the Media Institute of 'southern Africa and the Editors Forum of Namibia.",
'Interesting! What can you tell me about the politics and government?',
'Namibia formed the Namibian Defence Force, comprising former enemies in a 23-year bush war.',
'Do you have more information about it?',
"With a small army and a fragile economy , the Namibian government's principal foreign policy concern is developing strengthened ties within the Southern African region.",
"That's all I wanted to know. Thank you!",
'My pleasure!'],
'message_id': ['617343895',
'2842515356',
'4240816985',
'520711081',
'1292358002',
'3677078227',
'1563061125',
'1089028270',
'1607063839',
'113037558',
'1197873991',
'1399017322'],
'sender': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]},
'related_entities': ['Western Roman Empire',
'United Kingdom',
'Portuguese language',
'Southern African Development Community',
'South Africa',
'Kalahari Desert',
'Namib Desert',
'League of Nations',
'Afrikaans',
'Sub-Saharan Africa',
'Portugal',
'South-West Africa',
'Warmbad, Namibia',
'German language',
'NBC'],
'reported': 0,
'second_aspect': 'Politics and government',
'shuffle_facts': 1,
'tag': 'round_2',
'user_dialog_rating': 5,
'user_id': 207,
'user_other_agent_rating': 5}
```
### Data Fields
* `messages`: List of dialogs between the user and the assistant and their associated attributes
* `dialog_acts`: List of actions performed in the dialogs
* `facts`: List of facts returned by the assistant
* `fid`: Fact ID
* `source`: Source for the fact
* `used`: Whether facts were used before in the same dialog
* `liked`: List of values indicating whether each dialog was liked
* `message`: List of dialogs (messages) between the user and the assistant
* `message_id`: Message ID
* `sender`: Message author ID (numeric)
* `known_entities`: Rooted facts about entities the user knows
* `focus_entity` : Entity in focus in the dialogs
* `dialog_id `: Dialog ID
* `inferred_steps`: Number of inferred steps
* `created_time`: Time of creation of the dialog
* `aspects`: List of two aspects which the dialog is about
* `first_aspect`: First aspect
* `second_aspect`: Second aspect
* `shuffle_facts`: Whether facts were shuffled
* `related_entities` : List of fifteen related entities to the focus entity
* `tag`: Conversation tag
* `user_id`: User ID
* `assistant_id`: Assistant ID
* `is_annotated`: 0 or 1 (More Information Needed)
* `user_dialog_rating`: 1 - 5 (More Information Needed)
* `user_other_agent_rating`: 1 - 5 (More Information Needed)
* `assistant_dialog_rating`: 1 - 5 (More Information Needed)
* `assistant_other_agent_rating`: 1 - 5 (More Information Needed)
* `reported`: Whether the dialog was reported inappropriate
* `annotated`: 0 or 1 (More Information Needed)
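Because `messages` is stored as a sequence of features, each dialog loads as a dict of parallel lists, one entry per turn. A minimal sketch reassembling per-turn records, assuming a `datasets` version that can load this dataset:

```python
from datasets import load_dataset

curiosity = load_dataset("facebook/curiosity_dialogs", split="train")

dialog = curiosity[0]
msgs = dialog["messages"]  # dict of parallel lists: message, sender, liked, ...

# Walk the turns; sender is a class id (0 = user, 1 = assistant).
for sender, text, liked in zip(msgs["sender"], msgs["message"], msgs["liked"]):
    speaker = "user" if sender == 0 else "assistant"
    print(f"{speaker} (liked={liked}): {text}")
```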
### Data Splits
The data is split into a training, validation, test and test_zero set as per the original dataset split.
| | train | validation | test | test_zero |
|-----------------------|------:|-----------:|-----:|----------:|
| Input dialog examples | 10287 | 1287 | 1287 | 1187 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/legalcode)
### Citation Information
```
@inproceedings{rodriguez2020curiosity,
title = {Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity},
author = {Pedro Rodriguez and Paul Crook and Seungwhan Moon and Zhiguang Wang},
year = 2020,
booktitle = {Empirical Methods in Natural Language Processing}
}
```
### Contributions
Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset. |
aviaefrat/cryptonite | aviaefrat | 2024-01-18T09:49:43Z | 66 | 6 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"size_categories:100K<n<1M",
"arxiv:2103.01242",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: null
pretty_name: Cryptonite
dataset_info:
- config_name: default
features:
- name: agent_info
sequence:
- name: Bottomline
dtype: string
- name: Role
dtype: string
- name: Target
dtype: float32
- name: agent_turn
sequence: int32
- name: dialogue_acts
sequence:
- name: intent
dtype: string
- name: price
dtype: float32
- name: utterance
sequence: string
- name: items
sequence:
- name: Category
dtype: string
- name: Images
dtype: string
- name: Price
dtype: float32
- name: Description
dtype: string
- name: Title
dtype: string
splits:
- name: train
num_bytes: 8538836
num_examples: 5247
- name: test
num_bytes: 1353933
num_examples: 838
- name: validation
num_bytes: 966032
num_examples: 597
download_size: 25373618
dataset_size: 10858801
- config_name: cryptonite
features:
- name: clue
dtype: string
- name: answer
dtype: string
- name: enumeration
dtype: string
- name: publisher
dtype: string
- name: date
dtype: int64
- name: quick
dtype: bool
- name: id
dtype: string
splits:
- name: train
num_bytes: 52228597
num_examples: 470804
- name: validation
num_bytes: 2901768
num_examples: 26156
- name: test
num_bytes: 2908275
num_examples: 26157
download_size: 21615952
dataset_size: 58038640
config_names:
- cryptonite
- default
---
# Dataset Card for Cryptonite
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/aviaefrat/cryptonite)
- **Repository:** [Github](https://github.com/aviaefrat/cryptonite)
- **Paper:** [Arxiv](https://arxiv.org/pdf/2103.01242.pdf)
- **Leaderboard:**
- **Point of Contact:** [Twitter](https://twitter.com/AviaEfrat)
### Dataset Summary
Current NLP datasets targeting ambiguity can be solved by a native speaker with relative ease. We present Cryptonite, a large-scale dataset based on cryptic crosswords, which is both linguistically complex and naturally sourced. Each example in Cryptonite is a cryptic clue, a short phrase or sentence with a misleading surface reading, whose solving requires disambiguating semantic, syntactic, and phonetic wordplays, as well as world knowledge. Cryptic clues pose a challenge even for experienced solvers, though top-tier experts can solve them with almost 100% accuracy. Cryptonite is a challenging task for current models; fine-tuning T5-Large on 470k cryptic clues achieves only 7.6% accuracy, on par with the accuracy of a rule-based clue solver (8.6%).
### Languages
English
## Dataset Structure
### Data Instances
This is one example from the train set.
```python
{
'clue': 'make progress socially in stated region (5)',
'answer': 'climb',
'date': 971654400000,
'enumeration': '(5)',
'id': 'Times-31523-6across',
'publisher': 'Times',
'quick': False
}
```
### Data Fields
- `clue`: a string representing the clue provided for the crossword
- `answer`: a string representing the answer to the clue
- `enumeration`: a string representing the number of letters in the answer (e.g. `(5)` for a five-letter answer), as given at the end of the clue
- `publisher`: a string representing the publisher of the crossword
- `date`: an int64 representing the UNIX timestamp (in milliseconds) of the crossword's publication date
- `quick`: a bool representing whether the crossword is quick (a crossword aimed at beginners, easier to solve)
- `id`: a string to uniquely identify a given example in the dataset
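A minimal sketch of loading the `cryptonite` configuration and decoding the millisecond timestamp (the Hub id below is an assumption based on the authors' repository and may differ):
```python
from datetime import datetime, timezone
from datasets import load_dataset

# A sketch; the Hub id is an assumption and may differ.
ds = load_dataset("aviaefrat/cryptonite", "cryptonite", split="train")
ex = ds[0]

# `date` is a UNIX timestamp in milliseconds.
published = datetime.fromtimestamp(ex["date"] / 1000, tz=timezone.utc).date()
print(f"{ex['clue']} -> {ex['answer']} ({ex['publisher']}, {published})")
```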
### Data Splits
Train (470,804 examples), validation (26,156 examples), test (26,157 examples).
## Dataset Creation
### Curation Rationale
Crosswords from the Times and the Telegraph.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Avia Efrat, Uri Shaham, Dan Kilman, Omer Levy
### Licensing Information
`cc-by-nc-4.0`
### Citation Information
```
@misc{efrat2021cryptonite,
title={Cryptonite: A Cryptic Crossword Benchmark for Extreme Ambiguity in Language},
author={Avia Efrat and Uri Shaham and Dan Kilman and Omer Levy},
year={2021},
eprint={2103.01242},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@theo-m](https://github.com/theo-m) for adding this dataset. |
nyu-mll/crows_pairs | nyu-mll | 2024-01-18T09:49:15Z | 1,595 | 8 | [
"task_categories:text-classification",
"task_ids:text-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"region:us",
"bias-evaluation"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
paperswithcode_id: crows-pairs
pretty_name: CrowS-Pairs
tags:
- bias-evaluation
dataset_info:
features:
- name: id
dtype: int32
- name: sent_more
dtype: string
- name: sent_less
dtype: string
- name: stereo_antistereo
dtype:
class_label:
names:
'0': stereo
'1': antistereo
- name: bias_type
dtype:
class_label:
names:
'0': race-color
'1': socioeconomic
'2': gender
'3': disability
'4': nationality
'5': sexual-orientation
'6': physical-appearance
'7': religion
'8': age
- name: annotations
sequence:
sequence:
class_label:
names:
'0': race-color
'1': socioeconomic
'2': gender
'3': disability
'4': nationality
'5': sexual-orientation
'6': physical-appearance
'7': religion
'8': age
- name: anon_writer
dtype: string
- name: anon_annotators
sequence: string
config_name: crows_pairs
splits:
- name: test
num_bytes: 419976
num_examples: 1508
download_size: 437764
dataset_size: 419976
---
# Dataset Card for CrowS-Pairs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/nyu-mll/crows-pairs
- **Paper:** https://aclanthology.org/2020.emnlp-main.154
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
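A minimal sketch of loading the paired sentences and tallying them by bias category (the Hub id is assumed from this repository's path):
```python
from collections import Counter
from datasets import load_dataset

# A sketch, assuming the Hub id from this repository's path.
ds = load_dataset("nyu-mll/crows_pairs", split="test")
bias_names = ds.features["bias_type"].names

# Each row pairs a more-stereotyping sentence with a less-stereotyping one.
print(ds[0]["sent_more"], "|", ds[0]["sent_less"])
print(Counter(bias_names[b] for b in ds["bias_type"]))
```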
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CrowS-Pairs is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).
It is created using prompts taken from the [ROCStories corpora](https://cs.rochester.edu/nlp/rocstories/) and the fiction part of [MNLI](https://cims.nyu.edu/~sbowman/multinli/). Please refer to their papers for more details.
### Citation Information
```
@inproceedings{nangia-etal-2020-crows,
title = "{C}row{S}-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models",
author = "Nangia, Nikita and
Vania, Clara and
Bhalerao, Rasika and
Bowman, Samuel R.",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.154",
doi = "10.18653/v1/2020.emnlp-main.154",
pages = "1953--1967",
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
allenai/cosmos_qa | allenai | 2024-01-18T09:43:51Z | 1,599 | 30 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"arxiv:1909.00277",
"region:us"
] | [
"multiple-choice"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: CosmosQA
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: cosmosqa
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer0
dtype: string
- name: answer1
dtype: string
- name: answer2
dtype: string
- name: answer3
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 17159918
num_examples: 25262
- name: test
num_bytes: 5121479
num_examples: 6963
- name: validation
num_bytes: 2186987
num_examples: 2985
download_size: 24399475
dataset_size: 24468384
---
# Dataset Card for "cosmos_qa"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://wilburone.github.io/cosmos/](https://wilburone.github.io/cosmos/)
- **Repository:** https://github.com/wilburOne/cosmosqa/
- **Paper:** [Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning](https://arxiv.org/abs/1909.00277)
- **Point of Contact:** [Lifu Huang](mailto:[email protected])
- **Size of downloaded dataset files:** 24.40 MB
- **Size of the generated dataset:** 24.51 MB
- **Total amount of disk used:** 48.91 MB
### Dataset Summary
Cosmos QA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning the likely causes or effects of events that require reasoning beyond the exact text spans in the context.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 24.40 MB
- **Size of the generated dataset:** 24.51 MB
- **Total amount of disk used:** 48.91 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answer0": "If he gets married in the church he wo nt have to get a divorce .",
"answer1": "He wants to get married to a different person .",
"answer2": "He wants to know if he does nt like this girl can he divorce her ?",
"answer3": "None of the above choices .",
"context": "\"Do i need to go for a legal divorce ? I wanted to marry a woman but she is not in the same religion , so i am not concern of th...",
"id": "3BFF0DJK8XA7YNK4QYIGCOG1A95STE##3180JW2OT5AF02OISBX66RFOCTG5J7##A2LTOS0AZ3B28A##Blog_56156##q1_a1##378G7J1SJNCDAAIN46FM2P7T6KZEW2",
"label": 1,
"question": "Why is this person asking about divorce ?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answer0`: a `string` feature.
- `answer1`: a `string` feature.
- `answer2`: a `string` feature.
- `answer3`: a `string` feature.
- `label`: an `int32` feature.
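As an illustration of how these fields compose one multiple-choice example, the sketch below formats a validation instance into a prompt (the Hub id is assumed from this repository's path):
```python
from datasets import load_dataset

# A sketch, assuming the Hub id from this repository's path.
ds = load_dataset("allenai/cosmos_qa", split="validation")
ex = ds[0]

choices = [ex[f"answer{i}"] for i in range(4)]
prompt = (
    f"{ex['context']}\n\n"
    f"Q: {ex['question']}\n"
    + "\n".join(f"{i}. {choice}" for i, choice in enumerate(choices))
)
gold = choices[ex["label"]]  # `label` indexes the correct answer choice
```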
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|25262| 2985|6963|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
As reported via email by Yejin Choi, the dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@inproceedings{huang-etal-2019-cosmos,
title = "Cosmos {QA}: Machine Reading Comprehension with Contextual Commonsense Reasoning",
author = "Huang, Lifu and
Le Bras, Ronan and
Bhagavatula, Chandra and
Choi, Yejin",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1243",
doi = "10.18653/v1/D19-1243",
pages = "2391--2401",
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
ZihanWangKi/conllpp | ZihanWangKi | 2024-01-18T09:35:35Z | 214 | 16 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|conll2003",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|conll2003
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: conll
pretty_name: CoNLL++
train-eval-index:
- config: conllpp
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
0: '"'
1: ''''''
2: '#'
3: $
4: (
5: )
6: ','
7: .
8: ':'
9: '``'
10: CC
11: CD
12: DT
13: EX
14: FW
15: IN
16: JJ
17: JJR
18: JJS
19: LS
20: MD
21: NN
22: NNP
23: NNPS
24: NNS
25: NN|SYM
26: PDT
27: POS
28: PRP
29: PRP$
30: RB
31: RBR
32: RBS
33: RP
34: SYM
35: TO
36: UH
37: VB
38: VBD
39: VBG
40: VBN
41: VBP
42: VBZ
43: WDT
44: WP
45: WP$
46: WRB
- name: chunk_tags
sequence:
class_label:
names:
0: O
1: B-ADJP
2: I-ADJP
3: B-ADVP
4: I-ADVP
5: B-CONJP
6: I-CONJP
7: B-INTJ
8: I-INTJ
9: B-LST
10: I-LST
11: B-NP
12: I-NP
13: B-PP
14: I-PP
15: B-PRT
16: I-PRT
17: B-SBAR
18: I-SBAR
19: B-UCP
20: I-UCP
21: B-VP
22: I-VP
- name: ner_tags
sequence:
class_label:
names:
0: O
1: B-PER
2: I-PER
3: B-ORG
4: I-ORG
5: B-LOC
6: I-LOC
7: B-MISC
8: I-MISC
config_name: conllpp
splits:
- name: train
num_bytes: 6931393
num_examples: 14041
- name: validation
num_bytes: 1739247
num_examples: 3250
- name: test
num_bytes: 1582078
num_examples: 3453
download_size: 4859600
dataset_size: 10252718
---
# Dataset Card for "conllpp"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1519)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
CoNLLpp is a corrected version of the CoNLL2003 NER dataset, in which the labels of 5.38% of the sentences in the test set
have been manually corrected. The training and development sets from CoNLL2003 are included for completeness. One
example of a correction in the test set:
```
{
"tokens": ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."],
"original_ner_tags_in_conll2003": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"],
"corrected_ner_tags_in_conllpp": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"],
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
#### conllpp
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
"id": "0",
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```
### Data Fields
The data fields are the same among all splits.
#### conllpp
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
- `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4).
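A minimal sketch of decoding the integer tags back into their label names via the features metadata (the Hub id is assumed from this repository's path):
```python
from datasets import load_dataset

# A sketch, assuming the Hub id from this repository's path.
ds = load_dataset("ZihanWangKi/conllpp", split="test")
ner_names = ds.features["ner_tags"].feature.names  # ["O", "B-PER", "I-PER", ...]

ex = ds[0]
for token, tag in zip(ex["tokens"], ex["ner_tags"]):
    print(f"{token}\t{ner_names[tag]}")
```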
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conllpp  |14041|      3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{wang2019crossweigh,
title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations},
author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={5157--5166},
year={2019}
}
```
### Contributions
Thanks to [@ZihanWangKi](https://github.com/ZihanWangKi) for adding this dataset. |
ontonotes/conll2012_ontonotesv5 | ontonotes | 2024-01-18T09:34:57Z | 968 | 43 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"task_ids:coreference-resolution",
"task_ids:parsing",
"task_ids:lemmatization",
"task_ids:word-sense-disambiguation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:en",
"language:zh",
"license:cc-by-nc-nd-4.0",
"size_categories:10K<n<100K",
"region:us",
"semantic-role-labeling"
] | [
"token-classification"
] | 2022-03-15T10:48:28Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- ar
- en
- zh
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
- coreference-resolution
- parsing
- lemmatization
- word-sense-disambiguation
paperswithcode_id: ontonotes-5-0
pretty_name: CoNLL2012 shared task data based on OntoNotes 5.0
tags:
- semantic-role-labeling
dataset_info:
- config_name: english_v4
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': XX
'1': '``'
'2': $
'3': ''''''
'4': ','
'5': -LRB-
'6': -RRB-
'7': .
'8': ':'
'9': ADD
'10': AFX
'11': CC
'12': CD
'13': DT
'14': EX
'15': FW
'16': HYPH
'17': IN
'18': JJ
'19': JJR
'20': JJS
'21': LS
'22': MD
'23': NFP
'24': NN
'25': NNP
'26': NNPS
'27': NNS
'28': PDT
'29': POS
'30': PRP
'31': PRP$
'32': RB
'33': RBR
'34': RBS
'35': RP
'36': SYM
'37': TO
'38': UH
'39': VB
'40': VBD
'41': VBG
'42': VBN
'43': VBP
'44': VBZ
'45': WDT
'46': WP
'47': WP$
'48': WRB
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 112246121
num_examples: 1940
- name: validation
num_bytes: 14116925
num_examples: 222
- name: test
num_bytes: 14709044
num_examples: 222
download_size: 193644139
dataset_size: 141072090
- config_name: chinese_v4
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': X
'1': AD
'2': AS
'3': BA
'4': CC
'5': CD
'6': CS
'7': DEC
'8': DEG
'9': DER
'10': DEV
'11': DT
'12': ETC
'13': FW
'14': IJ
'15': INF
'16': JJ
'17': LB
'18': LC
'19': M
'20': MSP
'21': NN
'22': NR
'23': NT
'24': OD
'25': 'ON'
'26': P
'27': PN
'28': PU
'29': SB
'30': SP
'31': URL
'32': VA
'33': VC
'34': VE
'35': VV
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 77195698
num_examples: 1391
- name: validation
num_bytes: 10828169
num_examples: 172
- name: test
num_bytes: 9585138
num_examples: 166
download_size: 193644139
dataset_size: 97609005
- config_name: arabic_v4
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence: string
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 42017761
num_examples: 359
- name: validation
num_bytes: 4859292
num_examples: 44
- name: test
num_bytes: 4900664
num_examples: 44
download_size: 193644139
dataset_size: 51777717
- config_name: english_v12
features:
- name: document_id
dtype: string
- name: sentences
list:
- name: part_id
dtype: int32
- name: words
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': XX
'1': '``'
'2': $
'3': ''''''
'4': '*'
'5': ','
'6': -LRB-
'7': -RRB-
'8': .
'9': ':'
'10': ADD
'11': AFX
'12': CC
'13': CD
'14': DT
'15': EX
'16': FW
'17': HYPH
'18': IN
'19': JJ
'20': JJR
'21': JJS
'22': LS
'23': MD
'24': NFP
'25': NN
'26': NNP
'27': NNPS
'28': NNS
'29': PDT
'30': POS
'31': PRP
'32': PRP$
'33': RB
'34': RBR
'35': RBS
'36': RP
'37': SYM
'38': TO
'39': UH
'40': VB
'41': VBD
'42': VBG
'43': VBN
'44': VBP
'45': VBZ
'46': VERB
'47': WDT
'48': WP
'49': WP$
'50': WRB
- name: parse_tree
dtype: string
- name: predicate_lemmas
sequence: string
- name: predicate_framenet_ids
sequence: string
- name: word_senses
sequence: float32
- name: speaker
dtype: string
- name: named_entities
sequence:
class_label:
names:
'0': O
'1': B-PERSON
'2': I-PERSON
'3': B-NORP
'4': I-NORP
'5': B-FAC
'6': I-FAC
'7': B-ORG
'8': I-ORG
'9': B-GPE
'10': I-GPE
'11': B-LOC
'12': I-LOC
'13': B-PRODUCT
'14': I-PRODUCT
'15': B-DATE
'16': I-DATE
'17': B-TIME
'18': I-TIME
'19': B-PERCENT
'20': I-PERCENT
'21': B-MONEY
'22': I-MONEY
'23': B-QUANTITY
'24': I-QUANTITY
'25': B-ORDINAL
'26': I-ORDINAL
'27': B-CARDINAL
'28': I-CARDINAL
'29': B-EVENT
'30': I-EVENT
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
'33': B-LAW
'34': I-LAW
'35': B-LANGUAGE
'36': I-LANGUAGE
- name: srl_frames
list:
- name: verb
dtype: string
- name: frames
sequence: string
- name: coref_spans
sequence:
sequence: int32
length: 3
splits:
- name: train
num_bytes: 174173192
num_examples: 10539
- name: validation
num_bytes: 24264804
num_examples: 1370
- name: test
num_bytes: 18254144
num_examples: 1200
download_size: 193644139
dataset_size: 216692140
---
# Dataset Card for CoNLL2012 shared task data based on OntoNotes 5.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html)
- **Repository:** [Mendeley](https://data.mendeley.com/datasets/zmycy7t9h9)
- **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OntoNotes v5.0 is the final version of the OntoNotes corpus, a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the extended version of OntoNotes v5.0 used in the CoNLL-2012 shared task.
It includes v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected v12 train/dev/test data (English only).
The source of the data is the Mendeley Data repo [ontonotes-conll2012](https://data.mendeley.com/datasets/zmycy7t9h9), which appears to be the same as the official data, but users should rely on it at their own discretion.
See also the summaries on Papers with Code: [OntoNotes 5.0](https://paperswithcode.com/dataset/ontonotes-5-0) and [CoNLL-2012](https://paperswithcode.com/dataset/conll-2012-1).
For more detailed info on the dataset, such as annotation and tag sets, refer to the documents in the Mendeley repo mentioned above.
### Supported Tasks and Leaderboards
- [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5)
- [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes)
- [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes)
- ...
### Languages
V4 data for Arabic, Chinese, English, and V12 data for English
## Dataset Structure
### Data Instances
```
{'document_id': 'nw/wsj/23/wsj_2311',
 'sentences': [{'part_id': 0,
                'words': ['CONCORDE', 'trans-Atlantic', 'flights', 'are', '$', '2,400', 'to', 'Paris', 'and', '$', '3,200', 'to', 'London', '.'],
                'pos_tags': [25, 18, 27, 43, 2, 12, 17, 25, 11, 2, 12, 17, 25, 7],
                'parse_tree': '(TOP(S(NP (NNP CONCORDE) (JJ trans-Atlantic) (NNS flights) )(VP (VBP are) (NP(NP(NP ($ $) (CD 2,400) )(PP (IN to) (NP (NNP Paris) ))) (CC and) (NP(NP ($ $) (CD 3,200) )(PP (IN to) (NP (NNP London) ))))) (. .) ))',
                'predicate_lemmas': [None, None, None, 'be', None, None, None, None, None, None, None, None, None, None],
                'predicate_framenet_ids': [None, None, None, '01', None, None, None, None, None, None, None, None, None, None],
                'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None, None],
                'speaker': None,
                'named_entities': [7, 6, 0, 0, 0, 15, 0, 5, 0, 0, 15, 0, 5, 0],
                'srl_frames': [{'frames': ['B-ARG1', 'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'I-ARG2', 'O'],
                                'verb': 'are'}],
                'coref_spans': []},
               {'part_id': 0,
                'words': ['In', 'a', 'Centennial', 'Journal', 'article', 'Oct.', '5', ',', 'the', 'fares', 'were', 'reversed', '.'],
                'pos_tags': [17, 13, 25, 25, 24, 25, 12, 4, 13, 27, 40, 42, 7],
                'parse_tree': '(TOP(S(PP (IN In) (NP (DT a) (NML (NNP Centennial) (NNP Journal) ) (NN article) ))(NP (NNP Oct.) (CD 5) ) (, ,) (NP (DT the) (NNS fares) )(VP (VBD were) (VP (VBN reversed) )) (. .) ))',
                'predicate_lemmas': [None, None, None, None, None, None, None, None, None, None, None, 'reverse', None],
                'predicate_framenet_ids': [None, None, None, None, None, None, None, None, None, None, None, '01', None],
                'word_senses': [None, None, None, None, None, None, None, None, None, None, None, None, None],
                'speaker': None,
                'named_entities': [0, 0, 4, 22, 0, 12, 30, 0, 0, 0, 0, 0, 0],
                'srl_frames': [{'frames': ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'B-ARGM-TMP', 'I-ARGM-TMP', 'O', 'B-ARG1', 'I-ARG1', 'O', 'B-V', 'O'],
                                'verb': 'reversed'}],
                'coref_spans': []}]}
```
### Data Fields
- **`document_id`** (*`str`*): This is a variation on the document filename
- **`sentences`** (*`List[Dict]`*): All sentences of the same document are in a single example for the convenience of concatenating sentences.
Every element in `sentences` is a *`Dict`* composed of the following data fields:
- **`part_id`** (*`int`*) : Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- **`words`** (*`List[str]`*) : The tokens of the sentence.
- **`pos_tags`** (*`List[ClassLabel]` or `List[str]`*) : This is the Penn-Treebank-style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with an XX tag. The verb is marked with just a VERB tag.
  - tag set : Note that the tag sets below were found by scanning all the data, and they seem to differ slightly from the officially stated tag sets. See the official documents in the [Mendeley repo](https://data.mendeley.com/datasets/zmycy7t9h9).
    - arabic : `str`. Because POS tags in Arabic are compound and complex, they are hard to represent with `ClassLabel`.
- chinese v4 : `datasets.ClassLabel(num_classes=36, names=["X", "AD", "AS", "BA", "CC", "CD", "CS", "DEC", "DEG", "DER", "DEV", "DT", "ETC", "FW", "IJ", "INF", "JJ", "LB", "LC", "M", "MSP", "NN", "NR", "NT", "OD", "ON", "P", "PN", "PU", "SB", "SP", "URL", "VA", "VC", "VE", "VV",])`, where `X` is for pos tag missing
- english v4 : `datasets.ClassLabel(num_classes=49, names=["XX", "``", "$", "''", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
    - english v12 : `datasets.ClassLabel(num_classes=51, names=["XX", "``", "$", "''", "*", ",", "-LRB-", "-RRB-", ".", ":", "ADD", "AFX", "CC", "CD", "DT", "EX", "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NFP", "NN", "NNP", "NNPS", "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "VERB", "WDT", "WP", "WP$", "WRB",])`, where `XX` is for pos tag missing, and `-LRB-`/`-RRB-` is "`(`" / "`)`".
- **`parse_tree`** (*`Optional[str]`*) : A serialized NLTK Tree representing the parse. It includes POS tags as pre-terminal nodes. When the parse information is missing, the parse will be `None`.
- **`predicate_lemmas`** (*`List[Optional[str]]`*) : The predicate lemma of the words for which we have semantic role information or word sense information. All other indices are `None`.
- **`predicate_framenet_ids`** (*`List[Optional[int]]`*) : The PropBank frameset ID of the lemmas in predicate_lemmas, or `None`.
- **`word_senses`** (*`List[Optional[float]]`*) : The word senses for the words in the sentence, or None. These are floats because the word sense can have values after the decimal, like 1.1.
- **`speaker`** (*`Optional[str]`*) : This is the speaker or author name where available. Mostly in Broadcast Conversation and Web Log data. When it is not available, it will be `None`.
- **`named_entities`** (*`List[ClassLabel]`*) : The BIO tags for named entities in the sentence.
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])`
- **`srl_frames`** (*`List[{"verb": str, "frames": List[str]}]`*) : A list of dictionaries, one per verb in the sentence, giving the PropBank frame labels in a BIO format.
- **`coref_spans`** (*`List[List[int]]`*) : The spans for entity mentions involved in coreference resolution within the sentence. Each element is a tuple composed of (cluster_id, start_index, end_index). Indices are inclusive. A decoding sketch follows this list.
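As a minimal sketch of working with these fields, the snippet below groups the `coref_spans` of every sentence in a document into clusters of mention strings (the Hub id is assumed from this repository's path):
```python
from collections import defaultdict
from datasets import load_dataset

# A sketch, not an official script; the Hub id is assumed from this repo's path.
ds = load_dataset("ontonotes/conll2012_ontonotesv5", "english_v4", split="validation")
doc = ds[0]

clusters = defaultdict(list)
for sentence in doc["sentences"]:
    for cluster_id, start, end in sentence["coref_spans"]:
        # Indices are inclusive, hence the `end + 1` slice bound.
        clusters[cluster_id].append(" ".join(sentence["words"][start : end + 1]))

for cluster_id, mentions in clusters.items():
    print(cluster_id, mentions)
```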
### Data Splits
Each dataset (arabic_v4, chinese_v4, english_v4, english_v12) has 3 splits: _train_, _validation_, and _test_
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{pradhan-etal-2013-towards,
title = "Towards Robust Linguistic Analysis using {O}nto{N}otes",
author = {Pradhan, Sameer and
Moschitti, Alessandro and
Xue, Nianwen and
Ng, Hwee Tou and
Bj{\"o}rkelund, Anders and
Uryupina, Olga and
Zhang, Yuchen and
Zhong, Zhi},
booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-3516",
pages = "143--152",
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset. |
zefang-liu/phishing-email-dataset | zefang-liu | 2024-01-17T23:48:20Z | 476 | 11 | [
"task_categories:text-classification",
"language:en",
"license:lgpl-3.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2024-01-17T23:36:31Z | 2 | ---
license: lgpl-3.0
language:
- en
task_categories:
- text-classification
size_categories:
- 10K<n<100K
---
# Phishing Email Dataset
This dataset on Hugging Face is a direct copy of the 'Phishing Email Detection' dataset from Kaggle, shared under the [GNU Lesser General Public License 3.0](https://www.gnu.org/licenses/lgpl-3.0.html). The dataset was originally created by the user '[Cyber Cop](https://www.kaggle.com/subhajournal)' on Kaggle. For complete details, including licensing and usage information, please visit the [original Kaggle page](https://www.kaggle.com/datasets/subhajournal/phishingemails).
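A minimal sketch of loading the copy hosted here; since the CSV schema is not documented above, the sketch inspects the columns rather than assuming their names:
```python
from datasets import load_dataset

# A sketch; the column names are not documented above, so inspect them first.
ds = load_dataset("zefang-liu/phishing-email-dataset", split="train")
print(ds.column_names)
print(ds[0])
```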
|
ucinlp/drop | ucinlp | 2024-01-17T08:15:43Z | 2,714 | 53 | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_ids:extractive-qa",
"task_ids:abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1903.00161",
"region:us"
] | [
"question-answering",
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
- text2text-generation
task_ids:
- extractive-qa
- abstractive-qa
paperswithcode_id: drop
pretty_name: DROP
dataset_info:
features:
- name: section_id
dtype: string
- name: query_id
dtype: string
- name: passage
dtype: string
- name: question
dtype: string
- name: answers_spans
sequence:
- name: spans
dtype: string
- name: types
dtype: string
splits:
- name: train
num_bytes: 105572506
num_examples: 77400
- name: validation
num_bytes: 11737755
num_examples: 9535
download_size: 11538387
dataset_size: 117310261
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
# Dataset Card for "drop"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://allenai.org/data/drop
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://aclanthology.org/N19-1246/
- **Paper:** https://arxiv.org/abs/1903.00161
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8.30 MB
- **Size of the generated dataset:** 110.91 MB
- **Total amount of disk used:** 119.21 MB
### Dataset Summary
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs.
DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a
question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or
sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was
necessary for prior datasets.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 8.30 MB
- **Size of the generated dataset:** 110.91 MB
- **Total amount of disk used:** 119.21 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"answers_spans": {
"spans": ["Chaz Schilens"]
},
"passage": "\" Hoping to rebound from their loss to the Patriots, the Raiders stayed at home for a Week 16 duel with the Houston Texans. Oak...",
"question": "Who scored the first touchdown of the game?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `passage`: a `string` feature.
- `question`: a `string` feature.
- `answers_spans`: a dictionary feature containing:
  - `spans`: a `string` feature.
  - `types`: a `string` feature.
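A minimal sketch of a simple exact-match check against the gold spans; note that the official DROP metric is more involved (macro-averaged F1 with number and date normalization), so this is only an illustration:
```python
from datasets import load_dataset

# A sketch, assuming the Hub id from this repository's path.
ds = load_dataset("ucinlp/drop", split="validation")
ex = ds[0]
gold_spans = ex["answers_spans"]["spans"]  # one or more acceptable answer strings

def exact_match(prediction, golds):
    # Lowercase and collapse whitespace before comparing.
    normalize = lambda s: " ".join(s.lower().split())
    return any(normalize(prediction) == normalize(g) for g in golds)

print(exact_match("Chaz Schilens", gold_spans))
```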
### Data Splits
| name |train|validation|
|-------|----:|---------:|
|default|77409| 9536|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{Dua2019DROP,
author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
title={ {DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},
booktitle={Proc. of NAACL},
year={2019}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset. |
clue/clue | clue | 2024-01-17T07:48:08Z | 2,758 | 43 | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_ids:topic-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:natural-language-inference",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2004.05986",
"region:us",
"coreference-nli",
"qa-nli"
] | [
"text-classification",
"multiple-choice"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- other
language_creators:
- other
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
- multiple-choice
task_ids:
- topic-classification
- semantic-similarity-scoring
- natural-language-inference
- multiple-choice-qa
paperswithcode_id: clue
pretty_name: 'CLUE: Chinese Language Understanding Evaluation benchmark'
tags:
- coreference-nli
- qa-nli
dataset_info:
- config_name: afqmc
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 378718
num_examples: 3861
- name: train
num_bytes: 3396503
num_examples: 34334
- name: validation
num_bytes: 426285
num_examples: 4316
download_size: 2337418
dataset_size: 4201506
- config_name: c3
features:
- name: id
dtype: int32
- name: context
sequence: string
- name: question
dtype: string
- name: choice
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1600142
num_examples: 1625
- name: train
num_bytes: 9672739
num_examples: 11869
- name: validation
num_bytes: 2990943
num_examples: 3816
download_size: 4718960
dataset_size: 14263824
- config_name: chid
features:
- name: idx
dtype: int32
- name: candidates
sequence: string
- name: content
sequence: string
- name: answers
sequence:
- name: text
dtype: string
- name: candidate_id
dtype: int32
splits:
- name: test
num_bytes: 11480435
num_examples: 3447
- name: train
num_bytes: 252477926
num_examples: 84709
- name: validation
num_bytes: 10117761
num_examples: 3218
download_size: 198468807
dataset_size: 274076122
- config_name: cluewsc2020
features:
- name: idx
dtype: int32
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'true'
'1': 'false'
- name: target
struct:
- name: span1_text
dtype: string
- name: span2_text
dtype: string
- name: span1_index
dtype: int32
- name: span2_index
dtype: int32
splits:
- name: test
num_bytes: 645637
num_examples: 2574
- name: train
num_bytes: 288816
num_examples: 1244
- name: validation
num_bytes: 72670
num_examples: 304
download_size: 380611
dataset_size: 1007123
- config_name: cmnli
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': entailment
'2': contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 2386821
num_examples: 13880
- name: train
num_bytes: 67684989
num_examples: 391783
- name: validation
num_bytes: 2051829
num_examples: 12241
download_size: 54234919
dataset_size: 72123639
- config_name: cmrc2018
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 3112042
num_examples: 2000
- name: train
num_bytes: 15508062
num_examples: 10142
- name: validation
num_bytes: 5183785
num_examples: 3219
- name: trial
num_bytes: 1606907
num_examples: 1002
download_size: 5459001
dataset_size: 25410796
- config_name: csl
features:
- name: idx
dtype: int32
- name: corpus_id
dtype: int32
- name: abst
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
- name: keyword
sequence: string
splits:
- name: test
num_bytes: 2463728
num_examples: 3000
- name: train
num_bytes: 16478890
num_examples: 20000
- name: validation
num_bytes: 2464563
num_examples: 3000
download_size: 3936111
dataset_size: 21407181
- config_name: diagnostics
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': entailment
'2': contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 42392
num_examples: 514
download_size: 23000
dataset_size: 42392
- config_name: drcd
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: test
num_bytes: 4982378
num_examples: 3493
- name: train
num_bytes: 37443386
num_examples: 26936
- name: validation
num_bytes: 5222729
num_examples: 3524
download_size: 11188875
dataset_size: 47648493
- config_name: iflytek
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
'10': '10'
'11': '11'
'12': '12'
'13': '13'
'14': '14'
'15': '15'
'16': '16'
'17': '17'
'18': '18'
'19': '19'
'20': '20'
'21': '21'
'22': '22'
'23': '23'
'24': '24'
'25': '25'
'26': '26'
'27': '27'
'28': '28'
'29': '29'
'30': '30'
'31': '31'
'32': '32'
'33': '33'
'34': '34'
'35': '35'
'36': '36'
'37': '37'
'38': '38'
'39': '39'
'40': '40'
'41': '41'
'42': '42'
'43': '43'
'44': '44'
'45': '45'
'46': '46'
'47': '47'
'48': '48'
'49': '49'
'50': '50'
'51': '51'
'52': '52'
'53': '53'
'54': '54'
'55': '55'
'56': '56'
'57': '57'
'58': '58'
'59': '59'
'60': '60'
'61': '61'
'62': '62'
'63': '63'
'64': '64'
'65': '65'
'66': '66'
'67': '67'
'68': '68'
'69': '69'
'70': '70'
'71': '71'
'72': '72'
'73': '73'
'74': '74'
'75': '75'
'76': '76'
'77': '77'
'78': '78'
'79': '79'
'80': '80'
'81': '81'
'82': '82'
'83': '83'
'84': '84'
'85': '85'
'86': '86'
'87': '87'
'88': '88'
'89': '89'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
'100': '100'
'101': '101'
'102': '102'
'103': '103'
'104': '104'
'105': '105'
'106': '106'
'107': '107'
'108': '108'
'109': '109'
'110': '110'
'111': '111'
'112': '112'
'113': '113'
'114': '114'
'115': '115'
'116': '116'
'117': '117'
'118': '118'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 2105684
num_examples: 2600
- name: train
num_bytes: 10028605
num_examples: 12133
- name: validation
num_bytes: 2157119
num_examples: 2599
download_size: 9777855
dataset_size: 14291408
- config_name: ocnli
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': neutral
'1': entailment
'2': contradiction
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 376058
num_examples: 3000
- name: train
num_bytes: 6187142
num_examples: 50437
- name: validation
num_bytes: 366227
num_examples: 2950
download_size: 3000218
dataset_size: 6929427
- config_name: tnews
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': '100'
'1': '101'
'2': '102'
'3': '103'
'4': '104'
'5': '106'
'6': '107'
'7': '108'
'8': '109'
'9': '110'
'10': '112'
'11': '113'
'12': '114'
'13': '115'
'14': '116'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 810970
num_examples: 10000
- name: train
num_bytes: 4245677
num_examples: 53360
- name: validation
num_bytes: 797922
num_examples: 10000
download_size: 4697843
dataset_size: 5854569
configs:
- config_name: afqmc
data_files:
- split: test
path: afqmc/test-*
- split: train
path: afqmc/train-*
- split: validation
path: afqmc/validation-*
- config_name: c3
data_files:
- split: test
path: c3/test-*
- split: train
path: c3/train-*
- split: validation
path: c3/validation-*
- config_name: chid
data_files:
- split: test
path: chid/test-*
- split: train
path: chid/train-*
- split: validation
path: chid/validation-*
- config_name: cluewsc2020
data_files:
- split: test
path: cluewsc2020/test-*
- split: train
path: cluewsc2020/train-*
- split: validation
path: cluewsc2020/validation-*
- config_name: cmnli
data_files:
- split: test
path: cmnli/test-*
- split: train
path: cmnli/train-*
- split: validation
path: cmnli/validation-*
- config_name: cmrc2018
data_files:
- split: test
path: cmrc2018/test-*
- split: train
path: cmrc2018/train-*
- split: validation
path: cmrc2018/validation-*
- split: trial
path: cmrc2018/trial-*
- config_name: csl
data_files:
- split: test
path: csl/test-*
- split: train
path: csl/train-*
- split: validation
path: csl/validation-*
- config_name: diagnostics
data_files:
- split: test
path: diagnostics/test-*
- config_name: drcd
data_files:
- split: test
path: drcd/test-*
- split: train
path: drcd/train-*
- split: validation
path: drcd/validation-*
- config_name: iflytek
data_files:
- split: test
path: iflytek/test-*
- split: train
path: iflytek/train-*
- split: validation
path: iflytek/validation-*
- config_name: ocnli
data_files:
- split: test
path: ocnli/test-*
- split: train
path: ocnli/train-*
- split: validation
path: ocnli/validation-*
- config_name: tnews
data_files:
- split: test
path: tnews/test-*
- split: train
path: tnews/train-*
- split: validation
path: tnews/validation-*
---
# Dataset Card for "clue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cluebenchmarks.com
- **Repository:** https://github.com/CLUEbenchmark/CLUE
- **Paper:** [CLUE: A Chinese Language Understanding Evaluation Benchmark](https://aclanthology.org/2020.coling-main.419/) ([arXiv](https://arxiv.org/abs/2004.05986))
- **Point of Contact:** [Zhenzhong Lan](mailto:[email protected])
- **Size of downloaded dataset files:** 198.68 MB
- **Size of the generated dataset:** 486.34 MB
- **Total amount of disk used:** 685.02 MB
### Dataset Summary
CLUE, A Chinese Language Understanding Evaluation Benchmark
(https://www.cluebenchmarks.com/) is a collection of resources for training,
evaluating, and analyzing Chinese language understanding systems.
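Each CLUE task is exposed as a separate configuration that is passed to `load_dataset` by name. A minimal loading sketch (assuming the Hub id `clue` and an installed `datasets` library):
```python
from datasets import load_dataset

# Load one CLUE task by its config name; other configs include
# "tnews", "iflytek", "ocnli", "cmnli", "cmrc2018", "c3", etc.
afqmc = load_dataset("clue", "afqmc")

print(afqmc)                   # DatasetDict with train/validation/test splits
print(afqmc["validation"][0])  # one example: sentence1, sentence2, label, idx
```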
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### afqmc
- **Size of downloaded dataset files:** 1.20 MB
- **Size of the generated dataset:** 4.20 MB
- **Total amount of disk used:** 5.40 MB
An example of 'validation' looks as follows.
```
{
"idx": 0,
"label": 0,
"sentence1": "双十一花呗提额在哪",
"sentence2": "里可以提花呗额度"
}
```
#### c3
- **Size of downloaded dataset files:** 3.20 MB
- **Size of the generated dataset:** 15.69 MB
- **Total amount of disk used:** 18.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "比人的灵敏",
"choice": ["没有人的灵敏", "和人的差不多", "和人的一样好", "比人的灵敏"],
"context": "[\"许多动物的某些器官感觉特别灵敏,它们能比人类提前知道一些灾害事件的发生,例如,海洋中的水母能预报风暴,老鼠能事先躲避矿井崩塌或有害气体,等等。地震往往能使一些动物的某些感觉器官受到刺激而发生异常反应。如一个地区的重力发生变异,某些动物可能通过它们的平衡...",
"id": 1,
"question": "动物的器官感觉与人的相比有什么不同?"
}
```
#### chid
- **Size of downloaded dataset files:** 139.20 MB
- **Size of the generated dataset:** 274.08 MB
- **Total amount of disk used:** 413.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": {
"candidate_id": [3, 5, 6, 1, 7, 4, 0],
"text": ["碌碌无为", "无所作为", "苦口婆心", "得过且过", "未雨绸缪", "软硬兼施", "传宗接代"]
},
"candidates": "[\"传宗接代\", \"得过且过\", \"咄咄逼人\", \"碌碌无为\", \"软硬兼施\", \"无所作为\", \"苦口婆心\", \"未雨绸缪\", \"和衷共济\", \"人老珠黄\"]...",
"content": "[\"谈到巴萨目前的成就,瓜迪奥拉用了“坚持”两个字来形容。自从上世纪90年代克鲁伊夫带队以来,巴萨就坚持每年都有拉玛西亚球员进入一队的传统。即便是范加尔时代,巴萨强力推出的“巴萨五鹰”德拉·佩纳、哈维、莫雷罗、罗杰·加西亚和贝拉乌桑几乎#idiom0000...",
"idx": 0
}
```
#### cluewsc2020
- **Size of downloaded dataset files:** 0.28 MB
- **Size of the generated dataset:** 1.03 MB
- **Total amount of disk used:** 1.29 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"label": 1,
"target": {
"span1_index": 3,
"span1_text": "伤口",
"span2_index": 27,
"span2_text": "它们"
},
"text": "裂开的伤口涂满尘土,里面有碎石子和木头刺,我小心翼翼把它们剔除出去。"
}
```
#### cmnli
- **Size of downloaded dataset files:** 31.40 MB
- **Size of the generated dataset:** 72.12 MB
- **Total amount of disk used:** 103.53 MB
An example of 'train' looks as follows.
```
{
"idx": 0,
"label": 0,
"sentence1": "从概念上讲,奶油略读有两个基本维度-产品和地理。",
"sentence2": "产品和地理位置是使奶油撇油起作用的原因。"
}
```
### Data Fields
The data fields are the same among all splits.
#### afqmc
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `0` (0), `1` (1).
- `idx`: a `int32` feature.
#### c3
- `id`: a `int32` feature.
- `context`: a `list` of `string` features.
- `question`: a `string` feature.
- `choice`: a `list` of `string` features.
- `answer`: a `string` feature.
#### chid
- `idx`: a `int32` feature.
- `candidates`: a `list` of `string` features.
- `content`: a `list` of `string` features.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `candidate_id`: a `int32` feature.
#### cluewsc2020
- `idx`: a `int32` feature.
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `true` (0), `false` (1).
- `span1_text`: a `string` feature.
- `span2_text`: a `string` feature.
- `span1_index`: a `int32` feature.
- `span2_index`: a `int32` feature.
#### cmnli
- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `neutral` (0), `entailment` (1), `contradiction` (2).
- `idx`: a `int32` feature.
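Because `label` is stored as a `ClassLabel` integer, the feature metadata can recover the human-readable names. A short sketch using the `ocnli` config (the split choice is illustrative):
```python
from datasets import load_dataset

ocnli = load_dataset("clue", "ocnli", split="validation")
# ClassLabel(names=["neutral", "entailment", "contradiction"])
label_feature = ocnli.features["label"]

ex = ocnli[0]
# int2str maps the stored integer back to its class name
print(ex["sentence1"], "|", ex["sentence2"], "->", label_feature.int2str(ex["label"]))
```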
### Data Splits
| name |train |validation|test |
|-----------|-----:|---------:|----:|
|afqmc | 34334| 4316| 3861|
|c3 | 11869| 3816| 3892|
|chid | 84709| 3218| 3231|
|cluewsc2020| 1244| 304| 290|
|cmnli |391783| 12241|13880|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{xu-etal-2020-clue,
title = "{CLUE}: A {C}hinese Language Understanding Evaluation Benchmark",
author = "Xu, Liang and
Hu, Hai and
Zhang, Xuanwei and
Li, Lu and
Cao, Chenjie and
Li, Yudong and
Xu, Yechen and
Sun, Kai and
Yu, Dian and
Yu, Cong and
Tian, Yin and
Dong, Qianqian and
Liu, Weitang and
Shi, Bo and
Cui, Yiming and
Li, Junyi and
Zeng, Jun and
Wang, Rongzhao and
Xie, Weijian and
Li, Yanting and
Patterson, Yina and
Tian, Zuoyu and
Zhang, Yiwen and
Zhou, He and
Liu, Shaoweihua and
Zhao, Zhe and
Zhao, Qipeng and
Yue, Cong and
Zhang, Xinrui and
Yang, Zhengliang and
Richardson, Kyle and
Lan, Zhenzhong",
booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2020.coling-main.419",
doi = "10.18653/v1/2020.coling-main.419",
pages = "4762--4772",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
cam-cst/cbt | cam-cst | 2024-01-16T16:01:16Z | 824 | 15 | [
"task_categories:other",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:gfdl",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1511.02301",
"region:us"
] | [
"other",
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- gfdl
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- n<1K
source_datasets:
- original
task_categories:
- other
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: cbt
pretty_name: Children’s Book Test (CBT)
config_names:
- CN
- NE
- P
- V
- raw
dataset_info:
- config_name: CN
features:
- name: sentences
sequence: string
- name: question
dtype: string
- name: answer
dtype: string
- name: options
sequence: string
splits:
- name: train
num_bytes: 301730151
num_examples: 120769
- name: test
num_bytes: 6138376
num_examples: 2500
- name: validation
num_bytes: 4737257
num_examples: 2000
download_size: 31615166
dataset_size: 312605784
- config_name: NE
features:
- name: sentences
sequence: string
- name: question
dtype: string
- name: answer
dtype: string
- name: options
sequence: string
splits:
- name: train
num_bytes: 253551931
num_examples: 108719
- name: test
num_bytes: 5707734
num_examples: 2500
- name: validation
num_bytes: 4424316
num_examples: 2000
download_size: 29693075
dataset_size: 263683981
- config_name: P
features:
- name: sentences
sequence: string
- name: question
dtype: string
- name: answer
dtype: string
- name: options
sequence: string
splits:
- name: train
num_bytes: 852852601
num_examples: 334030
- name: test
num_bytes: 6078048
num_examples: 2500
- name: validation
num_bytes: 4776981
num_examples: 2000
download_size: 43825356
dataset_size: 863707630
- config_name: V
features:
- name: sentences
sequence: string
- name: question
dtype: string
- name: answer
dtype: string
- name: options
sequence: string
splits:
- name: train
num_bytes: 252177649
num_examples: 105825
- name: test
num_bytes: 5806625
num_examples: 2500
- name: validation
num_bytes: 4556425
num_examples: 2000
download_size: 29992082
dataset_size: 262540699
- config_name: raw
features:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 25741580
num_examples: 98
- name: test
num_bytes: 1528704
num_examples: 5
- name: validation
num_bytes: 1182657
num_examples: 5
download_size: 16350790
dataset_size: 28452941
configs:
- config_name: CN
data_files:
- split: train
path: CN/train-*
- split: test
path: CN/test-*
- split: validation
path: CN/validation-*
- config_name: NE
data_files:
- split: train
path: NE/train-*
- split: test
path: NE/test-*
- split: validation
path: NE/validation-*
- config_name: P
data_files:
- split: train
path: P/train-*
- split: test
path: P/test-*
- split: validation
path: P/validation-*
- config_name: V
data_files:
- split: train
path: V/train-*
- split: test
path: V/test-*
- split: validation
path: V/validation-*
- config_name: raw
data_files:
- split: train
path: raw/train-*
- split: test
path: raw/test-*
- split: validation
path: raw/validation-*
---
# Dataset Card for CBT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**[The bAbI project](https://research.fb.com/downloads/babi/)
- **Repository:**
- **Paper:** [arXiv Paper](https://arxiv.org/pdf/1511.02301.pdf)
- **Leaderboard:**
- **Point of Contact:** [Felix Hill](mailto:[email protected]) or [Antoine Bordes](mailto:[email protected]).
### Dataset Summary
The Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available thanks to Project Gutenberg.
This dataset contains four different configurations:
- `V`: where the answers to the questions are verbs.
- `P`: where the answers to the questions are prepositions.
- `NE`: where the answers to the questions are named entities.
- `CN`: where the answers to the questions are common nouns.
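Each of these configurations (plus the `raw` books) can be loaded by name; a minimal sketch, assuming the Hub id `cam-cst/cbt`:
```python
from datasets import load_dataset

# Load the named-entity configuration; swap "NE" for "CN", "V", "P" or "raw"
cbt_ne = load_dataset("cam-cst/cbt", "NE")

ex = cbt_ne["train"][0]
print(ex["question"])  # the 21st sentence with the removed word masked as XXXXX
print(ex["options"])   # the 10 candidate answers
print(ex["answer"])    # the removed word
```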
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The data is in English, as written by authors such as Lucy Maud Montgomery, Charles Dickens, and Andrew Lang in story books for children.
## Dataset Structure
### Data Instances
An instance from the `V` config:
```
{'answer': 'said', 'options': ['christening', 'existed', 'hear', 'knows', 'read', 'remarked', 'said', 'sitting', 'talking', 'wearing'], 'question': "`` They are very kind old ladies in their way , '' XXXXX the king ; `` and were nice to me when I was a boy . ''", 'sentences': ['This vexed the king even more than the queen , who was very clever and learned , and who had hated dolls when she was a child .', 'However , she , too in spite of all the books she read and all the pictures she painted , would have been glad enough to be the mother of a little prince .', 'The king was anxious to consult the fairies , but the queen would not hear of such a thing .', 'She did not believe in fairies : she said that they had never existed ; and that she maintained , though The History of the Royal Family was full of chapters about nothing else .', 'Well , at long and at last they had a little boy , who was generally regarded as the finest baby that had ever been seen .', 'Even her majesty herself remarked that , though she could never believe all the courtiers told her , yet he certainly was a fine child -- a very fine child .', 'Now , the time drew near for the christening party , and the king and queen were sitting at breakfast in their summer parlour talking over it .', 'It was a splendid room , hung with portraits of the royal ancestors .', 'There was Cinderella , the grandmother of the reigning monarch , with her little foot in her glass slipper thrust out before her .', 'There was the Marquis de Carabas , who , as everyone knows , was raised to the throne as prince consort after his marriage with the daughter of the king of the period .', 'On the arm of the throne was seated his celebrated cat , wearing boots .', 'There , too , was a portrait of a beautiful lady , sound asleep : this was Madame La Belle au Bois-dormant , also an ancestress of the royal family .', 'Many other pictures of celebrated persons were hanging on the walls .', "`` You have asked all the right people , my dear ? ''", 'said the king .', "`` Everyone who should be asked , '' answered the queen .", "`` People are so touchy on these occasions , '' said his majesty .", "`` You have not forgotten any of our aunts ? ''", "`` No ; the old cats ! ''", "replied the queen ; for the king 's aunts were old-fashioned , and did not approve of her , and she knew it ."]}
```
### Data Fields
For the `raw` config, the data fields are:
- `title`: a `string` feature containing the title of the book present in the dataset.
- `content`: a `string` feature containing the content of the book present in the dataset.
For all other configs, the data fields are:
- `sentences`: a `list` of `string` features containing 20 sentences from a book.
- `question`: a `string` feature containing a question whose blank, marked as `XXXXX`, is to be filled with one of the options.
- `answer`: a `string` feature containing the answer.
- `options`: a `list` of `string` features containing the options for the question.
### Data Splits
The splits and corresponding sizes are:
| |train |test |validation|
|:--|------:|----:|---------:|
|raw|98 |5 |5 |
|V |105825 |2500 |2000 |
|P |334030 |2500 |2000 |
|CN |120769 |2500 |2000 |
|NE |108719 |2500 |2000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Children's Book Authors
### Annotations
#### Annotation process
From the [homepage](https://research.fb.com/downloads/babi/):
>After allocating books to either training, validation or test sets, we formed example ‘questions’ from chapters in the book by enumerating 21 consecutive sentences. In each question, the first 20 sentences form the context, and a word is removed from the 21st sentence, which becomes the query. Models must identify the answer word among a selection of 10 candidate answers appearing in the context sentences and the query. For finer-grained analyses, we evaluated four classes of question by removing distinct types of word: Named Entities, (Common) Nouns, Verbs and Prepositions.
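The windowing procedure described above is easy to reproduce in outline. A simplified, illustrative sketch (not the authors' original code; `answer_vocab` stands in for the POS-based word-class selection):
```python
import random

def make_cbt_questions(sentences, answer_vocab, window=21):
    """Enumerate 21-sentence windows: 20 context sentences plus a query
    built by removing one answer-class word from the 21st sentence."""
    questions = []
    for start in range(0, len(sentences) - window + 1, window):
        context = sentences[start : start + window - 1]
        query_tokens = sentences[start + window - 1].split()
        # keep only tokens of the desired class (here: membership in a vocab)
        candidates = [t for t in query_tokens if t in answer_vocab]
        if not candidates:
            continue
        answer = random.choice(candidates)
        query = " ".join("XXXXX" if t == answer else t for t in query_tokens)
        questions.append({"sentences": context, "question": query, "answer": answer})
    return questions
```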
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
```
GNU Free Documentation License v1.3
```
### Citation Information
```
@misc{hill2016goldilocks,
title={The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations},
author={Felix Hill and Antoine Bordes and Sumit Chopra and Jason Weston},
year={2016},
eprint={1511.02301},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
china-ai-law-challenge/cail2018 | china-ai-law-challenge | 2024-01-16T15:08:12Z | 282 | 26 | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:unknown",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1807.02478",
"region:us",
"judgement-prediction"
] | [
"other"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: chinese-ai-and-law-cail-2018
pretty_name: CAIL 2018
tags:
- judgement-prediction
dataset_info:
features:
- name: fact
dtype: string
- name: relevant_articles
sequence: int32
- name: accusation
sequence: string
- name: punish_of_money
dtype: float32
- name: criminals
sequence: string
- name: death_penalty
dtype: bool
- name: imprisonment
dtype: float32
- name: life_imprisonment
dtype: bool
splits:
- name: exercise_contest_train
num_bytes: 220112348
num_examples: 154592
- name: exercise_contest_valid
num_bytes: 21702109
num_examples: 17131
- name: exercise_contest_test
num_bytes: 41057538
num_examples: 32508
- name: first_stage_train
num_bytes: 1779653382
num_examples: 1710856
- name: first_stage_test
num_bytes: 244334666
num_examples: 217016
- name: final_test
num_bytes: 44194611
num_examples: 35922
download_size: 1167828091
dataset_size: 2351054654
configs:
- config_name: default
data_files:
- split: exercise_contest_train
path: data/exercise_contest_train-*
- split: exercise_contest_valid
path: data/exercise_contest_valid-*
- split: exercise_contest_test
path: data/exercise_contest_test-*
- split: first_stage_train
path: data/first_stage_train-*
- split: first_stage_test
path: data/first_stage_test-*
- split: final_test
path: data/final_test-*
---
# Dataset Card for CAIL 2018
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/thunlp/CAIL/blob/master/README_en.md)
- **Repository:** [Github](https://github.com/thunlp/CAIL)
- **Paper:** [Arxiv](https://arxiv.org/abs/1807.02478)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
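The YAML metadata above does list the available features and splits, so the dataset can still be explored directly; a minimal loading sketch, assuming the Hub id `china-ai-law-challenge/cail2018`:
```python
from datasets import load_dataset

# Split names follow the competition phases listed in the metadata
cail = load_dataset("china-ai-law-challenge/cail2018")
train = cail["exercise_contest_train"]

ex = train[0]
print(ex["fact"][:200])         # case description (Chinese)
print(ex["accusation"])         # charge name(s)
print(ex["relevant_articles"])  # applicable law article numbers
```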
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
botisan-ai/cantonese-mandarin-translations | botisan-ai | 2024-01-13T03:30:12Z | 85 | 21 | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:translation",
"source_datasets:original",
"language:zh",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"conditional-text-generation"
] | [
"text2text-generation",
"translation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- zh
license:
- cc-by-nc-sa-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: Cantonese - Mandarin Translations
language_bcp47:
- zh-CN
- zh-HK
tags:
- conditional-text-generation
---
# Dataset Card for cantonese-mandarin-translations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a machine-translated parallel corpus between Cantonese (a Chinese dialect mainly spoken in Guangdong (a province of China), Hong Kong, Macau, and parts of Malaysia) and Chinese (written form, in Simplified Chinese).
### Supported Tasks and Leaderboards
N/A
### Languages
- Cantonese (`yue`)
- Simplified Chinese (`zh-CN`)
## Dataset Structure
Each record is a JSON line with a `yue` field and a `zh` field forming the parallel corpus.
### Data Instances
N/A
### Data Fields
- `yue`: Cantonese corpus
- `zh`: translated Chinese corpus
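A minimal reading sketch, assuming the Hub id `botisan-ai/cantonese-mandarin-translations` and the default `train` split that the loader assigns to unsplit data:
```python
from datasets import load_dataset

pairs = load_dataset("botisan-ai/cantonese-mandarin-translations", split="train")

for row in pairs.select(range(3)):
    print(row["yue"], "->", row["zh"])  # Cantonese source -> Mandarin translation
```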
### Data Splits
No data splits have been created yet.
## Dataset Creation
The dataset is produced by doing the following:
- Download [HKCancor Cantonese Corpus](https://github.com/fcbond/hkcancor) and [CommonVoice Cantonese (Hong Kong Chinese `yue`) text corpus](https://commonvoice.mozilla.org/en/datasets)
- Extract text corpus and merge datasets
- Run text against [Microsoft's Translator API](https://learn.microsoft.com/en-us/azure/ai-services/translator/language-support) from `yue` to `zh-Hans`
### Curation Rationale
No such parallel corpus currently exists and one is hard to come by, so we generated a reasonable batch of samples using machine translation for research purposes.
### Source Data
- [HKCancor](https://github.com/fcbond/hkcancor)
- [CommonVoice 7.0 Chinese (Hong Kong)](https://commonvoice.mozilla.org/en/datasets)
#### Initial Data Collection and Normalization
Normalization scripts will be included soon.
#### Who are the source language producers?
- [HKCancor](https://github.com/fcbond/hkcancor)
- [CommonVoice 7.0 Chinese (Hong Kong)](https://commonvoice.mozilla.org/en/datasets)
### Annotations
#### Annotation process
We run the Cantonese text corpus against Microsoft's Translator API.
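For illustration only, a hedged sketch of what such a call can look like against the Translator v3.0 REST endpoint (the endpoint, parameters and language codes are assumptions based on the public API documentation, not the authors' actual pipeline):
```python
import requests

# Assumed v3.0 endpoint; requires an Azure subscription key and region.
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def translate_yue_to_zh(texts, key, region):
    params = {"api-version": "3.0", "from": "yue", "to": "zh-Hans"}
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
    }
    body = [{"Text": t} for t in texts]
    resp = requests.post(ENDPOINT, params=params, headers=headers, json=body)
    resp.raise_for_status()
    return [item["translations"][0]["text"] for item in resp.json()]
```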
#### Who are the annotators?
- [Microsoft's Translator API](https://learn.microsoft.com/en-us/azure/ai-services/translator/language-support)
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
We would like to share this parallel corpus and welcome contributions to preserve the Cantonese dialect.
### Discussion of Biases
N/A
### Other Known Limitations
This parallel corpus is machine-translated, it is not 100% accurate.
## Additional Information
### Dataset Curators
- [Botisan AI](https://botisan.ai)
- [Haoran (Simon) Liang](https://github.com/lhr0909)
### Licensing Information
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Citation Information
```
@misc {botisanAiCantoneseMandarinTranslationsDatasets,
author = {Liang, H.},
title = {Cantonese Mandarin Translations Dataset},
year = {2021},
url = {https://huggingface.co/datasets/botisan-ai/cantonese-mandarin-translations},
}
```
### Contributions
Thanks to [@lhr0909](https://github.com/lhr0909) for adding this dataset. |
defunct-datasets/eli5 | defunct-datasets | 2024-01-11T09:32:33Z | 611 | 50 | [
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:open-domain-abstractive-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"arxiv:1907.09190",
"arxiv:1904.04047",
"region:us"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- open-domain-abstractive-qa
paperswithcode_id: eli5
pretty_name: ELI5
viewer: false
dataset_info:
features:
- name: q_id
dtype: string
- name: title
dtype: string
- name: selftext
dtype: string
- name: document
dtype: string
- name: subreddit
dtype: string
- name: answers
sequence:
- name: a_id
dtype: string
- name: text
dtype: string
- name: score
dtype: int32
- name: title_urls
sequence:
- name: url
dtype: string
- name: selftext_urls
sequence:
- name: url
dtype: string
- name: answers_urls
sequence:
- name: url
dtype: string
config_name: LFQA_reddit
splits:
- name: train_eli5
num_bytes: 577188173
num_examples: 272634
- name: validation_eli5
num_bytes: 21117891
num_examples: 9812
- name: test_eli5
num_bytes: 53099796
num_examples: 24512
- name: train_asks
num_bytes: 286464210
num_examples: 131778
- name: validation_asks
num_bytes: 9662481
num_examples: 2281
- name: test_asks
num_bytes: 17713920
num_examples: 4462
- name: train_askh
num_bytes: 330483260
num_examples: 98525
- name: validation_askh
num_bytes: 18690845
num_examples: 4901
- name: test_askh
num_bytes: 36246784
num_examples: 9764
download_size: 6326543
dataset_size: 1350667360
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.</p>
</div>
## <span style="color:red">⚠️ Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable</span>.
# Dataset Card for ELI5
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ELI5 homepage](https://facebookresearch.github.io/ELI5/explore.html)
- **Repository:** [ELI5 repository](https://github.com/facebookresearch/ELI5)
- **Paper:** [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190)
- **Point of Contact:** [Yacine Jernite](mailto:[email protected])
### Dataset Summary
The ELI5 dataset is an English-language dataset of questions and answers gathered from three subreddits where users ask factual questions requiring paragraph-length or longer answers. The dataset was created to support the task of open-domain long form abstractive question answering, and covers questions about general topics in its [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subset, science in its [r/askscience](https://www.reddit.com/r/askscience/) subset, and history in its [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subset.
### Supported Tasks and Leaderboards
- `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer. Model performance is measured by its [ROUGE](https://huggingface.co/metrics/rouge) score against the reference answer. A [BART-based model](https://huggingface.co/yjernite/bart_eli5) with a [dense retriever](https://huggingface.co/yjernite/retribert-base-uncased) trained to draw information from [Wikipedia passages](https://huggingface.co/datasets/wiki_snippets) achieves a [ROUGE-L of 0.149](https://yjernite.github.io/lfqa.html#generation).
### Languages
The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by the number of upvotes they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.
An example from the ELI5 test set looks as follows:
```
{'q_id': '8houtx',
'title': 'Why does water heated to room temperature feel colder than the air around it?',
'selftext': '',
'document': '',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dylcnfk', 'dylcj49'],
'text': ["Water transfers heat more efficiently than air. When something feels cold it's because heat is being transferred from your skin to whatever you're touching. Since water absorbs the heat more readily than air, it feels colder.",
"Air isn't as good at transferring heat compared to something like water or steel (sit on a room temperature steel bench vs. a room temperature wooden bench, and the steel one will feel more cold).\n\nWhen you feel cold, what you're feeling is heat being transferred out of you. If there is no breeze, you feel a certain way. If there's a breeze, you will get colder faster (because the moving air is pulling the heat away from you), and if you get into water, its quite good at pulling heat from you. Get out of the water and have a breeze blow on you while you're wet, all of the water starts evaporating, pulling even more heat from you."],
'score': [5, 2]},
'title_urls': {'url': []},
'selftext_urls': {'url': []},
'answers_urls': {'url': []}}
```
### Data Fields
- `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps.
- `subreddit`: One of `explainlikeimfive`, `askscience`, or `AskHistorians`, indicating which subreddit the question came from
- `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
- `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n`
- `selftext`: either an empty string or an elaboration of the question
- `selftext_urls`: similar to `title_urls` but for `self_text`
- `answers`: a list of answers, each answer has:
- `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps.
- `text`: the answer text with the URLs normalized
- `score`: the number of upvotes the answer had received when the dumps were created
- `answers_urls`: a list of the extracted URLs. All answers use the same list, the numbering of the normalization token continues across answer texts
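Given records shaped like the instance above, pairing each question with its top-voted answer is straightforward; a small illustrative sketch:
```python
def best_answer(example):
    """Return the highest-scoring answer text for an ELI5-style record."""
    answers = example["answers"]
    # score[i] is the upvote count for text[i]; pick the argmax
    top = max(range(len(answers["score"])), key=lambda i: answers["score"][i])
    return answers["text"][top]

# On the test instance shown earlier this returns the first answer (score 5).
```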
### Data Splits
The data is split into a training, validation and test set for each of the three subreddits. To avoid duplicate questions across sets, the questions were ranked by the tf-idf match of their `title` field to its nearest neighbor, and the ones with the smallest values were used in the test and validation sets (see the sketch after the table below). The final split sizes are as follows:
| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| r/explainlikeimfive examples| 272634 | 9812 | 24512|
| r/askscience examples | 131778 | 2281 | 4462 |
| r/AskHistorians examples | 98525 | 4901 | 9764 |
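A hedged sketch of that nearest-neighbor ranking (illustrative only, using scikit-learn rather than the authors' code; note the pairwise matrix is quadratic in the number of titles):
```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def nearest_neighbor_scores(titles):
    """For each title, return its cosine similarity to the closest other title."""
    tfidf = TfidfVectorizer().fit_transform(titles)
    sims = cosine_similarity(tfidf)
    np.fill_diagonal(sims, -1.0)  # ignore self-similarity
    return sims.max(axis=1)

# Titles with the smallest nearest-neighbor scores (least duplicated)
# would be routed to the validation and test sets.
```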
## Dataset Creation
### Curation Rationale
ELI5 was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).
In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span the period from August 2012 to August 2019.
#### Who are the source language producers?
The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), [r/askscience](https://www.reddit.com/r/askscience/), and [r/AskHistorians](https://www.reddit.com/r/AskHistorians/) subreddits between 2012 and 2019. No further demographic information was available from the data source.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some of the questions and answers are about contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the largest existing models. The task is also intended as a test-bed for retrieval models, which can show users which source text was used to generate an answer and allow them to verify the information provided.
It should be noted, however, that the answers were written by Reddit users, a fact that may be lost if models trained on the data are deployed in downstream applications and presented to users without context. The specific biases this may introduce are discussed in the next section.
### Discussion of Biases
While Reddit hosts a number of thriving communities with high quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See for example the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).
While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.
We still note some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity when compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html) and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/#data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) mention that the admins will not tolerate "_racism, sexism, or any other forms of bigotry_". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.
We also note that, given the audience of the Reddit website, which is mostly used in the US and Europe, the answers will likely present a Western perspective, which is particularly important to keep in mind when dealing with historical topics.
### Other Known Limitations
The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, their answers should not be considered ground truth.
## Additional Information
### Dataset Curators
The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{eli5_lfqa,
author = {Angela Fan and
Yacine Jernite and
Ethan Perez and
David Grangier and
Jason Weston and
Michael Auli},
editor = {Anna Korhonen and
David R. Traum and
Llu{\'{\i}}s M{\`{a}}rquez},
title = {{ELI5:} Long Form Question Answering},
booktitle = {Proceedings of the 57th Conference of the Association for Computational
Linguistics, {ACL} 2019, Florence, Italy, July 28- August 2, 2019,
Volume 1: Long Papers},
pages = {3558--3567},
publisher = {Association for Computational Linguistics},
year = {2019},
url = {https://doi.org/10.18653/v1/p19-1346},
doi = {10.18653/v1/p19-1346}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset. |
rezacsedu/bn_hate_speech | rezacsedu | 2024-01-10T10:29:39Z | 100 | 2 | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:bn",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2004.07807",
"region:us",
"hate-speech-topic-classification"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- bn
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: bengali-hate-speech
pretty_name: Bengali Hate Speech Dataset
tags:
- hate-speech-topic-classification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Personal
'1': Political
'2': Religious
'3': Geopolitical
'4': Gender abusive
splits:
- name: train
num_bytes: 972631
num_examples: 3418
download_size: 389814
dataset_size: 972631
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Bengali Hate Speech Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Bengali Hate Speech Dataset](https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset)
- **Repository:** [Bengali Hate Speech Dataset](https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset)
- **Paper:** [Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network](https://arxiv.org/abs/2004.07807)
- **Point of Contact:** [Md. Rezaul Karim]([email protected])
### Dataset Summary
The Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks.
### Supported Tasks and Leaderboards
* `topic classification`: The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score.
### Languages
The text in the dataset is in Bengali and the associated BCP-47 code is `bn`.
## Dataset Structure
### Data Instances
A data instance takes the form of a news article and its associated label.
🚨 Beware that the following example contains extremely offensive content!
An example looks like this:
```
{"text": "রেন্ডিয়াকে পৃথীবির মানচিএ থেকে মুচে ফেলতে হবে",
"label": "Geopolitical"}
```
### Data Fields
* `text`: the text of the Bengali news article
* `label`: one of `Geopolitical`, `Personal`, `Political`, `Religious`, or `Gender abusive` indicating the type of hate speech
### Data Splits
The dataset has 3418 examples.
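A minimal loading sketch, assuming the Hub id `rezacsedu/bn_hate_speech` (all examples live in the single `train` split):
```python
from datasets import load_dataset

ds = load_dataset("rezacsedu/bn_hate_speech", split="train")
# ["Personal", "Political", "Religious", "Geopolitical", "Gender abusive"]
label_names = ds.features["label"].names

ex = ds[0]
print(ex["text"], "->", label_names[ex["label"]])
```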
## Dataset Creation
### Curation Rationale
Under-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis.
### Source Data
#### Initial Data Collection and Normalization
Bengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portals and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and are a common source of opinion and hate speech. The full dataset consists of 250 million articles and is currently being prepared. This is a subset of the full dataset.
#### Who are the source language producers?
The source language producers are Bengali authors and users who interact with these various forms of Bengali media.
### Annotations
#### Annotation process
The data was annotated by manually identifying frequently occurring terms in texts containing hate speech and references to specific entities. The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms occurs in the text. Annotators were provided with unbiased, text-only content to make their decisions. Non-hate statements were removed from the list, and the hate category was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned by majority vote over the annotators' opinions, and Cohen's kappa was computed to measure inter-annotator agreement.
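An illustrative sketch of that lexicon screening step (the actual 175-term lexicon is not reproduced here; `abusive_terms` is a hypothetical stand-in):
```python
def term_frequency_vector(text, abusive_terms):
    """Normalized frequency of each lexicon term in the text."""
    tokens = text.split()
    total = max(len(tokens), 1)
    return [tokens.count(term) / total for term in abusive_terms]

def is_hate_candidate(text, abusive_terms):
    # A hate label is proposed when at least one lexicon term occurs
    return any(term in text for term in abusive_terms)
```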
#### Who are the annotators?
Three native Bengali speakers and two linguists annotated the dataset which was then reviewed and validated by three experts (one South Asian linguist and two native speakers).
### Personal and Sensitive Information
The dataset contains very sensitive and highly offensive comments in a religious, political and gendered context. Some of the comments are directed towards contemporary public figures like politicians, religious leaders, celebrities and athletes.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of the dataset is to improve hate speech detection in Bengali. The growth of social media has enabled people to express hate freely online, and most work on detecting hate speech has focused on highly resourced languages like English. Hate speech is as pervasive in Bengali as in any other major language, and it can have serious and deadly consequences. Failing to react to hate speech renders targeted minorities more vulnerable to attack, and it can also breed indifference towards their treatment among majority populations.
### Discussion of Biases
The dataset was collected using a bootstrapping approach. An initial search was made for specific types of texts, articles and tweets containing common harassment directed at targeting characteristics. As a result, this dataset contains **extremely** offensive content that is disturbing. In addition, Facebook pages and newspaper sources were emphasized because they are well-known for having hate and harassment issues.
### Other Known Limitations
The dataset contains racist, sexist, homophobic and offensive comments. It is collected and annotated for research related purposes only.
## Additional Information
### Dataset Curators
The dataset was curated by Md. Rezaul Karim, Sumon Kanti Dey, Bharathi Raja Chakravarthi, John McCrae and Michael Cochez.
### Licensing Information
This dataset is licensed under the MIT License.
### Citation Information
```
@inproceedings{karim2020BengaliNLP,
title={Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network},
author={Karim, Md. Rezaul and Chakravarti, Bharathi Raja and P. McCrae, John and Cochez, Michael},
booktitle={7th IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA,2020)},
publisher={IEEE},
year={2020}
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |
spyysalo/bc2gm_corpus | spyysalo | 2024-01-10T10:03:04Z | 400 | 10 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Bc2GmCorpus
dataset_info:
config_name: bc2gm_corpus
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-GENE
'2': I-GENE
splits:
- name: train
num_bytes: 6095123
num_examples: 12500
- name: validation
num_bytes: 1215919
num_examples: 2500
- name: test
num_bytes: 2454589
num_examples: 5000
download_size: 2154630
dataset_size: 9765631
configs:
- config_name: bc2gm_corpus
data_files:
- split: train
path: bc2gm_corpus/train-*
- split: validation
path: bc2gm_corpus/validation-*
- split: test
path: bc2gm_corpus/test-*
default: true
---
# Dataset Card for bc2gm_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` (`O`) indicates no gene mentioned, `1` (`B-GENE`) signals the first token of a gene mention and `2` (`I-GENE`) the subsequent gene tokens.
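A minimal usage sketch, assuming the Hugging Face `datasets` library and the `bc2gm_corpus` configuration declared in the YAML above; it loads the training split and maps the integer tags back to their string labels:
```python
from datasets import load_dataset

# Load the training split of the gene mention corpus.
ds = load_dataset("spyysalo/bc2gm_corpus", "bc2gm_corpus", split="train")

# The ner_tags feature is a sequence of ClassLabel: O, B-GENE, I-GENE.
label_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag]}")
```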
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset. |
legacy-datasets/banking77 | legacy-datasets | 2024-01-10T08:23:17Z | 3,128 | 48 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2003.04807",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
pretty_name: BANKING77
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': activate_my_card
'1': age_limit
'2': apple_pay_or_google_pay
'3': atm_support
'4': automatic_top_up
'5': balance_not_updated_after_bank_transfer
'6': balance_not_updated_after_cheque_or_cash_deposit
'7': beneficiary_not_allowed
'8': cancel_transfer
'9': card_about_to_expire
'10': card_acceptance
'11': card_arrival
'12': card_delivery_estimate
'13': card_linking
'14': card_not_working
'15': card_payment_fee_charged
'16': card_payment_not_recognised
'17': card_payment_wrong_exchange_rate
'18': card_swallowed
'19': cash_withdrawal_charge
'20': cash_withdrawal_not_recognised
'21': change_pin
'22': compromised_card
'23': contactless_not_working
'24': country_support
'25': declined_card_payment
'26': declined_cash_withdrawal
'27': declined_transfer
'28': direct_debit_payment_not_recognised
'29': disposable_card_limits
'30': edit_personal_details
'31': exchange_charge
'32': exchange_rate
'33': exchange_via_app
'34': extra_charge_on_statement
'35': failed_transfer
'36': fiat_currency_support
'37': get_disposable_virtual_card
'38': get_physical_card
'39': getting_spare_card
'40': getting_virtual_card
'41': lost_or_stolen_card
'42': lost_or_stolen_phone
'43': order_physical_card
'44': passcode_forgotten
'45': pending_card_payment
'46': pending_cash_withdrawal
'47': pending_top_up
'48': pending_transfer
'49': pin_blocked
'50': receiving_money
'51': Refund_not_showing_up
'52': request_refund
'53': reverted_card_payment?
'54': supported_cards_and_currencies
'55': terminate_account
'56': top_up_by_bank_transfer_charge
'57': top_up_by_card_charge
'58': top_up_by_cash_or_cheque
'59': top_up_failed
'60': top_up_limits
'61': top_up_reverted
'62': topping_up_by_card
'63': transaction_charged_twice
'64': transfer_fee_charged
'65': transfer_into_account
'66': transfer_not_received_by_recipient
'67': transfer_timing
'68': unable_to_verify_identity
'69': verify_my_identity
'70': verify_source_of_funds
'71': verify_top_up
'72': virtual_card_not_working
'73': visa_or_mastercard
'74': why_verify_identity
'75': wrong_amount_of_cash_received
'76': wrong_exchange_rate_for_cash_withdrawal
splits:
- name: train
num_bytes: 715028
num_examples: 10003
- name: test
num_bytes: 204010
num_examples: 3080
download_size: 392040
dataset_size: 919038
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for BANKING77
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets)
- **Repository:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets)
- **Paper:** [ArXiv](https://arxiv.org/abs/2003.04807)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "banking77" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/PolyAI/banking77">PolyAI/banking77</a>" instead.</p>
</div>
Dataset composed of online banking queries annotated with their corresponding intents.
BANKING77 dataset provides a very fine-grained set of intents in a banking domain.
It comprises 13,083 customer service queries labeled with 77 intents.
It focuses on fine-grained single-domain intent detection.
### Supported Tasks and Leaderboards
Intent classification, intent detection
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'label': 11, # integer label corresponding to "card_arrival" intent
'text': 'I am still waiting on my card?'
}
```
### Data Fields
- `text`: a string feature.
- `label`: One of classification labels (0-76) corresponding to unique intents.
Intent names are mapped to `label` in the following way:
| label | intent (category) |
|---:|:-------------------------------------------------|
| 0 | activate_my_card |
| 1 | age_limit |
| 2 | apple_pay_or_google_pay |
| 3 | atm_support |
| 4 | automatic_top_up |
| 5 | balance_not_updated_after_bank_transfer |
| 6 | balance_not_updated_after_cheque_or_cash_deposit |
| 7 | beneficiary_not_allowed |
| 8 | cancel_transfer |
| 9 | card_about_to_expire |
| 10 | card_acceptance |
| 11 | card_arrival |
| 12 | card_delivery_estimate |
| 13 | card_linking |
| 14 | card_not_working |
| 15 | card_payment_fee_charged |
| 16 | card_payment_not_recognised |
| 17 | card_payment_wrong_exchange_rate |
| 18 | card_swallowed |
| 19 | cash_withdrawal_charge |
| 20 | cash_withdrawal_not_recognised |
| 21 | change_pin |
| 22 | compromised_card |
| 23 | contactless_not_working |
| 24 | country_support |
| 25 | declined_card_payment |
| 26 | declined_cash_withdrawal |
| 27 | declined_transfer |
| 28 | direct_debit_payment_not_recognised |
| 29 | disposable_card_limits |
| 30 | edit_personal_details |
| 31 | exchange_charge |
| 32 | exchange_rate |
| 33 | exchange_via_app |
| 34 | extra_charge_on_statement |
| 35 | failed_transfer |
| 36 | fiat_currency_support |
| 37 | get_disposable_virtual_card |
| 38 | get_physical_card |
| 39 | getting_spare_card |
| 40 | getting_virtual_card |
| 41 | lost_or_stolen_card |
| 42 | lost_or_stolen_phone |
| 43 | order_physical_card |
| 44 | passcode_forgotten |
| 45 | pending_card_payment |
| 46 | pending_cash_withdrawal |
| 47 | pending_top_up |
| 48 | pending_transfer |
| 49 | pin_blocked |
| 50 | receiving_money |
| 51 | Refund_not_showing_up |
| 52 | request_refund |
| 53 | reverted_card_payment? |
| 54 | supported_cards_and_currencies |
| 55 | terminate_account |
| 56 | top_up_by_bank_transfer_charge |
| 57 | top_up_by_card_charge |
| 58 | top_up_by_cash_or_cheque |
| 59 | top_up_failed |
| 60 | top_up_limits |
| 61 | top_up_reverted |
| 62 | topping_up_by_card |
| 63 | transaction_charged_twice |
| 64 | transfer_fee_charged |
| 65 | transfer_into_account |
| 66 | transfer_not_received_by_recipient |
| 67 | transfer_timing |
| 68 | unable_to_verify_identity |
| 69 | verify_my_identity |
| 70 | verify_source_of_funds |
| 71 | verify_top_up |
| 72 | virtual_card_not_working |
| 73 | visa_or_mastercard |
| 74 | why_verify_identity |
| 75 | wrong_amount_of_cash_received |
| 76 | wrong_exchange_rate_for_cash_withdrawal |
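A minimal usage sketch, assuming the Hugging Face `datasets` library; since this card is deprecated, it loads the maintained `PolyAI/banking77` copy and converts the integer label back to its intent name:
```python
from datasets import load_dataset

# "banking77" is deprecated; PolyAI/banking77 hosts the same data.
ds = load_dataset("PolyAI/banking77", split="train")

# The label feature is a ClassLabel, so it can translate ids to intent names.
example = ds[0]
print(example["text"])
print(ds.features["label"].int2str(example["label"]))  # e.g. "card_arrival"
```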
### Data Splits
| Dataset statistics | Train | Test |
| --- | --- | --- |
| Number of examples | 10 003 | 3 080 |
| Average character length | 59.5 | 54.2 |
| Number of intents | 77 | 77 |
| Number of domains | 1 | 1 |
## Dataset Creation
### Curation Rationale
Previous intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to a small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large-scale *multi-domain* datasets ([HWU64](https://github.com/xliuhw/NLU-Evaluation-Data) and [CLINC150](https://github.com/clinc/oos-eval)), the examples per domain may not sufficiently capture the full complexity of each domain as encountered "in the wild". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single domain*, i.e. **banking**. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better intent detection systems.
Any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[PolyAI](https://github.com/PolyAI-LDN)
### Licensing Information
Creative Commons Attribution 4.0 International
### Citation Information
```
@inproceedings{Casanueva2020,
author = {I{\~{n}}igo Casanueva and Tadas Temcinas and Daniela Gerz and Matthew Henderson and Ivan Vulic},
title = {Efficient Intent Detection with Dual Sentence Encoders},
year = {2020},
month = {mar},
note = {Data available at https://github.com/PolyAI-LDN/task-specific-datasets},
url = {https://arxiv.org/abs/2003.04807},
booktitle = {Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020}
}
```
### Contributions
Thanks to [@dkajtoch](https://github.com/dkajtoch) for adding this dataset. |
tuanphong/ascent_kb | tuanphong | 2024-01-09T14:44:26Z | 112 | 3 | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2011.00905",
"region:us",
"knowledge-base"
] | [
"other"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- other
task_ids: []
paperswithcode_id: ascentkb
pretty_name: Ascent KB
tags:
- knowledge-base
dataset_info:
- config_name: canonical
features:
- name: arg1
dtype: string
- name: rel
dtype: string
- name: arg2
dtype: string
- name: support
dtype: int64
- name: facets
list:
- name: value
dtype: string
- name: type
dtype: string
- name: support
dtype: int64
- name: source_sentences
list:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2976665740
num_examples: 8904060
download_size: 898478552
dataset_size: 2976665740
- config_name: open
features:
- name: subject
dtype: string
- name: predicate
dtype: string
- name: object
dtype: string
- name: support
dtype: int64
- name: facets
list:
- name: value
dtype: string
- name: type
dtype: string
- name: support
dtype: int64
- name: source_sentences
list:
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 2882646222
num_examples: 8904060
download_size: 900156754
dataset_size: 2882646222
configs:
- config_name: canonical
data_files:
- split: train
path: canonical/train-*
default: true
- config_name: open
data_files:
- split: train
path: open/train-*
---
# Dataset Card for Ascent KB
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ascent.mpi-inf.mpg.de/
- **Repository:** https://github.com/phongnt570/ascent
- **Paper:** https://arxiv.org/abs/2011.00905
- **Point of Contact:** http://tuan-phong.com
### Dataset Summary
This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.
The current version of Ascent KB (v1.0.0) is approximately **19 times larger than ConceptNet** (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded).
For more details, take a look at
[the research paper](https://arxiv.org/abs/2011.00905) and
[the website](https://ascent.mpi-inf.mpg.de).
### Supported Tasks and Leaderboards
The dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
There are two configurations available for this dataset:
1. `canonical` (default): This part contains `<arg1 ; rel ; arg2>`
assertions where the relations (`rel`) were mapped to
[ConceptNet relations](https://github.com/commonsense/conceptnet5/wiki/Relations)
with slight modifications:
- Introducing 2 new relations: `/r/HasSubgroup`, `/r/HasAspect`.
- All `/r/HasA` relations were replaced with `/r/HasAspect`.
This is motivated by the [ATOMIC-2020](https://allenai.org/data/atomic-2020)
schema, although they grouped all `/r/HasA` and
`/r/HasProperty` into `/r/HasProperty`.
- The `/r/UsedFor` relation was replaced with `/r/ObjectUse`
which is broader (could be either _"used for"_, _"used in"_, or _"used as"_, etc.).
This is also taken from ATOMIC-2020.
2. `open`: This part contains open assertions of the form
`<subject ; predicate ; object>` extracted directly from web
contents. This is the original form of the `canonical` triples.
In both configurations, each assertion is equipped with
extra information including: a set of semantic `facets`
(e.g., *LOCATION*, *TEMPORAL*, etc.), its `support` (i.e., number of occurrences),
and a list of `source_sentences`.
An example row in the `canonical` configuration:
```JSON
{
"arg1": "elephant",
"rel": "/r/HasProperty",
"arg2": "intelligent",
"support": 15,
"facets": [
{
"value": "extremely",
"type": "DEGREE",
"support": 11
}
],
"source_sentences": [
{
"text": "Elephants are extremely intelligent animals.",
"source": "https://www.softschools.com/facts/animals/asian_elephant_facts/2310/"
},
{
"text": "Elephants are extremely intelligent creatures and an elephant's brain can weigh as much as 4-6 kg.",
"source": "https://www.elephantsforafrica.org/elephant-facts/"
}
]
}
```
### Data Fields
- **For `canonical` configuration**
- `arg1`: the first argument to the relationship, e.g., *elephant*
- `rel`: the canonical relation, e.g., */r/HasProperty*
- `arg2`: the second argument to the relationship, e.g., *intelligence*
- `support`: the number of occurrences of the assertion, e.g., *15*
- `facets`: an array of semantic facets, each contains
- `value`: facet value, e.g., *extremely*
- `type`: facet type, e.g., *DEGREE*
- `support`: the number of occurrences of the facet, e.g., *11*
- `source_sentences`: an array of source sentences from which the assertion was
extracted, each contains
- `text`: the raw text of the sentence
- `source`: the URL to its parent document
- **For `open` configuration**
- The fields of this configuration are the same as the `canonical`
configuration's, except that
the (`arg1`, `rel`, `arg2`) fields are replaced with the
(`subject`, `predicate`, `object`) fields
which are free
text phrases extracted directly from the source sentences
using an Open Information Extraction (OpenIE) tool.
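A minimal usage sketch, assuming the Hugging Face `datasets` library; it loads the `canonical` configuration and looks up frequent assertions about a single concept. Note that the split has 8.9M rows, so the filter pass takes a while:
```python
from datasets import load_dataset

# Load the canonical triples; pass "open" instead for the raw OpenIE form.
kb = load_dataset("tuanphong/ascent_kb", "canonical", split="train")

# Keep assertions whose first argument is "elephant" with support >= 10
# (support is the number of occurrences of the assertion).
elephant = kb.filter(lambda row: row["arg1"] == "elephant" and row["support"] >= 10)

for row in elephant.select(range(min(5, len(elephant)))):
    print(row["arg1"], row["rel"], row["arg2"], f"(support={row['support']})")
```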
### Data Splits
There are no splits. All data points come to a default split called `train`.
## Dataset Creation
### Curation Rationale
The commonsense knowledge base was created to assist in the development of robust and reliable AI.
### Source Data
#### Initial Data Collection and Normalization
Texts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions.
The assertions were then grouped into semantically equivalent clusters.
Take a look at the research paper for more details.
#### Who are the source language producers?
Web users.
### Annotations
#### Annotation process
None.
#### Who are the annotators?
None.
### Personal and Sensitive Information
Unknown.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The knowledge base has been developed by researchers at the
[Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/).
Contact [Tuan-Phong Nguyen](http://tuan-phong.com) in case of questions and comments.
### Licensing Information
[The Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@InProceedings{nguyen2021www,
title={Advanced Semantics for Commonsense Knowledge Extraction},
author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard},
year={2021},
booktitle={The Web Conference 2021},
}
```
### Contributions
Thanks to [@phongnt570](https://github.com/phongnt570) for adding this dataset. |
sealuzh/app_reviews | sealuzh | 2024-01-09T12:30:17Z | 857 | 26 | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- sentiment-scoring
pretty_name: AppReviews
dataset_info:
features:
- name: package_name
dtype: string
- name: review
dtype: string
- name: date
dtype: string
- name: star
dtype: int8
splits:
- name: train
num_bytes: 32768731
num_examples: 288065
download_size: 13207727
dataset_size: 32768731
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for App Reviews
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Home Page](https://github.com/sealuzh/user_quality)
- **Repository:** [Repo Link](https://github.com/sealuzh/user_quality)
- **Paper:** [Link](https://giograno.me/assets/pdf/workshop/wama17.pdf)
- **Leaderboard:**
- **Point of Contact:** [Darshan Gandhi](mailto:[email protected])
### Dataset Summary
It is a large dataset of Android applications belonging to 23 different app categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications from the F-Droid repository, including around 600 versions and 280,000 user reviews (extracted with specific text mining approaches).
### Supported Tasks and Leaderboards
The dataset we provide comprises 395 different apps from the F-Droid repository, including code quality indicators for 629 versions of these
apps. It also encloses app reviews related to each of these versions, which have been automatically categorized by classifying types of user feedback from a software maintenance and evolution perspective.
### Languages
The dataset is monolingual; the reviews are in English.
## Dataset Structure
### Data Instances
An example from the training split looks as follows:
```
{'package_name': 'com.mantz_it.rfanalyzer',
 'review': "Great app! The new version now works on my Bravia Android TV which is great as it's right by my rooftop aerial cable. The scan feature would be useful...any ETA on when this will be available? Also the option to import a list of bookmarks e.g. from a simple properties file would be useful.",
 'date': 'October 12 2016',
 'star': 4}
```
### Data Fields
* `package_name`: name of the software application package
* `review`: message of the user
* `date`: date when the user posted the review
* `star`: rating provided by the user for the application
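A minimal usage sketch, assuming the Hugging Face `datasets` library; it loads the training split and computes the average star rating per application package:
```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("sealuzh/app_reviews", split="train")

# Accumulate star totals and review counts per package.
sums, counts = defaultdict(float), defaultdict(int)
for row in ds:
    sums[row["package_name"]] += row["star"]
    counts[row["package_name"]] += 1

# Print a few package-level averages.
for pkg in list(sums)[:5]:
    print(pkg, round(sums[pkg] / counts[pkg], 2))
```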
### Data Splits
There is a single training split with a total of 288,065 examples.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
With the help of this dataset, one can better understand software applications and the views and opinions users hold about them. This helps to identify which types of software applications are preferred by users and how these applications help users solve their problems and issues.
### Discussion of Biases
The reviews only cover open-source software applications; other sectors have not been considered here.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Giovanni Grano - (University of Zurich), Sebastiano Panichella - (University of Zurich), Andrea di Sorbo - (University of Sannio)
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{grano2017dataset,
  title={Software Applications User Reviews},
  author={Grano, Giovanni and Di Sorbo, Andrea and Mercaldo, Francesco and Visaggio, Corrado A and Canfora, Gerardo and Panichella, Sebastiano},
  year={2017}
}
```
### Contributions
Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset. |
sewon/ambig_qa | sewon | 2024-01-09T12:27:07Z | 910 | 14 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|natural_questions",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2004.10645",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|natural_questions
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: ambigqa
pretty_name: 'AmbigQA: Answering Ambiguous Open-domain Questions'
dataset_info:
- config_name: full
features:
- name: id
dtype: string
- name: question
dtype: string
- name: annotations
sequence:
- name: type
dtype: string
- name: answer
sequence: string
- name: qaPairs
sequence:
- name: question
dtype: string
- name: answer
sequence: string
- name: viewed_doc_titles
sequence: string
- name: used_queries
sequence:
- name: query
dtype: string
- name: results
sequence:
- name: title
dtype: string
- name: snippet
dtype: string
- name: nq_answer
sequence: string
- name: nq_doc_title
dtype: string
splits:
- name: train
num_bytes: 43538533
num_examples: 10036
- name: validation
num_bytes: 15383268
num_examples: 2002
download_size: 30674462
dataset_size: 58921801
- config_name: light
features:
- name: id
dtype: string
- name: question
dtype: string
- name: annotations
sequence:
- name: type
dtype: string
- name: answer
sequence: string
- name: qaPairs
sequence:
- name: question
dtype: string
- name: answer
sequence: string
splits:
- name: train
num_bytes: 2739628
num_examples: 10036
- name: validation
num_bytes: 805756
num_examples: 2002
download_size: 1777867
dataset_size: 3545384
configs:
- config_name: full
data_files:
- split: train
path: full/train-*
- split: validation
path: full/validation-*
default: true
- config_name: light
data_files:
- split: train
path: light/train-*
- split: validation
path: light/validation-*
---
# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [AmbigQA](https://nlp.cs.washington.edu/ambigqa/)
- **Repository:** [Github](https://github.com/shmsw25/AmbigQA)
- **Paper:** [ArXiv](https://arxiv.org/pdf/2004.10645.pdf)
### Dataset Summary
AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous; the types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AmbigNQ provides 14,042 annotations on NQ-open questions covering these diverse types of ambiguity.
We provide two distributions of our new dataset AmbigNQ: a `full` version with all annotation metadata and a `light` version with only inputs and outputs.
### Supported Tasks and Leaderboards
`question-answering`
### Languages
English
## Dataset Structure
### Data Instances
An example from the data set looks as follows:
```
{'annotations': {'answer': [[]],
'qaPairs': [{'answer': [['April 19, 1987'], ['December 17, 1989']],
'question': ['When did the Simpsons first air on television as an animated short on the Tracey Ullman Show?',
'When did the Simpsons first air as a half-hour prime time show?']}],
'type': ['multipleQAs']},
'id': '-4469503464110108318',
'nq_answer': ['December 17 , 1989'],
'nq_doc_title': 'The Simpsons',
'question': 'When did the simpsons first air on television?',
'used_queries': {'query': ['When did the simpsons first air on television?'],
'results': [{'snippet': ['The <b>Simpsons</b> is an American animated <b>television</b> sitcom starring the animated \nSimpson family, ... Since its <b>debut</b> on December 17, 1989, the show <b>has</b> \nbroadcast 673 episodes and its 30th season started ... The <b>Simpsons first</b> season \n<b>was</b> the Fox network's <b>first TV</b> series to rank among a season's top 30 highest-\nrated shows.',
'The <b>Simpsons</b> is an American animated sitcom created by Matt Groening for the \nFox ... Since its <b>debut</b> on December 17, 1989, 674 episodes of The <b>Simpsons</b> \nhave been broadcast. ... When producer James L. Brooks <b>was</b> working on the \n<b>television</b> variety show The Tracey Ullman Show, he decided to include small \nanimated ...',
'... in shorts from The Tracey Ullman Show as their <b>television debut</b> in 1987. The \n<b>Simpsons</b> shorts are a series of animated shorts that <b>aired</b> as a recurring \nsegment on Fox variety <b>television</b> series The Tracey ... The final short to <b>air was</b> "\n<b>TV Simpsons</b>", originally airing on May 14, 1989. The <b>Simpsons</b> later debuted on\n ...',
'The <b>first</b> season of the American animated <b>television</b> series The <b>Simpsons</b> \noriginally <b>aired</b> on the Fox network between December 17, 1989, and May 13, \n1990, beginning with the Christmas special "<b>Simpsons</b> Roasting on an Open Fire\n". The executive producers for the <b>first</b> production season <b>were</b> Matt Groening, ...',
'The <b>Simpsons</b> is an American animated <b>television</b> sitcom created by Matt \nGroening for the Fox ... Since its <b>debut</b> on December 17, 1989, The <b>Simpsons</b> \n<b>has</b> broadcast 674 episodes. The show holds several American <b>television</b> \nlongevity ...',
'The opening sequence of the American animated <b>television</b> series The <b>Simpsons</b> \nis among the most popular opening sequences in <b>television</b> and is accompanied \nby one of <b>television's</b> most recognizable theme songs. The <b>first</b> episode to use \nthis intro <b>was</b> the series' second episode "Bart the ... <b>was</b> the <b>first</b> episode of The \n<b>Simpsons</b> to <b>air</b> in 720p high-definition <b>television</b>, ...',
'"<b>Simpsons</b> Roasting on an Open Fire", titled onscreen as "The <b>Simpsons</b> \nChristmas Special", is the premiere episode of the American animated <b>TV</b> series \nThe <b>Simpsons</b>, ... The show <b>was</b> originally intended to <b>debut</b> earlier in 1989 with "\nSome Enchanted Evening", but due to animation problems with that episode, the \nshow ...',
'"Stark Raving Dad" is the <b>first</b> episode of the third season of the American \nanimated <b>television</b> series The <b>Simpsons</b>. It <b>first aired</b> on the Fox network in the \nUnited States on September 19, 1991. ... The <b>Simpsons was</b> the second highest \nrated show on Fox the week it <b>aired</b>, behind Married... with Children. "Stark \nRaving Dad," ...',
'The <b>Simpsons</b>' twentieth season <b>aired</b> on Fox from September 28, 2008 to May \n17, 2009. With this season, the show tied Gunsmoke as the longest-running \nAmerican primetime <b>television</b> series in terms of total number ... It <b>was</b> the <b>first</b>-\never episode of the show to <b>air</b> in Europe before being seen in the United States.',
'The animated <b>TV</b> show The <b>Simpsons</b> is an American English language \nanimated sitcom which ... The <b>Simpsons was</b> dubbed for the <b>first</b> time in Punjabi \nand <b>aired</b> on Geo <b>TV</b> in Pakistan. The name of the localised Punjabi version is \nTedi Sim ...'],
'title': ['History of The Simpsons',
'The Simpsons',
'The Simpsons shorts',
'The Simpsons (season 1)',
'List of The Simpsons episodes',
'The Simpsons opening sequence',
'Simpsons Roasting on an Open Fire',
'Stark Raving Dad',
'The Simpsons (season 20)',
'Non-English versions of The Simpsons']}]},
'viewed_doc_titles': ['The Simpsons']}
```
### Data Fields
Full
```
{'id': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'annotations': Sequence(feature={'type': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'qaPairs': Sequence(feature={'question': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, length=-1, id=None)}, length=-1, id=None),
'viewed_doc_titles': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'used_queries': Sequence(feature={'query': Value(dtype='string', id=None), 'results': Sequence(feature={'title': Value(dtype='string', id=None), 'snippet': Value(dtype='string', id=None)}, length=-1, id=None)}, length=-1, id=None),
'nq_answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
'nq_doc_title': Value(dtype='string', id=None)}
```
In the original data format, `annotations` has different keys depending on the `type` field (`singleAnswer` or `multipleQAs`), but this implementation uses an empty list `[]` for the unavailable keys.
Please refer to [Dataset Contents](https://github.com/shmsw25/AmbigQA#dataset-contents) for more details.
```
from datasets import load_dataset

# Load the light configuration (only id, question, annotations).
train_light_dataset = load_dataset("sewon/ambig_qa", "light", split="train")

for example in train_light_dataset:
    for i, t in enumerate(example["annotations"]["type"]):
        if t == "singleAnswer":
            # use example["annotations"]["answer"][i];
            # example["annotations"]["qaPairs"][i] is []
            print(example["annotations"]["answer"][i])
        else:
            # use example["annotations"]["qaPairs"][i];
            # example["annotations"]["answer"][i] is []
            print(example["annotations"]["qaPairs"][i])
```
The `light` version only has the `id`, `question` and `annotations` fields.
### Data Splits
- train: 10036
- validation: 2002
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
- Wikipedia
- NQ-open:
```
@article{ kwiatkowski2019natural,
title={ Natural questions: a benchmark for question answering research},
author={ Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and others },
journal={ Transactions of the Association for Computational Linguistics },
year={ 2019 }
}
```
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
### Citation Information
```
@inproceedings{ min2020ambigqa,
title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },
author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },
booktitle={ EMNLP },
year={2020}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset. |
wikimedia/wikipedia | wikimedia | 2024-01-09T09:40:51Z | 99,568 | 790 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language:ab",
"language:ace",
"language:ady",
"language:af",
"language:alt",
"language:am",
"language:ami",
"language:an",
"language:ang",
"language:anp",
"language:ar",
"language:arc",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:avk",
"language:awa",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:ban",
"language:bar",
"language:bbc",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:blk",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:dag",
"language:de",
"language:dga",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:fat",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fon",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gcr",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gpe",
"language:gsw",
"language:gu",
"language:guc",
"language:gur",
"language:guw",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:hbs",
"language:he",
"language:hi",
"language:hif",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:hyw",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kcg",
"language:kg",
"language:ki",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lld",
"language:lmo",
"language:ln",
"language:lo",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mad",
"language:mai",
"language:map",
"language:mdf",
"language:mg",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mni",
"language:mnw",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:nah",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:nia",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nqo",
"language:nrf",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pcm",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:pwn",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sgs",
"language:shi",
"language:shn",
"language:si",
"language:sk",
"language:skr",
"language:sl",
"language:sm",
"language:smn",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:szy",
"language:ta",
"language:tay",
"language:tcy",
"language:te",
"language:tet",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tly",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:trv",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:za",
"language:zea",
"language:zgh",
"language:zh",
"language:zu",
"license:cc-by-sa-3.0",
"license:gfdl",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | null | ---
language:
- ab
- ace
- ady
- af
- alt
- am
- ami
- an
- ang
- anp
- ar
- arc
- ary
- arz
- as
- ast
- atj
- av
- avk
- awa
- ay
- az
- azb
- ba
- ban
- bar
- bbc
- bcl
- be
- bg
- bh
- bi
- bjn
- blk
- bm
- bn
- bo
- bpy
- br
- bs
- bug
- bxr
- ca
- cbk
- cdo
- ce
- ceb
- ch
- chr
- chy
- ckb
- co
- cr
- crh
- cs
- csb
- cu
- cv
- cy
- da
- dag
- de
- dga
- din
- diq
- dsb
- dty
- dv
- dz
- ee
- el
- eml
- en
- eo
- es
- et
- eu
- ext
- fa
- fat
- ff
- fi
- fj
- fo
- fon
- fr
- frp
- frr
- fur
- fy
- ga
- gag
- gan
- gcr
- gd
- gl
- glk
- gn
- gom
- gor
- got
- gpe
- gsw
- gu
- guc
- gur
- guw
- gv
- ha
- hak
- haw
- hbs
- he
- hi
- hif
- hr
- hsb
- ht
- hu
- hy
- hyw
- ia
- id
- ie
- ig
- ik
- ilo
- inh
- io
- is
- it
- iu
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kbp
- kcg
- kg
- ki
- kk
- kl
- km
- kn
- ko
- koi
- krc
- ks
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lbe
- lez
- lfn
- lg
- li
- lij
- lld
- lmo
- ln
- lo
- lt
- ltg
- lv
- lzh
- mad
- mai
- map
- mdf
- mg
- mhr
- mi
- min
- mk
- ml
- mn
- mni
- mnw
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nah
- nan
- nap
- nds
- ne
- new
- nia
- nl
- nn
- 'no'
- nov
- nqo
- nrf
- nso
- nv
- ny
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pcm
- pdc
- pfl
- pi
- pih
- pl
- pms
- pnb
- pnt
- ps
- pt
- pwn
- qu
- rm
- rmy
- rn
- ro
- ru
- rue
- rup
- rw
- sa
- sah
- sat
- sc
- scn
- sco
- sd
- se
- sg
- sgs
- shi
- shn
- si
- sk
- skr
- sl
- sm
- smn
- sn
- so
- sq
- sr
- srn
- ss
- st
- stq
- su
- sv
- sw
- szl
- szy
- ta
- tay
- tcy
- te
- tet
- tg
- th
- ti
- tk
- tl
- tly
- tn
- to
- tpi
- tr
- trv
- ts
- tt
- tum
- tw
- ty
- tyv
- udm
- ug
- uk
- ur
- uz
- ve
- vec
- vep
- vi
- vls
- vo
- vro
- wa
- war
- wo
- wuu
- xal
- xh
- xmf
- yi
- yo
- yue
- za
- zea
- zgh
- zh
- zu
license:
- cc-by-sa-3.0
- gfdl
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
configs:
- config_name: 20231101.ab
data_files:
- split: train
path: 20231101.ab/train-*
- config_name: 20231101.ace
data_files:
- split: train
path: 20231101.ace/train-*
- config_name: 20231101.ady
data_files:
- split: train
path: 20231101.ady/train-*
- config_name: 20231101.af
data_files:
- split: train
path: 20231101.af/train-*
- config_name: 20231101.als
data_files:
- split: train
path: 20231101.als/train-*
- config_name: 20231101.alt
data_files:
- split: train
path: 20231101.alt/train-*
- config_name: 20231101.am
data_files:
- split: train
path: 20231101.am/train-*
- config_name: 20231101.ami
data_files:
- split: train
path: 20231101.ami/train-*
- config_name: 20231101.an
data_files:
- split: train
path: 20231101.an/train-*
- config_name: 20231101.ang
data_files:
- split: train
path: 20231101.ang/train-*
- config_name: 20231101.anp
data_files:
- split: train
path: 20231101.anp/train-*
- config_name: 20231101.ar
data_files:
- split: train
path: 20231101.ar/train-*
- config_name: 20231101.arc
data_files:
- split: train
path: 20231101.arc/train-*
- config_name: 20231101.ary
data_files:
- split: train
path: 20231101.ary/train-*
- config_name: 20231101.arz
data_files:
- split: train
path: 20231101.arz/train-*
- config_name: 20231101.as
data_files:
- split: train
path: 20231101.as/train-*
- config_name: 20231101.ast
data_files:
- split: train
path: 20231101.ast/train-*
- config_name: 20231101.atj
data_files:
- split: train
path: 20231101.atj/train-*
- config_name: 20231101.av
data_files:
- split: train
path: 20231101.av/train-*
- config_name: 20231101.avk
data_files:
- split: train
path: 20231101.avk/train-*
- config_name: 20231101.awa
data_files:
- split: train
path: 20231101.awa/train-*
- config_name: 20231101.ay
data_files:
- split: train
path: 20231101.ay/train-*
- config_name: 20231101.az
data_files:
- split: train
path: 20231101.az/train-*
- config_name: 20231101.azb
data_files:
- split: train
path: 20231101.azb/train-*
- config_name: 20231101.ba
data_files:
- split: train
path: 20231101.ba/train-*
- config_name: 20231101.ban
data_files:
- split: train
path: 20231101.ban/train-*
- config_name: 20231101.bar
data_files:
- split: train
path: 20231101.bar/train-*
- config_name: 20231101.bat-smg
data_files:
- split: train
path: 20231101.bat-smg/train-*
- config_name: 20231101.bcl
data_files:
- split: train
path: 20231101.bcl/train-*
- config_name: 20231101.be
data_files:
- split: train
path: 20231101.be/train-*
- config_name: 20231101.be-x-old
data_files:
- split: train
path: 20231101.be-x-old/train-*
- config_name: 20231101.bg
data_files:
- split: train
path: 20231101.bg/train-*
- config_name: 20231101.bh
data_files:
- split: train
path: 20231101.bh/train-*
- config_name: 20231101.bi
data_files:
- split: train
path: 20231101.bi/train-*
- config_name: 20231101.bjn
data_files:
- split: train
path: 20231101.bjn/train-*
- config_name: 20231101.blk
data_files:
- split: train
path: 20231101.blk/train-*
- config_name: 20231101.bm
data_files:
- split: train
path: 20231101.bm/train-*
- config_name: 20231101.bn
data_files:
- split: train
path: 20231101.bn/train-*
- config_name: 20231101.bo
data_files:
- split: train
path: 20231101.bo/train-*
- config_name: 20231101.bpy
data_files:
- split: train
path: 20231101.bpy/train-*
- config_name: 20231101.br
data_files:
- split: train
path: 20231101.br/train-*
- config_name: 20231101.bs
data_files:
- split: train
path: 20231101.bs/train-*
- config_name: 20231101.bug
data_files:
- split: train
path: 20231101.bug/train-*
- config_name: 20231101.bxr
data_files:
- split: train
path: 20231101.bxr/train-*
- config_name: 20231101.ca
data_files:
- split: train
path: 20231101.ca/train-*
- config_name: 20231101.cbk-zam
data_files:
- split: train
path: 20231101.cbk-zam/train-*
- config_name: 20231101.cdo
data_files:
- split: train
path: 20231101.cdo/train-*
- config_name: 20231101.ce
data_files:
- split: train
path: 20231101.ce/train-*
- config_name: 20231101.ceb
data_files:
- split: train
path: 20231101.ceb/train-*
- config_name: 20231101.ch
data_files:
- split: train
path: 20231101.ch/train-*
- config_name: 20231101.chr
data_files:
- split: train
path: 20231101.chr/train-*
- config_name: 20231101.chy
data_files:
- split: train
path: 20231101.chy/train-*
- config_name: 20231101.ckb
data_files:
- split: train
path: 20231101.ckb/train-*
- config_name: 20231101.co
data_files:
- split: train
path: 20231101.co/train-*
- config_name: 20231101.cr
data_files:
- split: train
path: 20231101.cr/train-*
- config_name: 20231101.crh
data_files:
- split: train
path: 20231101.crh/train-*
- config_name: 20231101.cs
data_files:
- split: train
path: 20231101.cs/train-*
- config_name: 20231101.csb
data_files:
- split: train
path: 20231101.csb/train-*
- config_name: 20231101.cu
data_files:
- split: train
path: 20231101.cu/train-*
- config_name: 20231101.cv
data_files:
- split: train
path: 20231101.cv/train-*
- config_name: 20231101.cy
data_files:
- split: train
path: 20231101.cy/train-*
- config_name: 20231101.da
data_files:
- split: train
path: 20231101.da/train-*
- config_name: 20231101.dag
data_files:
- split: train
path: 20231101.dag/train-*
- config_name: 20231101.de
data_files:
- split: train
path: 20231101.de/train-*
- config_name: 20231101.din
data_files:
- split: train
path: 20231101.din/train-*
- config_name: 20231101.diq
data_files:
- split: train
path: 20231101.diq/train-*
- config_name: 20231101.dsb
data_files:
- split: train
path: 20231101.dsb/train-*
- config_name: 20231101.dty
data_files:
- split: train
path: 20231101.dty/train-*
- config_name: 20231101.dv
data_files:
- split: train
path: 20231101.dv/train-*
- config_name: 20231101.dz
data_files:
- split: train
path: 20231101.dz/train-*
- config_name: 20231101.ee
data_files:
- split: train
path: 20231101.ee/train-*
- config_name: 20231101.el
data_files:
- split: train
path: 20231101.el/train-*
- config_name: 20231101.eml
data_files:
- split: train
path: 20231101.eml/train-*
- config_name: 20231101.en
data_files:
- split: train
path: 20231101.en/train-*
- config_name: 20231101.eo
data_files:
- split: train
path: 20231101.eo/train-*
- config_name: 20231101.es
data_files:
- split: train
path: 20231101.es/train-*
- config_name: 20231101.et
data_files:
- split: train
path: 20231101.et/train-*
- config_name: 20231101.eu
data_files:
- split: train
path: 20231101.eu/train-*
- config_name: 20231101.ext
data_files:
- split: train
path: 20231101.ext/train-*
- config_name: 20231101.fa
data_files:
- split: train
path: 20231101.fa/train-*
- config_name: 20231101.fat
data_files:
- split: train
path: 20231101.fat/train-*
- config_name: 20231101.ff
data_files:
- split: train
path: 20231101.ff/train-*
- config_name: 20231101.fi
data_files:
- split: train
path: 20231101.fi/train-*
- config_name: 20231101.fiu-vro
data_files:
- split: train
path: 20231101.fiu-vro/train-*
- config_name: 20231101.fj
data_files:
- split: train
path: 20231101.fj/train-*
- config_name: 20231101.fo
data_files:
- split: train
path: 20231101.fo/train-*
- config_name: 20231101.fon
data_files:
- split: train
path: 20231101.fon/train-*
- config_name: 20231101.fr
data_files:
- split: train
path: 20231101.fr/train-*
- config_name: 20231101.frp
data_files:
- split: train
path: 20231101.frp/train-*
- config_name: 20231101.frr
data_files:
- split: train
path: 20231101.frr/train-*
- config_name: 20231101.fur
data_files:
- split: train
path: 20231101.fur/train-*
- config_name: 20231101.fy
data_files:
- split: train
path: 20231101.fy/train-*
- config_name: 20231101.ga
data_files:
- split: train
path: 20231101.ga/train-*
- config_name: 20231101.gag
data_files:
- split: train
path: 20231101.gag/train-*
- config_name: 20231101.gan
data_files:
- split: train
path: 20231101.gan/train-*
- config_name: 20231101.gcr
data_files:
- split: train
path: 20231101.gcr/train-*
- config_name: 20231101.gd
data_files:
- split: train
path: 20231101.gd/train-*
- config_name: 20231101.gl
data_files:
- split: train
path: 20231101.gl/train-*
- config_name: 20231101.glk
data_files:
- split: train
path: 20231101.glk/train-*
- config_name: 20231101.gn
data_files:
- split: train
path: 20231101.gn/train-*
- config_name: 20231101.gom
data_files:
- split: train
path: 20231101.gom/train-*
- config_name: 20231101.gor
data_files:
- split: train
path: 20231101.gor/train-*
- config_name: 20231101.got
data_files:
- split: train
path: 20231101.got/train-*
- config_name: 20231101.gpe
data_files:
- split: train
path: 20231101.gpe/train-*
- config_name: 20231101.gu
data_files:
- split: train
path: 20231101.gu/train-*
- config_name: 20231101.guc
data_files:
- split: train
path: 20231101.guc/train-*
- config_name: 20231101.gur
data_files:
- split: train
path: 20231101.gur/train-*
- config_name: 20231101.guw
data_files:
- split: train
path: 20231101.guw/train-*
- config_name: 20231101.gv
data_files:
- split: train
path: 20231101.gv/train-*
- config_name: 20231101.ha
data_files:
- split: train
path: 20231101.ha/train-*
- config_name: 20231101.hak
data_files:
- split: train
path: 20231101.hak/train-*
- config_name: 20231101.haw
data_files:
- split: train
path: 20231101.haw/train-*
- config_name: 20231101.he
data_files:
- split: train
path: 20231101.he/train-*
- config_name: 20231101.hi
data_files:
- split: train
path: 20231101.hi/train-*
- config_name: 20231101.hif
data_files:
- split: train
path: 20231101.hif/train-*
- config_name: 20231101.hr
data_files:
- split: train
path: 20231101.hr/train-*
- config_name: 20231101.hsb
data_files:
- split: train
path: 20231101.hsb/train-*
- config_name: 20231101.ht
data_files:
- split: train
path: 20231101.ht/train-*
- config_name: 20231101.hu
data_files:
- split: train
path: 20231101.hu/train-*
- config_name: 20231101.hy
data_files:
- split: train
path: 20231101.hy/train-*
- config_name: 20231101.hyw
data_files:
- split: train
path: 20231101.hyw/train-*
- config_name: 20231101.ia
data_files:
- split: train
path: 20231101.ia/train-*
- config_name: 20231101.id
data_files:
- split: train
path: 20231101.id/train-*
- config_name: 20231101.ie
data_files:
- split: train
path: 20231101.ie/train-*
- config_name: 20231101.ig
data_files:
- split: train
path: 20231101.ig/train-*
- config_name: 20231101.ik
data_files:
- split: train
path: 20231101.ik/train-*
- config_name: 20231101.ilo
data_files:
- split: train
path: 20231101.ilo/train-*
- config_name: 20231101.inh
data_files:
- split: train
path: 20231101.inh/train-*
- config_name: 20231101.io
data_files:
- split: train
path: 20231101.io/train-*
- config_name: 20231101.is
data_files:
- split: train
path: 20231101.is/train-*
- config_name: 20231101.it
data_files:
- split: train
path: 20231101.it/train-*
- config_name: 20231101.iu
data_files:
- split: train
path: 20231101.iu/train-*
- config_name: 20231101.ja
data_files:
- split: train
path: 20231101.ja/train-*
- config_name: 20231101.jam
data_files:
- split: train
path: 20231101.jam/train-*
- config_name: 20231101.jbo
data_files:
- split: train
path: 20231101.jbo/train-*
- config_name: 20231101.jv
data_files:
- split: train
path: 20231101.jv/train-*
- config_name: 20231101.ka
data_files:
- split: train
path: 20231101.ka/train-*
- config_name: 20231101.kaa
data_files:
- split: train
path: 20231101.kaa/train-*
- config_name: 20231101.kab
data_files:
- split: train
path: 20231101.kab/train-*
- config_name: 20231101.kbd
data_files:
- split: train
path: 20231101.kbd/train-*
- config_name: 20231101.kbp
data_files:
- split: train
path: 20231101.kbp/train-*
- config_name: 20231101.kcg
data_files:
- split: train
path: 20231101.kcg/train-*
- config_name: 20231101.kg
data_files:
- split: train
path: 20231101.kg/train-*
- config_name: 20231101.ki
data_files:
- split: train
path: 20231101.ki/train-*
- config_name: 20231101.kk
data_files:
- split: train
path: 20231101.kk/train-*
- config_name: 20231101.kl
data_files:
- split: train
path: 20231101.kl/train-*
- config_name: 20231101.km
data_files:
- split: train
path: 20231101.km/train-*
- config_name: 20231101.kn
data_files:
- split: train
path: 20231101.kn/train-*
- config_name: 20231101.ko
data_files:
- split: train
path: 20231101.ko/train-*
- config_name: 20231101.koi
data_files:
- split: train
path: 20231101.koi/train-*
- config_name: 20231101.krc
data_files:
- split: train
path: 20231101.krc/train-*
- config_name: 20231101.ks
data_files:
- split: train
path: 20231101.ks/train-*
- config_name: 20231101.ksh
data_files:
- split: train
path: 20231101.ksh/train-*
- config_name: 20231101.ku
data_files:
- split: train
path: 20231101.ku/train-*
- config_name: 20231101.kv
data_files:
- split: train
path: 20231101.kv/train-*
- config_name: 20231101.kw
data_files:
- split: train
path: 20231101.kw/train-*
- config_name: 20231101.ky
data_files:
- split: train
path: 20231101.ky/train-*
- config_name: 20231101.la
data_files:
- split: train
path: 20231101.la/train-*
- config_name: 20231101.lad
data_files:
- split: train
path: 20231101.lad/train-*
- config_name: 20231101.lb
data_files:
- split: train
path: 20231101.lb/train-*
- config_name: 20231101.lbe
data_files:
- split: train
path: 20231101.lbe/train-*
- config_name: 20231101.lez
data_files:
- split: train
path: 20231101.lez/train-*
- config_name: 20231101.lfn
data_files:
- split: train
path: 20231101.lfn/train-*
- config_name: 20231101.lg
data_files:
- split: train
path: 20231101.lg/train-*
- config_name: 20231101.li
data_files:
- split: train
path: 20231101.li/train-*
- config_name: 20231101.lij
data_files:
- split: train
path: 20231101.lij/train-*
- config_name: 20231101.lld
data_files:
- split: train
path: 20231101.lld/train-*
- config_name: 20231101.lmo
data_files:
- split: train
path: 20231101.lmo/train-*
- config_name: 20231101.ln
data_files:
- split: train
path: 20231101.ln/train-*
- config_name: 20231101.lo
data_files:
- split: train
path: 20231101.lo/train-*
- config_name: 20231101.lt
data_files:
- split: train
path: 20231101.lt/train-*
- config_name: 20231101.ltg
data_files:
- split: train
path: 20231101.ltg/train-*
- config_name: 20231101.lv
data_files:
- split: train
path: 20231101.lv/train-*
- config_name: 20231101.mad
data_files:
- split: train
path: 20231101.mad/train-*
- config_name: 20231101.mai
data_files:
- split: train
path: 20231101.mai/train-*
- config_name: 20231101.map-bms
data_files:
- split: train
path: 20231101.map-bms/train-*
- config_name: 20231101.mdf
data_files:
- split: train
path: 20231101.mdf/train-*
- config_name: 20231101.mg
data_files:
- split: train
path: 20231101.mg/train-*
- config_name: 20231101.mhr
data_files:
- split: train
path: 20231101.mhr/train-*
- config_name: 20231101.mi
data_files:
- split: train
path: 20231101.mi/train-*
- config_name: 20231101.min
data_files:
- split: train
path: 20231101.min/train-*
- config_name: 20231101.mk
data_files:
- split: train
path: 20231101.mk/train-*
- config_name: 20231101.ml
data_files:
- split: train
path: 20231101.ml/train-*
- config_name: 20231101.mn
data_files:
- split: train
path: 20231101.mn/train-*
- config_name: 20231101.mni
data_files:
- split: train
path: 20231101.mni/train-*
- config_name: 20231101.mnw
data_files:
- split: train
path: 20231101.mnw/train-*
- config_name: 20231101.mr
data_files:
- split: train
path: 20231101.mr/train-*
- config_name: 20231101.mrj
data_files:
- split: train
path: 20231101.mrj/train-*
- config_name: 20231101.ms
data_files:
- split: train
path: 20231101.ms/train-*
- config_name: 20231101.mt
data_files:
- split: train
path: 20231101.mt/train-*
- config_name: 20231101.mwl
data_files:
- split: train
path: 20231101.mwl/train-*
- config_name: 20231101.my
data_files:
- split: train
path: 20231101.my/train-*
- config_name: 20231101.myv
data_files:
- split: train
path: 20231101.myv/train-*
- config_name: 20231101.mzn
data_files:
- split: train
path: 20231101.mzn/train-*
- config_name: 20231101.nah
data_files:
- split: train
path: 20231101.nah/train-*
- config_name: 20231101.nap
data_files:
- split: train
path: 20231101.nap/train-*
- config_name: 20231101.nds
data_files:
- split: train
path: 20231101.nds/train-*
- config_name: 20231101.nds-nl
data_files:
- split: train
path: 20231101.nds-nl/train-*
- config_name: 20231101.ne
data_files:
- split: train
path: 20231101.ne/train-*
- config_name: 20231101.new
data_files:
- split: train
path: 20231101.new/train-*
- config_name: 20231101.nia
data_files:
- split: train
path: 20231101.nia/train-*
- config_name: 20231101.nl
data_files:
- split: train
path: 20231101.nl/train-*
- config_name: 20231101.nn
data_files:
- split: train
path: 20231101.nn/train-*
- config_name: 20231101.no
data_files:
- split: train
path: 20231101.no/train-*
- config_name: 20231101.nov
data_files:
- split: train
path: 20231101.nov/train-*
- config_name: 20231101.nqo
data_files:
- split: train
path: 20231101.nqo/train-*
- config_name: 20231101.nrm
data_files:
- split: train
path: 20231101.nrm/train-*
- config_name: 20231101.nso
data_files:
- split: train
path: 20231101.nso/train-*
- config_name: 20231101.nv
data_files:
- split: train
path: 20231101.nv/train-*
- config_name: 20231101.ny
data_files:
- split: train
path: 20231101.ny/train-*
- config_name: 20231101.oc
data_files:
- split: train
path: 20231101.oc/train-*
- config_name: 20231101.olo
data_files:
- split: train
path: 20231101.olo/train-*
- config_name: 20231101.om
data_files:
- split: train
path: 20231101.om/train-*
- config_name: 20231101.or
data_files:
- split: train
path: 20231101.or/train-*
- config_name: 20231101.os
data_files:
- split: train
path: 20231101.os/train-*
- config_name: 20231101.pa
data_files:
- split: train
path: 20231101.pa/train-*
- config_name: 20231101.pag
data_files:
- split: train
path: 20231101.pag/train-*
- config_name: 20231101.pam
data_files:
- split: train
path: 20231101.pam/train-*
- config_name: 20231101.pap
data_files:
- split: train
path: 20231101.pap/train-*
- config_name: 20231101.pcd
data_files:
- split: train
path: 20231101.pcd/train-*
- config_name: 20231101.pcm
data_files:
- split: train
path: 20231101.pcm/train-*
- config_name: 20231101.pdc
data_files:
- split: train
path: 20231101.pdc/train-*
- config_name: 20231101.pfl
data_files:
- split: train
path: 20231101.pfl/train-*
- config_name: 20231101.pi
data_files:
- split: train
path: 20231101.pi/train-*
- config_name: 20231101.pih
data_files:
- split: train
path: 20231101.pih/train-*
- config_name: 20231101.pl
data_files:
- split: train
path: 20231101.pl/train-*
- config_name: 20231101.pms
data_files:
- split: train
path: 20231101.pms/train-*
- config_name: 20231101.pnb
data_files:
- split: train
path: 20231101.pnb/train-*
- config_name: 20231101.pnt
data_files:
- split: train
path: 20231101.pnt/train-*
- config_name: 20231101.ps
data_files:
- split: train
path: 20231101.ps/train-*
- config_name: 20231101.pt
data_files:
- split: train
path: 20231101.pt/train-*
- config_name: 20231101.pwn
data_files:
- split: train
path: 20231101.pwn/train-*
- config_name: 20231101.qu
data_files:
- split: train
path: 20231101.qu/train-*
- config_name: 20231101.rm
data_files:
- split: train
path: 20231101.rm/train-*
- config_name: 20231101.rmy
data_files:
- split: train
path: 20231101.rmy/train-*
- config_name: 20231101.rn
data_files:
- split: train
path: 20231101.rn/train-*
- config_name: 20231101.ro
data_files:
- split: train
path: 20231101.ro/train-*
- config_name: 20231101.roa-rup
data_files:
- split: train
path: 20231101.roa-rup/train-*
- config_name: 20231101.roa-tara
data_files:
- split: train
path: 20231101.roa-tara/train-*
- config_name: 20231101.ru
data_files:
- split: train
path: 20231101.ru/train-*
- config_name: 20231101.rue
data_files:
- split: train
path: 20231101.rue/train-*
- config_name: 20231101.rw
data_files:
- split: train
path: 20231101.rw/train-*
- config_name: 20231101.sa
data_files:
- split: train
path: 20231101.sa/train-*
- config_name: 20231101.sah
data_files:
- split: train
path: 20231101.sah/train-*
- config_name: 20231101.sat
data_files:
- split: train
path: 20231101.sat/train-*
- config_name: 20231101.sc
data_files:
- split: train
path: 20231101.sc/train-*
- config_name: 20231101.scn
data_files:
- split: train
path: 20231101.scn/train-*
- config_name: 20231101.sco
data_files:
- split: train
path: 20231101.sco/train-*
- config_name: 20231101.sd
data_files:
- split: train
path: 20231101.sd/train-*
- config_name: 20231101.se
data_files:
- split: train
path: 20231101.se/train-*
- config_name: 20231101.sg
data_files:
- split: train
path: 20231101.sg/train-*
- config_name: 20231101.sh
data_files:
- split: train
path: 20231101.sh/train-*
- config_name: 20231101.shi
data_files:
- split: train
path: 20231101.shi/train-*
- config_name: 20231101.shn
data_files:
- split: train
path: 20231101.shn/train-*
- config_name: 20231101.si
data_files:
- split: train
path: 20231101.si/train-*
- config_name: 20231101.simple
data_files:
- split: train
path: 20231101.simple/train-*
- config_name: 20231101.sk
data_files:
- split: train
path: 20231101.sk/train-*
- config_name: 20231101.skr
data_files:
- split: train
path: 20231101.skr/train-*
- config_name: 20231101.sl
data_files:
- split: train
path: 20231101.sl/train-*
- config_name: 20231101.sm
data_files:
- split: train
path: 20231101.sm/train-*
- config_name: 20231101.smn
data_files:
- split: train
path: 20231101.smn/train-*
- config_name: 20231101.sn
data_files:
- split: train
path: 20231101.sn/train-*
- config_name: 20231101.so
data_files:
- split: train
path: 20231101.so/train-*
- config_name: 20231101.sq
data_files:
- split: train
path: 20231101.sq/train-*
- config_name: 20231101.sr
data_files:
- split: train
path: 20231101.sr/train-*
- config_name: 20231101.srn
data_files:
- split: train
path: 20231101.srn/train-*
- config_name: 20231101.ss
data_files:
- split: train
path: 20231101.ss/train-*
- config_name: 20231101.st
data_files:
- split: train
path: 20231101.st/train-*
- config_name: 20231101.stq
data_files:
- split: train
path: 20231101.stq/train-*
- config_name: 20231101.su
data_files:
- split: train
path: 20231101.su/train-*
- config_name: 20231101.sv
data_files:
- split: train
path: 20231101.sv/train-*
- config_name: 20231101.sw
data_files:
- split: train
path: 20231101.sw/train-*
- config_name: 20231101.szl
data_files:
- split: train
path: 20231101.szl/train-*
- config_name: 20231101.szy
data_files:
- split: train
path: 20231101.szy/train-*
- config_name: 20231101.ta
data_files:
- split: train
path: 20231101.ta/train-*
- config_name: 20231101.tay
data_files:
- split: train
path: 20231101.tay/train-*
- config_name: 20231101.tcy
data_files:
- split: train
path: 20231101.tcy/train-*
- config_name: 20231101.te
data_files:
- split: train
path: 20231101.te/train-*
- config_name: 20231101.tet
data_files:
- split: train
path: 20231101.tet/train-*
- config_name: 20231101.tg
data_files:
- split: train
path: 20231101.tg/train-*
- config_name: 20231101.th
data_files:
- split: train
path: 20231101.th/train-*
- config_name: 20231101.ti
data_files:
- split: train
path: 20231101.ti/train-*
- config_name: 20231101.tk
data_files:
- split: train
path: 20231101.tk/train-*
- config_name: 20231101.tl
data_files:
- split: train
path: 20231101.tl/train-*
- config_name: 20231101.tly
data_files:
- split: train
path: 20231101.tly/train-*
- config_name: 20231101.tn
data_files:
- split: train
path: 20231101.tn/train-*
- config_name: 20231101.to
data_files:
- split: train
path: 20231101.to/train-*
- config_name: 20231101.tpi
data_files:
- split: train
path: 20231101.tpi/train-*
- config_name: 20231101.tr
data_files:
- split: train
path: 20231101.tr/train-*
- config_name: 20231101.trv
data_files:
- split: train
path: 20231101.trv/train-*
- config_name: 20231101.ts
data_files:
- split: train
path: 20231101.ts/train-*
- config_name: 20231101.tt
data_files:
- split: train
path: 20231101.tt/train-*
- config_name: 20231101.tum
data_files:
- split: train
path: 20231101.tum/train-*
- config_name: 20231101.tw
data_files:
- split: train
path: 20231101.tw/train-*
- config_name: 20231101.ty
data_files:
- split: train
path: 20231101.ty/train-*
- config_name: 20231101.tyv
data_files:
- split: train
path: 20231101.tyv/train-*
- config_name: 20231101.udm
data_files:
- split: train
path: 20231101.udm/train-*
- config_name: 20231101.ug
data_files:
- split: train
path: 20231101.ug/train-*
- config_name: 20231101.uk
data_files:
- split: train
path: 20231101.uk/train-*
- config_name: 20231101.ur
data_files:
- split: train
path: 20231101.ur/train-*
- config_name: 20231101.uz
data_files:
- split: train
path: 20231101.uz/train-*
- config_name: 20231101.ve
data_files:
- split: train
path: 20231101.ve/train-*
- config_name: 20231101.vec
data_files:
- split: train
path: 20231101.vec/train-*
- config_name: 20231101.vep
data_files:
- split: train
path: 20231101.vep/train-*
- config_name: 20231101.vi
data_files:
- split: train
path: 20231101.vi/train-*
- config_name: 20231101.vls
data_files:
- split: train
path: 20231101.vls/train-*
- config_name: 20231101.vo
data_files:
- split: train
path: 20231101.vo/train-*
- config_name: 20231101.wa
data_files:
- split: train
path: 20231101.wa/train-*
- config_name: 20231101.war
data_files:
- split: train
path: 20231101.war/train-*
- config_name: 20231101.wo
data_files:
- split: train
path: 20231101.wo/train-*
- config_name: 20231101.wuu
data_files:
- split: train
path: 20231101.wuu/train-*
- config_name: 20231101.xal
data_files:
- split: train
path: 20231101.xal/train-*
- config_name: 20231101.xh
data_files:
- split: train
path: 20231101.xh/train-*
- config_name: 20231101.xmf
data_files:
- split: train
path: 20231101.xmf/train-*
- config_name: 20231101.yi
data_files:
- split: train
path: 20231101.yi/train-*
- config_name: 20231101.yo
data_files:
- split: train
path: 20231101.yo/train-*
- config_name: 20231101.za
data_files:
- split: train
path: 20231101.za/train-*
- config_name: 20231101.zea
data_files:
- split: train
path: 20231101.zea/train-*
- config_name: 20231101.zh
data_files:
- split: train
path: 20231101.zh/train-*
- config_name: 20231101.zh-classical
data_files:
- split: train
path: 20231101.zh-classical/train-*
- config_name: 20231101.zh-min-nan
data_files:
- split: train
path: 20231101.zh-min-nan/train-*
- config_name: 20231101.zh-yue
data_files:
- split: train
path: 20231101.zh-yue/train-*
- config_name: 20231101.zu
data_files:
- split: train
path: 20231101.zu/train-*
dataset_info:
- config_name: 20231101.ab
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4334455
num_examples: 6152
download_size: 1237796
dataset_size: 4334455
- config_name: 20231101.ace
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5065801
num_examples: 13003
download_size: 1574258
dataset_size: 5065801
- config_name: 20231101.ady
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 765030
num_examples: 706
download_size: 347450
dataset_size: 765030
- config_name: 20231101.af
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 226672176
num_examples: 112518
download_size: 124485544
dataset_size: 226672176
- config_name: 20231101.als
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 81450196
num_examples: 30013
download_size: 49452211
dataset_size: 81450196
- config_name: 20231101.alt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6819963
num_examples: 1087
download_size: 2910477
dataset_size: 6819963
- config_name: 20231101.am
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24218002
num_examples: 13906
download_size: 10720027
dataset_size: 24218002
- config_name: 20231101.ami
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4460174
num_examples: 1628
download_size: 2261859
dataset_size: 4460174
- config_name: 20231101.an
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 57572050
num_examples: 44249
download_size: 29573020
dataset_size: 57572050
- config_name: 20231101.ang
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2913906
num_examples: 4121
download_size: 1789811
dataset_size: 2913906
- config_name: 20231101.anp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9226211
num_examples: 2749
download_size: 3355979
dataset_size: 9226211
- config_name: 20231101.ar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3124486159
num_examples: 1219201
download_size: 1323304271
dataset_size: 3124486159
- config_name: 20231101.arc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 849731
num_examples: 1936
download_size: 369584
dataset_size: 849731
- config_name: 20231101.ary
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12049878
num_examples: 8087
download_size: 4672257
dataset_size: 12049878
- config_name: 20231101.arz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1402294447
num_examples: 1620194
download_size: 317231585
dataset_size: 1402294447
- config_name: 20231101.as
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 90312333
num_examples: 12338
download_size: 34581561
dataset_size: 90312333
- config_name: 20231101.ast
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 470575521
num_examples: 133419
download_size: 271196430
dataset_size: 470575521
- config_name: 20231101.atj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1012467
num_examples: 1971
download_size: 513962
dataset_size: 1012467
- config_name: 20231101.av
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6084045
num_examples: 3426
download_size: 2573436
dataset_size: 6084045
- config_name: 20231101.avk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 32119428
num_examples: 28353
download_size: 7984474
dataset_size: 32119428
- config_name: 20231101.awa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3703396
num_examples: 3679
download_size: 1269824
dataset_size: 3703396
- config_name: 20231101.ay
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4395813
num_examples: 5384
download_size: 1756131
dataset_size: 4395813
- config_name: 20231101.az
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 433663157
num_examples: 196158
download_size: 230064038
dataset_size: 433663157
- config_name: 20231101.azb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 187041147
num_examples: 243376
download_size: 46739926
dataset_size: 187041147
- config_name: 20231101.ba
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 297738837
num_examples: 63319
download_size: 122595805
dataset_size: 297738837
- config_name: 20231101.ban
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18012727
num_examples: 20986
download_size: 6715876
dataset_size: 18012727
- config_name: 20231101.bar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36317102
num_examples: 27096
download_size: 21799389
dataset_size: 36317102
- config_name: 20231101.bat-smg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7212849
num_examples: 17221
download_size: 3348765
dataset_size: 7212849
- config_name: 20231101.bcl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 20394331
num_examples: 15743
download_size: 11369234
dataset_size: 20394331
- config_name: 20231101.be
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 624718980
num_examples: 236165
download_size: 284921288
dataset_size: 624718980
- config_name: 20231101.be-x-old
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 252510447
num_examples: 84361
download_size: 114318588
dataset_size: 252510447
- config_name: 20231101.bg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1103334425
num_examples: 294275
download_size: 512344058
dataset_size: 1103334425
- config_name: 20231101.bh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16675295
num_examples: 8612
download_size: 5880458
dataset_size: 16675295
- config_name: 20231101.bi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 404249
num_examples: 1548
download_size: 203610
dataset_size: 404249
- config_name: 20231101.bjn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6884860
num_examples: 10519
download_size: 3323032
dataset_size: 6884860
- config_name: 20231101.blk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 26566991
num_examples: 2946
download_size: 8028430
dataset_size: 26566991
- config_name: 20231101.bm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 623659
num_examples: 1258
download_size: 343812
dataset_size: 623659
- config_name: 20231101.bn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 962624238
num_examples: 143069
download_size: 343885999
dataset_size: 962624238
- config_name: 20231101.bo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 132723880
num_examples: 12881
download_size: 38851784
dataset_size: 132723880
- config_name: 20231101.bpy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42975314
num_examples: 25165
download_size: 6568483
dataset_size: 42975314
- config_name: 20231101.br
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 85635744
num_examples: 84340
download_size: 49768597
dataset_size: 85635744
- config_name: 20231101.bs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 193734399
num_examples: 92596
download_size: 107858627
dataset_size: 193734399
- config_name: 20231101.bug
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3434889
num_examples: 15880
download_size: 817034
dataset_size: 3434889
- config_name: 20231101.bxr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6687172
num_examples: 2791
download_size: 3078699
dataset_size: 6687172
- config_name: 20231101.ca
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1958810542
num_examples: 737409
download_size: 1116799343
dataset_size: 1958810542
- config_name: 20231101.cbk-zam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2061944
num_examples: 3285
download_size: 825899
dataset_size: 2061944
- config_name: 20231101.cdo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5109207
num_examples: 16449
download_size: 1982914
dataset_size: 5109207
- config_name: 20231101.ce
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 730387049
num_examples: 601271
download_size: 88393330
dataset_size: 730387049
- config_name: 20231101.ceb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4568256711
num_examples: 6122708
download_size: 828085216
dataset_size: 4568256711
- config_name: 20231101.ch
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 178002
num_examples: 576
download_size: 89277
dataset_size: 178002
- config_name: 20231101.chr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 767618
num_examples: 1113
download_size: 343140
dataset_size: 767618
- config_name: 20231101.chy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 148139
num_examples: 802
download_size: 75865
dataset_size: 148139
- config_name: 20231101.ckb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 107150420
num_examples: 52024
download_size: 42964544
dataset_size: 107150420
- config_name: 20231101.co
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11104243
num_examples: 7799
download_size: 5794731
dataset_size: 11104243
- config_name: 20231101.cr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 57257
num_examples: 187
download_size: 36081
dataset_size: 57257
- config_name: 20231101.crh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9689171
num_examples: 27691
download_size: 3654461
dataset_size: 9689171
- config_name: 20231101.cs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1566286962
num_examples: 534044
download_size: 976484249
dataset_size: 1566286962
- config_name: 20231101.csb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3748643
num_examples: 5480
download_size: 2055233
dataset_size: 3748643
- config_name: 20231101.cu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 981592
num_examples: 1235
download_size: 398252
dataset_size: 981592
- config_name: 20231101.cv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 81873026
num_examples: 51863
download_size: 29640641
dataset_size: 81873026
- config_name: 20231101.cy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 305837783
num_examples: 279455
download_size: 112257456
dataset_size: 305837783
- config_name: 20231101.da
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 547068330
num_examples: 295347
download_size: 327688122
dataset_size: 547068330
- config_name: 20231101.dag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21618973
num_examples: 10071
download_size: 9026986
dataset_size: 21618973
- config_name: 20231101.de
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9622925305
num_examples: 2845308
download_size: 5771317942
dataset_size: 9622925305
- config_name: 20231101.din
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 564398
num_examples: 512
download_size: 340530
dataset_size: 564398
- config_name: 20231101.diq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19671441
num_examples: 41775
download_size: 7616839
dataset_size: 19671441
- config_name: 20231101.dsb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3315228
num_examples: 3379
download_size: 1931937
dataset_size: 3315228
- config_name: 20231101.dty
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7030648
num_examples: 3632
download_size: 2521250
dataset_size: 7030648
- config_name: 20231101.dv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13934393
num_examples: 4352
download_size: 5283133
dataset_size: 13934393
- config_name: 20231101.dz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8855969
num_examples: 788
download_size: 2583520
dataset_size: 8855969
- config_name: 20231101.ee
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 898491
num_examples: 1181
download_size: 492813
dataset_size: 898491
- config_name: 20231101.el
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1345589075
num_examples: 226834
download_size: 637372489
dataset_size: 1345589075
- config_name: 20231101.eml
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3625415
num_examples: 12961
download_size: 1689575
dataset_size: 3625415
- config_name: 20231101.en
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 20200062385
num_examples: 6407814
download_size: 11630929031
dataset_size: 20200062385
- config_name: 20231101.eo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 523113804
num_examples: 344851
download_size: 297738138
dataset_size: 523113804
- config_name: 20231101.es
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6033536133
num_examples: 1841155
download_size: 3493595869
dataset_size: 6033536133
- config_name: 20231101.et
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 440177170
num_examples: 240397
download_size: 265444734
dataset_size: 440177170
- config_name: 20231101.eu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 565567318
num_examples: 416347
download_size: 270355505
dataset_size: 565567318
- config_name: 20231101.ext
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4389633
num_examples: 3785
download_size: 2761099
dataset_size: 4389633
- config_name: 20231101.fa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1899154938
num_examples: 979869
download_size: 759368283
dataset_size: 1899154938
- config_name: 20231101.fat
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2032812
num_examples: 1122
download_size: 1124684
dataset_size: 2032812
- config_name: 20231101.ff
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1867995
num_examples: 2419
download_size: 1087702
dataset_size: 1867995
- config_name: 20231101.fi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1146146663
num_examples: 561598
download_size: 680512230
dataset_size: 1146146663
- config_name: 20231101.fiu-vro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4636361
num_examples: 6590
download_size: 2434159
dataset_size: 4636361
- config_name: 20231101.fj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 604791
num_examples: 1294
download_size: 328059
dataset_size: 604791
- config_name: 20231101.fo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15415249
num_examples: 14080
download_size: 8857239
dataset_size: 15415249
- config_name: 20231101.fon
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 592216
num_examples: 705
download_size: 317444
dataset_size: 592216
- config_name: 20231101.fr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8065794826
num_examples: 2564646
download_size: 4614488286
dataset_size: 8065794826
- config_name: 20231101.frp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3676441
num_examples: 5766
download_size: 1914046
dataset_size: 3676441
- config_name: 20231101.frr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10819914
num_examples: 18666
download_size: 5317694
dataset_size: 10819914
- config_name: 20231101.fur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4090412
num_examples: 4001
download_size: 2421238
dataset_size: 4090412
- config_name: 20231101.fy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 134196708
num_examples: 52416
download_size: 76002257
dataset_size: 134196708
- config_name: 20231101.ga
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 60640820
num_examples: 59156
download_size: 34136733
dataset_size: 60640820
- config_name: 20231101.gag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2428849
num_examples: 2968
download_size: 1331866
dataset_size: 2428849
- config_name: 20231101.gan
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2915229
num_examples: 6743
download_size: 1508844
dataset_size: 2915229
- config_name: 20231101.gcr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2338277
num_examples: 2399
download_size: 1345482
dataset_size: 2338277
- config_name: 20231101.gd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14051607
num_examples: 15979
download_size: 7190137
dataset_size: 14051607
- config_name: 20231101.gl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 493905881
num_examples: 200092
download_size: 291104907
dataset_size: 493905881
- config_name: 20231101.glk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6086185
num_examples: 7049
download_size: 2382997
dataset_size: 6086185
- config_name: 20231101.gn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6921948
num_examples: 5519
download_size: 3806548
dataset_size: 6921948
- config_name: 20231101.gom
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30889533
num_examples: 4259
download_size: 11306217
dataset_size: 30889533
- config_name: 20231101.gor
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6369540
num_examples: 15359
download_size: 2101154
dataset_size: 6369540
- config_name: 20231101.got
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1533770
num_examples: 1013
download_size: 636307
dataset_size: 1533770
- config_name: 20231101.gpe
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2017667
num_examples: 1110
download_size: 1141261
dataset_size: 2017667
- config_name: 20231101.gu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 121282557
num_examples: 30445
download_size: 39554078
dataset_size: 121282557
- config_name: 20231101.guc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 978923
num_examples: 679
download_size: 578311
dataset_size: 978923
- config_name: 20231101.gur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2325435
num_examples: 1383
download_size: 1068954
dataset_size: 2325435
- config_name: 20231101.guw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1913143
num_examples: 1312
download_size: 1042328
dataset_size: 1913143
- config_name: 20231101.gv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6307253
num_examples: 6206
download_size: 3347095
dataset_size: 6307253
- config_name: 20231101.ha
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 77906472
num_examples: 36492
download_size: 43131815
dataset_size: 77906472
- config_name: 20231101.hak
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4523680
num_examples: 10246
download_size: 1878558
dataset_size: 4523680
- config_name: 20231101.haw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1677790
num_examples: 2612
download_size: 696781
dataset_size: 1677790
- config_name: 20231101.he
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1950200381
num_examples: 333874
download_size: 979183998
dataset_size: 1950200381
- config_name: 20231101.hi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 672817362
num_examples: 163093
download_size: 237834604
dataset_size: 672817362
- config_name: 20231101.hif
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5685329
num_examples: 10986
download_size: 2715682
dataset_size: 5685329
- config_name: 20231101.hr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 443636903
num_examples: 202848
download_size: 275245343
dataset_size: 443636903
- config_name: 20231101.hsb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15667118
num_examples: 13957
download_size: 7437491
dataset_size: 15667118
- config_name: 20231101.ht
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 55088040
num_examples: 70159
download_size: 21993952
dataset_size: 55088040
- config_name: 20231101.hu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1515899113
num_examples: 532427
download_size: 904857314
dataset_size: 1515899113
- config_name: 20231101.hy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1179459973
num_examples: 303036
download_size: 490121120
dataset_size: 1179459973
- config_name: 20231101.hyw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 59564550
num_examples: 11725
download_size: 27450541
dataset_size: 59564550
- config_name: 20231101.ia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16409449
num_examples: 28247
download_size: 8237640
dataset_size: 16409449
- config_name: 20231101.id
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1125928594
num_examples: 665622
download_size: 583801799
dataset_size: 1125928594
- config_name: 20231101.ie
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6737711
num_examples: 11877
download_size: 3019044
dataset_size: 6737711
- config_name: 20231101.ig
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 66086115
num_examples: 22908
download_size: 34663540
dataset_size: 66086115
- config_name: 20231101.ik
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 199773
num_examples: 846
download_size: 115758
dataset_size: 199773
- config_name: 20231101.ilo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16854494
num_examples: 15371
download_size: 7352572
dataset_size: 16854494
- config_name: 20231101.inh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2727253
num_examples: 2123
download_size: 1279524
dataset_size: 2727253
- config_name: 20231101.io
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 38735196
num_examples: 40930
download_size: 17106040
dataset_size: 38735196
- config_name: 20231101.is
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 87856729
num_examples: 57453
download_size: 52286137
dataset_size: 87856729
- config_name: 20231101.it
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4924856310
num_examples: 1833639
download_size: 2931265519
dataset_size: 4924856310
- config_name: 20231101.iu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 291185
num_examples: 562
download_size: 136987
dataset_size: 291185
- config_name: 20231101.ja
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7039610767
num_examples: 1389467
download_size: 3941998526
dataset_size: 7039610767
- config_name: 20231101.jam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1142348
num_examples: 1780
download_size: 702664
dataset_size: 1142348
- config_name: 20231101.jbo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2523538
num_examples: 1394
download_size: 890356
dataset_size: 2523538
- config_name: 20231101.jv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 72786688
num_examples: 73380
download_size: 36852134
dataset_size: 72786688
- config_name: 20231101.ka
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 699872960
num_examples: 169602
download_size: 239987665
dataset_size: 699872960
- config_name: 20231101.kaa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5139436
num_examples: 4074
download_size: 2913134
dataset_size: 5139436
- config_name: 20231101.kab
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4392542
num_examples: 5830
download_size: 2580584
dataset_size: 4392542
- config_name: 20231101.kbd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3014575
num_examples: 1670
download_size: 1304580
dataset_size: 3014575
- config_name: 20231101.kbp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3584563
num_examples: 1931
download_size: 1806400
dataset_size: 3584563
- config_name: 20231101.kcg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 914665
num_examples: 1151
download_size: 513904
dataset_size: 914665
- config_name: 20231101.kg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 390163
num_examples: 1329
download_size: 209059
dataset_size: 390163
- config_name: 20231101.ki
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 760980
num_examples: 1668
download_size: 427003
dataset_size: 760980
- config_name: 20231101.kk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 497917145
num_examples: 238615
download_size: 180750520
dataset_size: 497917145
- config_name: 20231101.kl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 313658
num_examples: 301
download_size: 193719
dataset_size: 313658
- config_name: 20231101.km
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 103252582
num_examples: 11994
download_size: 35567417
dataset_size: 103252582
- config_name: 20231101.kn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 402848197
num_examples: 31437
download_size: 147156434
dataset_size: 402848197
- config_name: 20231101.ko
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1412099944
num_examples: 647897
download_size: 782677061
dataset_size: 1412099944
- config_name: 20231101.koi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5103799
num_examples: 3504
download_size: 1888392
dataset_size: 5103799
- config_name: 20231101.krc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4589808
num_examples: 2100
download_size: 2022144
dataset_size: 4589808
- config_name: 20231101.ks
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2868186
num_examples: 4307
download_size: 1094458
dataset_size: 2868186
- config_name: 20231101.ksh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3117003
num_examples: 2945
download_size: 2009928
dataset_size: 3117003
- config_name: 20231101.ku
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 44523131
num_examples: 63076
download_size: 22938233
dataset_size: 44523131
- config_name: 20231101.kv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9245577
num_examples: 5595
download_size: 3690978
dataset_size: 9245577
- config_name: 20231101.kw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4687165
num_examples: 6995
download_size: 2711398
dataset_size: 4687165
- config_name: 20231101.ky
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 166911089
num_examples: 79438
download_size: 63947035
dataset_size: 166911089
- config_name: 20231101.la
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 141080163
num_examples: 138263
download_size: 76588430
dataset_size: 141080163
- config_name: 20231101.lad
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4901343
num_examples: 3663
download_size: 2754531
dataset_size: 4901343
- config_name: 20231101.lb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88826996
num_examples: 62414
download_size: 50515020
dataset_size: 88826996
- config_name: 20231101.lbe
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 745140
num_examples: 1279
download_size: 304394
dataset_size: 745140
- config_name: 20231101.lez
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9794637
num_examples: 4264
download_size: 3864848
dataset_size: 9794637
- config_name: 20231101.lfn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8870685
num_examples: 4832
download_size: 5207546
dataset_size: 8870685
- config_name: 20231101.lg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6891539
num_examples: 4048
download_size: 3708097
dataset_size: 6891539
- config_name: 20231101.li
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 29633678
num_examples: 14849
download_size: 17727918
dataset_size: 29633678
- config_name: 20231101.lij
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11448686
num_examples: 11203
download_size: 6255409
dataset_size: 11448686
- config_name: 20231101.lld
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 50163974
num_examples: 180677
download_size: 13866243
dataset_size: 50163974
- config_name: 20231101.lmo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43496783
num_examples: 73510
download_size: 19142356
dataset_size: 43496783
- config_name: 20231101.ln
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2035050
num_examples: 3534
download_size: 1122138
dataset_size: 2035050
- config_name: 20231101.lo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15283258
num_examples: 5014
download_size: 5646554
dataset_size: 15283258
- config_name: 20231101.lt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 336559824
num_examples: 211292
download_size: 194873569
dataset_size: 336559824
- config_name: 20231101.ltg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 915364
num_examples: 1070
download_size: 530299
dataset_size: 915364
- config_name: 20231101.lv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 227272112
num_examples: 123413
download_size: 129739227
dataset_size: 227272112
- config_name: 20231101.mad
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1596836
num_examples: 1192
download_size: 908630
dataset_size: 1596836
- config_name: 20231101.mai
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21562856
num_examples: 14714
download_size: 6180231
dataset_size: 21562856
- config_name: 20231101.map-bms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5341068
num_examples: 13580
download_size: 2377123
dataset_size: 5341068
- config_name: 20231101.mdf
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4694770
num_examples: 4257
download_size: 1725294
dataset_size: 4694770
- config_name: 20231101.mg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 73767229
num_examples: 96316
download_size: 22117304
dataset_size: 73767229
- config_name: 20231101.mhr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19249450
num_examples: 11347
download_size: 6902162
dataset_size: 19249450
- config_name: 20231101.mi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4169094
num_examples: 7919
download_size: 1044444
dataset_size: 4169094
- config_name: 20231101.min
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 118995918
num_examples: 227143
download_size: 25691303
dataset_size: 118995918
- config_name: 20231101.mk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 651422351
num_examples: 139559
download_size: 271265486
dataset_size: 651422351
- config_name: 20231101.ml
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 494135127
num_examples: 85791
download_size: 183071274
dataset_size: 494135127
- config_name: 20231101.mn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 91943210
num_examples: 24048
download_size: 41521786
dataset_size: 91943210
- config_name: 20231101.mni
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9820483
num_examples: 10894
download_size: 2208525
dataset_size: 9820483
- config_name: 20231101.mnw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 47237206
num_examples: 3295
download_size: 13765461
dataset_size: 47237206
- config_name: 20231101.mr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 261879018
num_examples: 94133
download_size: 81991233
dataset_size: 261879018
- config_name: 20231101.mrj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8732281
num_examples: 10542
download_size: 3283618
dataset_size: 8732281
- config_name: 20231101.ms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 423352360
num_examples: 368628
download_size: 210149264
dataset_size: 423352360
- config_name: 20231101.mt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 32009639
num_examples: 5743
download_size: 18686521
dataset_size: 32009639
- config_name: 20231101.mwl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19353725
num_examples: 4500
download_size: 11521563
dataset_size: 19353725
- config_name: 20231101.my
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 314417700
num_examples: 109310
download_size: 85497205
dataset_size: 314417700
- config_name: 20231101.myv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11145865
num_examples: 7958
download_size: 4600620
dataset_size: 11145865
- config_name: 20231101.mzn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16335757
num_examples: 18717
download_size: 5419390
dataset_size: 16335757
- config_name: 20231101.nah
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2503320
num_examples: 6218
download_size: 1191779
dataset_size: 2503320
- config_name: 20231101.nap
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6395706
num_examples: 14884
download_size: 3188122
dataset_size: 6395706
- config_name: 20231101.nds
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 92990126
num_examples: 84285
download_size: 48106879
dataset_size: 92990126
- config_name: 20231101.nds-nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13582403
num_examples: 7847
download_size: 8354427
dataset_size: 13582403
- config_name: 20231101.ne
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 109032486
num_examples: 32885
download_size: 37548833
dataset_size: 109032486
- config_name: 20231101.new
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 159095610
num_examples: 73003
download_size: 20517810
dataset_size: 159095610
- config_name: 20231101.nia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2117902
num_examples: 1714
download_size: 1086670
dataset_size: 2117902
- config_name: 20231101.nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2646316266
num_examples: 2135977
download_size: 1436843432
dataset_size: 2646316266
- config_name: 20231101.nn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 237467406
num_examples: 167653
download_size: 134751873
dataset_size: 237467406
- config_name: 20231101.no
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1033188011
num_examples: 617937
download_size: 590970350
dataset_size: 1033188011
- config_name: 20231101.nov
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 965640
num_examples: 1693
download_size: 493500
dataset_size: 965640
- config_name: 20231101.nqo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8261058
num_examples: 1580
download_size: 3508645
dataset_size: 8261058
- config_name: 20231101.nrm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3216817
num_examples: 4902
download_size: 1507257
dataset_size: 3216817
- config_name: 20231101.nso
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2796467
num_examples: 8650
download_size: 936349
dataset_size: 2796467
- config_name: 20231101.nv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16993060
num_examples: 22460
download_size: 3304031
dataset_size: 16993060
- config_name: 20231101.ny
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1691825
num_examples: 1129
download_size: 938621
dataset_size: 1691825
- config_name: 20231101.oc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 120092607
num_examples: 89101
download_size: 64043588
dataset_size: 120092607
- config_name: 20231101.olo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3173332
num_examples: 4640
download_size: 1724315
dataset_size: 3173332
- config_name: 20231101.om
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3604768
num_examples: 1970
download_size: 1982849
dataset_size: 3604768
- config_name: 20231101.or
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 75078226
num_examples: 17375
download_size: 26706212
dataset_size: 75078226
- config_name: 20231101.os
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13182881
num_examples: 17663
download_size: 5572799
dataset_size: 13182881
- config_name: 20231101.pa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 212972877
num_examples: 51423
download_size: 81452929
dataset_size: 212972877
- config_name: 20231101.pag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1391816
num_examples: 2665
download_size: 455808
dataset_size: 1391816
- config_name: 20231101.pam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8294902
num_examples: 9006
download_size: 4277038
dataset_size: 8294902
- config_name: 20231101.pap
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4251480
num_examples: 3520
download_size: 2435005
dataset_size: 4251480
- config_name: 20231101.pcd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5704321
num_examples: 5717
download_size: 3145572
dataset_size: 5704321
- config_name: 20231101.pcm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1886987
num_examples: 1238
download_size: 1160762
dataset_size: 1886987
- config_name: 20231101.pdc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1225978
num_examples: 2176
download_size: 698254
dataset_size: 1225978
- config_name: 20231101.pfl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3694464
num_examples: 2762
download_size: 1971214
dataset_size: 3694464
- config_name: 20231101.pi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1144100
num_examples: 3057
download_size: 200764
dataset_size: 1144100
- config_name: 20231101.pih
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 278139
num_examples: 934
download_size: 177092
dataset_size: 278139
- config_name: 20231101.pl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2950148809
num_examples: 1587721
download_size: 1765059986
dataset_size: 2950148809
- config_name: 20231101.pms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 34340217
num_examples: 67980
download_size: 12008880
dataset_size: 34340217
- config_name: 20231101.pnb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 304117649
num_examples: 72307
download_size: 133266242
dataset_size: 304117649
- config_name: 20231101.pnt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 630636
num_examples: 533
download_size: 275639
dataset_size: 630636
- config_name: 20231101.ps
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 114259737
num_examples: 20529
download_size: 53312545
dataset_size: 114259737
- config_name: 20231101.pt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2758783436
num_examples: 1112246
download_size: 1579641059
dataset_size: 2758783436
- config_name: 20231101.pwn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 811954
num_examples: 408
download_size: 444109
dataset_size: 811954
- config_name: 20231101.qu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16828457
num_examples: 24196
download_size: 7688106
dataset_size: 16828457
- config_name: 20231101.rm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18053014
num_examples: 3822
download_size: 10483970
dataset_size: 18053014
- config_name: 20231101.rmy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 611778
num_examples: 1279
download_size: 356457
dataset_size: 611778
- config_name: 20231101.rn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 530318
num_examples: 819
download_size: 301252
dataset_size: 530318
- config_name: 20231101.ro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 847410736
num_examples: 442389
download_size: 466937380
dataset_size: 847410736
- config_name: 20231101.roa-rup
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1687829
num_examples: 1432
download_size: 951677
dataset_size: 1687829
- config_name: 20231101.roa-tara
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7470331
num_examples: 9367
download_size: 4003095
dataset_size: 7470331
- config_name: 20231101.ru
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10277958919
num_examples: 1945063
download_size: 4876849588
dataset_size: 10277958919
- config_name: 20231101.rue
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13128572
num_examples: 8759
download_size: 6346106
dataset_size: 13128572
- config_name: 20231101.rw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11898854
num_examples: 8063
download_size: 6623388
dataset_size: 11898854
- config_name: 20231101.sa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 69854997
num_examples: 12156
download_size: 23850161
dataset_size: 69854997
- config_name: 20231101.sah
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 48562374
num_examples: 17098
download_size: 21675888
dataset_size: 48562374
- config_name: 20231101.sat
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 45247783
num_examples: 9767
download_size: 15428584
dataset_size: 45247783
- config_name: 20231101.sc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12776438
num_examples: 7586
download_size: 7711996
dataset_size: 12776438
- config_name: 20231101.scn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17685098
num_examples: 26530
download_size: 10223816
dataset_size: 17685098
- config_name: 20231101.sco
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42808738
num_examples: 35276
download_size: 24287944
dataset_size: 42808738
- config_name: 20231101.sd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37021659
num_examples: 16928
download_size: 17591997
dataset_size: 37021659
- config_name: 20231101.se
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3600527
num_examples: 8043
download_size: 1816006
dataset_size: 3600527
- config_name: 20231101.sg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 140127
num_examples: 564
download_size: 72486
dataset_size: 140127
- config_name: 20231101.sh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 569225870
num_examples: 458392
download_size: 266379293
dataset_size: 569225870
- config_name: 20231101.shi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2369002
num_examples: 1779
download_size: 1359828
dataset_size: 2369002
- config_name: 20231101.shn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33553593
num_examples: 13945
download_size: 8163231
dataset_size: 33553593
- config_name: 20231101.si
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 138806443
num_examples: 23065
download_size: 54229127
dataset_size: 138806443
- config_name: 20231101.simple
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 291254232
num_examples: 241787
download_size: 156885218
dataset_size: 291254232
- config_name: 20231101.sk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 416804817
num_examples: 242235
download_size: 239513292
dataset_size: 416804817
- config_name: 20231101.skr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22705446
num_examples: 5819
download_size: 9978607
dataset_size: 22705446
- config_name: 20231101.sl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 454829910
num_examples: 183006
download_size: 267485569
dataset_size: 454829910
- config_name: 20231101.sm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 902927
num_examples: 1151
download_size: 492349
dataset_size: 902927
- config_name: 20231101.smn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5764244
num_examples: 5383
download_size: 2813872
dataset_size: 5764244
- config_name: 20231101.sn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9790528
num_examples: 11621
download_size: 4979456
dataset_size: 9790528
- config_name: 20231101.so
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13663784
num_examples: 9021
download_size: 7940363
dataset_size: 13663784
- config_name: 20231101.sq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 208779652
num_examples: 104854
download_size: 116945494
dataset_size: 208779652
- config_name: 20231101.sr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1721596392
num_examples: 676605
download_size: 697391786
dataset_size: 1721596392
- config_name: 20231101.srn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 649317
num_examples: 1219
download_size: 215103
dataset_size: 649317
- config_name: 20231101.ss
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1076102
num_examples: 945
download_size: 600997
dataset_size: 1076102
- config_name: 20231101.st
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 968161
num_examples: 1099
download_size: 530165
dataset_size: 968161
- config_name: 20231101.stq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4942784
num_examples: 4134
download_size: 2884429
dataset_size: 4942784
- config_name: 20231101.su
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 48066965
num_examples: 61555
download_size: 19806020
dataset_size: 48066965
- config_name: 20231101.sv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2153690744
num_examples: 2574513
download_size: 974261228
dataset_size: 2153690744
- config_name: 20231101.sw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 73119299
num_examples: 78587
download_size: 35936177
dataset_size: 73119299
- config_name: 20231101.szl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21439309
num_examples: 57035
download_size: 7347967
dataset_size: 21439309
- config_name: 20231101.szy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11355780
num_examples: 4885
download_size: 6192815
dataset_size: 11355780
- config_name: 20231101.ta
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 810734099
num_examples: 160651
download_size: 265652020
dataset_size: 810734099
- config_name: 20231101.tay
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2974229
num_examples: 2747
download_size: 1232811
dataset_size: 2974229
- config_name: 20231101.tcy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12166612
num_examples: 2202
download_size: 4611006
dataset_size: 12166612
- config_name: 20231101.te
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 730376585
num_examples: 87854
download_size: 215097076
dataset_size: 730376585
- config_name: 20231101.tet
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1466200
num_examples: 1468
download_size: 744390
dataset_size: 1466200
- config_name: 20231101.tg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 148256281
num_examples: 110962
download_size: 49825647
dataset_size: 148256281
- config_name: 20231101.th
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1014547923
num_examples: 159719
download_size: 371916105
dataset_size: 1014547923
- config_name: 20231101.ti
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 729995
num_examples: 435
download_size: 363723
dataset_size: 729995
- config_name: 20231101.tk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13326412
num_examples: 7918
download_size: 7383654
dataset_size: 13326412
- config_name: 20231101.tl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 85794472
num_examples: 45341
download_size: 45797527
dataset_size: 85794472
- config_name: 20231101.tly
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2590482
num_examples: 8086
download_size: 1070456
dataset_size: 2590482
- config_name: 20231101.tn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4380768
num_examples: 1585
download_size: 1708110
dataset_size: 4380768
- config_name: 20231101.to
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1090611
num_examples: 1887
download_size: 518244
dataset_size: 1090611
- config_name: 20231101.tpi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 460420
num_examples: 1399
download_size: 241908
dataset_size: 460420
- config_name: 20231101.tr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 997254242
num_examples: 534988
download_size: 552923659
dataset_size: 997254242
- config_name: 20231101.trv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4971204
num_examples: 1880
download_size: 2706664
dataset_size: 4971204
- config_name: 20231101.ts
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 847032
num_examples: 785
download_size: 455648
dataset_size: 847032
- config_name: 20231101.tt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 681325421
num_examples: 501116
download_size: 129141056
dataset_size: 681325421
- config_name: 20231101.tum
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13429984
num_examples: 18708
download_size: 5459856
dataset_size: 13429984
- config_name: 20231101.tw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7982767
num_examples: 3978
download_size: 4118530
dataset_size: 7982767
- config_name: 20231101.ty
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 338743
num_examples: 1355
download_size: 150963
dataset_size: 338743
- config_name: 20231101.tyv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14324694
num_examples: 3491
download_size: 6528290
dataset_size: 14324694
- config_name: 20231101.udm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7036113
num_examples: 5677
download_size: 2982821
dataset_size: 7036113
- config_name: 20231101.ug
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42254159
num_examples: 8634
download_size: 17741860
dataset_size: 42254159
- config_name: 20231101.uk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4969483901
num_examples: 1294720
download_size: 2276769383
dataset_size: 4969483901
- config_name: 20231101.ur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 410511855
num_examples: 200154
download_size: 167627869
dataset_size: 410511855
- config_name: 20231101.uz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 397176774
num_examples: 246729
download_size: 210262652
dataset_size: 397176774
- config_name: 20231101.ve
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 359542
num_examples: 840
download_size: 163318
dataset_size: 359542
- config_name: 20231101.vec
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37917528
num_examples: 69268
download_size: 16179506
dataset_size: 37917528
- config_name: 20231101.vep
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11643856
num_examples: 6960
download_size: 6423002
dataset_size: 11643856
- config_name: 20231101.vi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1617830227
num_examples: 1288680
download_size: 729557588
dataset_size: 1617830227
- config_name: 20231101.vls
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11336278
num_examples: 7872
download_size: 6985406
dataset_size: 11336278
- config_name: 20231101.vo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19521708
num_examples: 35193
download_size: 6582571
dataset_size: 19521708
- config_name: 20231101.wa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12268826
num_examples: 12038
download_size: 7327616
dataset_size: 12268826
- config_name: 20231101.war
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 467647882
num_examples: 1266394
download_size: 104588442
dataset_size: 467647882
- config_name: 20231101.wo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3525303
num_examples: 1746
download_size: 2094574
dataset_size: 3525303
- config_name: 20231101.wuu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25029545
num_examples: 43010
download_size: 15985963
dataset_size: 25029545
- config_name: 20231101.xal
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1391731
num_examples: 2295
download_size: 507198
dataset_size: 1391731
- config_name: 20231101.xh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3665998
num_examples: 1883
download_size: 2505472
dataset_size: 3665998
- config_name: 20231101.xmf
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37712629
num_examples: 18099
download_size: 12948576
dataset_size: 37712629
- config_name: 20231101.yi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36038273
num_examples: 15179
download_size: 16218296
dataset_size: 36038273
- config_name: 20231101.yo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19081408
num_examples: 33819
download_size: 8861465
dataset_size: 19081408
- config_name: 20231101.za
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1365300
num_examples: 2993
download_size: 666521
dataset_size: 1365300
- config_name: 20231101.zea
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5224563
num_examples: 6082
download_size: 2620396
dataset_size: 5224563
- config_name: 20231101.zh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2790577882
num_examples: 1384748
download_size: 1721150260
dataset_size: 2790577882
- config_name: 20231101.zh-classical
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14869227
num_examples: 12708
download_size: 10098073
dataset_size: 14869227
- config_name: 20231101.zh-min-nan
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 153672031
num_examples: 432798
download_size: 37122048
dataset_size: 153672031
- config_name: 20231101.zh-yue
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 109936351
num_examples: 134140
download_size: 64950815
dataset_size: 109936351
- config_name: 20231101.zu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7088246
num_examples: 11561
download_size: 3792429
dataset_size: 7088246
language_bcp47:
- be-tarask
- en-simple
---
# Dataset Card for Wikimedia Wikipedia
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:**
- **Paper:**
- **Point of Contact:**
### Dataset Summary
A Wikipedia dataset containing cleaned articles in all languages.
The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/),
with one subset per language, each containing a single train split.
Each example contains the content of one full Wikipedia article, cleaned to strip
markup and unwanted sections (references, etc.).
All language subsets have been processed for the most recent dump, and you can load them by date and language as follows:
```python
from datasets import load_dataset
ds = load_dataset("wikimedia/wikipedia", "20231101.en")
```
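For larger subsets, it may be preferable to stream the data rather than download it in full. A minimal sketch using the `datasets` streaming mode (the subset name is an example):
```python
from datasets import load_dataset

# Stream the English subset instead of downloading it in full
ds = load_dataset("wikimedia/wikipedia", "20231101.en", streaming=True)

# Take a single article from the lazily streamed train split
article = next(iter(ds["train"]))
print(article["title"])
```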
#### Data Visualization
Click the [Nomic Atlas](https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5) map below to visualize the 6.4 million samples in the `20231101.en` split.
<a href="https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6480c476cacb1c4a0696eeb8/sZNN6Vubc0Oue83vKaJUu.webp" alt="Nomic-Atlas Wikipedia Map" width="25%"/>
</a>
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
### Data Fields
The data fields are the same among all configurations (see the access sketch after this list):
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
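A minimal sketch of accessing these fields once a subset is loaded (using the small `20231101.simple` subset for speed):
```python
from datasets import load_dataset

ds = load_dataset("wikimedia/wikipedia", "20231101.simple")

example = ds["train"][0]
print(example["id"], example["url"])  # article ID and URL
print(example["title"])               # article title
print(example["text"][:200])          # first 200 characters of the article body
```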
### Data Splits
All configurations contain a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org
You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html
The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool.
When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain the dump
for this date for the "bbc", "dga", or "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Copyright licensing information: https://dumps.wikimedia.org/legal.html
All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL)
and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/).
Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details.
Text written by some authors may be released under additional licenses or into the public domain.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
``` |
Yelp/yelp_review_full | Yelp | 2024-01-04T17:14:53Z | 11,747 | 119 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1509.01626",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: YelpReviewFull
license_details: yelp-licence
dataset_info:
config_name: yelp_review_full
features:
- name: label
dtype:
class_label:
names:
'0': 1 star
'1': 2 star
'2': 3 stars
'3': 4 stars
'4': 5 stars
- name: text
dtype: string
splits:
- name: train
num_bytes: 483811554
num_examples: 650000
- name: test
num_bytes: 37271188
num_examples: 50000
download_size: 322952369
dataset_size: 521082742
configs:
- config_name: yelp_review_full
data_files:
- split: train
path: yelp_review_full/train-*
- split: test
path: yelp_review_full/test-*
default: true
train-eval-index:
- config: yelp_review_full
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for YelpReviewFull
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Yelp](https://www.yelp.com/dataset)
- **Repository:** [Crepe](https://github.com/zhangxiangxiao/Crepe)
- **Paper:** [Character-level Convolutional Networks for Text Classification](https://arxiv.org/abs/1509.01626)
- **Point of Contact:** [Xiang Zhang](mailto:[email protected])
### Dataset Summary
The Yelp reviews dataset consists of reviews from Yelp.
It is extracted from the Yelp Dataset Challenge 2015 data.
### Supported Tasks and Leaderboards
- `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment.
### Languages
The reviews were mainly written in English.
## Dataset Structure
### Data Instances
A typical data point comprises a text and the corresponding label.
An example from the YelpReviewFull test set looks as follows:
```
{
'label': 0,
'text': 'I got \'new\' tires from them and within two weeks got a flat. I took my car to a local mechanic to see if i could get the hole patched, but they said the reason I had a flat was because the previous patch had blown - WAIT, WHAT? I just got the tire and never needed to have it patched? This was supposed to be a new tire. \\nI took the tire over to Flynn\'s and they told me that someone punctured my tire, then tried to patch it. So there are resentful tire slashers? I find that very unlikely. After arguing with the guy and telling him that his logic was far fetched he said he\'d give me a new tire \\"this time\\". \\nI will never go back to Flynn\'s b/c of the way this guy treated me and the simple fact that they gave me a used tire!'
}
```
### Data Fields
- 'text': The review texts are escaped using double quotes ("), and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n" (see the unescaping sketch after this list).
- 'label': Corresponds to the score associated with the review (between 1 and 5).
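A minimal sketch of undoing these escapes, for readers working with the raw CSV distribution; note that the Hugging Face `datasets` version already exposes parsed strings, and the sample review below is hypothetical:
```python
def unescape_review(raw: str) -> str:
    """Undo the CSV-style escaping described above."""
    # "\n" (backslash + n) marks a newline; doubled quotes mark a literal quote
    return raw.replace("\\n", "\n").replace('""', '"')

raw = 'I got ""new"" tires from them.\\nNever going back.'
print(unescape_review(raw))
```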
### Data Splits
The Yelp reviews full star dataset is constructed by randomly taking 130,000 training samples and 10,000 testing samples for each review star from 1 to 5.
In total there are 650,000 training samples and 50,000 testing samples.
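A quick sketch to verify the split sizes and the balanced label distribution, assuming the repository name listed above:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("Yelp/yelp_review_full")

print(len(ds["train"]), len(ds["test"]))  # expected: 650000 50000

# Each star rating (labels 0-4) should appear 130,000 times in train
print(Counter(ds["train"]["label"]))
```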
## Dataset Creation
### Curation Rationale
The Yelp reviews full star dataset is constructed by Xiang Zhang ([email protected]) from the Yelp Dataset Challenge 2015. It is first used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
You can check the official [yelp-dataset-agreement](https://s3-media3.fl.yelpcdn.com/assets/srv0/engineering_pages/bea5c1e92bf3/assets/vendor/yelp-dataset-agreement.pdf).
### Citation Information
Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
### Contributions
Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset. |
uclanlp/wino_bias | uclanlp | 2024-01-04T16:50:33Z | 1,832 | 18 | [
"task_categories:token-classification",
"task_ids:coreference-resolution",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1804.06876",
"region:us"
] | [
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- coreference-resolution
paperswithcode_id: winobias
pretty_name: WinoBias
dataset_info:
- config_name: type1_anti
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 380510
num_examples: 396
- name: test
num_bytes: 402893
num_examples: 396
download_size: 65383
dataset_size: 783403
- config_name: type1_pro
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 379044
num_examples: 396
- name: test
num_bytes: 401705
num_examples: 396
download_size: 65516
dataset_size: 780749
- config_name: type2_anti
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 368421
num_examples: 396
- name: test
num_bytes: 376926
num_examples: 396
download_size: 62555
dataset_size: 745347
- config_name: type2_pro
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
'54': '-'
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
'38': '-'
- name: verbal_predicates
sequence: string
- name: coreference_clusters
sequence: string
splits:
- name: validation
num_bytes: 366957
num_examples: 396
- name: test
num_bytes: 375144
num_examples: 396
download_size: 62483
dataset_size: 742101
- config_name: wino_bias
features:
- name: document_id
dtype: string
- name: part_number
dtype: string
- name: word_number
sequence: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
'47': HYPH
'48': XX
'49': NFP
'50': AFX
'51': ADD
'52': -LRB-
'53': -RRB-
- name: parse_bit
sequence: string
- name: predicate_lemma
sequence: string
- name: predicate_framenet_id
sequence: string
- name: word_sense
sequence: string
- name: speaker
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': B-NORP
'3': I-NORP
'4': B-FAC
'5': I-FAC
'6': B-ORG
'7': I-ORG
'8': B-GPE
'9': I-GPE
'10': B-LOC
'11': I-LOC
'12': B-PRODUCT
'13': I-PRODUCT
'14': B-EVENT
'15': I-EVENT
'16': B-WORK_OF_ART
'17': I-WORK_OF_ART
'18': B-LAW
'19': I-LAW
'20': B-LANGUAGE
'21': I-LANGUAGE
'22': B-DATE
'23': I-DATE
'24': B-TIME
'25': I-TIME
'26': B-PERCENT
'27': I-PERCENT
'28': B-MONEY
'29': I-MONEY
'30': B-QUANTITY
'31': I-QUANTITY
'32': B-ORDINAL
'33': I-ORDINAL
'34': B-CARDINAL
'35': I-CARDINAL
'36': '*'
'37': '0'
- name: verbal_predicates
sequence: string
splits:
- name: train
num_bytes: 173899234
num_examples: 150335
download_size: 268725744
dataset_size: 173899234
configs:
- config_name: type1_anti
data_files:
- split: validation
path: type1_anti/validation-*
- split: test
path: type1_anti/test-*
- config_name: type1_pro
data_files:
- split: validation
path: type1_pro/validation-*
- split: test
path: type1_pro/test-*
- config_name: type2_anti
data_files:
- split: validation
path: type2_anti/validation-*
- split: test
path: type2_anti/test-*
- config_name: type2_pro
data_files:
- split: validation
path: type2_pro/validation-*
- split: test
path: type2_pro/test-*
---
# Dataset Card for Wino_Bias dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WinoBias](https://uclanlp.github.io/corefBias/overview)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1804.06876)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
WinoBias is a Winograd-schema dataset for coreference resolution, focused on gender bias.
The corpus contains Winograd-schema style sentences with entities corresponding to people referred to by their occupation (e.g. the nurse, the doctor, the carpenter).
### Supported Tasks and Leaderboards
The underlying task is coreference resolution.
### Languages
English
## Dataset Structure
### Data Instances
The dataset has 4 subsets: `type1_pro`, `type1_anti`, `type2_pro` and `type2_anti`.
The `*_pro` subsets contain sentences that reinforce gender stereotypes (e.g. mechanics are male, nurses are female), whereas the `*_anti` datasets contain "anti-stereotypical" sentences (e.g. mechanics are female, nurses are male).
The `type1` (*WB-Knowledge*) subsets contain sentences for which world knowledge is necessary to resolve the co-references, and `type2` (*WB-Syntax*) subsets require only the syntactic information present in the sentence to resolve them.
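The splits and subset names above can be combined in a short loading sketch. This is a minimal example, assuming the bare `wino_bias` identifier resolves (newer `datasets` versions may require a namespaced id):
```python
from datasets import load_dataset

# Pro/anti pairs share templates and differ only in whether the pronoun
# matches the occupational gender stereotype.
subsets = {name: load_dataset("wino_bias", name)
           for name in ["type1_pro", "type1_anti", "type2_pro", "type2_anti"]}

for name, ds in subsets.items():
    print(name, {split: len(ds[split]) for split in ds})
```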
### Data Fields
- document_id = This is a variation on the document filename.
- part_number = Some files are divided into multiple parts numbered as 000, 001, 002, ... etc.
- word_number = This is the word index of the word in that sentence.
- tokens = This is the token as segmented/tokenized in the Treebank.
- pos_tags = This is the Penn Treebank style part of speech. When parse information is missing, all parts of speech except the one for which there is some sense or proposition annotation are marked with an XX tag. The verb is marked with just a VERB tag.
- parse_bit = This is the bracketed structure broken before the first open parenthesis in the parse, and the word/part-of-speech leaf replaced with a *. The full parse can be created by substituting the asterisk with the "([pos] [word])" string (or leaf) and concatenating the items in the rows of that column (see the sketch after this list). When the parse information is missing, the first word of a sentence is tagged as "(TOP*", the last word is tagged as "*)", and all intermediate words are tagged with a "*".
- predicate_lemma = The predicate lemma is mentioned for the rows for which we have semantic role information or word sense information. All other rows are marked with a "-".
- predicate_framenet_id = This is the PropBank frameset ID of the predicate in predicate_lemma.
- word_sense = This is the word sense of the word in Column tokens.
- speaker = This is the speaker or author name, where available.
- ner_tags = This column identifies the spans representing various named entities. For documents which do not have named entity annotation, each line is represented with an "*".
- verbal_predicates = There is one column each of predicate argument structure information for the predicate mentioned in predicate_lemma. If there are no predicates tagged in a sentence, this is a single column with all rows marked with an "*".
- coreference_clusters = Coreference chain information for the sentence, encoded in the style of the CoNLL coreference column.
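As a concrete illustration of the `parse_bit` convention above, the following sketch rebuilds the full parse string for one example by substituting each asterisk with its `([pos] [word])` leaf. The field names follow the schema in this card; the POS label list is the one from the metadata above:
```python
POS_NAMES = ['"', "''", "#", "$", "(", ")", ",", ".", ":", "``", "CC", "CD",
             "DT", "EX", "FW", "IN", "JJ", "JJR", "JJS", "LS", "MD", "NN",
             "NNP", "NNPS", "NNS", "NN|SYM", "PDT", "POS", "PRP", "PRP$",
             "RB", "RBR", "RBS", "RP", "SYM", "TO", "UH", "VB", "VBD", "VBG",
             "VBN", "VBP", "VBZ", "WDT", "WP", "WP$", "WRB", "HYPH", "XX",
             "NFP", "AFX", "ADD", "-LRB-", "-RRB-", "-"]

def rebuild_parse(example):
    """Concatenate per-token parse bits, replacing '*' with '([pos] [word])'."""
    pieces = []
    for bit, token, pos_id in zip(example["parse_bit"], example["tokens"],
                                  example["pos_tags"]):
        leaf = f"({POS_NAMES[pos_id]} {token})"
        pieces.append(bit.replace("*", leaf, 1))
    return " ".join(pieces)
```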
### Data Splits
Validation (dev) and test splits are available for each of the four subsets.
## Dataset Creation
### Curation Rationale
The WinoBias dataset was introduced in 2018 (see [paper](https://arxiv.org/abs/1804.06876)), with its original task being *coreference resolution*, a task that aims to identify mentions that refer to the same entity or person.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The dataset was created by researchers familiar with the WinoBias project, based on two prototypical templates provided by the authors, in which entities interact in plausible ways.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
"Researchers familiar with the [WinoBias] project"
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[Recent work](https://www.microsoft.com/en-us/research/uploads/prod/2021/06/The_Salmon_paper.pdf) has shown that this dataset contains grammatical issues, incorrect or ambiguous labels, and stereotype conflation, among other limitations.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez and Kai-Wei Chang
### Licensing Information
MIT License
### Citation Information
@article{DBLP:journals/corr/abs-1804-06876,
author = {Jieyu Zhao and
Tianlu Wang and
Mark Yatskar and
Vicente Ordonez and
Kai{-}Wei Chang},
title = {Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods},
journal = {CoRR},
volume = {abs/1804.06876},
year = {2018},
url = {http://arxiv.org/abs/1804.06876},
archivePrefix = {arXiv},
eprint = {1804.06876},
timestamp = {Mon, 13 Aug 2018 16:47:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1804-06876.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
### Contributions
Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset. Updated by [@JieyuZhao](https://github.com/JieyuZhao). |
Stanford/web_questions | Stanford | 2024-01-04T16:41:06Z | 4,003 | 32 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: webquestions
pretty_name: WebQuestions
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
splits:
- name: train
num_bytes: 530711
num_examples: 3778
- name: test
num_bytes: 288184
num_examples: 2032
download_size: 402395
dataset_size: 818895
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "web_questions"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a](https://worksheets.codalab.org/worksheets/0xba659fe363cb46e7a505c5b6a774dc8a)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Semantic Parsing on Freebase from Question-Answer Pairs](https://aclanthology.org/D13-1160/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.27 MB
- **Size of the generated dataset:** 0.83 MB
- **Total amount of disk used:** 2.10 MB
### Dataset Summary
This dataset consists of 6,642 question/answer pairs.
The questions are supposed to be answerable by Freebase, a large knowledge graph, and are mostly centered around a single named entity.
The questions are popular ones asked on the web (at least in 2013).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.27 MB
- **Size of the generated dataset:** 0.83 MB
- **Total amount of disk used:** 2.10 MB
An example of 'train' looks as follows.
```
{
"answers": ["Jamaican Creole English Language", "Jamaican English"],
"question": "what does jamaican people speak?",
"url": "http://www.freebase.com/view/en/jamaica"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `url`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a `list` of `string` features.
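Because `answers` is a list of acceptable strings, a simple string-match evaluation can be sketched as below. `predict` is a placeholder for whatever QA system is being tested, not part of the dataset:
```python
from datasets import load_dataset

ds = load_dataset("web_questions", split="test")

def exact_match(prediction, answers):
    # Credit a prediction if it matches any gold answer, case-insensitively.
    return prediction.strip().lower() in {a.lower() for a in answers}

# With a user-supplied `predict(question) -> str` function:
# accuracy = sum(exact_match(predict(ex["question"]), ex["answers"]) for ex in ds) / len(ds)
```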
### Data Splits
| name |train|test|
|-------|----:|---:|
|default| 3778|2032|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{berant-etal-2013-semantic,
title = "Semantic Parsing on {F}reebase from Question-Answer Pairs",
author = "Berant, Jonathan and
Chou, Andrew and
Frostig, Roy and
Liang, Percy",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1160",
pages = "1533--1544",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset. |
allenai/sciq | allenai | 2024-01-04T16:23:51Z | 16,907 | 106 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4674410
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "sciq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
### Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"correct_answer": "coriolis effect",
"distractor1": "muon effect",
"distractor2": "centrifugal effect",
"distractor3": "tropical effect",
"question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
"support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `distractor3`: a `string` feature.
- `distractor1`: a `string` feature.
- `distractor2`: a `string` feature.
- `correct_answer`: a `string` feature.
- `support`: a `string` feature.
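Because the gold answer and the three distractors live in separate fields, assembling the usual 4-option multiple-choice format takes one small transformation; a minimal sketch (the seeded shuffle is a choice of this example, not part of the dataset):
```python
import random

def to_multiple_choice(example, seed=0):
    options = [example["correct_answer"], example["distractor1"],
               example["distractor2"], example["distractor3"]]
    random.Random(seed).shuffle(options)
    return {
        "question": example["question"],
        "options": options,
        "label": options.index(example["correct_answer"]),
        "support": example["support"],  # supporting paragraph; empty for some questions
    }
```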
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11679| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
### Citation Information
```
@inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
author={Johannes Welbl, Nelson F. Liu, Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
textmachinelab/quail | textmachinelab | 2024-01-04T16:18:32Z | 17,696 | 7 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"multiple-choice"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: quail
pretty_name: Question Answering for Artificial Intelligence (QuAIL)
dataset_info:
config_name: quail
features:
- name: id
dtype: string
- name: context_id
dtype: string
- name: question_id
dtype: string
- name: domain
dtype: string
- name: metadata
struct:
- name: author
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: question_type
dtype: string
- name: answers
sequence: string
- name: correct_answer_id
dtype: int32
splits:
- name: train
num_bytes: 23432601
num_examples: 10246
- name: validation
num_bytes: 4989531
num_examples: 2164
- name: challenge
num_bytes: 1199792
num_examples: 556
download_size: 2286403
dataset_size: 29621924
configs:
- config_name: quail
data_files:
- split: train
path: quail/train-*
- split: validation
path: quail/validation-*
- split: challenge
path: quail/challenge-*
default: true
---
# Dataset Card for "quail"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://text-machine-lab.github.io/blog/2020/quail/](https://text-machine-lab.github.io/blog/2020/quail/)
- **Repository:** https://github.com/text-machine-lab/quail
- **Paper:** [Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks](https://doi.org/10.1609/aaai.v34i05.6398 )
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
### Dataset Summary
QuAIL is a reading comprehension dataset containing 15K multiple-choice questions over texts 300-350 tokens long, drawn from 4 domains (news, user stories, fiction, blogs). QuAIL is balanced and annotated for question types.
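The claimed balance over question types can be checked directly from the `question_type` field; a minimal sketch, assuming the bare `quail` identifier resolves in your `datasets` version:
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("quail", split="train")
print(Counter(ds["question_type"]).most_common())  # per-type question counts
print(Counter(ds["domain"]))                       # news / fiction / blogs / user stories
```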
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### quail
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": ["the cousin is not friendly", "the cousin could have been pretier", "not enough information", "the cousin was too nice"],
"context": "\"That fall came and I went back to Michigan and the school year went by and summer came and I never really thought about it. I'm...",
"context_id": "f001",
"correct_answer_id": 0,
"domain": "fiction",
"id": "f001_19",
"metadata": {
"author": "Joseph Devon",
"title": "Black Eyed Susan",
"url": "http://manybooks.net/pages/devonjother08black_eyed_susan/0.html"
},
"question": "After the events in the text what does the author think about the cousin?",
"question_id": "19",
"question_type": "Subsequent_state"
}
```
### Data Fields
The data fields are the same among all splits.
#### quail
- `id`: a `string` feature.
- `context_id`: a `string` feature.
- `question_id`: a `string` feature.
- `domain`: a `string` feature.
- `author`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `question_type`: a `string` feature.
- `answers`: a `list` of `string` features.
- `correct_answer_id`: a `int32` feature.
### Data Splits
|name |train|challenge|validation|
|-----|----:|--------:|---------:|
|quail|10246| 556| 2164|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{DBLP:conf/aaai/RogersKDR20,
author = {Anna Rogers and
Olga Kovaleva and
Matthew Downey and
Anna Rumshisky},
title = {Getting Closer to {AI} Complete Question Answering: {A} Set of Prerequisite
Real Tasks},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8722--8731},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6398},
timestamp = {Thu, 04 Jun 2020 13:18:48 +0200},
biburl = {https://dblp.org/rec/conf/aaai/RogersKDR20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@sai-prasanna](https://github.com/sai-prasanna), [@ngdodd](https://github.com/ngdodd) for adding this dataset. |
allenai/qasc | allenai | 2024-01-04T16:17:46Z | 1,337 | 17 | [
"task_categories:question-answering",
"task_categories:multiple-choice",
"task_ids:extractive-qa",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1910.11473",
"region:us"
] | [
"question-answering",
"multiple-choice"
] | 2022-03-02T23:29:22Z | 2 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- extractive-qa
- multiple-choice-qa
paperswithcode_id: qasc
pretty_name: Question Answering via Sentence Composition (QASC)
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
- name: fact1
dtype: string
- name: fact2
dtype: string
- name: combinedfact
dtype: string
- name: formatted_question
dtype: string
splits:
- name: train
num_bytes: 4891878
num_examples: 8134
- name: test
num_bytes: 390534
num_examples: 920
- name: validation
num_bytes: 559180
num_examples: 926
download_size: 2349698
dataset_size: 5841592
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
# Dataset Card for "qasc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/qasc](https://allenai.org/data/qasc)
- **Repository:** https://github.com/allenai/qasc/
- **Paper:** [QASC: A Dataset for Question Answering via Sentence Composition](https://arxiv.org/abs/1910.11473)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.61 MB
- **Size of the generated dataset:** 5.87 MB
- **Total amount of disk used:** 7.49 MB
### Dataset Summary
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice
questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.61 MB
- **Size of the generated dataset:** 5.87 MB
- **Total amount of disk used:** 7.49 MB
An example of 'validation' looks as follows.
```
{
"answerKey": "F",
"choices": {
"label": ["A", "B", "C", "D", "E", "F", "G", "H"],
"text": ["sand", "occurs over a wide range", "forests", "Global warming", "rapid changes occur", "local weather conditions", "measure of motion", "city life"]
},
"combinedfact": "Climate is generally described in terms of local weather conditions",
"fact1": "Climate is generally described in terms of temperature and moisture.",
"fact2": "Fire behavior is driven by local weather conditions such as winds, temperature and moisture.",
"formatted_question": "Climate is generally described in terms of what? (A) sand (B) occurs over a wide range (C) forests (D) Global warming (E) rapid changes occur (F) local weather conditions (G) measure of motion (H) city life",
"id": "3NGI5ARFTT4HNGVWXAMLNBMFA0U1PG",
"question": "Climate is generally described in terms of what?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
- `fact1`: a `string` feature.
- `fact2`: a `string` feature.
- `combinedfact`: a `string` feature.
- `formatted_question`: a `string` feature.
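The `formatted_question` field is simply the question with the labeled choices appended; a sketch of how it can be reconstructed from the other fields:
```python
def format_question(example):
    # Produces 'question (A) text (B) text ...', matching `formatted_question`.
    choices = " ".join(
        f"({label}) {text}"
        for label, text in zip(example["choices"]["label"], example["choices"]["text"])
    )
    return f'{example["question"]} {choices}'
```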
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 8134| 926| 920|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is released under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@article{allenai:qasc,
author = {Tushar Khot and Peter Clark and Michal Guerquin and Peter Jansen and Ashish Sabharwal},
title = {QASC: A Dataset for Question Answering via Sentence Composition},
journal = {arXiv:1910.11473v2},
year = {2020},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
openai/openai_humaneval | openai | 2024-01-04T16:08:05Z | 81,423 | 314 | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2107.03374",
"region:us",
"code-generation"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: humaneval
pretty_name: OpenAI HumanEval
tags:
- code-generation
dataset_info:
config_name: openai_humaneval
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
splits:
- name: test
num_bytes: 194394
num_examples: 164
download_size: 83920
dataset_size: 194394
configs:
- config_name: openai_humaneval
data_files:
- split: test
path: openai_humaneval/test-*
default: true
---
# Dataset Card for OpenAI HumanEval
## Table of Contents
- [OpenAI HumanEval](#openai-humaneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/openai/human-eval)
- **Paper:** [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
### Dataset Summary
The HumanEval dataset released by OpenAI includes 164 programming problems, each with a function signature, docstring, body, and several unit tests. The problems were handwritten to ensure they would not be included in the training sets of code generation models.
### Supported Tasks and Leaderboards
### Languages
The programming problems are written in Python and contain English natural text in comments and docstrings.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("openai_humaneval")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
num_rows: 164
})
})
```
### Data Instances
An example of a dataset instance:
```
{
"task_id": "test/0",
"prompt": "def return1():\n",
"canonical_solution": " return 1",
"test": "def check(candidate):\n assert candidate() == 1",
"entry_point": "return1"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
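A minimal sketch of how these fields combine into a functional-correctness check. This deliberately omits the sandboxing and timeouts of the official harness in the repository linked above; see the execution warning under Considerations below:
```python
def unsafe_check(sample, completion):
    """Return True if `completion` passes the unit tests of `sample`.

    WARNING: exec() runs arbitrary code -- only ever do this inside a sandbox.
    """
    program = sample["prompt"] + completion + "\n" + sample["test"]
    env = {}
    try:
        exec(program, env)                        # defines the function and check()
        env["check"](env[sample["entry_point"]])  # runs the unit tests
        return True
    except Exception:
        return False
```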
### Data Splits
The dataset only consists of a test split with 164 samples.
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in such dumps was necessary to properly evaluate the model. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Source Data
The dataset was handcrafted by engineers and researchers at OpenAI.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more rigorously, which leads to fewer issues being introduced when such models are used.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
OpenAI
### Licensing Information
MIT License
### Citation Information
```
@misc{chen2021evaluating,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
year={2021},
eprint={2107.03374},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset. |
nyu-mll/multi_nli | nyu-mll | 2024-01-04T16:06:27Z | 3,810 | 100 | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"license:cc-by-sa-3.0",
"license:mit",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- cc-by-3.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
- multi-input-text-classification
paperswithcode_id: multinli
pretty_name: Multi-Genre Natural Language Inference
license_details: Open Portion of the American National Corpus
dataset_info:
features:
- name: promptID
dtype: int32
- name: pairID
dtype: string
- name: premise
dtype: string
- name: premise_binary_parse
dtype: string
- name: premise_parse
dtype: string
- name: hypothesis
dtype: string
- name: hypothesis_binary_parse
dtype: string
- name: hypothesis_parse
dtype: string
- name: genre
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 410210306
num_examples: 392702
- name: validation_matched
num_bytes: 10063907
num_examples: 9815
- name: validation_mismatched
num_bytes: 10610189
num_examples: 9832
download_size: 224005223
dataset_size: 430884402
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation_matched
path: data/validation_matched-*
- split: validation_mismatched
path: data/validation_mismatched-*
---
# Dataset Card for Multi-Genre Natural Language Inference (MultiNLI)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/multinli/](https://www.nyu.edu/projects/bowman/multinli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 76.95 MB
- **Total amount of disk used:** 303.81 MB
### Dataset Summary
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a
crowd-sourced collection of 433k sentence pairs annotated with textual
entailment information. The corpus is modeled on the SNLI corpus, but differs in
that it covers a range of genres of spoken and written text, and supports a
distinctive cross-genre generalization evaluation. The corpus served as the
basis for the shared task of the RepEval 2017 Workshop at EMNLP in Copenhagen.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The dataset contains samples in English only.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 226.85 MB
- **Size of the generated dataset:** 76.95 MB
- **Total amount of disk used:** 303.81 MB
Example of a data instance:
```
{
"promptID": 31193,
"pairID": "31193n",
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"premise_binary_parse": "( ( Conceptually ( cream skimming ) ) ( ( has ( ( ( two ( basic dimensions ) ) - ) ( ( product and ) geography ) ) ) . ) )",
"premise_parse": "(ROOT (S (NP (JJ Conceptually) (NN cream) (NN skimming)) (VP (VBZ has) (NP (NP (CD two) (JJ basic) (NNS dimensions)) (: -) (NP (NN product) (CC and) (NN geography)))) (. .)))",
"hypothesis": "Product and geography are what make cream skimming work. ",
"hypothesis_binary_parse": "( ( ( Product and ) geography ) ( ( are ( what ( make ( cream ( skimming work ) ) ) ) ) . ) )",
"hypothesis_parse": "(ROOT (S (NP (NN Product) (CC and) (NN geography)) (VP (VBP are) (SBAR (WHNP (WP what)) (S (VP (VBP make) (NP (NP (NN cream)) (VP (VBG skimming) (NP (NN work)))))))) (. .)))",
"genre": "government",
"label": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `promptID`: Unique identifier for prompt
- `pairID`: Unique identifier for pair
- `premise`, `hypothesis`: the premise and hypothesis sentences
- `{premise,hypothesis}_parse`: each sentence as parsed by the Stanford PCFG Parser 3.5.2
- `{premise,hypothesis}_binary_parse`: parses in unlabeled binary-branching format
- `genre`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2). Dataset instances which don't have any gold label are marked with -1 label. Make sure you filter them before starting the training using `datasets.Dataset.filter`.
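A minimal sketch of the filtering step mentioned above:
```python
from datasets import load_dataset

mnli = load_dataset("multi_nli")
# Drop examples without a gold label (marked -1) before training.
mnli = mnli.filter(lambda example: example["label"] != -1)
```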
### Data Splits
|train |validation_matched|validation_mismatched|
|-----:|-----------------:|--------------------:|
|392702| 9815| 9832|
## Dataset Creation
### Curation Rationale
The authors constructed MultiNLI so as to make it possible to explicitly evaluate models both on the quality of their sentence representations within the training domain and on their ability to derive reasonable representations in unfamiliar domains.
### Source Data
#### Initial Data Collection and Normalization
The authors created each sentence pair by selecting a premise sentence from a preexisting text source and asking a human annotator to compose a novel sentence to pair with it as a hypothesis.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The majority of the corpus is released under the OANC’s license, which allows all content to be freely used, modified, and shared under permissive terms. The data in the FICTION section falls under several permissive licenses; Seven Swords is available under a Creative Commons Share-Alike 3.0 Unported License, and with the explicit permission of the author, Living History and Password Incorrect are available under Creative Commons Attribution 3.0 Unported Licenses; the remaining works of fiction are in the public domain in the United States (but may be licensed differently elsewhere).
### Citation Information
```
@InProceedings{N18-1101,
author = "Williams, Adina
and Nangia, Nikita
and Bowman, Samuel",
title = "A Broad-Coverage Challenge Corpus for
Sentence Understanding through Inference",
booktitle = "Proceedings of the 2018 Conference of
the North American Chapter of the
Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
pages = "1112--1122",
location = "New Orleans, Louisiana",
url = "http://aclweb.org/anthology/N18-1101"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
curaihealth/medical_questions_pairs | curaihealth | 2024-01-04T14:27:42Z | 828 | 48 | [
"task_categories:text-classification",
"task_ids:semantic-similarity-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2008.13546",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
pretty_name: MedicalQuestionsPairs
dataset_info:
features:
- name: dr_id
dtype: int32
- name: question_1
dtype: string
- name: question_2
dtype: string
- name: label
dtype:
class_label:
names:
'0': 0
'1': 1
splits:
- name: train
num_bytes: 701642
num_examples: 3048
download_size: 313704
dataset_size: 701642
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for [medical_questions_pairs]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Medical questions pairs repository](https://github.com/curai/medical-question-pair-dataset)
- **Paper:** [Effective Transfer Learning for Identifying Similar Questions:Matching User Questions to COVID-19 FAQs](https://arxiv.org/abs/2008.13546)
### Dataset Summary
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
### Supported Tasks and Leaderboards
- `text-classification` : The dataset can be used to train a model to identify similar and dissimilar medical question pairs.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
Each instance contains `dr_id`, `question_1`, `question_2`, and `label`. 11 different doctors were used for this task, so `dr_id` ranges from 1 to 11. The label is 1 if the question pair is similar and 0 otherwise.
### Data Fields
- `dr_id`: 11 different doctors were used for this task so dr_id ranges from 1 to 11
- `question_1`: Original Question
- `question_2`: the paired question, either a rephrasing of `question_1` with the same intent (label 1) or a related but dissimilar question (label 0)
- `label`: The label is 1 if the question pair is similar and 0 otherwise.
### Data Splits
The dataset currently consists of a single split (train), but it can be split further as needed, for example with `train_test_split` (see the sketch below the table).
| | train |
|----------------------------|------:|
| Non similar Question Pairs | 1524 |
| Similar Question Pairs | 1524 |
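A minimal sketch of carving out such a held-out set (the 10% fraction and the seed are arbitrary choices, not part of the dataset):
```python
from datasets import load_dataset

# The dataset ships a single train split; derive a custom test split.
mqp = load_dataset("medical_questions_pairs", split="train")
splits = mqp.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```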
## Dataset Creation
Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
### Curation Rationale
[More Information Needed]
### Source Data
1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
Doctors were given a list of 1524 patient-asked questions randomly sampled from the publicly available crawl of [HealthTap](https://github.com/durakkerem/Medical-Question-Answer-Datasets). Each question results in one similar and one different pair through the following instructions provided to the labelers:
- Rewrite the original question in a different way while maintaining the same intent. Restructure the syntax as much as possible and change medical details that would not impact your response. e.g. "I'm a 22-y-o female" could become "My 26 year old daughter"
- Come up with a related but dissimilar question for which the answer to the original question would be WRONG OR IRRELEVANT. Use similar key words.
The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different). With the above instructions, the task was intentionally framed such that positive question pairs can look very different by superficial metrics, and negative question pairs can conversely look very similar. This ensures that the task is not trivial.
#### Who are the annotators?
**Curai's doctors**
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
[More Information Needed]
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{mccreery2020effective,
title={Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs},
author={Clara H. McCreery and Namit Katariya and Anitha Kannan and Manish Chablani and Xavier Amatriain},
year={2020},
eprint={2008.13546},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
google-research-datasets/mbpp | google-research-datasets | 2024-01-04T14:26:37Z | 53,321 | 171 | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2108.07732",
"region:us",
"code-generation"
] | [
"text2text-generation"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Mostly Basic Python Problems
tags:
- code-generation
dataset_info:
- config_name: full
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
splits:
- name: train
num_bytes: 176879
num_examples: 374
- name: test
num_bytes: 244104
num_examples: 500
- name: validation
num_bytes: 42405
num_examples: 90
- name: prompt
num_bytes: 4550
num_examples: 10
download_size: 236069
dataset_size: 467938
- config_name: sanitized
features:
- name: source_file
dtype: string
- name: task_id
dtype: int32
- name: prompt
dtype: string
- name: code
dtype: string
- name: test_imports
sequence: string
- name: test_list
sequence: string
splits:
- name: train
num_bytes: 63453
num_examples: 120
- name: test
num_bytes: 132720
num_examples: 257
- name: validation
num_bytes: 20050
num_examples: 43
- name: prompt
num_bytes: 3407
num_examples: 7
download_size: 115422
dataset_size: 219630
configs:
- config_name: full
data_files:
- split: train
path: full/train-*
- split: test
path: full/test-*
- split: validation
path: full/validation-*
- split: prompt
path: full/prompt-*
default: true
- config_name: sanitized
data_files:
- split: train
path: sanitized/train-*
- split: test
path: sanitized/test-*
- split: validation
path: sanitized/validation-*
- split: prompt
path: sanitized/prompt-*
---
# Dataset Card for Mostly Basic Python Problems (mbpp)
## Table of Contents
- [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-(mbpp))
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/google-research/google-research/tree/master/mbpp
- **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732)
### Dataset Summary
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by us.
Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et. al., 2021](https://arxiv.org/abs/2108.07732).
### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation models.
### Languages
English - Python code
## Dataset Structure
```python
dataset_full = load_dataset("mbpp")
DatasetDict({
test: Dataset({
features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'],
num_rows: 974
})
})
dataset_sanitized = load_dataset("mbpp", "sanitized")
DatasetDict({
test: Dataset({
features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'],
num_rows: 427
})
})
```
### Data Instances
#### mbpp - full
```
{
'task_id': 1,
'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].',
'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]',
'test_list': [
'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8',
'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12',
'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'],
'test_setup_code': '',
'challenge_test_list': []
}
```
#### mbpp - sanitized
```
{
'source_file': 'Benchmark Questions Verification V2.ipynb',
'task_id': 2,
'prompt': 'Write a function to find the shared elements from the given two lists.',
'code': 'def similar_elements(test_tup1, test_tup2):\n res = tuple(set(test_tup1) & set(test_tup2))\n return (res) ',
'test_imports': [],
'test_list': [
'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))',
'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))',
'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))'
]
}
```
### Data Fields
- `source_file`: unknown
- `text`/`prompt`: description of programming task
- `code`: solution for programming task
- `test_setup_code`/`test_imports`: necessary code imports to execute tests
- `test_list`: list of tests to verify solution
- `challenge_test_list`: list of more challenging test to further probe solution
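To make the field semantics concrete, here is a minimal sketch (not part of an official evaluation harness) that checks a candidate solution against one example's tests. Note the caveat under "Considerations for Using the Data": `exec` runs untrusted code, so sandbox it when evaluating model generations.
```python
from datasets import load_dataset

def passes_tests(candidate_code: str, example: dict) -> bool:
    namespace = {}
    exec(example["test_setup_code"] or "", namespace)  # optional setup/imports
    exec(candidate_code, namespace)                    # defines the solution
    try:
        for test in example["test_list"]:              # each test is an assert
            exec(test, namespace)
    except Exception:
        return False
    return True

example = load_dataset("mbpp", split="test")[0]
print(passes_tests(example["code"], example))  # reference solutions pass their tests
```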
### Data Splits
There are two version of the dataset (full and sanitized), each with four splits:
- train
- validation
- test
- prompt
The `prompt` split corresponds to samples used for few-shot prompting and not for training.
## Dataset Creation
See section 2.1 of original [paper](https://arxiv.org/abs/2108.07732).
### Curation Rationale
In order to evaluate code generation models, a set of simple programming tasks together with reference solutions is necessary, which this dataset provides.
### Source Data
#### Initial Data Collection and Normalization
The dataset was manually created from scratch.
#### Who are the source language producers?
The dataset was created with an internal crowdsourcing effort at Google.
### Annotations
#### Annotation process
The full dataset was created first and a subset then underwent a second round to improve the task descriptions.
#### Who are the annotators?
The dataset was created with an internal crowdsourcing effort at Google.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more reliably, which leads to fewer issues being introduced when using such models.
### Discussion of Biases
### Other Known Limitations
The task descriptions might not be expressive enough to fully specify a solution. The `sanitized` split aims at addressing this issue by having a second round of annotators improve the dataset.
## Additional Information
### Dataset Curators
Google Research
### Licensing Information
CC-BY-4.0
### Citation Information
```
@article{austin2021program,
title={Program Synthesis with Large Language Models},
author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
journal={arXiv preprint arXiv:2108.07732},
  year={2021}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset. |
coastalcph/lex_glue | coastalcph | 2024-01-04T14:25:27Z | 27,251 | 57 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:multiple-choice-qa",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2110.00976",
"arxiv:2109.00904",
"arxiv:1805.01217",
"arxiv:2104.08671",
"region:us"
] | [
"question-answering",
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended
task_categories:
- question-answering
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
- multiple-choice-qa
- topic-classification
pretty_name: LexGLUE
config_names:
- case_hold
- ecthr_a
- ecthr_b
- eurlex
- ledgar
- scotus
- unfair_tos
dataset_info:
- config_name: case_hold
features:
- name: context
dtype: string
- name: endings
sequence: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
splits:
- name: train
num_bytes: 74781706
num_examples: 45000
- name: test
num_bytes: 5989952
num_examples: 3600
- name: validation
num_bytes: 6474603
num_examples: 3900
download_size: 47303537
dataset_size: 87246261
- config_name: ecthr_a
features:
- name: text
sequence: string
- name: labels
sequence:
class_label:
names:
'0': '2'
'1': '3'
'2': '5'
'3': '6'
'4': '8'
'5': '9'
'6': '10'
'7': '11'
'8': '14'
'9': P1-1
splits:
- name: train
num_bytes: 89637449
num_examples: 9000
- name: test
num_bytes: 11884168
num_examples: 1000
- name: validation
num_bytes: 10985168
num_examples: 1000
download_size: 53352586
dataset_size: 112506785
- config_name: ecthr_b
features:
- name: text
sequence: string
- name: labels
sequence:
class_label:
names:
'0': '2'
'1': '3'
'2': '5'
'3': '6'
'4': '8'
'5': '9'
'6': '10'
'7': '11'
'8': '14'
'9': P1-1
splits:
- name: train
num_bytes: 89657649
num_examples: 9000
- name: test
num_bytes: 11886928
num_examples: 1000
- name: validation
num_bytes: 10987816
num_examples: 1000
download_size: 53352494
dataset_size: 112532393
- config_name: eurlex
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': '100163'
'1': '100168'
'2': '100169'
'3': '100170'
'4': '100171'
'5': '100172'
'6': '100173'
'7': '100174'
'8': '100175'
'9': '100176'
'10': '100177'
'11': '100179'
'12': '100180'
'13': '100183'
'14': '100184'
'15': '100185'
'16': '100186'
'17': '100187'
'18': '100189'
'19': '100190'
'20': '100191'
'21': '100192'
'22': '100193'
'23': '100194'
'24': '100195'
'25': '100196'
'26': '100197'
'27': '100198'
'28': '100199'
'29': '100200'
'30': '100201'
'31': '100202'
'32': '100204'
'33': '100205'
'34': '100206'
'35': '100207'
'36': '100212'
'37': '100214'
'38': '100215'
'39': '100220'
'40': '100221'
'41': '100222'
'42': '100223'
'43': '100224'
'44': '100226'
'45': '100227'
'46': '100229'
'47': '100230'
'48': '100231'
'49': '100232'
'50': '100233'
'51': '100234'
'52': '100235'
'53': '100237'
'54': '100238'
'55': '100239'
'56': '100240'
'57': '100241'
'58': '100242'
'59': '100243'
'60': '100244'
'61': '100245'
'62': '100246'
'63': '100247'
'64': '100248'
'65': '100249'
'66': '100250'
'67': '100252'
'68': '100253'
'69': '100254'
'70': '100255'
'71': '100256'
'72': '100257'
'73': '100258'
'74': '100259'
'75': '100260'
'76': '100261'
'77': '100262'
'78': '100263'
'79': '100264'
'80': '100265'
'81': '100266'
'82': '100268'
'83': '100269'
'84': '100270'
'85': '100271'
'86': '100272'
'87': '100273'
'88': '100274'
'89': '100275'
'90': '100276'
'91': '100277'
'92': '100278'
'93': '100279'
'94': '100280'
'95': '100281'
'96': '100282'
'97': '100283'
'98': '100284'
'99': '100285'
splits:
- name: train
num_bytes: 390770241
num_examples: 55000
- name: test
num_bytes: 59739094
num_examples: 5000
- name: validation
num_bytes: 41544476
num_examples: 5000
download_size: 208028049
dataset_size: 492053811
- config_name: ledgar
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Adjustments
'1': Agreements
'2': Amendments
'3': Anti-Corruption Laws
'4': Applicable Laws
'5': Approvals
'6': Arbitration
'7': Assignments
'8': Assigns
'9': Authority
'10': Authorizations
'11': Base Salary
'12': Benefits
'13': Binding Effects
'14': Books
'15': Brokers
'16': Capitalization
'17': Change In Control
'18': Closings
'19': Compliance With Laws
'20': Confidentiality
'21': Consent To Jurisdiction
'22': Consents
'23': Construction
'24': Cooperation
'25': Costs
'26': Counterparts
'27': Death
'28': Defined Terms
'29': Definitions
'30': Disability
'31': Disclosures
'32': Duties
'33': Effective Dates
'34': Effectiveness
'35': Employment
'36': Enforceability
'37': Enforcements
'38': Entire Agreements
'39': Erisa
'40': Existence
'41': Expenses
'42': Fees
'43': Financial Statements
'44': Forfeitures
'45': Further Assurances
'46': General
'47': Governing Laws
'48': Headings
'49': Indemnifications
'50': Indemnity
'51': Insurances
'52': Integration
'53': Intellectual Property
'54': Interests
'55': Interpretations
'56': Jurisdictions
'57': Liens
'58': Litigations
'59': Miscellaneous
'60': Modifications
'61': No Conflicts
'62': No Defaults
'63': No Waivers
'64': Non-Disparagement
'65': Notices
'66': Organizations
'67': Participations
'68': Payments
'69': Positions
'70': Powers
'71': Publicity
'72': Qualifications
'73': Records
'74': Releases
'75': Remedies
'76': Representations
'77': Sales
'78': Sanctions
'79': Severability
'80': Solvency
'81': Specific Performance
'82': Submission To Jurisdiction
'83': Subsidiaries
'84': Successors
'85': Survival
'86': Tax Withholdings
'87': Taxes
'88': Terminations
'89': Terms
'90': Titles
'91': Transactions With Affiliates
'92': Use Of Proceeds
'93': Vacations
'94': Venues
'95': Vesting
'96': Waiver Of Jury Trials
'97': Waivers
'98': Warranties
'99': Withholdings
splits:
- name: train
num_bytes: 43358291
num_examples: 60000
- name: test
num_bytes: 6845581
num_examples: 10000
- name: validation
num_bytes: 7143588
num_examples: 10000
download_size: 27650585
dataset_size: 57347460
- config_name: scotus
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
'5': '6'
'6': '7'
'7': '8'
'8': '9'
'9': '10'
'10': '11'
'11': '12'
'12': '13'
splits:
- name: train
num_bytes: 178959316
num_examples: 5000
- name: test
num_bytes: 76213279
num_examples: 1400
- name: validation
num_bytes: 75600243
num_examples: 1400
download_size: 173411399
dataset_size: 330772838
- config_name: unfair_tos
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': Limitation of liability
'1': Unilateral termination
'2': Unilateral change
'3': Content removal
'4': Contract by using
'5': Choice of law
'6': Jurisdiction
'7': Arbitration
splits:
- name: train
num_bytes: 1041782
num_examples: 5532
- name: test
num_bytes: 303099
num_examples: 1607
- name: validation
num_bytes: 452111
num_examples: 2275
download_size: 865604
dataset_size: 1796992
configs:
- config_name: case_hold
data_files:
- split: train
path: case_hold/train-*
- split: test
path: case_hold/test-*
- split: validation
path: case_hold/validation-*
- config_name: ecthr_a
data_files:
- split: train
path: ecthr_a/train-*
- split: test
path: ecthr_a/test-*
- split: validation
path: ecthr_a/validation-*
- config_name: ecthr_b
data_files:
- split: train
path: ecthr_b/train-*
- split: test
path: ecthr_b/test-*
- split: validation
path: ecthr_b/validation-*
- config_name: eurlex
data_files:
- split: train
path: eurlex/train-*
- split: test
path: eurlex/test-*
- split: validation
path: eurlex/validation-*
- config_name: ledgar
data_files:
- split: train
path: ledgar/train-*
- split: test
path: ledgar/test-*
- split: validation
path: ledgar/validation-*
- config_name: scotus
data_files:
- split: train
path: scotus/train-*
- split: test
path: scotus/test-*
- split: validation
path: scotus/validation-*
- config_name: unfair_tos
data_files:
- split: train
path: unfair_tos/train-*
- split: test
path: unfair_tos/test-*
- split: validation
path: unfair_tos/validation-*
---
# Dataset Card for "LexGLUE"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lex-glue
- **Repository:** https://github.com/coastalcph/lex-glue
- **Paper:** https://arxiv.org/abs/2110.00976
- **Leaderboard:** https://github.com/coastalcph/lex-glue
- **Point of Contact:** [Ilias Chalkidis](mailto:[email protected])
### Dataset Summary
Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the *Legal General Language Understanding Evaluation (LexGLUE) benchmark*, a benchmark dataset to evaluate the performance of NLP methods in legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely from SuperGLUE.
As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or ‘foundation’) models that can cope with multiple NLP tasks, in our case legal NLP tasks, possibly with limited task-specific fine-tuning. Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP. Having these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways to make it easier for newcomers and generic models to address all tasks.
The LexGLUE benchmark is accompanied by experimental infrastructure that relies on the Hugging Face Transformers library and resides at: https://github.com/coastalcph/lex-glue.
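Each LexGLUE task is exposed as a separate configuration of the dataset; a minimal loading sketch with the `datasets` library:
```python
from datasets import load_dataset

ecthr_a = load_dataset("lex_glue", "ecthr_a")      # multi-label classification
case_hold = load_dataset("lex_glue", "case_hold")  # multiple choice QA

ex = case_hold["train"][0]
print(ex["endings"][ex["label"]])  # the correct holding statement
```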
### Supported Tasks and Leaderboards
The supported tasks are the following:
<table>
<tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Task Type</td><td>Classes</td><tr>
<tr><td>ECtHR (Task A)</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>Multi-label classification</td><td>10+1</td></tr>
<tr><td>ECtHR (Task B)</td><td> <a href="https://aclanthology.org/2021.naacl-main.22/">Chalkidis et al. (2021a)</a> </td><td>ECHR</td><td>Multi-label classification </td><td>10+1</td></tr>
<tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>Multi-class classification</td><td>14</td></tr>
<tr><td>EUR-LEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. (2021b)</a></td><td>EU Law</td><td>Multi-label classification</td><td>100</td></tr>
<tr><td>LEDGAR</td><td> <a href="https://aclanthology.org/2020.lrec-1.155/">Tuggener et al. (2020)</a></td><td>Contracts</td><td>Multi-class classification</td><td>100</td></tr>
<tr><td>UNFAIR-ToS</td><td><a href="https://arxiv.org/abs/1805.01217"> Lippi et al. (2019)</a></td><td>Contracts</td><td>Multi-label classification</td><td>8+1</td></tr>
<tr><td>CaseHOLD</td><td><a href="https://arxiv.org/abs/2104.08671">Zheng et al. (2021)</a></td><td>US Law</td><td>Multiple choice QA</td><td>n/a</td></tr>
</table>
#### ecthr_a
The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention of Human Rights (ECHR). For each case, the dataset provides a list of factual paragraphs (facts) from the case description. Each case is mapped to articles of the ECHR that were violated (if any).
#### ecthr_b
The European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention of Human Rights (ECHR). For each case, the dataset provides a list of factual paragraphs (facts) from the case description. Each case is mapped to articles of ECHR that were allegedly violated (considered by the court).
#### scotus
The US Supreme Court (SCOTUS) is the highest federal court in the United States of America and generally hears only the most controversial or otherwise complex cases which have not been sufficiently well solved by lower courts. This is a single-label multi-class classification task, where given a document (court opinion), the task is to predict the relevant issue area. The 14 issue areas cluster 278 issues whose focus is on the subject matter of the controversy (dispute).
#### eurlex
European Union (EU) legislation is published in EUR-Lex portal. All EU laws are annotated by EU's Publications Office with multiple concepts from the EuroVoc thesaurus, a multilingual thesaurus maintained by the Publications Office. The current version of EuroVoc contains more than 7k concepts referring to various activities of the EU and its Member States (e.g., economics, health-care, trade). Given a document, the task is to predict its EuroVoc labels (concepts).
#### ledgar
The LEDGAR dataset targets contract provision (paragraph) classification. The contract provisions come from contracts obtained from the US Securities and Exchange Commission (SEC) filings, which are publicly available from EDGAR. Each label represents the single main topic (theme) of the corresponding contract provision.
#### unfair_tos
The UNFAIR-ToS dataset contains 50 Terms of Service (ToS) from on-line platforms (e.g., YouTube, Ebay, Facebook, etc.). The dataset has been annotated on the sentence level with 8 types of unfair contractual terms (sentences), meaning terms that potentially violate user rights according to European consumer law.
#### case_hold
The CaseHOLD (Case Holdings on Legal Decisions) dataset includes multiple choice questions about holdings of US court cases from the Harvard Law Library case law corpus. Holdings are short summaries of legal rulings that accompany referenced decisions relevant for the present case. The input consists of an excerpt (or prompt) from a court decision, containing a reference to a particular case, while the holding statement is masked out. The model must identify the correct (masked) holding statement from a selection of five choices.
The current leaderboard includes several Transformer-based (Vaswani et al., 2017) pre-trained language models, which achieve state-of-the-art performance in most NLP tasks (Bommasani et al., 2021) and NLU benchmarks (Wang et al., 2019a). Results reported by [Chalkidis et al. (2021)](https://arxiv.org/abs/2110.00976):
*Task-wise Test Results*
<table>
<tr><td><b>Dataset</b></td><td><b>ECtHR A</b></td><td><b>ECtHR B</b></td><td><b>SCOTUS</b></td><td><b>EUR-LEX</b></td><td><b>LEDGAR</b></td><td><b>UNFAIR-ToS</b></td><td><b>CaseHOLD</b></td></tr>
<tr><td><b>Model</b></td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1</td><td>μ-F1 / m-F1 </td></tr>
<tr><td>TFIDF+SVM</td><td> 64.7 / 51.7 </td><td>74.6 / 65.1 </td><td> <b>78.2</b> / <b>69.5</b> </td><td>71.3 / 51.4 </td><td>87.2 / 82.4 </td><td>95.4 / 78.8</td><td>n/a </td></tr>
<tr><td colspan="8" style='text-align:center'><b>Medium-sized Models (L=12, H=768, A=12)</b></td></tr>
<td>BERT</td> <td> 71.2 / 63.6 </td> <td> 79.7 / 73.4 </td> <td> 68.3 / 58.3 </td> <td> 71.4 / 57.2 </td> <td> 87.6 / 81.8 </td> <td> 95.6 / 81.3 </td> <td> 70.8 </td> </tr>
<td>RoBERTa</td> <td> 69.2 / 59.0 </td> <td> 77.3 / 68.9 </td> <td> 71.6 / 62.0 </td> <td> 71.9 / <b>57.9</b> </td> <td> 87.9 / 82.3 </td> <td> 95.2 / 79.2 </td> <td> 71.4 </td> </tr>
<td>DeBERTa</td> <td> 70.0 / 60.8 </td> <td> 78.8 / 71.0 </td> <td> 71.1 / 62.7 </td> <td> <b>72.1</b> / 57.4 </td> <td> 88.2 / 83.1 </td> <td> 95.5 / 80.3 </td> <td> 72.6 </td> </tr>
<td>Longformer</td> <td> 69.9 / 64.7 </td> <td> 79.4 / 71.7 </td> <td> 72.9 / 64.0 </td> <td> 71.6 / 57.7 </td> <td> 88.2 / 83.0 </td> <td> 95.5 / 80.9 </td> <td> 71.9 </td> </tr>
<td>BigBird</td> <td> 70.0 / 62.9 </td> <td> 78.8 / 70.9 </td> <td> 72.8 / 62.0 </td> <td> 71.5 / 56.8 </td> <td> 87.8 / 82.6 </td> <td> 95.7 / 81.3 </td> <td> 70.8 </td> </tr>
<td>Legal-BERT</td> <td> 70.0 / 64.0 </td> <td> <b>80.4</b> / <b>74.7</b> </td> <td> 76.4 / 66.5 </td> <td> <b>72.1</b> / 57.4 </td> <td> 88.2 / 83.0 </td> <td> <b>96.0</b> / <b>83.0</b> </td> <td> 75.3 </td> </tr>
<td>CaseLaw-BERT</td> <td> 69.8 / 62.9 </td> <td> 78.8 / 70.3 </td> <td> 76.6 / 65.9 </td> <td> 70.7 / 56.6 </td> <td> 88.3 / 83.0 </td> <td> <b>96.0</b> / 82.3 </td> <td> <b>75.4</b> </td> </tr>
<tr><td colspan="8" style='text-align:center'><b>Large-sized Models (L=24, H=1024, A=18)</b></td></tr>
<tr><td>RoBERTa</td> <td> <b>73.8</b> / <b>67.6</b> </td> <td> 79.8 / 71.6 </td> <td> 75.5 / 66.3 </td> <td> 67.9 / 50.3 </td> <td> <b>88.6</b> / <b>83.6</b> </td> <td> 95.8 / 81.6 </td> <td> 74.4 </td> </tr>
</table>
*Averaged (Mean over Tasks) Test Results*
<table>
<tr><td><b>Averaging</b></td><td><b>Arithmetic</b></td><td><b>Harmonic</b></td><td><b>Geometric</b></td></tr>
<tr><td><b>Model</b></td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td><td>μ-F1 / m-F1 </td></tr>
<tr><td colspan="4" style='text-align:center'><b>Medium-sized Models (L=12, H=768, A=12)</b></td></tr>
<tr><td>BERT</td><td> 77.8 / 69.5 </td><td> 76.7 / 68.2 </td><td> 77.2 / 68.8 </td></tr>
<tr><td>RoBERTa</td><td> 77.8 / 68.7 </td><td> 76.8 / 67.5 </td><td> 77.3 / 68.1 </td></tr>
<tr><td>DeBERTa</td><td> 78.3 / 69.7 </td><td> 77.4 / 68.5 </td><td> 77.8 / 69.1 </td></tr>
<tr><td>Longformer</td><td> 78.5 / 70.5 </td><td> 77.5 / 69.5 </td><td> 78.0 / 70.0 </td></tr>
<tr><td>BigBird</td><td> 78.2 / 69.6 </td><td> 77.2 / 68.5 </td><td> 77.7 / 69.0 </td></tr>
<tr><td>Legal-BERT</td><td> <b>79.8</b> / <b>72.0</b> </td><td> <b>78.9</b> / <b>70.8</b> </td><td> <b>79.3</b> / <b>71.4</b> </td></tr>
<tr><td>CaseLaw-BERT</td><td> 79.4 / 70.9 </td><td> 78.5 / 69.7 </td><td> 78.9 / 70.3 </td></tr>
<tr><td colspan="4" style='text-align:center'><b>Large-sized Models (L=24, H=1024, A=18)</b></td></tr>
<tr><td>RoBERTa</td><td> 79.4 / 70.8 </td><td> 78.4 / 69.1 </td><td> 78.9 / 70.0 </td></tr>
</table>
### Languages
We only consider English datasets, to make experimentation easier for researchers across the globe.
## Dataset Structure
### Data Instances
#### ecthr_a
An example of 'train' looks as follows.
```json
{
"text": ["8. The applicant was arrested in the early morning of 21 October 1990 ...", ...],
"labels": [6]
}
```
#### ecthr_b
An example of 'train' looks as follows.
```json
{
"text": ["8. The applicant was arrested in the early morning of 21 October 1990 ...", ...],
"label": [5, 6]
}
```
#### scotus
An example of 'train' looks as follows.
```json
{
"text": "Per Curiam\nSUPREME COURT OF THE UNITED STATES\nRANDY WHITE, WARDEN v. ROGER L. WHEELER\n Decided December 14, 2015\nPER CURIAM.\nA death sentence imposed by a Kentucky trial court and\naffirmed by the ...",
"label": 8
}
```
#### eurlex
An example of 'train' looks as follows.
```json
{
"text": "COMMISSION REGULATION (EC) No 1629/96 of 13 August 1996 on an invitation to tender for the refund on export of wholly milled round grain rice to certain third countries ...",
"labels": [4, 20, 21, 35, 68]
}
```
#### ledgar
An example of 'train' looks as follows.
```json
{
"text": "All Taxes shall be the financial responsibility of the party obligated to pay such Taxes as determined by applicable law and neither party is or shall be liable at any time for any of the other party ...",
"label": 32
}
```
#### unfair_tos
An example of 'train' looks as follows.
```json
{
"text": "tinder may terminate your account at any time without notice if it believes that you have violated this agreement.",
"label": 2
}
```
#### casehold
An example of 'test' looks as follows.
```json
{
"context": "In Granato v. City and County of Denver, No. CIV 11-0304 MSK/BNB, 2011 WL 3820730 (D.Colo. Aug. 20, 2011), the Honorable Marcia S. Krieger, now-Chief United States District Judge for the District of Colorado, ruled similarly: At a minimum, a party asserting a Mo-nell claim must plead sufficient facts to identify ... to act pursuant to City or State policy, custom, decision, ordinance, re d 503, 506-07 (3d Cir.l985)(<HOLDING>).",
"endings": ["holding that courts are to accept allegations in the complaint as being true including monell policies and writing that a federal court reviewing the sufficiency of a complaint has a limited task",
"holding that for purposes of a class certification motion the court must accept as true all factual allegations in the complaint and may draw reasonable inferences therefrom",
"recognizing that the allegations of the complaint must be accepted as true on a threshold motion to dismiss",
"holding that a court need not accept as true conclusory allegations which are contradicted by documents referred to in the complaint",
"holding that where the defendant was in default the district court correctly accepted the fact allegations of the complaint as true"
],
"label": 0
}
```
### Data Fields
#### ecthr_a
- `text`: a list of `string` features (list of factual paragraphs (facts) from the case description).
- `labels`: a list of classification labels (a list of violated ECHR articles, if any); see the encoding sketch below the article list.
<details>
<summary>List of ECHR articles</summary>
"Article 2", "Article 3", "Article 5", "Article 6", "Article 8", "Article 9", "Article 10", "Article 11", "Article 14", "Article 1 of Protocol 1"
</details>
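For multi-label training, the `labels` list is typically converted to a multi-hot vector. A minimal plain-Python sketch, using the 10 classes listed above:
```python
NUM_ECHR_LABELS = 10  # number of ECHR articles in the label set

def to_multi_hot(labels):
    # e.g. [5, 6] marks "Article 9" and "Article 10" in the order above
    vec = [0.0] * NUM_ECHR_LABELS
    for idx in labels:
        vec[idx] = 1.0
    return vec
```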
#### ecthr_b
- `text`: a list of `string` features (list of factual paragraphs (facts) from the case description)
- `labels`: a list of classification labels (a list of articles considered).
<details>
<summary>List of ECHR articles</summary>
"Article 2", "Article 3", "Article 5", "Article 6", "Article 8", "Article 9", "Article 10", "Article 11", "Article 14", "Article 1 of Protocol 1"
</details>
#### scotus
- `text`: a `string` feature (the court opinion).
- `label`: a classification label (the relevant issue area).
<details>
<summary>List of issue areas</summary>
(1, Criminal Procedure), (2, Civil Rights), (3, First Amendment), (4, Due Process), (5, Privacy), (6, Attorneys), (7, Unions), (8, Economic Activity), (9, Judicial Power), (10, Federalism), (11, Interstate Relations), (12, Federal Taxation), (13, Miscellaneous), (14, Private Action)
</details>
#### eurlex
- `text`: a `string` feature (an EU law).
- `labels`: a list of classification labels (a list of relevant EUROVOC concepts).
<details>
<summary>List of EUROVOC concepts</summary>
The list is very long including 100 EUROVOC concepts. You can find the EUROVOC concepts descriptors <a href="https://raw.githubusercontent.com/nlpaueb/multi-eurlex/master/data/eurovoc_descriptors.json">here</a>.
</details>
#### ledgar
- `text`: a `string` feature (a contract provision/paragraph).
- `label`: a classification label (the type of contract provision).
<details>
<summary>List of contract provision types</summary>
"Adjustments", "Agreements", "Amendments", "Anti-Corruption Laws", "Applicable Laws", "Approvals", "Arbitration", "Assignments", "Assigns", "Authority", "Authorizations", "Base Salary", "Benefits", "Binding Effects", "Books", "Brokers", "Capitalization", "Change In Control", "Closings", "Compliance With Laws", "Confidentiality", "Consent To Jurisdiction", "Consents", "Construction", "Cooperation", "Costs", "Counterparts", "Death", "Defined Terms", "Definitions", "Disability", "Disclosures", "Duties", "Effective Dates", "Effectiveness", "Employment", "Enforceability", "Enforcements", "Entire Agreements", "Erisa", "Existence", "Expenses", "Fees", "Financial Statements", "Forfeitures", "Further Assurances", "General", "Governing Laws", "Headings", "Indemnifications", "Indemnity", "Insurances", "Integration", "Intellectual Property", "Interests", "Interpretations", "Jurisdictions", "Liens", "Litigations", "Miscellaneous", "Modifications", "No Conflicts", "No Defaults", "No Waivers", "Non-Disparagement", "Notices", "Organizations", "Participations", "Payments", "Positions", "Powers", "Publicity", "Qualifications", "Records", "Releases", "Remedies", "Representations", "Sales", "Sanctions", "Severability", "Solvency", "Specific Performance", "Submission To Jurisdiction", "Subsidiaries", "Successors", "Survival", "Tax Withholdings", "Taxes", "Terminations", "Terms", "Titles", "Transactions With Affiliates", "Use Of Proceeds", "Vacations", "Venues", "Vesting", "Waiver Of Jury Trials", "Waivers", "Warranties", "Withholdings",
</details>
#### unfair_tos
- `text`: a `string` feature (a ToS sentence)
- `labels`: a list of classification labels (a list of unfair types, if any).
<details>
<summary>List of unfair types</summary>
"Limitation of liability", "Unilateral termination", "Unilateral change", "Content removal", "Contract by using", "Choice of law", "Jurisdiction", "Arbitration"
</details>
#### casehold
- `context`: a `string` feature (a context sentence incl. a masked holding statement).
- `endings`: a list of `string` features (a list of candidate holding statements).
- `label`: a classification label (the id of the original/correct holding).
### Data Splits
<table>
<tr><td>Dataset </td><td>Training</td><td>Development</td><td>Test</td><td>Total</td></tr>
<tr><td>ECtHR (Task A)</td><td>9,000</td><td>1,000</td><td>1,000</td><td>11,000</td></tr>
<tr><td>ECtHR (Task B)</td><td>9,000</td><td>1,000</td><td>1,000</td><td>11,000</td></tr>
<tr><td>SCOTUS</td><td>5,000</td><td>1,400</td><td>1,400</td><td>7,800</td></tr>
<tr><td>EUR-LEX</td><td>55,000</td><td>5,000</td><td>5,000</td><td>65,000</td></tr>
<tr><td>LEDGAR</td><td>60,000</td><td>10,000</td><td>10,000</td><td>80,000</td></tr>
<tr><td>UNFAIR-ToS</td><td>5,532</td><td>2,275</td><td>1,607</td><td>9,414</td></tr>
<tr><td>CaseHOLD</td><td>45,000</td><td>3,900</td><td>3,900</td><td>52,800</td></tr>
</table>
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
<table>
<tr><td>Dataset</td><td>Source</td><td>Sub-domain</td><td>Task Type</td><tr>
<tr><td>ECtHR (Task A)</td><td> <a href="https://aclanthology.org/P19-1424/">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>Multi-label classification</td></tr>
<tr><td>ECtHR (Task B)</td><td> <a href="https://aclanthology.org/2021.naacl-main.22/">Chalkidis et al. (2021a)</a> </td><td>ECHR</td><td>Multi-label classification </td></tr>
<tr><td>SCOTUS</td><td> <a href="http://scdb.wustl.edu">Spaeth et al. (2020)</a></td><td>US Law</td><td>Multi-class classification</td></tr>
<tr><td>EUR-LEX</td><td> <a href="https://arxiv.org/abs/2109.00904">Chalkidis et al. (2021b)</a></td><td>EU Law</td><td>Multi-label classification</td></tr>
<tr><td>LEDGAR</td><td> <a href="https://aclanthology.org/2020.lrec-1.155/">Tuggener et al. (2020)</a></td><td>Contracts</td><td>Multi-class classification</td></tr>
<tr><td>UNFAIR-ToS</td><td><a href="https://arxiv.org/abs/1805.01217"> Lippi et al. (2019)</a></td><td>Contracts</td><td>Multi-label classification</td></tr>
<tr><td>CaseHOLD</td><td><a href="https://arxiv.org/abs/2104.08671">Zheng et al. (2021)</a></td><td>US Law</td><td>Multiple choice QA</td></tr>
</table>
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Curators
*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
*LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
*2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland.*
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
[*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*
*LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*
*2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Dublin, Ireland.*](https://arxiv.org/abs/2110.00976)
```
@inproceedings{chalkidis-etal-2021-lexglue,
title={LexGLUE: A Benchmark Dataset for Legal Language Understanding in English},
author={Chalkidis, Ilias and Jana, Abhik and Hartung, Dirk and
Bommarito, Michael and Androutsopoulos, Ion and Katz, Daniel Martin and
Aletras, Nikolaos},
year={2022},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
  address={Dublin, Ireland},
}
```
### Contributions
Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset. |
klue/klue | klue | 2024-01-04T14:05:57Z | 10,323 | 74 | [
"task_categories:fill-mask",
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:token-classification",
"task_ids:extractive-qa",
"task_ids:named-entity-recognition",
"task_ids:natural-language-inference",
"task_ids:parsing",
"task_ids:semantic-similarity-scoring",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2105.09680",
"region:us",
"relation-extraction"
] | [
"fill-mask",
"question-answering",
"text-classification",
"text-generation",
"token-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
- question-answering
- text-classification
- text-generation
- token-classification
task_ids:
- extractive-qa
- named-entity-recognition
- natural-language-inference
- parsing
- semantic-similarity-scoring
- text-scoring
- topic-classification
paperswithcode_id: klue
pretty_name: KLUE
config_names:
- dp
- mrc
- ner
- nli
- re
- sts
- wos
- ynat
tags:
- relation-extraction
dataset_info:
- config_name: dp
features:
- name: sentence
dtype: string
- name: index
list: int32
- name: word_form
list: string
- name: lemma
list: string
- name: pos
list: string
- name: head
list: int32
- name: deprel
list: string
splits:
- name: train
num_bytes: 7899965
num_examples: 10000
- name: validation
num_bytes: 1557462
num_examples: 2000
download_size: 3742577
dataset_size: 9457427
- config_name: mrc
features:
- name: title
dtype: string
- name: context
dtype: string
- name: news_category
dtype: string
- name: source
dtype: string
- name: guid
dtype: string
- name: is_impossible
dtype: bool
- name: question_type
dtype: int32
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 46505593
num_examples: 17554
- name: validation
num_bytes: 15583017
num_examples: 5841
download_size: 30098472
dataset_size: 62088610
- config_name: ner
features:
- name: sentence
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-DT
'1': I-DT
'2': B-LC
'3': I-LC
'4': B-OG
'5': I-OG
'6': B-PS
'7': I-PS
'8': B-QT
'9': I-QT
'10': B-TI
'11': I-TI
'12': O
splits:
- name: train
num_bytes: 19891905
num_examples: 21008
- name: validation
num_bytes: 4937563
num_examples: 5000
download_size: 5265887
dataset_size: 24829468
- config_name: nli
features:
- name: guid
dtype: string
- name: source
dtype: string
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 5719882
num_examples: 24998
- name: validation
num_bytes: 673260
num_examples: 3000
download_size: 2056116
dataset_size: 6393142
- config_name: re
features:
- name: guid
dtype: string
- name: sentence
dtype: string
- name: subject_entity
struct:
- name: word
dtype: string
- name: start_idx
dtype: int32
- name: end_idx
dtype: int32
- name: type
dtype: string
- name: object_entity
struct:
- name: word
dtype: string
- name: start_idx
dtype: int32
- name: end_idx
dtype: int32
- name: type
dtype: string
- name: label
dtype:
class_label:
names:
'0': no_relation
'1': org:dissolved
'2': org:founded
'3': org:place_of_headquarters
'4': org:alternate_names
'5': org:member_of
'6': org:members
'7': org:political/religious_affiliation
'8': org:product
'9': org:founded_by
'10': org:top_members/employees
'11': org:number_of_employees/members
'12': per:date_of_birth
'13': per:date_of_death
'14': per:place_of_birth
'15': per:place_of_death
'16': per:place_of_residence
'17': per:origin
'18': per:employee_of
'19': per:schools_attended
'20': per:alternate_names
'21': per:parents
'22': per:children
'23': per:siblings
'24': per:spouse
'25': per:other_family
'26': per:colleagues
'27': per:product
'28': per:religion
'29': per:title
- name: source
dtype: string
splits:
- name: train
num_bytes: 11145426
num_examples: 32470
- name: validation
num_bytes: 2559272
num_examples: 7765
download_size: 8190257
dataset_size: 13704698
- config_name: sts
features:
- name: guid
dtype: string
- name: source
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: labels
struct:
- name: label
dtype: float64
- name: real-label
dtype: float64
- name: binary-label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 2832889
num_examples: 11668
- name: validation
num_bytes: 122641
num_examples: 519
download_size: 1587855
dataset_size: 2955530
- config_name: wos
features:
- name: guid
dtype: string
- name: domains
list: string
- name: dialogue
list:
- name: role
dtype: string
- name: text
dtype: string
- name: state
list: string
splits:
- name: train
num_bytes: 26676970
num_examples: 8000
- name: validation
num_bytes: 3488911
num_examples: 1000
download_size: 6358855
dataset_size: 30165881
- config_name: ynat
features:
- name: guid
dtype: string
- name: title
dtype: string
- name: label
dtype:
class_label:
names:
'0': IT과학
'1': 경제
'2': 사회
'3': 생활문화
'4': 세계
'5': 스포츠
'6': 정치
- name: url
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 10109584
num_examples: 45678
- name: validation
num_bytes: 2039181
num_examples: 9107
download_size: 5012303
dataset_size: 12148765
configs:
- config_name: dp
data_files:
- split: train
path: dp/train-*
- split: validation
path: dp/validation-*
- config_name: mrc
data_files:
- split: train
path: mrc/train-*
- split: validation
path: mrc/validation-*
- config_name: ner
data_files:
- split: train
path: ner/train-*
- split: validation
path: ner/validation-*
- config_name: nli
data_files:
- split: train
path: nli/train-*
- split: validation
path: nli/validation-*
- config_name: re
data_files:
- split: train
path: re/train-*
- split: validation
path: re/validation-*
- config_name: sts
data_files:
- split: train
path: sts/train-*
- split: validation
path: sts/validation-*
- config_name: wos
data_files:
- split: train
path: wos/train-*
- split: validation
path: wos/validation-*
- config_name: ynat
data_files:
- split: train
path: ynat/train-*
- split: validation
path: ynat/validation-*
---
# Dataset Card for KLUE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://klue-benchmark.com/
- **Repository:** https://github.com/KLUE-benchmark/KLUE
- **Paper:** [KLUE: Korean Language Understanding Evaluation](https://arxiv.org/abs/2105.09680)
- **Leaderboard:** [Leaderboard](https://klue-benchmark.com/leaderboard)
- **Point of Contact:** https://github.com/KLUE-benchmark/KLUE/issues
### Dataset Summary
KLUE is a collection of 8 tasks for evaluating the natural language understanding capabilities of Korean language models. The 8 tasks were deliberately selected: Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking.
### Supported Tasks and Leaderboards
Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking
### Languages
`ko-KR`
## Dataset Structure
### Data Instances
#### ynat
An example of 'train' looks as follows.
```
{'date': '2016.06.30. 오전 10:36',
'guid': 'ynat-v1_train_00000',
'label': 3,
'title': '유튜브 내달 2일까지 크리에이터 지원 공간 운영',
'url': 'https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008508947'}
```
#### sts
An example of 'train' looks as follows.
```
{'guid': 'klue-sts-v1_train_00000',
'labels': {'label': 3.7, 'real-label': 3.714285714285714, 'binary-label': 1},
'sentence1': '숙소 위치는 찾기 쉽고 일반적인 한국의 반지하 숙소입니다.',
'sentence2': '숙박시설의 위치는 쉽게 찾을 수 있고 한국의 대표적인 반지하 숙박시설입니다.',
'source': 'airbnb-rtt'}
```
#### nli
An example of 'train' looks as follows.
```
{'guid': 'klue-nli-v1_train_00000',
'hypothesis': '힛걸 진심 최고로 멋지다.',
'label': 0,
'premise': '힛걸 진심 최고다 그 어떤 히어로보다 멋지다',
'source': 'NSMC'}
```
#### ner
An example of 'train' looks as follows.
```
{'tokens': ['특', '히', ' ', '영', '동', '고', '속', '도', '로', ' ', '강', '릉', ' ', '방', '향', ' ', '문', '막', '휴', '게', '소', '에', '서', ' ', '만', '종', '분', '기', '점', '까', '지', ' ', '5', '㎞', ' ', '구', '간', '에', '는', ' ', '승', '용', '차', ' ', '전', '용', ' ', '임', '시', ' ', '갓', '길', '차', '로', '제', '를', ' ', '운', '영', '하', '기', '로', ' ', '했', '다', '.'],
'ner_tags': [12, 12, 12, 2, 3, 3, 3, 3, 3, 12, 2, 3, 12, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 2, 3, 3, 3, 3, 12, 12, 12, 8, 9, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12],
'sentence': '특히 <영동고속도로:LC> <강릉:LC> 방향 <문막휴게소:LC>에서 <만종분기점:LC>까지 <5㎞:QT> 구간에는 승용차 전용 임시 갓길차로제를 운영하기로 했다.'}
```
#### re
An example of 'train' looks as follows.
```
{'guid': 'klue-re-v1_train_00000',
'label': 0,
'object_entity': {'word': '조지 해리슨',
'start_idx': 13,
'end_idx': 18,
'type': 'PER'},
'sentence': '〈Something〉는 조지 해리슨이 쓰고 비틀즈가 1969년 앨범 《Abbey Road》에 담은 노래다.',
'source': 'wikipedia',
'subject_entity': {'word': '비틀즈',
'start_idx': 24,
'end_idx': 26,
'type': 'ORG'}}
```
#### dp
An example of 'train' looks as follows.
```
{'deprel': ['NP', 'NP_OBJ', 'VP', 'NP', 'NP_SBJ', 'NP', 'NP_MOD', 'NP_CNJ', 'NP_CNJ', 'NP', 'NP', 'NP_OBJ', 'AP', 'VP'],
'head': [2, 3, 14, 5, 14, 7, 10, 10, 10, 11, 12, 14, 14, 0],
'index': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
'lemma': ['해당', '그림 을', '보 면', '디즈니', '공주 들 이', '브리트니', '스피어스 의', '앨범 이나', '뮤직 비디오 ,', '화보', '속', '모습 을', '똑같이', '재연 하 였 다 .'],
'pos': ['NNG', 'NNG+JKO', 'VV+EC', 'NNP', 'NNG+XSN+JKS', 'NNP', 'NNP+JKG', 'NNG+JC', 'NNG+NNG+SP', 'NNG', 'NNG', 'NNG+JKO', 'MAG', 'NNG+XSA+EP+EF+SF'],
'sentence': '해당 그림을 보면 디즈니 공주들이 브리트니 스피어스의 앨범이나 뮤직비디오, 화보 속 모습을 똑같이 재연했다.',
'word_form': ['해당', '그림을', '보면', '디즈니', '공주들이', '브리트니', '스피어스의', '앨범이나', '뮤직비디오,', '화보', '속', '모습을', '똑같이', '재연했다.']}
```
#### mrc
An example of 'train' looks as follows.
```
{'answers': {'answer_start': [478, 478], 'text': ['한 달가량', '한 달']},
'context': '올여름 장마가 17일 제주도에서 시작됐다. 서울 등 중부지방은 예년보다 사나흘 정도 늦은 이달 말께 장마가 시작될 전망이다.17일 기상청에 따르면 제주도 남쪽 먼바다에 있는 장마전선의 영향으로 이날 제주도 산간 및 내륙지역에 호우주의보가 내려지면서 곳곳에 100㎜에 육박하는 많은 비가 내렸다. 제주의 장마는 평년보다 2~3일, 지난해보다는 하루 일찍 시작됐다. 장마는 고온다습한 북태평양 기단과 한랭 습윤한 오호츠크해 기단이 만나 형성되는 장마전선에서 내리는 비를 뜻한다.장마전선은 18일 제주도 먼 남쪽 해상으로 내려갔다가 20일께 다시 북상해 전남 남해안까지 영향을 줄 것으로 보인다. 이에 따라 20~21일 남부지방에도 예년보다 사흘 정도 장마가 일찍 찾아올 전망이다. 그러나 장마전선을 밀어올리는 북태평양 고기압 세력이 약해 서울 등 중부지방은 평년보다 사나흘가량 늦은 이달 말부터 장마가 시작될 것이라는 게 기상청의 설명이다. 장마전선은 이후 한 달가량 한반도 중남부를 오르내리며 곳곳에 비를 뿌릴 전망이다. 최근 30년간 평균치에 따르면 중부지방의 장마 시작일은 6월24~25일이었으며 장마기간은 32일, 강수일수는 17.2일이었다.기상청은 올해 장마기간의 평균 강수량이 350~400㎜로 평년과 비슷하거나 적을 것으로 내다봤다. 브라질 월드컵 한국과 러시아의 경기가 열리는 18일 오전 서울은 대체로 구름이 많이 끼지만 비는 오지 않을 것으로 예상돼 거리 응원에는 지장이 없을 전망이다.',
'guid': 'klue-mrc-v1_train_12759',
'is_impossible': False,
'news_category': '종합',
'question': '북태평양 기단과 오호츠크해 기단이 만나 국내에 머무르는 기간은?',
'question_type': 1,
'source': 'hankyung',
'title': '제주도 장마 시작 … 중부는 이달 말부터'}
```
#### wos
An example of 'train' looks as follows.
```
{'dialogue': [{'role': 'user',
'text': '쇼핑을 하려는데 서울 서쪽에 있을까요?',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽']},
{'role': 'sys',
'text': '서울 서쪽에 쇼핑이 가능한 곳이라면 노량진 수산물 도매시장이 있습니다.',
'state': []},
{'role': 'user',
'text': '오 네 거기 주소 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '노량진 수산물 도매시장의 주소는 서울 동작구 93806입니다.', 'state': []},
{'role': 'user',
'text': '알려주시는김에 연락처랑 평점도 좀 알려주세요.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '그럼. 연락처는 6182006591이고 평점은 4점입니다.', 'state': []},
{'role': 'user',
'text': '와 감사합니다.',
'state': ['관광-종류-쇼핑', '관광-지역-서울 서쪽', '관광-이름-노량진 수산물 도매시장']},
{'role': 'sys', 'text': '감사합니다.', 'state': []}],
'domains': ['관광'],
'guid': 'wos-v1_train_00001'}
```
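The instances above are plain Python dictionaries as returned by the Hugging Face `datasets` library. A minimal loading sketch (assuming the Hub dataset id `klue`, which hosts one config per task):

```python
from datasets import load_dataset

# Each KLUE task is a separate config; "ynat" is the topic-classification task.
klue_ynat = load_dataset("klue", "ynat")

print(klue_ynat)                    # DatasetDict with 'train' and 'validation' splits
sample = klue_ynat["train"][0]
print(sample["title"])              # the raw headline text
# Decode the integer label into its class name.
label_names = klue_ynat["train"].features["label"].names
print(label_names[sample["label"]])
```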
### Data Fields
#### ynat
+ `guid`: a `string` feature
+ `title`: a `string` feature
+ `label`: a classification label, with possible values `IT과학`(0), `경제`(1), `사회`(2), `생활문화`(3), `세계`(4), `스포츠`(5), `정치`(6)
+ `url`: a `string` feature
+ `date`: a `string` feature
#### sts
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `sentence1`: a `string` feature
+ `sentence2`: a `string` feature
+ `labels`: a dictionary feature containing
+ `label`: a `float64` feature
+ `real-label`: a `float64` feature
+ `binary-label`: a classification label, with possible values `negative`(0), `positive`(1)
#### nli
+ `guid`: a `string` feature
+ `source`: a `string` feature
+ `premise`: a `string` feature
+ `hypothesis`: a `string` feature
+ `label`: a classification label, with possible values `entailment`(0), `neutral`(1), `contradiction`(2)
#### ner
+ `sentence`: a `string` feature
+ `tokens`: a list of `string` features (tokenization is at the character level)
+ `ner_tags`: a list of classification labels, with possible values including `B-DT`(0), `I-DT`(1),
`B-LC`(2), `I-LC`(3), `B-OG`(4), `I-OG`(5), `B-PS`(6), `I-PS`(7), `B-QT`(8), `I-QT`(9), `B-TI`(10),
`I-TI`(11), `O`(12)
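Since `ner_tags` are stored as integers, a short sketch (again assuming the `klue` Hub id) for decoding them back into the tag names above:

```python
from datasets import load_dataset

ner = load_dataset("klue", "ner", split="train")
tag_names = ner.features["ner_tags"].feature.names  # Sequence(ClassLabel)

example = ner[0]
# Pair each character-level token with its decoded BIO tag.
for token, tag_id in zip(example["tokens"][:10], example["ner_tags"][:10]):
    print(repr(token), tag_names[tag_id])
```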
#### re
+ `guid`: a `string` feature
+ `sentence`: a `string` feature
+ `subject_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: a `int32` feature
+ `end_idx`: a `int32` feature
+ `type`: a `string` feature
+ `object_entity`: a dictionary feature containing
+ `word`: a `string` feature
+ `start_idx`: a `int32` feature
+ `end_idx`: a `int32` feature
+ `type`: a `string` feature
+ `label`: a classification label, with possible values including `no_relation`(0), `org:dissolved`(1),
`org:founded`(2), `org:place_of_headquarters`(3), `org:alternate_names`(4), `org:member_of`(5),
`org:members`(6), `org:political/religious_affiliation`(7), `org:product`(8), `org:founded_by`(9), `org:top_members/employees`(10),
`org:number_of_employees/members`(11), `per:date_of_birth`(12), `per:date_of_death`(13), `per:place_of_birth`(14),
`per:place_of_death`(15), `per:place_of_residence`(16), `per:origin`(17), `per:employee_of`(18),
`per:schools_attended`(19), `per:alternate_names`(20), `per:parents`(21), `per:children`(22),
`per:siblings`(23), `per:spouse`(24), `per:other_family`(25), `per:colleagues`(26), `per:product`(27),
`per:religion`(28), `per:title`(29)
+ `source`: a `string` feature
#### dp
+ `sentence`: a `string` feature
+ `index`: a list of `int32` features
+ `word_form`: a list of `string` features
+ `lemma`: a list of `string` features
+ `pos`: a list of `string` features
+ `head`: a list of `int32` features
+ `deprel`: a list of `string` features
#### mrc
+ `title`: a `string` feature
+ `context`: a `string` feature
+ `news_category`: a `string` feature
+ `source`: a `string` feature
+ `guid`: a `string` feature
+ `is_impossible`: a `bool` feature
+ `question_type`: a `int32` feature
+ `question`: a `string` feature
+ `answers`: a dictionary feature containing
+ `answer_start`: a `int32` feature
+ `text`: a `string` feature
#### wos
+ `guid`: a `string` feature
+ `domains`: a list of `string` features
+ `dialogue`: a list of dictionary features, each containing
    + `role`: a `string` feature
    + `text`: a `string` feature
    + `state`: a list of `string` features
### Data Splits
#### ynat
You can see more details [here](https://klue-benchmark.com/tasks/66/data/description).
+ train: 45,678
+ validation: 9,107
#### sts
You can see more details [here](https://klue-benchmark.com/tasks/67/data/description).
+ train: 11,668
+ validation: 519
#### nli
You can see more details [here](https://klue-benchmark.com/tasks/68/data/description).
+ train: 24,998
+ validation: 3,000
#### ner
You can see more details [here](https://klue-benchmark.com/tasks/69/overview/description).
+ train: 21,008
+ validation: 5,000
#### re
You can see more details [here](https://klue-benchmark.com/tasks/70/overview/description).
+ train: 32,470
+ validation: 7,765
#### dp
You can see more details [here](https://klue-benchmark.com/tasks/71/data/description).
+ train: 10,000
+ validation: 2,000
#### mrc
You can see more details [here](https://klue-benchmark.com/tasks/72/overview/description).
+ train: 17,554
+ validation: 5,841
#### wos
You can see more details [here](https://klue-benchmark.com/tasks/73/overview/description).
+ train: 8,000
+ validation: 1,000
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jungwhank](https://github.com/jungwhank), [@bzantium](https://github.com/bzantium) for adding this dataset. |
ai4bharat/indic_glue | ai4bharat | 2024-01-04T12:36:30Z | 78,429 | 11 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:multiple-choice",
"task_ids:topic-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-analysis",
"task_ids:semantic-similarity-scoring",
"task_ids:named-entity-recognition",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:extended|other",
"language:as",
"language:bn",
"language:en",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"discourse-mode-classification",
"paraphrase-identification",
"cross-lingual-similarity",
"headline-classification"
] | [
"text-classification",
"token-classification",
"multiple-choice"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- other
language_creators:
- found
language:
- as
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
task_categories:
- text-classification
- token-classification
- multiple-choice
task_ids:
- topic-classification
- natural-language-inference
- sentiment-analysis
- semantic-similarity-scoring
- named-entity-recognition
- multiple-choice-qa
pretty_name: IndicGLUE
tags:
- discourse-mode-classification
- paraphrase-identification
- cross-lingual-similarity
- headline-classification
dataset_info:
- config_name: actsa-sc.te
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 1370907
num_examples: 4328
- name: validation
num_bytes: 166089
num_examples: 541
- name: test
num_bytes: 168291
num_examples: 541
download_size: 727630
dataset_size: 1705287
- config_name: bbca.hi
features:
- name: label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22126205
num_examples: 3467
- name: test
num_bytes: 5501148
num_examples: 866
download_size: 10349015
dataset_size: 27627353
- config_name: copa.en
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 46033
num_examples: 400
- name: validation
num_bytes: 11679
num_examples: 100
- name: test
num_bytes: 55846
num_examples: 500
download_size: 79431
dataset_size: 113558
- config_name: copa.gu
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 92097
num_examples: 362
- name: validation
num_bytes: 23450
num_examples: 88
- name: test
num_bytes: 109997
num_examples: 448
download_size: 107668
dataset_size: 225544
- config_name: copa.hi
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 93376
num_examples: 362
- name: validation
num_bytes: 23559
num_examples: 88
- name: test
num_bytes: 112830
num_examples: 449
download_size: 104233
dataset_size: 229765
- config_name: copa.mr
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 93441
num_examples: 362
- name: validation
num_bytes: 23874
num_examples: 88
- name: test
num_bytes: 112055
num_examples: 449
download_size: 105962
dataset_size: 229370
- config_name: csqa.as
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 3800523
num_examples: 2942
download_size: 1390423
dataset_size: 3800523
- config_name: csqa.bn
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 54671018
num_examples: 38845
download_size: 19648180
dataset_size: 54671018
- config_name: csqa.gu
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 29131607
num_examples: 22861
download_size: 6027825
dataset_size: 29131607
- config_name: csqa.hi
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 40409347
num_examples: 35140
download_size: 14711258
dataset_size: 40409347
- config_name: csqa.kn
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 21199816
num_examples: 13666
download_size: 7669655
dataset_size: 21199816
- config_name: csqa.ml
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 47220836
num_examples: 26537
download_size: 17382215
dataset_size: 47220836
- config_name: csqa.mr
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 13667174
num_examples: 11370
download_size: 5072738
dataset_size: 13667174
- config_name: csqa.or
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 2562365
num_examples: 1975
download_size: 948046
dataset_size: 2562365
- config_name: csqa.pa
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 5806097
num_examples: 5667
download_size: 2194109
dataset_size: 5806097
- config_name: csqa.ta
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 61868481
num_examples: 38590
download_size: 20789467
dataset_size: 61868481
- config_name: csqa.te
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 58784997
num_examples: 41338
download_size: 17447618
dataset_size: 58784997
- config_name: cvit-mkb-clsr.en-bn
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1990957
num_examples: 5522
download_size: 945551
dataset_size: 1990957
- config_name: cvit-mkb-clsr.en-gu
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2303377
num_examples: 6463
download_size: 1093313
dataset_size: 2303377
- config_name: cvit-mkb-clsr.en-hi
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1855989
num_examples: 5169
download_size: 890609
dataset_size: 1855989
- config_name: cvit-mkb-clsr.en-ml
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1990089
num_examples: 4886
download_size: 868956
dataset_size: 1990089
- config_name: cvit-mkb-clsr.en-mr
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2130601
num_examples: 5760
download_size: 993961
dataset_size: 2130601
- config_name: cvit-mkb-clsr.en-or
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 274873
num_examples: 752
download_size: 134334
dataset_size: 274873
- config_name: cvit-mkb-clsr.en-ta
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2565178
num_examples: 5637
download_size: 1091653
dataset_size: 2565178
- config_name: cvit-mkb-clsr.en-te
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1771129
num_examples: 5049
download_size: 840410
dataset_size: 1771129
- config_name: cvit-mkb-clsr.en-ur
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 288430
num_examples: 1006
download_size: 166129
dataset_size: 288430
- config_name: iitp-mr.hi
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 6704905
num_examples: 2480
- name: validation
num_bytes: 822218
num_examples: 310
- name: test
num_bytes: 702373
num_examples: 310
download_size: 3151762
dataset_size: 8229496
- config_name: iitp-pr.hi
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 945589
num_examples: 4182
- name: validation
num_bytes: 120100
num_examples: 523
- name: test
num_bytes: 121910
num_examples: 523
download_size: 509822
dataset_size: 1187599
- config_name: inltkh.gu
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 883063
num_examples: 5269
- name: validation
num_bytes: 111201
num_examples: 659
- name: test
num_bytes: 110757
num_examples: 659
download_size: 515094
dataset_size: 1105021
- config_name: inltkh.ml
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1108145
num_examples: 5036
- name: validation
num_bytes: 140055
num_examples: 630
- name: test
num_bytes: 138847
num_examples: 630
download_size: 571019
dataset_size: 1387047
- config_name: inltkh.mr
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1462614
num_examples: 9672
- name: validation
num_bytes: 180306
num_examples: 1210
- name: test
num_bytes: 180558
num_examples: 1210
download_size: 840304
dataset_size: 1823478
- config_name: inltkh.ta
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 2659569
num_examples: 5346
- name: validation
num_bytes: 316083
num_examples: 669
- name: test
num_bytes: 320465
num_examples: 669
download_size: 1271262
dataset_size: 3296117
- config_name: inltkh.te
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1361667
num_examples: 4328
- name: validation
num_bytes: 170471
num_examples: 541
- name: test
num_bytes: 173149
num_examples: 541
download_size: 726293
dataset_size: 1705287
- config_name: md.hi
features:
- name: sentence
dtype: string
- name: discourse_mode
dtype: string
- name: story_number
dtype: int32
- name: id
dtype: int32
splits:
- name: train
num_bytes: 1672109
num_examples: 7974
- name: validation
num_bytes: 211187
num_examples: 997
- name: test
num_bytes: 210175
num_examples: 997
download_size: 939801
dataset_size: 2093471
- config_name: sna.bn
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': kolkata
'1': state
'2': national
'3': sports
'4': entertainment
'5': international
splits:
- name: train
num_bytes: 46070046
num_examples: 11284
- name: validation
num_bytes: 5648126
num_examples: 1411
- name: test
num_bytes: 5799979
num_examples: 1411
download_size: 21415940
dataset_size: 57518151
- config_name: wiki-ner.as
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 374983
num_examples: 1021
- name: validation
num_bytes: 49312
num_examples: 157
- name: test
num_bytes: 50456
num_examples: 160
download_size: 72919
dataset_size: 474751
- config_name: wiki-ner.bn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 7502824
num_examples: 20223
- name: validation
num_bytes: 988683
num_examples: 2985
- name: test
num_bytes: 985941
num_examples: 2690
download_size: 1278219
dataset_size: 9477448
- config_name: wiki-ner.gu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 1571588
num_examples: 2343
- name: validation
num_bytes: 192804
num_examples: 297
- name: test
num_bytes: 197877
num_examples: 255
download_size: 329660
dataset_size: 1962269
- config_name: wiki-ner.hi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 3762505
num_examples: 9463
- name: validation
num_bytes: 468678
num_examples: 1114
- name: test
num_bytes: 475253
num_examples: 1256
download_size: 948132
dataset_size: 4706436
- config_name: wiki-ner.kn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 1352027
num_examples: 2679
- name: validation
num_bytes: 179538
num_examples: 412
- name: test
num_bytes: 180791
num_examples: 476
download_size: 421877
dataset_size: 1712356
- config_name: wiki-ner.ml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 7678887
num_examples: 15620
- name: validation
num_bytes: 969947
num_examples: 2067
- name: test
num_bytes: 991102
num_examples: 2042
download_size: 2390442
dataset_size: 9639936
- config_name: wiki-ner.mr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 5431489
num_examples: 12151
- name: validation
num_bytes: 701637
num_examples: 1498
- name: test
num_bytes: 655682
num_examples: 1329
download_size: 1410663
dataset_size: 6788808
- config_name: wiki-ner.or
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 493758
num_examples: 1077
- name: validation
num_bytes: 58568
num_examples: 132
- name: test
num_bytes: 62211
num_examples: 153
download_size: 102783
dataset_size: 614537
- config_name: wiki-ner.pa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 520244
num_examples: 1408
- name: validation
num_bytes: 61170
num_examples: 186
- name: test
num_bytes: 61788
num_examples: 179
download_size: 149727
dataset_size: 643202
- config_name: wiki-ner.ta
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 10117080
num_examples: 20466
- name: validation
num_bytes: 1267188
num_examples: 2586
- name: test
num_bytes: 1321626
num_examples: 2611
download_size: 2819083
dataset_size: 12705894
- config_name: wiki-ner.te
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 3881211
num_examples: 7978
- name: validation
num_bytes: 458509
num_examples: 841
- name: test
num_bytes: 507806
num_examples: 1110
download_size: 1006881
dataset_size: 4847526
- config_name: wnli.en
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 104569
num_examples: 635
- name: validation
num_bytes: 11878
num_examples: 71
- name: test
num_bytes: 37297
num_examples: 146
download_size: 57667
dataset_size: 153744
- config_name: wnli.gu
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 251554
num_examples: 635
- name: validation
num_bytes: 28175
num_examples: 71
- name: test
num_bytes: 94578
num_examples: 146
download_size: 98032
dataset_size: 374307
- config_name: wnli.hi
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 253334
num_examples: 635
- name: validation
num_bytes: 28676
num_examples: 71
- name: test
num_bytes: 90823
num_examples: 146
download_size: 99450
dataset_size: 372833
- config_name: wnli.mr
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 256649
num_examples: 635
- name: validation
num_bytes: 29218
num_examples: 71
- name: test
num_bytes: 97128
num_examples: 146
download_size: 103774
dataset_size: 382995
- config_name: wstp.as
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 13581336
num_examples: 5000
- name: validation
num_bytes: 1698968
num_examples: 625
- name: test
num_bytes: 1697650
num_examples: 626
download_size: 6959458
dataset_size: 16977954
- config_name: wstp.bn
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 143340457
num_examples: 47580
- name: validation
num_bytes: 17759236
num_examples: 5947
- name: test
num_bytes: 17633865
num_examples: 5948
download_size: 69145372
dataset_size: 178733558
- config_name: wstp.gu
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 39353464
num_examples: 10004
- name: validation
num_bytes: 4887752
num_examples: 1251
- name: test
num_bytes: 4699158
num_examples: 1251
download_size: 19763249
dataset_size: 48940374
- config_name: wstp.hi
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 158529578
num_examples: 44069
- name: validation
num_bytes: 19371904
num_examples: 5509
- name: test
num_bytes: 19593001
num_examples: 5509
download_size: 77868574
dataset_size: 197494483
- config_name: wstp.kn
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 139950313
num_examples: 35379
- name: validation
num_bytes: 17789782
num_examples: 4422
- name: test
num_bytes: 17897031
num_examples: 4423
download_size: 67719504
dataset_size: 175637126
- config_name: wstp.ml
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 88360504
num_examples: 27527
- name: validation
num_bytes: 11193340
num_examples: 3441
- name: test
num_bytes: 11150914
num_examples: 3441
download_size: 42336357
dataset_size: 110704758
- config_name: wstp.mr
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 28302341
num_examples: 10446
- name: validation
num_bytes: 3328798
num_examples: 1306
- name: test
num_bytes: 3631684
num_examples: 1306
download_size: 13886208
dataset_size: 35262823
- config_name: wstp.or
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 10900006
num_examples: 4015
- name: validation
num_bytes: 1264935
num_examples: 502
- name: test
num_bytes: 1344652
num_examples: 502
download_size: 5319128
dataset_size: 13509593
- config_name: wstp.pa
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 22189730
num_examples: 8772
- name: validation
num_bytes: 2789186
num_examples: 1097
- name: test
num_bytes: 2685767
num_examples: 1097
download_size: 11201369
dataset_size: 27664683
- config_name: wstp.ta
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 151929218
num_examples: 48940
- name: validation
num_bytes: 18817167
num_examples: 6117
- name: test
num_bytes: 18815071
num_examples: 6118
download_size: 68699092
dataset_size: 189561456
- config_name: wstp.te
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 151696691
num_examples: 80000
- name: validation
num_bytes: 19003169
num_examples: 10000
- name: test
num_bytes: 18991913
num_examples: 10000
download_size: 50158580
dataset_size: 189691773
configs:
- config_name: actsa-sc.te
data_files:
- split: train
path: actsa-sc.te/train-*
- split: validation
path: actsa-sc.te/validation-*
- split: test
path: actsa-sc.te/test-*
- config_name: bbca.hi
data_files:
- split: train
path: bbca.hi/train-*
- split: test
path: bbca.hi/test-*
- config_name: copa.en
data_files:
- split: train
path: copa.en/train-*
- split: validation
path: copa.en/validation-*
- split: test
path: copa.en/test-*
- config_name: copa.gu
data_files:
- split: train
path: copa.gu/train-*
- split: validation
path: copa.gu/validation-*
- split: test
path: copa.gu/test-*
- config_name: copa.hi
data_files:
- split: train
path: copa.hi/train-*
- split: validation
path: copa.hi/validation-*
- split: test
path: copa.hi/test-*
- config_name: copa.mr
data_files:
- split: train
path: copa.mr/train-*
- split: validation
path: copa.mr/validation-*
- split: test
path: copa.mr/test-*
- config_name: csqa.as
data_files:
- split: test
path: csqa.as/test-*
- config_name: csqa.bn
data_files:
- split: test
path: csqa.bn/test-*
- config_name: csqa.gu
data_files:
- split: test
path: csqa.gu/test-*
- config_name: csqa.hi
data_files:
- split: test
path: csqa.hi/test-*
- config_name: csqa.kn
data_files:
- split: test
path: csqa.kn/test-*
- config_name: csqa.ml
data_files:
- split: test
path: csqa.ml/test-*
- config_name: csqa.mr
data_files:
- split: test
path: csqa.mr/test-*
- config_name: csqa.or
data_files:
- split: test
path: csqa.or/test-*
- config_name: csqa.pa
data_files:
- split: test
path: csqa.pa/test-*
- config_name: csqa.ta
data_files:
- split: test
path: csqa.ta/test-*
- config_name: csqa.te
data_files:
- split: test
path: csqa.te/test-*
- config_name: cvit-mkb-clsr.en-bn
data_files:
- split: test
path: cvit-mkb-clsr.en-bn/test-*
- config_name: cvit-mkb-clsr.en-gu
data_files:
- split: test
path: cvit-mkb-clsr.en-gu/test-*
- config_name: cvit-mkb-clsr.en-hi
data_files:
- split: test
path: cvit-mkb-clsr.en-hi/test-*
- config_name: cvit-mkb-clsr.en-ml
data_files:
- split: test
path: cvit-mkb-clsr.en-ml/test-*
- config_name: cvit-mkb-clsr.en-mr
data_files:
- split: test
path: cvit-mkb-clsr.en-mr/test-*
- config_name: cvit-mkb-clsr.en-or
data_files:
- split: test
path: cvit-mkb-clsr.en-or/test-*
- config_name: cvit-mkb-clsr.en-ta
data_files:
- split: test
path: cvit-mkb-clsr.en-ta/test-*
- config_name: cvit-mkb-clsr.en-te
data_files:
- split: test
path: cvit-mkb-clsr.en-te/test-*
- config_name: cvit-mkb-clsr.en-ur
data_files:
- split: test
path: cvit-mkb-clsr.en-ur/test-*
- config_name: iitp-mr.hi
data_files:
- split: train
path: iitp-mr.hi/train-*
- split: validation
path: iitp-mr.hi/validation-*
- split: test
path: iitp-mr.hi/test-*
- config_name: iitp-pr.hi
data_files:
- split: train
path: iitp-pr.hi/train-*
- split: validation
path: iitp-pr.hi/validation-*
- split: test
path: iitp-pr.hi/test-*
- config_name: inltkh.gu
data_files:
- split: train
path: inltkh.gu/train-*
- split: validation
path: inltkh.gu/validation-*
- split: test
path: inltkh.gu/test-*
- config_name: inltkh.ml
data_files:
- split: train
path: inltkh.ml/train-*
- split: validation
path: inltkh.ml/validation-*
- split: test
path: inltkh.ml/test-*
- config_name: inltkh.mr
data_files:
- split: train
path: inltkh.mr/train-*
- split: validation
path: inltkh.mr/validation-*
- split: test
path: inltkh.mr/test-*
- config_name: inltkh.ta
data_files:
- split: train
path: inltkh.ta/train-*
- split: validation
path: inltkh.ta/validation-*
- split: test
path: inltkh.ta/test-*
- config_name: inltkh.te
data_files:
- split: train
path: inltkh.te/train-*
- split: validation
path: inltkh.te/validation-*
- split: test
path: inltkh.te/test-*
- config_name: md.hi
data_files:
- split: train
path: md.hi/train-*
- split: validation
path: md.hi/validation-*
- split: test
path: md.hi/test-*
- config_name: sna.bn
data_files:
- split: train
path: sna.bn/train-*
- split: validation
path: sna.bn/validation-*
- split: test
path: sna.bn/test-*
- config_name: wiki-ner.as
data_files:
- split: train
path: wiki-ner.as/train-*
- split: validation
path: wiki-ner.as/validation-*
- split: test
path: wiki-ner.as/test-*
- config_name: wiki-ner.bn
data_files:
- split: train
path: wiki-ner.bn/train-*
- split: validation
path: wiki-ner.bn/validation-*
- split: test
path: wiki-ner.bn/test-*
- config_name: wiki-ner.gu
data_files:
- split: train
path: wiki-ner.gu/train-*
- split: validation
path: wiki-ner.gu/validation-*
- split: test
path: wiki-ner.gu/test-*
- config_name: wiki-ner.hi
data_files:
- split: train
path: wiki-ner.hi/train-*
- split: validation
path: wiki-ner.hi/validation-*
- split: test
path: wiki-ner.hi/test-*
- config_name: wiki-ner.kn
data_files:
- split: train
path: wiki-ner.kn/train-*
- split: validation
path: wiki-ner.kn/validation-*
- split: test
path: wiki-ner.kn/test-*
- config_name: wiki-ner.ml
data_files:
- split: train
path: wiki-ner.ml/train-*
- split: validation
path: wiki-ner.ml/validation-*
- split: test
path: wiki-ner.ml/test-*
- config_name: wiki-ner.mr
data_files:
- split: train
path: wiki-ner.mr/train-*
- split: validation
path: wiki-ner.mr/validation-*
- split: test
path: wiki-ner.mr/test-*
- config_name: wiki-ner.or
data_files:
- split: train
path: wiki-ner.or/train-*
- split: validation
path: wiki-ner.or/validation-*
- split: test
path: wiki-ner.or/test-*
- config_name: wiki-ner.pa
data_files:
- split: train
path: wiki-ner.pa/train-*
- split: validation
path: wiki-ner.pa/validation-*
- split: test
path: wiki-ner.pa/test-*
- config_name: wiki-ner.ta
data_files:
- split: train
path: wiki-ner.ta/train-*
- split: validation
path: wiki-ner.ta/validation-*
- split: test
path: wiki-ner.ta/test-*
- config_name: wiki-ner.te
data_files:
- split: train
path: wiki-ner.te/train-*
- split: validation
path: wiki-ner.te/validation-*
- split: test
path: wiki-ner.te/test-*
- config_name: wnli.en
data_files:
- split: train
path: wnli.en/train-*
- split: validation
path: wnli.en/validation-*
- split: test
path: wnli.en/test-*
- config_name: wnli.gu
data_files:
- split: train
path: wnli.gu/train-*
- split: validation
path: wnli.gu/validation-*
- split: test
path: wnli.gu/test-*
- config_name: wnli.hi
data_files:
- split: train
path: wnli.hi/train-*
- split: validation
path: wnli.hi/validation-*
- split: test
path: wnli.hi/test-*
- config_name: wnli.mr
data_files:
- split: train
path: wnli.mr/train-*
- split: validation
path: wnli.mr/validation-*
- split: test
path: wnli.mr/test-*
- config_name: wstp.as
data_files:
- split: train
path: wstp.as/train-*
- split: validation
path: wstp.as/validation-*
- split: test
path: wstp.as/test-*
- config_name: wstp.bn
data_files:
- split: train
path: wstp.bn/train-*
- split: validation
path: wstp.bn/validation-*
- split: test
path: wstp.bn/test-*
- config_name: wstp.gu
data_files:
- split: train
path: wstp.gu/train-*
- split: validation
path: wstp.gu/validation-*
- split: test
path: wstp.gu/test-*
- config_name: wstp.hi
data_files:
- split: train
path: wstp.hi/train-*
- split: validation
path: wstp.hi/validation-*
- split: test
path: wstp.hi/test-*
- config_name: wstp.kn
data_files:
- split: train
path: wstp.kn/train-*
- split: validation
path: wstp.kn/validation-*
- split: test
path: wstp.kn/test-*
- config_name: wstp.ml
data_files:
- split: train
path: wstp.ml/train-*
- split: validation
path: wstp.ml/validation-*
- split: test
path: wstp.ml/test-*
- config_name: wstp.mr
data_files:
- split: train
path: wstp.mr/train-*
- split: validation
path: wstp.mr/validation-*
- split: test
path: wstp.mr/test-*
- config_name: wstp.or
data_files:
- split: train
path: wstp.or/train-*
- split: validation
path: wstp.or/validation-*
- split: test
path: wstp.or/test-*
- config_name: wstp.pa
data_files:
- split: train
path: wstp.pa/train-*
- split: validation
path: wstp.pa/validation-*
- split: test
path: wstp.pa/test-*
- config_name: wstp.ta
data_files:
- split: train
path: wstp.ta/train-*
- split: validation
path: wstp.ta/validation-*
- split: test
path: wstp.ta/test-*
- config_name: wstp.te
data_files:
- split: train
path: wstp.te/train-*
- split: validation
path: wstp.te/validation-*
- split: test
path: wstp.te/test-*
---
# Dataset Card for "indic_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai4bharat.iitm.ac.in/indic-glue
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages](https://aclanthology.org/2020.findings-emnlp.445/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.51 GB
- **Size of the generated dataset:** 1.65 GB
- **Total amount of disk used:** 5.16 GB
### Dataset Summary
IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide
variety of tasks and covers 11 major Indian languages: as, bn, gu, hi, kn, ml, mr, or, pa, ta, te.
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task
in which a system must read a sentence with a pronoun and select the referent of that pronoun from
a list of choices. The examples are manually constructed to foil simple statistical methods: Each
one is contingent on contextual information provided by a single word or phrase in the sentence.
To convert the problem into sentence pair classification, we construct sentence pairs by replacing
the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the
pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of
new examples derived from fiction books that was shared privately by the authors of the original
corpus. While the included training set is balanced between two classes, the test set is imbalanced
between them (65% not entailment). Also, due to a data quirk, the development set is adversarial:
hypotheses are sometimes shared between training and development examples, so if a model memorizes the
training examples, it will predict the wrong label on the corresponding development set
examples. As with QNLI, each example is evaluated separately, so there is no systematic correspondence
between a model's score on this task and its score on the unconverted original task. We
call the converted dataset WNLI (Winograd NLI). AI4Bharat translated this dataset into 3
Indian languages and released it publicly.
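A minimal sketch for loading one of these translated WNLI configs with the Hugging Face `datasets` library (the Hub id `ai4bharat/indic_glue` is assumed from this card's location):

```python
from datasets import load_dataset

# Task/language pairs are separate configs, e.g. WNLI translated into Hindi.
wnli_hi = load_dataset("ai4bharat/indic_glue", "wnli.hi")

example = wnli_hi["train"][0]
print(example["premise"])
print(example["hypothesis"])
# Decode the integer label: 0 -> not_entailment, 1 -> entailment, 2 -> None.
print(wnli_hi["train"].features["label"].names[example["label"]])
```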
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### actsa-sc.te
- **Size of downloaded dataset files:** 0.38 MB
- **Size of the generated dataset:** 1.71 MB
- **Total amount of disk used:** 2.09 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"label": 0,
"text": "\"ప్రయాణాల్లో ఉన్నవారికోసం బస్ స్టేషన్లు, రైల్వే స్టేషన్లలో పల్స్పోలియో బూతులను ఏర్పాటు చేసి చిన్నారులకు పోలియో చుక్కలు వేసేలా ఏర..."
}
```
#### bbca.hi
- **Size of downloaded dataset files:** 5.77 MB
- **Size of the generated dataset:** 27.63 MB
- **Total amount of disk used:** 33.40 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"label": "pakistan",
"text": "\"नेटिजन यानि इंटरनेट पर सक्रिय नागरिक अब ट्विटर पर सरकार द्वारा लगाए प्रतिबंधों के समर्थन या विरोध में अपने विचार व्यक्त करते है..."
}
```
#### copa.en
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.87 MB
An example of 'validation' looks as follows.
```
{
"choice1": "I swept the floor in the unoccupied room.",
"choice2": "I shut off the light in the unoccupied room.",
"label": 1,
"premise": "I wanted to conserve energy.",
"question": "effect"
}
```
#### copa.gu
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"choice1": "\"સ્ત્રી જાણતી હતી કે તેનો મિત્ર મુશ્કેલ સમયમાંથી પસાર થઈ રહ્યો છે.\"...",
"choice2": "\"મહિલાને લાગ્યું કે તેના મિત્રએ તેની દયાળુ લાભ લીધો છે.\"...",
"label": 0,
"premise": "મહિલાએ તેના મિત્રની મુશ્કેલ વર્તન સહન કરી.",
"question": "cause"
}
```
#### copa.hi
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.99 MB
An example of 'validation' looks as follows.
```
{
"choice1": "मैंने उसका प्रस्ताव ठुकरा दिया।",
"choice2": "उन्होंने मुझे उत्पाद खरीदने के लिए राजी किया।",
"label": 0,
"premise": "मैंने सेल्समैन की पिच पर शक किया।",
"question": "effect"
}
```
### Data Fields
The data fields are the same among all splits.
#### actsa-sc.te
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `positive` (0), `negative` (1).
#### bbca.hi
- `label`: a `string` feature.
- `text`: a `string` feature.
#### copa.en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
#### copa.gu
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
#### copa.hi
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: a `int32` feature.
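A short usage sketch for the COPA-style configs (field semantics follow the original COPA task, where `label` selects the correct alternative; the Hub id `ai4bharat/indic_glue` is assumed as above):

```python
from datasets import load_dataset

copa_hi = load_dataset("ai4bharat/indic_glue", "copa.hi", split="validation")

example = copa_hi[0]
# label 0 means choice1 is correct, label 1 means choice2 is correct;
# 'question' states whether we ask for the "cause" or the "effect" of the premise.
correct = example["choice1"] if example["label"] == 0 else example["choice2"]
print(f'{example["premise"]} ({example["question"]}) -> {correct}')
```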
### Data Splits
#### actsa-sc.te
| |train|validation|test|
|-----------|----:|---------:|---:|
|actsa-sc.te| 4328| 541| 541|
#### bbca.hi
| |train|test|
|-------|----:|---:|
|bbca.hi| 3467| 866|
#### copa.en
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.en| 400| 100| 500|
#### copa.gu
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.gu| 362| 88| 448|
#### copa.hi
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.hi| 362| 88| 449|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{kakwani-etal-2020-indicnlpsuite,
title = "{I}ndic{NLPS}uite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for {I}ndian Languages",
author = "Kakwani, Divyanshu and
Kunchukuttan, Anoop and
Golla, Satish and
N.C., Gokul and
Bhattacharyya, Avik and
Khapra, Mitesh M. and
Kumar, Pratyush",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.445",
doi = "10.18653/v1/2020.findings-emnlp.445",
pages = "4948--4961",
}
@inproceedings{Levesque2011TheWS,
title={The Winograd Schema Challenge},
author={H. Levesque and E. Davis and L. Morgenstern},
booktitle={KR},
year={2011}
}
```
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
tdavidson/hate_speech_offensive | tdavidson | 2024-01-04T12:06:17Z | 1,339 | 31 | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1703.04009",
"region:us",
"hate-speech-detection"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: hate-speech-and-offensive-language
pretty_name: Hate Speech and Offensive Language
tags:
- hate-speech-detection
dataset_info:
features:
- name: count
dtype: int64
- name: hate_speech_count
dtype: int64
- name: offensive_language_count
dtype: int64
- name: neither_count
dtype: int64
- name: class
dtype:
class_label:
names:
'0': hate speech
'1': offensive language
'2': neither
- name: tweet
dtype: string
splits:
- name: train
num_bytes: 3207814
num_examples: 24783
download_size: 1627672
dataset_size: 3207814
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
col_mapping:
tweet: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Hate Speech and Offensive Language
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/t-davidson/hate-speech-and-offensive-language
- **Repository:** https://github.com/t-davidson/hate-speech-and-offensive-language
- **Paper:** https://arxiv.org/abs/1703.04009
- **Leaderboard:**
- **Point of Contact:** https://docs.google.com/forms/d/e/1FAIpQLSdrPNlfVBlqxun2tivzAtsZaOoPC5YYMocn-xscCgeRakLXHg/viewform?usp=pp_url&entry.1506871634&entry.147453066&entry.1390333885&entry.516829772
### Dataset Summary
An annotated dataset for hate speech and offensive language detection on tweets.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (`en`)
## Dataset Structure
### Data Instances
```
{
"count": 3,
"hate_speech_annotation": 0,
"offensive_language_annotation": 0,
"neither_annotation": 3,
"label": 2, # "neither"
"tweet": "!!! RT @mayasolovely: As a woman you shouldn't complain about cleaning up your house. & as a man you should always take the trash out...")
}
```
### Data Fields
```
count: (Integer) number of users who coded each tweet (minimum is 3; more users coded a tweet when judgments were determined to be unreliable),
hate_speech_count: (Integer) number of users who judged the tweet to be hate speech,
offensive_language_count: (Integer) number of users who judged the tweet to be offensive,
neither_count: (Integer) number of users who judged the tweet to be neither hate speech nor offensive,
class: (Class Label) majority label among the coders (0: 'hate speech', 1: 'offensive language', 2: 'neither'),
tweet: (string) the tweet text
```
### Data Splits
This dataset is not split; only a train split is available.
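Since only a train split is shipped, any held-out evaluation set has to be carved out locally. A minimal sketch with the `datasets` library (the 90/10 ratio and the seed are illustrative choices, not part of the dataset):
```
from datasets import load_dataset

# The hub ships a single "train" split for this dataset.
ds = load_dataset("tdavidson/hate_speech_offensive", split="train")

# Carve out a local held-out set; 90/10 is an illustrative choice.
splits = ds.train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)

# Inspect one example and decode its class label.
example = splits["train"][0]
print(example["tweet"][:80], "->", ds.features["class"].int2str(example["class"]))
```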
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
Usernames are not anonymized in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT License
### Citation Information
@inproceedings{hateoffensive,
title = {Automated Hate Speech Detection and the Problem of Offensive Language},
author = {Davidson, Thomas and Warmsley, Dana and Macy, Michael and Weber, Ingmar},
booktitle = {Proceedings of the 11th International AAAI Conference on Web and Social Media},
series = {ICWSM '17},
year = {2017},
location = {Montreal, Canada},
pages = {512-515}
}
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
AI-team-UoA/greek_legal_code | AI-team-UoA | 2024-01-04T12:03:50Z | 304 | 12 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:el",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2109.15298",
"region:us"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- found
language_creators:
- found
language:
- el
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- topic-classification
pretty_name: Greek Legal Code
dataset_info:
- config_name: chapter
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ΜΕΤΑΛΛΕΙΑ ΚΑΙ ΟΡΥΧΕΙΑ
'1': ΣΤΑΤΙΩΤΙΚΕΣ ΣΧΟΛΕΣ
'2': ΠΑΡΟΧΕΣ ΑΝΕΡΓΙΑΣ
'3': ΣΙΔΗΡΟΔΡΟΜΙΚΑ ΔΙΚΤΥΑ
'4': ΕΙΔΙΚΑ ΣΤΡΑΤΙΩΤΙΚΑ ΑΔΙΚΗΜΑΤΑ
'5': ΚΡΑΤΙΚΕΣ ΠΡΟΜΗΘΕΙΕΣ
'6': ΑΓΡΟΤΙΚΗ ΑΠΟΚΑΤΑΣΤΑΣΗ
'7': ΑΞΙΩΜΑΤΙΚΟΙ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'8': ΣΧΕΔΙΑ ΠΟΛΕΩΝ
'9': ΣΥΚΑ
'10': ΠΡΟΛΗΨΙΣ ΚΑΙ ΔΙΩΞΙΣ ΤΟΥ ΕΓΚΛΗΜΑΤΟΣ
'11': ΔΙΕΘΝΕΙΣ ΜΕΤΑΦΟΡΕΣ
'12': ΓΕΝΙΚΗ ΣΥΓΚΟΙΝΩΝΙΑ ΚΑΙ ΔΙΑΤΑΞΕΙΣ
'13': ΚΛΗΡΟΝΟΜΙΚΟ ΔΙΚΑΙΟ
'14': ΚΟΙΝΩΝΙΚΗ ΑΝΤΙΛΗΨΗ
'15': ΝΑΥΤΙΛΙΑΚΕΣ ΣΗΜΑΝΣΕΙΣ
'16': ΔΙΕΘΝΕΣ ΠΟΙΝΙΚΟ ΔΙΚΑΙΟ
'17': ΑΣΦΑΛΙΣΤΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ Ε.Ν
'18': ΣΩΜΑΤΙΚΗ ΑΓΩΓΗ
'19': ΣΠΟΡΟΠΑΡΑΓΩΓΗ
'20': ΥΠΗΡΕΣΙΑΙ ΔΗΜΟΣΙΩΝ ΕΡΓΩΝ
'21': ΤΑΜΕΙΑ ΣΥΝΤΑΞΕΩΝ ΤΡΑΠΕΖΩΝ
'22': ΠΥΡΟΣΒΕΣΤΙΚΟ ΣΩΜΑ
'23': ΔΙΑΦΟΡΕΣ ΒΙΟΜΗΧΑΝΙΕΣ
'24': ΕΚΤΕΛΕΣΗ ΚΑΙ ΣΥΝΕΠΕΙΕΣ ΤΗΣ ΠΟΙΝΗΣ
'25': ΔΙΕΘΝΕΙΣ ΑΣΦΑΛΙΣΤΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'26': ΝΟΜΟΘΕΣΙΑ
'27': ΒΑΜΒΑΚΙ
'28': ΠΑΡΟΧΕΣ ΣΥΝΤΑΞΕΩΝ
'29': ΝΟΜΙΣΜΑ
'30': ΣΥΜΒΑΣΗ ΝΑΥΤΙΚΗΣ ΕΡΓΑΣΙΑΣ
'31': ΟΡΓΑΝΙΣΜΟΊ ΚΟΙΝΩΝΙΚΉΣ ΑΣΦΑΛΊΣΕΩΣ
'32': ΑΓΡΟΤΙΚΗ ΑΣΦΑΛΕΙΑ
'33': ΥΓΕΙΟΝΟΜΙΚΟΣ ΕΛΕΓΧΟΣ ΕΙΣΕΡΧΟΜΕΝΩΝ
'34': ΜΟΥΣΕΙΑ ΚΑΙ ΣΥΛΛΟΓΕΣ
'35': ΠΡΟΣΩΠΙΚΟ Ι.Κ.Α
'36': ΞΕΝΟΔΟΧΕΙΑ
'37': ΚΡΑΤΙΚΗ ΑΣΦΑΛΕΙΑ
'38': ΣΥΝΕΤΑΙΡΙΣΜΟΙ
'39': ΠΟΛΥΕΘΝΕΙΣ ΣΥΜΦΩΝΙΕΣ
'40': ΕΤΕΡΟΔΟΞΟΙ
'41': ΜΕΣΗ ΕΚΠΑΙΔΕΥΣΙΣ
'42': ΓΕΩΡΓΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ
'43': ΓΕΝΙΚΟ ΛΟΓΙΣΤΗΡΙΟ
'44': ΡΥΘΜΙΣΗ ΤΗΣ ΑΓΟΡΑΣ ΕΡΓΑΣΙΑΣ
'45': ΠΑΡΟΧΟΙ ΚΙΝΗΤΩΝ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ
'46': ΕΜΠΡΑΓΜΑΤΟΣ ΑΣΦΑΛΕΙΑ
'47': ΦΟΡΟΛΟΓΙΑ ΑΚΑΘΑΡΙΣΤΟΥ ΠΡΟΣΟΔΟΥ
'48': ΚΤΗΜΑΤΙΚΕΣ ΤΡΑΠΕΖΕΣ
'49': ΣΤΑΤΙΣΤΙΚΗ
'50': ΚΕΡΑΙΕΣ – ΣΤΑΘΜΟΙ ΚΕΡΑΙΩΝ
'51': ΠΟΙΝΙΚΟΣ ΝΟΜΟΣ
'52': ΜΕΣΑ ΔΙΔΑΣΚΑΛΙΑΣ
'53': ΕΜΠΟΡΙΟ ΦΑΡΜΑΚΩΝ
'54': ΔΙΑΦΟΡΑ
'55': ΔΗΜΟΣΙΑ ΚΤΗΜΑΤΑ
'56': ΕΙΣΦΟΡΕΣ Ι.Κ.Α
'57': ΚΑΤΑΓΓΕΛΙΑ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ
'58': ΠΡΟΣΩΠΙΚΟ ΠΟΛΙΤΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'59': ΔΗΜΟΣΙΟ ΧΡΕΟΣ
'60': ΑΠΟΤΑΜΙΕΥΣΗ
'61': ΑΛΛΟΘΡΗΣΚΟΙ
'62': ΠΛΟΗΓΙΚΗ ΥΠΗΡΕΣΙΑ
'63': ΤΥΠΟΣ ΚΑΙ ΠΛΗΡΟΦΟΡΙΕΣ
'64': ΤΡΟΠΟΠΟΙΗΣΗ ΚΑΙ ΚΑΤΑΡΓΗΣΗ ΤΗΣ ΠΟΙΝΗΣ
'65': ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ ΤΥΠΟΥ
'66': ΟΙΚΟΓΕΝΕΙΑΚΟ ΔΙΚΑΙΟ
'67': ΔΙΟΙΚΗΣΗ ΕΘΝΙΚΗΣ ΟΙΚΟΝΟΜΙΑΣ
'68': ΥΠΟΥΡΓΕΙΟ ΕΘΝΙΚΗΣ ΑΜΥΝΑΣ
'69': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΠΡΟΝΟΙΑΣ
'70': ΠΡΟΣΩΠΙΚΟ ΤΩΝ ΔΙΚΑΣΤΗΡΙΩΝ
'71': ΠΡΟΣΤΑΣΙΑ ΠΡΟΣΩΠΩΝ ΕΙΔΙΚΩΝ ΚΑΤΗΓΟΡΙΩΝ
'72': ΠΑΡΟΧΕΣ ΑΣΘΕΝΕΙΑΣ
'73': ΜΕΤΑΝΑΣΤΕΥΣΗ
'74': ΥΠΟΥΡΓΕΙΟ ΠΑΙΔΕΙΑΣ
'75': ΑΣΦΑΛΕΙΑ ΝΑΥΣΙΠΛΟΪΑΣ
'76': ΟΔΟΠΟΙΪΑ
'77': ΣΤΡΑΤΟΔΙΚΕΙΑ
'78': ΜΙΣΘΩΣΗ
'79': ΕΙΣΠΡΑΞΗ ΔΗΜΟΣΙΩΝ ΕΣΟΔΩΝ
'80': ΟΠΛΙΤΕΣ ΚΑΙ ΑΝΘΥΠΑΣΠΙΣΤΕΣ
'81': ΟΡΓΑΝΙΣΜΟΣ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ ΕΛΛΑΔΑΣ (Ο.Τ.Ε.)
'82': ΌΡΓΑΝΑ ΆΣΚΗΣΗΣ ΔΙΑΧΕΙΡΙΣΤΙΚΟΎ ΕΛΈΓΧΟΥ ΟΡΓΑΝΙΣΜΏΝ ΚΑΙ ΕΠΙΧΕΙΡΉΣΕΩΝ
'83': ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ ΤΥΠΟΥ
'84': ΕΞΑΓΩΓΙΚΟ ΕΜΠΟΡΙΟ
'85': ΑΕΡΟΠΟΡΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'86': ΓΕΩΡΓΙΚΟΙ ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΑΓΡΟΤΙΚΕΣ ΣΥΝΕΤΑΙΡΙΣΤΙΚΕΣ ΟΡΓΑΝΩΣΕΙΣ
'87': ΟΙΚΟΝΟΜΙΚΕΣ ΥΠΗΡΕΣΙΕΣ
'88': ΟΧΥΡΩΣΕΙΣ
'89': ΕΚΤΑΚΤΟΙ ΠΟΙΝΙΚΟΙ ΝΟΜΟΙ
'90': ΕΚΤΕΛΕΣΗ
'91': ΔΙΟΙΚΗΤΙΚΟΙ ΚΑΝΟΝΙΣΜΟΙ
'92': ΥΔΡΑΥΛΙΚΑ ΕΡΓΑ
'93': ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'94': ΕΚΚΑΘΑΡΙΣΕΙΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'95': ΔΙΟΙΚΗΣΗ ΕΜΠΟΡΙΚΟΥ ΝΑΥΤΙΚΟΥ
'96': ΑΝΩΤΑΤΟ ΕΙΔΙΚΟ ΔΙΚΑΣΤΗΡΙΟ
'97': ΑΡΤΟΣ
'98': ΕΙΣΑΓΩΓΙΚΟ ΕΜΠΟΡΙΟ
'99': ΑΛΙΕΙΑ
'100': ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΠΕΡΙΟΥΣΙΑ
'101': ΔΙΑΦΟΡΑ ΔΗΜΟΣΙΑ ΕΡΓΑ
'102': ΜΟΝΕΣ
'103': ΠΡΟΕΔΡΟΣ ΤΗΣ ΔΗΜΟΚΡΑΤΙΑΣ ΚΑΙ ΠΡΟΕΔΡΙΑ ΤΗΣ ΔΗΜΟΚΡΑΤΙΑΣ
'104': ΠΟΛΥΕΘΝΕΙΣ ΟΡΓΑΝΙΣΜΟΙ
'105': ΑΡΧΑΙΟΤΗΤΕΣ
'106': ΝΑΟΙ ΚΑΙ ΛΕΙΤΟΥΡΓΟΙ ΑΥΤΩΝ
'107': ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΕΚΠΑΙΔΕΥΣΗ
'108': ΕΝΙΣΧΥΣΙΣ ΤΗΣ ΓΕΩΡΓΙΑΣ
'109': ΕΚΘΕΣΕΙΣ
'110': ΠΡΟΣΤΑΣΙΑ ΤΩΝ ΣΥΝΑΛΛΑΓΩΝ
'111': ΑΣΦΑΛΙΣΗ
'112': ΚΤΗΝΟΤΡΟΦΙΑ
'113': ΕΚΠΑΙΔΕΥΤΙΚΑ ΤΕΛΗ
'114': ΔΙΟΙΚΗΣΗ ΕΚΠΑΙΔΕΥΣΕΩΣ
'115': ΤΑΜΕΙΟ ΠΑΡΑΚΑΤΑΘΗΚΩΝ ΚΑΙ ΔΑΝΕΙΩΝ
'116': ΑΓΑΘΟΕΡΓΑ ΙΔΡΥΜΑΤΑ
'117': ΦΟΡΟΛΟΓΙΚΑ ΔΙΚΑΣΤΗΡΙΑ
'118': ΦΟΡΟΙ ΚΑΤΑΝΑΛΩΣΕΩΣ
'119': ΒΙΒΛΙΟΘΗΚΕΣ-ΠΡΟΣΤΑΣΙΑ ΒΙΒΛΙΟΥ-ΔΙΑΔΟΣΗ ΛΟΓΟΤΕΧΝΙΑΣ
'120': ΤΗΛΕΠΙΚΟΙΝΩΝΙΑΚΕΣ ΚΑΙ ΤΑΧΥΔΡΟΜΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'121': ΙΔΙΩΤΙΚΗ ΕΚΠΑΙΔΕΥΣΗ
'122': ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ
'123': ΑΣΥΡΜΑΤΟΣ
'124': ΑΠΟΔΟΧΕΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΩΝ
'125': ΥΓΕΙΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ ΣΤΡΑΤΟΥ
'126': ΦΑΡΜΑΚΕΙΑ
'127': ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ
'128': ΝΑΥΤΙΚΗ ΕΚΠΑΙΔΕΥΣΗ
'129': ΕΞΥΠΗΡΕΤΗΣΗ ΠΟΛΙΤΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'130': ΠΑΡΟΧΕΣ Ι.Κ.Α
'131': ΓΕΝΙΚΑ ΥΓΕΙΟΝΟΜΙΚΑ ΜΕΤΡΑ
'132': ΕΚΜΕΤΑΛΛΕΥΣΗ ΘΑΛΑΣΣΙΩΝ ΣΥΓΚΟΙΝΩΝΙΩΝ
'133': ΠΡΟΣΩΠΙΚΟ ΤΑΧΥΔΡΟΜΕΙΩΝ
'134': ΕΚΤΕΛΕΣΤΙΚΗ ΕΞΟΥΣΙΑ
'135': ΣΥΣΤΑΣΗ ΚΑΙ ΕΔΡΑ ΤΟΥ ΚΡΑΤΟΥΣ
'136': ΦΟΡΟΛΟΓΙΑ ΔΙΑΣΚΕΔΑΣΕΩΝ
'137': ΤΗΛΕΦΩΝΑ
'138': ΣΤΡΑΤΟΛΟΓΙΑ
'139': ΕΚΠΑΙΔΕΥΣΗ ΕΡΓΑΤΩΝ
'140': ΥΠΟΥΡΓΕΙΟ ΠΟΛΙΤΙΣΜΟΥ
'141': ΦΟΡΟΛΟΓΙΑ ΟΙΝΟΠΝΕΥΜΑΤΩΔΩΝ ΠΟΤΩΝ
'142': ΥΠΟΥΡΓΕΙΟ ΓΕΩΡΓΙΑΣ
'143': ΣΩΜΑΤΕΙΑ
'144': ΕΙΔΙΚΕΣ ΜΟΡΦΕΣ ΑΠΑΣΧΟΛΗΣΗΣ
'145': ΥΠΟΥΡΓΕΙΟ ΔΙΚΑΙΟΣΥΝΗΣ
'146': ΝΑΥΤΙΛΙΑΚΟΙ ΟΡΓΑΝΙΣΜΟΙ
'147': ΤΟΥΡΙΣΜΟΣ
'148': ΚΑΠΝΟΣ
'149': ΠΡΟΣΤΑΣΙΑ ΗΘΩΝ
'150': ΕΙΔΙΚΕΣ ΥΠΗΡΕΣΙΕΣ ΝΑΥΤΙΚΟΥ
'151': ΑΠΟΔΟΧΕΣ ΣΤΡΑΤΙΩΤΙΚΩΝ
'152': ΠΡΟΝΟΙΑ ΠΛΗΡΩΜΑΤΩΝ Ε.Ν
'153': ΕΙΔΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΠΕΡΙ ΑΝΩΝ.ΕΤΑΙΡΕΙΩΝ
'154': ΔΗΜΟΣΙΑ ΔΙΟΙΚΗΣΗ
'155': ΤΟΠΙΚΑ ΣΧΕΔΙΑ ΠΟΛΕΩΝ
'156': ΠΡΟΣΤΑΣΙΑ ΠΑΙΔΙΚΗΣ ΗΛΙΚΙΑΣ
'157': ΕΛΛΗΝΙΚΗ ΑΣΤΥΝΟΜΙΑ
'158': ΛΙΜΕΝΙΚΟ ΣΩΜΑ
'159': ΤΟΥΡΙΣΤΙΚΗ ΑΣΤΥΝΟΜΙΑ
'160': ΒΙΟΜΗΧΑΝΙΑ
'161': ΣΧΟΛΕΣ ΠΑΝΕΠΙΣΤΗΜΙΟΥ ΑΘΗΝΩΝ
'162': ΑΣΦΑΛΙΣΤΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ ΣΤΡΑΤΟΥ
'163': ΑΛΥΚΕΣ
'164': ΕΣΩΤΕΡΙΚΟ ΕΜΠΟΡΙΟ
'165': ΕΘΝΙΚΟ ΣΥΣΤΗΜΑ ΥΓΕΙΑΣ
'166': ΝΟΜΟΘΕΤΙΚΗ ΕΞΟΥΣΙΑ
'167': ΔΙΟΙΚΗΣH ΚΟΙΝΩΝIKΗΣ ΠΡΟΝΟΙΑΣ
'168': ΠΛΗΡΩΜΑΤΑ
'169': ΜΑΘΗΤΙΚΗ ΠΡΟΝΟΙΑ
'170': ΔΙΟΙΚΗΣΗ ΤΥΠΟΥ ΚΑΙ ΤΟΥΡΙΣΜΟΥ
'171': ΕΠΟΙΚΙΣΜΟΣ
'172': ΤΡΟΧΙΟΔΡΟΜΟΙ
'173': ΕΠΑΓΓΕΛΜΑΤΙΚΗ ΕΚΠΑΙΔΕΥΣΗ
'174': ΑΕΡΟΠΟΡΙΚΗ ΕΚΠΑΙΔΕΥΣΗ
'175': ΥΠΟΥΡΓΕΙΟ ΕΘΝΙΚΗΣ ΟΙΚΟΝΟΜΙΑΣ
'176': ΘΕΑΤΡΟ
'177': ΥΔΡΕΥΣΗ
'178': ΔΙΕΘΝΕΙΣ ΣΤΡΑΤΙΩΤΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'179': ΕΘΝΙΚΟ ΜΕΤΣΟΒΙΟ ΠΟΛΥΤΕΧΝΕΙΟ
'180': ΥΠΟΥΡΓΕΙΟ ΕΞΩΤΕΡΙΚΩΝ
'181': ΕΥΡΩΠΑΪΚΟΙ ΠΟΛΥΕΘΝΕΙΣ ΟΡΓΑΝΙΣΜΟΙ
'182': ΕΛΕΥΘΕΡΙΑ ΤΗΣ ΕΡΓΑΣΙΑΣ
'183': ΥΠΟΥΡΓΕΙΟ ΕΣΩΤΕΡΙΚΩΝ ΔΗΜ.ΔΙΟΙΚΗΣΗΣ ΚΑΙ ΑΠΟΚΕΝΤΡΩΣΗΣ
'184': ΔΙΑΦΟΡΕΣ ΕΝΟΧΙΚΕΣ ΣΧΕΣΕΙΣ
'185': ΛΗΞΙΑΡΧΕΙΑ
'186': ΕΙΔΙΚΟΙ ΚΑΝΟΝΙΣΜΟΙ
'187': ΤΕΛΩΝΕΙΑΚΕΣ ΣΥΜΒΑΣΕΙΣ
'188': ΝΑΥΤΙΚΟ ΠΟΙΝΙΚΟ ΔΙΚΑΙΟ
'189': ΣΤΕΓΑΣΗ ΔΗΜΟΣΙΩΝ ΥΠΗΡΕΣΙΩΝ
'190': ΠΛΗΡΩΜΑΤΑ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'191': ΣΥΝΤΑΓΜΑΤΙΚΟΣ ΧΑΡΤΗΣ
'192': ΗΛΕΚΤΡΙΣΜΟΣ
'193': ΑΣΦΑΛΙΣΤΙΚΑ ΔΙΚΑΣΤΗΡΙΑ
'194': ΛΕΣΧΕΣ ΕΛΛΗΝΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ
'195': ΥΠΟΥΡΓΕΙΟ ΔΗΜΟΣΙΑΣ TAΞΗΣ
'196': ΕΚΤΕΛΕΣ ΔΗΜΟΣΙΩΝ ΕΡΓΩΝ
'197': ΠΑΝΕΠΙΣΤΗΜΙΟ ΘΕΣΣΑΛΟΝΙΚΗΣ
'198': ΔΑΣΙΚΗ ΝΟΜΟΘΕΣΙΑ
'199': ΕΙΔΙΚΕΣ ΑΝΩΤΑΤΕΣ ΣΧΟΛΕΣ
'200': ΕΔΑΦΟΣ ΤΟΥ ΕΛΛΗΝΙΚΟΥ ΚΡΑΤΟΥΣ
'201': ΔΙΚΗΓΟΡΟΙ
'202': ΔΙΚΑΙΟ ΤΩΝ ΠΡΟΣΩΠΩΝ
'203': ΔΙΟΙΚΗΣΗ ΤΑΧΥΔΡΟΜΙΚΗΣ, ΤΗΛΕΓΡΑΦΙΚΗΣ
'204': ΣΧΟΛΙΚΑ ΚΤΙΡΙΑ ΚΑΙ ΤΑΜΕΙΑ
'205': ΑΕΡΟΛΙΜΕΝΕΣ
'206': ΥΠΟΘΗΚΟΦΥΛΑΚΕΙΑ
'207': ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ ΠΡΟΣΩΠΙΚΟΥ ΥΠΟΥΡΓΕΙΟΥ ΔΗΜΟΣΙΑΣ ΤΑΞΗΣ
'208': ΔΙΑΧΕΙΡΙΣΕΙΣ ΤΟΥ ΔΗΜΟΣΙΟΥ
'209': ΕΜΠΡΑΓΜΑΤΟ ΔΙΚΑΙΟ
'210': ΦΟΡΤΟΕΚΦΟΡΤΩΣΕΙΣ
'211': ΑΝΩΝΥΜΕΣ ΕΤΑΙΡΕΙΕΣ
'212': ΕΙΔΙΚΟΙ ΕΠΙΣΙΤΙΣΤΙΚΟΙ ΝΟΜΟΙ
'213': ΕΚΚΛΗΣΙΕΣ ΑΛΛΟΔΑΠΗΣ
'214': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ
'215': ΟΡΓΑΝΙΣΜΟΣ ΑΣΦΑΛΙΣΗΣ ΕΛΕΥΘΕΡΩΝ ΕΠΑΓΓΕΛΜΑΤΙΩΝ
'216': ΑΣΦΑΛΕΙΑ ΑΕΡΟΠΛΟΪΑΣ
'217': ΤΑΜΕΙΑ ΑΣΦΑΛΙΣΕΩΣ ΚΑΙ ΑΡΩΓΗΣ
'218': ΑΝΩΤΑΤΗ ΕΚΠΑΙΔΕΥΣΗ
'219': ΠΟΛΕΜΙΚΗ ΔΙΑΘΕΣΙΜΟΤΗΤΑ
'220': ΠΟΙΝΙΚΟ ΚΑΙ ΠΕΙΘΑΡΧΙΚΟ ΔΙΚΑΙΟ
'221': ΦΟΡΟΛΟΓΙΑ ΕΠΙΤΗΔΕΥΜΑΤΟΣ
'222': ΕΚΤΑΚΤΕΣ ΦΟΡΟΛΟΓΙΕΣ
'223': ΠΟΙΝΙΚΗ ΔΙΚΟΝΟΜΙΑ
'224': ΣΤΟΙΧΕΙΩΔΗΣ ΕΚΠΑΙΔΕΥΣΗ
'225': ΣΥΜΒΟΥΛΙΟ ΕΠΙΚΡΑΤΕΙΑΣ ΚΑΙ ΔΙΟΙΚΗΤΙΚΑ ΔΙΚΑΣΤΗΡΙΑ
'226': ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΚΑΙ ΕΚΜΕΤΑΛΛΕΥΣΕΙΣ
'227': ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ ΝΑΥΤΙΚΟΥ
'228': ΤΥΠΟΣ
'229': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΕΠΑΓΓΕΛΜΑΤΙΩΝ
'230': ΠΑΝΕΠΙΣΤΗΜΙΟ ΙΩΑΝΝΙΝΩΝ
'231': ΧΡΕΩΓΡΑΦΑ
'232': ΠΡΟΪΟΝΤΑ ΕΛΑΙΑΣ
'233': ΕΚΚΛΗΣΙΑ ΙΟΝΙΩΝ ΝΗΣΩΝ
'234': ΔΙΟΙΚΗΣH ΥΓΙΕΙΝΗΣ
'235': ΑΕΡΟΠΟΡΙΚΟ ΠΟΙΝΙΚΟ ΔΙΚΑΙΟ
'236': ΚΑΤΑΠΟΛΕΜΗΣΗ ΝΟΣΩΝ ΚΑΤ’ ΙΔΙΑΝ
'237': ΕΙΔΙΚΟΙ ΠΟΙΝΙΚΟΙ ΝΟΜΟΙ
'238': ΘΗΡΑ
'239': ΥΓΙΕΙΝΗ ΚΑΙ ΑΣΦΑΛΕΙΑ ΕΡΓΑΖΟΜΕΝΩΝ
'240': ΔΙΟΙΚΗΣΗ ΣΥΓΚΟΙΝΩΝΙΩΝ
'241': ΑΠΟΣΤΟΛΙΚΗ ΔΙΑΚΟΝΙΑ ΕΚΚΛΗΣΙΑΣ ΤΗΣ ΕΛΛΑΔΟΣ
'242': ΠΡΟΣΩΡΙΝΕΣ ΑΤΕΛΕΙΕΣ
'243': ΤΑΧΥΔΡΟΜΙΚΑ ΤΑΜΙΕΥΤΗΡΙΑ
'244': ΑΝΩΤΑΤΗ ΣΧΟΛΗ ΚΑΛΩΝ ΤΕΧΝΩΝ
'245': ΔΙΟΙΚΗΣΗ ΕΡΓΑΣΙΑΣ
'246': ΑΓΙΟΝ ΟΡΟΣ
'247': ΣΧΟΛΕΣ Π. ΝΑΥΤΙΚΟΥ
'248': ΤΡΑΠΕΖΕΣ
'249': ΕΛΕΓΧΟΣ ΚΙΝΗΣΕΩΣ ΜΕ ΤΟ ΕΞΩΤΕΡΙΚΟ
'250': ΕΙΔΙΚΑΙ ΚΑΤΗΓΟΡΙΑΙ ΠΛΟΙΩΝ
'251': ΓΕΩΡΓΙΚΗ ΥΓΙΕΙΝΗ
'252': ΕΞΟΔΑ ΠΟΙΝΙΚΗΣ ΔΙΑΔΙΚΑΣΙΑΣ
'253': ΕΡΓΑΣΙΑ ΓΥΝΑΙΚΩΝ ΚΑΙ ΑΝΗΛΙΚΩΝ
'254': ΔΙΟΙΚΗΣΗ ΕΦΟΔΙΑΣΜΟΥ
'255': ΕΜΠΟΡΙΚΑ ΕΠΑΓΓΕΛΜΑΤΑ
'256': ΕΚΤΕΛΩΝΙΣΤΕΣ
'257': ΦΟΡΟΛΟΓΙΑ ΚΛΗΡΟΝΟΜΙΩΝ, ΔΩΡΕΩΝ ΚΛΠ
'258': ΟΡΓΑΝΙΣΜΟΙ ΥΠΟΥΡΓΕΙΟΥ ΕΡΓΑΣΙΑΣ
'259': ΕΝΙΣΧΥΣΗ ΕΠΙΣΤΗΜΩΝ ΚΑΙ ΤΕΧΝΩΝ
'260': ΔΙΑΦΟΡΟΙ ΦΟΡΟΛΟΓΙΚΟΙ ΝΟΜΟΙ
'261': ΤΕΧΝΙΚΕΣ ΠΡΟΔΙΑΓΡΑΦΕΣ
'262': ΜΗΤΡΩΑ ΔΗΜΟΤΩΝ
'263': ΚΑΤΑΣΤΑΣΗ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'264': ΠΡΟΣΩΠΙΚΟΝ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'265': ΥΓΕΙΟΝΟΜΙΚΗ ΑΝΤΙΛΗΨΗ
'266': ΤΕΛΗ ΧΑΡΤΟΣΗΜΟΥ
'267': ΣΤΡΑΤΙΩΤΙΚΟΙ ΓΕΝΙΚΑ
'268': ΛΙΜΕΝΙΚΕΣ ΑΡΧΕΣ
'269': ΕΛΕΓΧΟΣ ΚΥΚΛΟΦΟΡΙΑΣ
'270': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΣ ΚΑΙ ΑΥΤΑΣΦΑΛΙΣΕΩΣ ΥΓΕΙΟΝΟΜΙΚΩΝ
'271': ΠΟΛΙΤΙΚΗ ΚΑΙ ΟΙΚΟΝΟΜΙΚΗ ΕΠΙΣΤΡΑΤΕΥΣΗ
'272': ΤΗΛΕΓΡΑΦΟΙ
'273': ΣΕΙΣΜΟΠΛΗΚΤΟΙ
'274': ΙΑΜΑΤΙΚΕΣ ΠΗΓΕΣ
'275': ΙΔΙΩΤΙΚΟ ΝΑΥΤΙΚΟ ΔΙΚΑΙΟ
'276': ΔΙΕΘΝΕΙΣ ΥΓΕΙΟΝΟΜΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'277': ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ
'278': ΕΚΚΛΗΣΙΑ ΚΡΗΤΗΣ
'279': ΠΡΟΣΤΑΣΙΑ ΝΟΜΙΣΜΑΤΟΣ
'280': ΠΡΟΣΤΑΣΙΑ ΠΡΟΪΟΝΤΩΝ ΑΜΠΕΛΟΥ
'281': ΑΝΑΠΗΡΟΙ ΚΑΙ ΘΥΜΑΤΑ ΠΟΛΕΜΟΥ
'282': ΠΑΡΟΧΕΣ ΔΙΑΦΟΡΕΣ
'283': ΤΟΠΙΚΗ ΑΥΤΟΔΙΟΙΚΗΣΗ
'284': OΡΓΑΝΩΣΗ ΣΤΡΑΤΟΥ ΞΗΡΑΣ
'285': ΔΙΑΚΟΠΕΣ ΤΗΣ ΕΡΓΑΣΙΑΣ
'286': ΟΡΓΑΝΙΣΜΟΣ ΠΟΛΕΜΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'287': ΕΠΙΜΕΛΗΤΗΡΙΑ
'288': ΕΚΚΛΗΣΙΑ ΤΗΣ ΕΛΛΑΔΟΣ
'289': ΝΑΡΚΩΤΙΚΑ
'290': ΕΚΜΕΤΑΛΛΕΥΣΗ ΤΑΧΥΔΡΟΜΕΙΩΝ
'291': ΜΟΥΣΙΚΗ
'292': ΝΟΜΑΡΧΙΕΣ
'293': ΠΡΟΣΩΠΙΚΟ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'294': ΓΕΝΙΚΟ ΧΗΜΕΙΟ ΤΟΥ ΚΡΑΤΟΥΣ
'295': ΚΡΑΤΙΚΗ
'296': ΔΙΟΙΚΗΣΗ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'297': ΠΑΡΟΧΟΙ ΣΤΑΘΕΡΩΝ ΗΛΕΚΤΡΟΝΙΚΩΝ ΕΠΙΚΟΙΝΩΝΙΩΝ
'298': ΕΠΑΓΓΕΛΜΑΤΙΚΟΣ ΚΙΝΔΥΝΟΣ
'299': ΕΝΟΧΕΣ ΣΕ ΧΡΥΣΟ ΚΑΙ ΣΥΝΑΛΛΑΓΜΑ
'300': ΙΠΠΟΠΑΡΑΓΩΓΗ
'301': ΑΥΤΟΚΙΝΗΤΑ
'302': ΑΓΟΡΑΝΟΜΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'303': ΠΡΟΣΦΥΓΕΣ
'304': ΔΙΑΦΟΡΑ ΣΤΡΑΤΙΩΤΙΚΑ ΘΕΜΑΤΑ
'305': ΓΕΝ. ΓΡΑΜΜ. ΒΙΟΜΗΧΑΝΙΑΣ - ΓΕΝ. ΓΡΑΜΜ. ΕΡΕΥΝΑΣ ΚΑΙ ΤΕΧΝΟΛΟΓΙΑΣ
'306': ΔΙΑΜΕΤΑΚΟΜΙΣΗ
'307': ΔΙΚΑΙΟΣΤΑΣΙΟ
'308': ΥΔΑΤΑ
'309': ΦΟΡΟΛΟΓΙΚΕΣ ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΚΑΙ ΑΠΑΛΛΑΓΕΣ
'310': ΜΟΝΟΠΩΛΙΑ
'311': ΕΙΔΙΚΕΣ ΔΙΑΔΙΚΑΣΙΕΣ
'312': ΠΡΟΝΟΙΑ ΓΙΑ ΤΟΥΣ ΣΤΡΑΤΙΩΤΙΚΟΥΣ
'313': ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ
'314': ΟΡΓΑΝΩΣΗ ΧΡΟΝΟΥ ΕΡΓΑΣΙΑΣ
'315': ΠΡΟΣΩΠΙΚΟ ΤΥΠΟΥ
'316': ΔΙΚΑΣΤΙΚΟΙ ΕΠΙΜΕΛΗΤΕΣ
'317': ΛΟΥΤΡΟΠΟΛΕΙΣ
'318': ΤΕΛΩΝΕΙΑΚΟΣ ΚΩΔΙΚΑΣ
'319': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΝΟΜΙΚΩΝ
'320': ΔΙΑΦΟΡΟΙ ΤΕΛΩΝΕΙΑΚΟΙ ΝΟΜΟΙ
'321': ΔΙΟΙΚΗΣΗ ΠΟΛΙΤΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'322': ΑΕΡΟΠΟΡΙΚΕΣ ΕΚΜΕΤΑΛΛΕΥΣΕΙΣ
'323': ΕΜΠΟΡΙΚΕΣ ΠΡΑΞΕΙΣ
'324': ΔΙΚΑΣΤΗΡΙΑ
'325': ΒΑΣΙΛΕΙΑ ΚΑΙ ΑΝΤΙΒΑΣΙΛΕΙΑ
'326': ΠΡΟΣΩΠΙΚΟ ΠΟΛΕΜΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'327': ΠΡΟΣΤΑΣΙΑ ΚΑΙ ΚΙΝΗΤΡΑ ΙΔΙΩΤΙΚΩΝ ΕΠΕΝΔΥΣΕΩΝ
'328': ΒΑΣΙΛΙΚΑ ΙΔΡΥΜΑΤΑ
'329': ΣΙΔΗΡΟΔΡΟΜΟΙ ΓΕΝΙΚΑ
'330': ΠΝΕΥΜΑΤΙΚΗ ΙΔΙΟΚΤΗΣΙΑ
'331': ΔΙΑΦΟΡΑ ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ
'332': ΥΓΕΙΟΝΟΜΙΚΑ ΕΠΑΓΓΕΛΜΑΤΑ
'333': ΦΟΡΟΛΟΓΙΑ ΚΑΠΝΟΥ
'334': ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ
'335': ΧΩΡΟΦΥΛΑΚΗ
'336': ΤΕΛΩΝΕΙΑΚΗ ΥΠΗΡΕΣΙΑ
'337': ΠΑΝΕΠΙΣΤΗΜΙΟ ΠΑΤΡΩΝ
'338': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΑΣΦΑΛΙΣΤΩΝ
'339': ΑΣΦΑΛΙΣΤΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ
'340': ΣΤΡΑΤΙΩΤΙΚΑ ΕΡΓΑ ΚΑΙ ΠΡΟΜΗΘΕΙΕΣ
'341': ΥΠΟΝΟΜΟΙ
'342': ΦΟΡΟΛΟΓΙΑ ΚΕΦΑΛΑΙΟΥ
'343': ΕΤΑΙΡΕΙΕΣ ΠΕΡΙΩΡΙΣΜΕΝΗΣ ΕΥΘΥΝΗΣ
'344': ΥΠΟΥΡΓΕΊΟ ΚΟΙΝΩΝΙΚΏΝ ΑΣΦΑΛΊΣΕΩΝ
'345': ΣΥΜΒΟΛΑΙΟΓΡΑΦΟΙ
'346': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΑΡΤΕΡΓΑΤΩΝ
'347': ΕΡΓΑ ΚΑΙ ΠΡΟΜΗΘΕΙΕΣ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'348': ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ
'349': ΔΙΑΦΟΡΑ ΕΠΙΣΤΗΜΟΝΙΚΑ ΙΔΡΥΜΑΤΑ
'350': ΑΞΙΩΜΑΤΙΚΟΙ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'351': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΕΜΠΟΡΩΝ (Τ.Α.Ε)
'352': ΣΤΡΑΤΙΩΤΙΚΗ ΠΟΙΝΙΚΗ
'353': ΦΟΡΟΛΟΓΙΑ ΟΙΝΟΠΝΕΥΜΑΤΟΣ
'354': ΟΡΓΑΝΙΣΜΟΣ ΓΕΩΡΓΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ
'355': ΣΥΛΛΟΓΙΚΕΣ ΣΥΜΒΑΣΕΙΣ ΕΡΓΑΣΙΑΣ
'356': ΧΡΗΜΑΤΙΣΤΗΡΙΑ
'357': ΠΟΛΙΤΙΚΑΙ ΚΑΙ ΣΤΡΑΤΙΩΤΙΚΑΙ ΣΥΝΤΑΞΕΙΣ
'358': ΚΟΙΝΩΝΙΚΗ ΣΤΕΓΑΣΤΙΚΗ ΣΥΝΔΡΟΜΗ
'359': ΚΑΤΟΧΥΡΩΣΗ ΕΠΑΓΓΕΛΜΑΤΩΝ
'360': ΦΟΡΟΛΟΓΙΑ ΚΑΘΑΡΑΣ ΠΡΟΣΟΔΟΥ
'361': ΠΕΡΙΦΕΡΕΙΕΣ
'362': ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΔΙΚΑΙΟΣΥΝΗ
'363': ΥΠΟΥΡΓΕΙΟ ΟΙΚΟΝΟΜΙΚΩΝ
'364': ΕΘΝΙΚΑ ΚΛΗΡΟΔΟΤΗΜΑΤΑ
'365': ΕΓΓΕΙΟΒΕΛΤΙΩΤΙΚΑ ΕΡΓΑ
'366': ΛΙΜΕΝΕΣ
'367': ΦΥΛΑΚΕΣ
'368': ΓΕΩΡΓΙΚΗ ΕΚΠΑΙΔΕΥΣΗ
'369': ΠΛΗΡΩΜΗ ΕΡΓΑΣΙΑΣ
'370': ΕΜΠΟΡΙΚΟΣ ΝΟΜΟΣ
'371': ΙΔΡΥΜΑ ΚΟΙΝΩΝΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ
'372': ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ ΤΡΑΠΕΖΩΝ
'373': ΕΙΔΙΚΟΙ ΑΓΡΟΤΙΚΟΙ ΝΟΜΟΙ
'374': ΔΙΕΘΝΕΙΣ ΔΙΚΟΝΟΜΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'375': ΥΠΟΥΡΓΕΙΑ ΜΑΚΕΔΟΝΙΑΣ–ΘΡΑΚΗΣ, ΑΙΓΑΙΟΥ Κ.Λ.Π
'376': ΑΣΤΥΝΟΜΙΚΟΊ ΣΚΎΛΟΙ
'377': ΔΙΑΦΟΡΑ ΘΕΜΑΤΑ
'378': ΕΚΔΟΣΗ ΕΓΚΛΗΜΑΤΙΩΝ
'379': ΑΓΟΡΑΝΟΜΙΑ
'380': ΔΙΚΑΣΤΙΚΟ ΤΟΥ ΔΗΜΟΣΙΟΥ
'381': ΑΣΤΙΚΟΣ ΚΩΔΙΚΑΣ
'382': ΤΕΛΩΝΕΙΑΚΕΣ ΑΤΕΛΕΙΕΣ
'383': ΑΓΡΟΤΙΚΕΣ ΜΙΣΘΩΣΕΙΣ
'384': ΛΕΩΦΟΡΕΙΑ
'385': ΓΕΝΙΚΟΙ ΕΠΙΣΙΤΙΣΤΙΚΟΙ ΝΟΜΟΙ
'386': ΑΣΤΥΝΟΜΙΑ ΠΟΛΕΩΝ
'387': ΜΗΧΑΝΙΚΟΙ ΚΑΙ ΕΡΓΟΛΑΒΟΙ
'388': ΠΟΛΕΜΙΚΕΣ ΣΥΝΤΑΞΕΙΣ
splits:
- name: train
num_bytes: 216757887
num_examples: 28536
- name: test
num_bytes: 71533786
num_examples: 9516
- name: validation
num_bytes: 68824457
num_examples: 9511
download_size: 145510070
dataset_size: 357116130
- config_name: subject
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ΜΕΤΟΧΙΚΟ ΤΑΜΕΙΟ Π.Ν
'1': ΜΕΤΑΝΑΣΤΕΥΣΗ ΣΤΟ ΒΕΛΓΙΟ
'2': ΝΑΥΤΙΚΕΣ ΦΥΛΑΚΕΣ
'3': ΚΑΝΟΝΙΣΜΟΣ ΕΚΤΕΛΕΣΕΩΣ ΣΤΡΑΤΙΩΤΙΚΩΝ ΕΡΓΩΝ
'4': ΔΙΟΙΚΗΤΙΚΗ ΚΑΙ ΟΙΚΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ
'5': ΑΣΚΗΣΗ ΠΟΙΝΙΚΗΣ ΑΓΩΓΗΣ
'6': ΚΑΝΟΝΙΣΜΟΣ ΕΣΩΤΕΡΙΚΗΣ ΥΠΗΡΕΣΙΑΣ ΕΠΙΒΑΤΗΓΩΝ ΠΛΟΙΩΝ
'7': ΚΩΔΙΚΑΣ ΠΟΛΙΤΙΚΗΣ ΔΙΚΟΝΟΜΙΑΣ - ΠΑΛΑΙΟΣ
'8': ΚΑΤΑΣΤΑΤΙΚΟ ΤΑΜΕΙΟΥ ΑΣΦΑΛΙΣΕΩΣ ΕΜΠΟΡΩΝ (Τ.Α.Ε)
'9': ΜΗΧΑΝΟΛΟΓΟΙ, ΗΛΕΚΤΡΟΛΟΓΟΙ, ΝΑΥΠΗΓΟΙ ΚΑΙ ΜΗΧΑΝΟΔΗΓΟΙ
'10': ΣΤΕΓΑΣΗ ΠΑΡΑΠΗΓΜΑΤΟΥΧΩΝ
'11': ΝΟΜΙΣΜΑΤΙΚΗ ΕΠΙΤΡΟΠΗ
'12': ΠΕΡΙΦΕΡΕΙΑΚΑ ΤΑΜΕΙΑ
'13': ΜΗΤΡΩΑ ΑΡΡΕΝΩΝ
'14': ΔΙΚΑΣΤΙΚΕΣ ΔΙΑΚΟΠΕΣ
'15': ΣΥΜΦΩΝΙΑ ΠΕΡΙ ΠΡΟΞΕΝΙΚΩΝ ΣΧΕΣΕΩΝ
'16': ΠΑΛΑΙΟΙ ΑΣΤΙΚΟΙ ΚΩΔΙΚΕΣ
'17': ΚΛΑΔΟΣ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΔΙΚΗΓΟΡΩΝ (Κ.Ε.Α.Δ.)
'18': ΟΙΚΟΝΟΜΙΚΕΣ ΑΡΜΟΔΙΟΤΗΤΕΣ ΣΤΡΑΤΙΩΤΙΚΩΝ ΑΡΧΩΝ
'19': ΥΠΟΝΟΜΟΙ ΘΕΣΣΑΛΟΝΙΚΗΣ
'20': ΔΙΑΦΟΡΑ ΥΔΡΑΥΛΙΚΑ ΤΑΜΕΙΑ
'21': ΕΛΕΓΧΟΣ ΘΕΑΤΡΙΚΩΝ ΕΡΓΩΝ ΚΑΙ ΔΙΣΚΩΝ
'22': ΥΠΗΡΕΣΙΑ ΙΠΠΟΠΑΡΑΓΩΓΗΣ
'23': ΣΩΜΑΤΙΚΗ ΑΓΩΓΗ
'24': ΕΚΔΙΚΑΣΗ ΤΕΛΩΝΕΙΑΚΩΝ ΠΑΡΑΒΑΣΕΩΝ
'25': ΚΙΝΗΤΡΑ ΙΔΙΩΤΙΚΩΝ ΕΠΕΝΔΥΣΕΩΝ ΣΤΗΝ ΠΕΡΙΦΕΡΕΙΑ
'26': ΜΕΛΗ ΟΙΚΟΓΕΝΕΙΑΣ ΑΣΦΑΛΙΣΜΕΝΩΝ
'27': ΚΕΡΜΑΤΑ
'28': ΕΠΙΔΟΜΑ ΑΝΑΠΡΟΣΑΡΜΟΓΗΣ
'29': ΕΚΤΕΛΕΣΗ ΔΑΣΙΚΩΝ ΕΡΓΩΝ
'30': ΛΙΠΑΣΜΑΤΑ
'31': ΕΠΙΧΟΡΗΓΗΣΗ ΣΠΟΥΔΑΣΤΩΝ ΤΕΚΝΩΝ ΕΡΓΑΖΟΜΕΝΩΝ
'32': ΠΡΟΣΤΑΣΙΑ ΟΙΝΟΥ
'33': ΠΤΗΤΙΚΟ ΚΑΙ ΚΑΤΑΔΥΤΙΚΟ ΕΠΙΔΟΜΑ
'34': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΕΜΠΟΡΙΚΩΝ ΚΑΤΑΣΤΗΜΑΤΩΝ (Τ.Ε.Α.Υ.Ε.Κ.)
'35': ΕΚΚΟΚΚΙΣΗ ΒΑΜΒΑΚΟΣ
'36': ΜΟΝΟΠΩΛΙΟ ΚΙΝΙΝΟΥ
'37': ΙΝΣΤΙΤΟΥΤΑ ΔΙΕΘΝΟΥΣ ΔΙΚΑΙΟΥ
'38': ΙΑΠΩΝΙΑ – ΙΝΔΙΑ –ΙΟΡΔΑΝΙΑ Κ.ΛΠ
'39': ΕΠΙΔΟΜΑ ΣΤΟΛΗΣ
'40': ΑΝΑΓΝΩΡΙΣΕΙΣ
'41': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΕΡΓΟΛΗΠΤΩΝ
'42': ΑΝΑΣΤΟΛΗ ΤΗΣ ΠΟΙΝΗΣ
'43': ΠΟΤΑΜΟΠΛΟΙΑ
'44': ΕΙΔΙΚΗ ΤΕΛΩΝΕΙΑΚΗ ΠΑΡΑΚΟΛΟΥΘΗΣΗ
'45': ΕΠΙΘΕΩΡΗΣΗ ΦΑΡΜΑΚΕΙΩΝ
'46': ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΕΘΝΙΚΩΝ
'47': ΑΠΛΟΠΟΙΗΣΗ ΤΕΛΩΝΕΙΑΚΩΝ ΔΙΑΤΥΠΩΣΕΩΝ
'48': ΚΛΑΔΟΣ ΑΣΘΕΝΕΙΑΣ Τ.Α.Κ.Ε
'49': ΥΠΗΡΕΣΙΑ ΥΠΟΔΟΧΗΣ ΠΛΟΙΩΝ ΚΑΙ ΠΟΛΕΜΙΚΗ ΧΡΗΣΗ ΛΙΜΕΝΩΝ
'50': ΦΑΡΜΑΚΕΙΟ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'51': ΤΑΜΕΙΟ ΑΠΟΚΑΤΑΣΤΑΣΕΩΣ ΠΡΟΣΦΥΓΩΝ ΣΥΜΒΟΥΛΙΟΥ ΤΗΣ ΕΥΡΩΠΗΣ
'52': ΝΑΥΤΙΚΕΣ ΕΤΑΙΡΕΙΕΣ
'53': ΙΣΡΑΗΛΙΤΙΚΕΣ ΚΟΙΝΟΤΗΤΕΣ
'54': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΣΤΕΡΕΑΣ ΕΛΛΑΔΑΣ (ΑΤΤΙΚΗΣ, ΒΟΙΩΤΙΑΣ Κ.Λ.Π.)
'55': ΔΙΑΦΟΡΕΣ ΣΧΟΛΕΣ Π.Ν
'56': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΜΠΟΡ.ΚΑΙ ΒΙΟΜ.- ΕΠΑΓΓΕΛ. ΚΑΙ ΒΙΟΤΕΧΝ.
ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΤΟΥ ΚΡΑΤΟΥΣ
'57': ΕΘΝΙΚΗ ΚΤΗΜΑΤΙΚΗ ΤΡΑΠΕΖΑ
'58': ΝΑΥΤΙΚΟΙ ΑΚΟΛΟΥΘΟΙ
'59': ΔΗΜΟΣΙΕΣ ΝΑΥΤΙΚΕΣ ΣΧΟΛΕΣ
'60': ΜΙΚΡΟΦΩΤΟΓΡΑΦΙΕΣ
'61': ΚΑΤΑΣΤΑΤΙΚΟΙ ΝΟΜΟΙ-Τ.Σ.Α.Υ
'62': ΚΑΤΑΣΤΑΣΗ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'63': ΕΛΛΗΝΙΚΑ ΣΧΟΛΕΙΑ ΑΛΛΟΔΑΠΗΣ
'64': ΟΡΓΑΝΙΣΜΟΣ ΟΙΚΟΝΟΜΙΚΗΣ
'65': ΕΘΝΙΚΗ ΤΡΑΠΕΖΑ ΤΗΣ ΕΛΛΑΔΟΣ
'66': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ Ν.Π.Δ.Δ
'67': ΠΡΟΣΩΠΙΚΟ ΜΕ ΣΧΕΣΗ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ
'68': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΑΣ ΥΔΡΕΥΣΗΣ ΚΑΙ ΑΠΟΧΕΤΕΥΣΗΣ
ΠΡΩΤΕΥΟΥΣΗΣ (Τ.Ε.Α.Π.Ε.Υ.Α.Π.)
'69': ΣΩΜΑ ΟΙΚΟΝΟΜΙΚΟΥ ΕΛΕΓΧΟΥ
'70': ΣΥΜΒΑΣΗ ΠΕΡΙ ΔΙΕΚΔΙΚΗΣΕΩΣ ΔΙΑΤΡΟΦΗΣ
'71': ΙΣΟΤΗΤΑ ΤΩΝ ΔΥΟ ΦΥΛΩΝ
'72': ΤΑΜΕΙΟ ΑΡΩΓΗΣ ΚΑΙ ΕΠΙΚΟΥΡΙΚΟ ΤΑΜΕΙΟ
'73': ΤΟΥΡΙΣΤΙΚΟ ΔΕΛΤΙΟ
'74': ΔΙΑΦΟΡΟΙ ΝΟΜΟΙ
'75': ΟΡΓΑΝΙΣΜΟΣ ΛΙΜΕΝΟΣ ΠΕΙΡΑΙΩΣ ΑΝΩΝΥΜΗ ΕΤΑΙΡΙΑ
'76': ΕΚΚΑΘΑΡΙΣΙΣ ΔΙΟΡΙΣΜΩΝ ΚΑΙ ΠΡΟΑΓΩΓΩΝ ΚΑΤΟΧΗΣ
'77': ΤΑΞΙΝΟΜΗΣΗ ΒΑΜΒΑΚΟΣ
'78': ΠΡΥΤΑΝΕΙΣ ΚΑΙ ΚΟΣΜΗΤΟΡΕΣ
'79': ΥΠΗΡΕΣΙΑΚΟ ΣΥΜΒΟΥΛΙΟ ΕΚΚΛΗΣΙΑΣΤΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ
'80': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΣΤΗΝ ΒΙΟΜΗΧΑΝΙΑ ΚΑΙ ΒΙΟΤΕΧΝΙΑ
'81': ΧΑΡΤΗΣ ΟΡΓΑΝΙΣΜΟΥ ΟΙΚΟΝΟΜΙΚΗΣ ΣΥΝΕΡΓΑΣΙΑΣ
'82': ΓΥΜΝΑΣΙΟ ΑΠΟΔΗΜΩΝ ΕΛΛΗΝΟΠΑΙΔΩΝ
'83': ΚΑΝΟΝΙΣΜΟΣ ΑΣΘΕΝΕΙΑΣ
'84': ΕΚΔΟΣΕΙΣ ΥΠΟΥΡΓΕΙΟΥ ΕΜΠΟΡΙΚΗΣ ΝΑΥΤΙΛΙΑΣ
'85': ΠΛΗΤΤΟΜΕΝΟΙ ΑΠΟ ΘΕΟΜΗΝΙΕΣ ΚΑΙ ΑΛΛΑ ΕΚΤΑΚΤΑ ΓΕΓΟΝΟΤΑ
'86': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΠΡΟΣΩΠΙΚΟΥ
'87': ΓΕΩΜΗΛΑ
'88': ΦΟΡΟΛΟΓΙΑ ΑΝΑΤΙΜΗΣΗΣ ΑΚΙΝΗΤΩΝ
'89': ΠΑΝΩΛΗΣ
'90': ΣΧΟΛΕΣ ΝΗΠΙΑΓΩΓΩΝ
'91': ΦΑΡΜΑΚΑΠΟΘΗΚΕΣ
'92': ΦΡΟΝΤΙΣΤΗΡΙΑ ΝΟΜΙΚΩΝ ΣΠΟΥΔΩΝ
'93': ΟΙΚΟΓΕΝΕΙΑΚΑ ΕΠΙΔΟΜΑΤΑ ΜΙΣΘΩΤΩΝ
'94': ΗΛΕΚΤΡΟΚΙΝΗΤΑ ΛΕΩΦΟΡΕΙΑ ΑΘΗΝΩΝ – ΠΕΙΡΑΙΩΣ (Η.Λ.Π.Α.Π.)
'95': ΑΣΤΙΚΑ ΔΙΚΑΙΩΜΑΤΑ ΑΛΛΟΔΑΠΩΝ
'96': ΠΟΛΙΤΙΚΟ ΠΡΟΣΩΠΙΚΟ ΑΕΡΟΠΟΡΙΑΣ
'97': ΔΙΚΑΣΤΙΚΗ ΕΚΠΡΟΣΩΠΗΣΗ Ι.Κ.Α
'98': ΥΓΕΙΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ Π.Σ
'99': ΥΓΕΙΟΝΟΜΙΚΟΙ ΣΤΑΘΜΟΙ
'100': ΙΕΡΑΡΧΙΑ ΚΑΙ ΠΡΟΑΓΩΓΕΣ ΜΟΝΙΜΩΝ ΥΠΑΞΙΩΜΑΤΙΚΩΝ ΚΑΙ ΑΝΘΥΠΑΣΠΙΣΤΩΝ
'101': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΕΡΓΑΤΟΤΕΧΝΙΤΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ ΔΕΡΜΑΤΟΣ
ΕΛΛΑΔΑΣ (Τ.Ε.Α.Ε.Υ.Δ.Ε.)
'102': ΠΡΑΤΗΡΙΑ ΑΡΤΟΥ
'103': ΠΛΗΡΩΜΗ ΜΕ ΕΠΙΤΑΓΗ
'104': ΤΕΧΝΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΕΛΙΚΟΠΤΕΡΩΝ
'105': ΔΙΕΘΝΕΙΣ ΤΑΧΥΔΡΟΜΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'106': ΔΙΚΑΣΤΙΚΟΙ ΑΝΤΙΠΡΟΣΩΠΟΙ ΤΟΥ ΔΗΜΟΣΙΟΥ
'107': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΣΕ ΔΙΑΦΟΡΑ ΕΠΑΓΓΕΛΜΑΤΑ
'108': ΔΙΕΥΘΥΝΣΗ ΚΤΗΝΟΤΡΟΦΙΑΣ
'109': ΕΠΙΘΕΩΡΗΣΗ ΣΦΑΓΙΩΝ
'110': ΠΛΩΙΜΟΤΗΤΑ ΑΕΡΟΣΚΑΦΩΝ
'111': ΑΓΟΡΑΝΟΜΙΚΟΣ ΚΩΔΙΚΑΣ
'112': ΔΙΕΘΝΕΙΣ ΜΕΤΑΦΟΡΕΣ ΕΠΙΒΑΤΩΝ ΚΑΙ ΕΜΠΟΡΕΥΜΑΤΩΝ
'113': ΠΡΟΜΗΘΕΙΕΣ
'114': ΔΙΑΦΟΡΕΣ ΔΙΑΤΑΞΕΙΣ
'115': ΔΙΑΙΤΗΣΙΑ ΣΥΛΛΟΓΙΚΩΝ ΔΙΑΦΟΡΩΝ - ΜΕΣΟΛΑΒΗΤΕΣ ΔΙΑΙΤΗΤΕΣ
'116': ΣΟΥΛΤΑΝΙΝΑ
'117': ΜΕΤΑΓΡΑΦΗ
'118': ΕΙΣΑΓΩΓΗ ΕΠΙΣΤΗΜΟΝΙΚΟΥ ΥΛΙΚΟΥ
'119': ΔΙΑΡΘΡΩΣΗ ΥΠΗΡΕΣΙΩΝ Ο.Γ.Α
'120': ΔΙΚΑΣΤΙΚΟΙ ΛΕΙΤΟΥΡΓΟΙ - ΕΘΝΙΚΗ ΣΧΟΛΗ ΔΙΚΑΣΤΩΝ
'121': ΠΙΣΤΟΠΟΙΗΤΙΚΑ ΚΑΙ ΔΙΚΑΙΟΛΟΓΗΤΙΚΑ
'122': ΑΣΚΗΣΗ ΙΑΤΡΙΚΟΥ ΕΠΑΓΓΕΛΜΑΤΟΣ
'123': ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'124': ΣΧΟΛΗ ΕΠΙΣΤΗΜΩΝ ΥΓΕΙΑΣ ΠΑΝΜΙΟΥ ΠΑΤΡΩΝ
'125': ΑΛΛΟΔΑΠΕΣ ΝΑΥΤΙΛΙΑΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ
'126': ΛΑΤΟΜΕΙΑ
'127': ΕΚΜΕΤΑΛΛΕΥΣΗ ΙΑΜΑΤΙΚΩΝ ΠΗΓΩΝ
'128': ΠΩΛΗΣΗ ΧΡΕΩΓΡΑΦΩΝ ΜΕ ΔΟΣΕΙΣ
'129': ΝΟΜΟΘΕΣΙΑ ΠΕΡΙ ΤΡΑΠΕΖΩΝ (ΓΕΝΙΚΑ)
'130': ΕΙΔΙΚΑ ΜΕΤΑΛΛΕΙΑ
'131': YΠΟΥΡΓΕΙΟ ΥΓΙΕΙΝΗΣ
'132': ΛΗΞΙΑΡΧΙΚΕΣ ΠΡΑΞΕΙΣ
'133': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΓΙΑ ΤΟΝ ΤΥΠΟ
'134': ΕΘΝΙΚΟ ΣΥΣΤΗΜΑ ΕΠΑΓΓΕΛΜΑΤΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ-ΚΑΤΑΡΤΙΣΗΣ
'135': ΑΡΟΥΡΑΙΟΙ ΚΑΙ ΑΚΡΙΔΕΣ
'136': ΠΡΟΣΤΑΣΙΑ ΦΥΜΑΤΙΚΩΝ ΝΑΥΤΙΚΩΝ
'137': ΑΠΟΡΡΗΤΟ ΕΠΙΣΤΟΛΩΝ ΚΑΙ ΕΠΙΚΟΙΝΩΝΙΩΝ
'138': ΠΟΡΘΜΕΙΑ ΚΑΙ ΟΧΗΜΑΤΑΓΩΓΑ
'139': ΜΕΤΡΑ ΕΞΟΙΚΟΝΟΜΗΣΗΣ ΕΝΕΡΓΕΙΑΣ
'140': ΣΤΟΙΧΕΙΑ ΠΡΟΣΩΠΙΚΟΥ ΔΗΜΟΣΙΩΝ ΥΠΗΡΕΣΙΩΝ ΚΑΙ Ν.Π.Δ.Δ
'141': ΠΑΓΙΕΣ ΑΜΟΙΒΕΣ ΔΙΚΗΓΟΡΩΝ
'142': ΟΡΓΑΝΙΣΜΟΣ ΣΧΟΛΗΣ ΕΥΕΛΠΙΔΩΝ
'143': ΟΙΚΟΝΟΜΙΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ ΤΗΣ ΕΛΛΑΔΑΣ
'144': ΓΡΑΦΕΙΑ ΕΥΡΕΣΕΩΣ ΕΡΓΑΣΙΑΣ
'145': ΔΙΑΦΗΜΙΣΕΙΣ
'146': ΔΙΑΦΟΡΕΣ ΥΠΟΤΡΟΦΙΕΣ
'147': ΦΟΡΤΗΓΑ ΑΚΤΟΠΛΟΙΚΑ ΠΛΟΙΑ (ΜS) ΜΕΧΡΙ 500 Κ.Ο.Χ
'148': ΕΠΙΤΡΟΠΗ ΣΥΝΕΡΓΑΣΙΑΣ UNICEF
'149': ΥΓΙΕΙΝΗ ΘΕΡΕΤΡΩΝ
'150': ΕΠΙΣΤΗΜΟΝΙΚΗ ΕΡΕΥΝΑ ΚΑΙ ΤΕΧΝΟΛΟΓΙΑ
'151': ΑΠΑΓΟΡΕΥΣΕΙΣ ΕΞΑΓΩΓΗΣ
'152': ΑΜΠΕΛΟΥΡΓΙΚΟ ΚΤΗΜΑΤΟΛΟΓΙΟ
'153': ΥΠΟΥΡΓΕΙΟ ΥΓΕΙΑΣ ΚΑΙ ΠΡΟΝΟΙΑΣ
'154': ΔΙΕΘΝΗΣ ΝΑΥΤΙΛΙΑΚΟΣ ΟΡΓΑΝΙΣΜΟΣ
'155': ΔΙΕΥΘΥΝΣΗ ΤΕΛΩΝΕΙΑΚΟΥ ΕΛΕΓΧΟΥ
'156': ΔΕΛΤΙΑ ΤΑΥΤΟΤΗΤΟΣ Π. ΝΑΥΤΙΚΟΥ
'157': ΑΝΩΤΑΤΗ ΥΓΕΙΟΝΟΜΙΚΗ ΕΠΙΤΡΟΠΗ
'158': ΠΡΟΣΤΑΣΙΑ ΕΦΕΔΡΩΝ ΑΞΙΩΜΑΤΙΚΩΝ, ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ ΚΑΙ ΑΓΩΝΙΣΤΩΝ ΕΘΝ.
ΑΝΤΙΣΤΑΣΗΣ
'159': ΦΟΡΟΙ ΥΠΕΡ ΤΡΙΤΩΝ
'160': ΑΓΡΟΛΗΨΙΕΣ ΙΟΝΙΩΝ ΝΗΣΙΩΝ
'161': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΕΜΠΟΡΙΟΥ ΤΡΟΦΙΜΩΝ (Τ.Ε.Α.Υ.Ε.Τ)
'162': ΑΝΩΤΑΤΟ ΕΙΔΙΚΟ ΔΙΚΑΣΤΗΡΙΟ
'163': ΕΙΣΑΓΩΓΗ ΓΥΝΑΙΚΩΝ ΣΤΙΣ ΑΝΩΤΑΤΕΣ ΣΤΡΑΤΙΩΤΙΚΕΣ ΣΧΟΛΕΣ
'164': ΣΧΟΛΗ ΑΞΙΩΜΑΤΙΚΩΝ ΝΟΣΗΛΕΥΤΙΚΗΣ (Σ.Α.Ν.)
'165': ΔΙΑΔΙΚΑΣΙΑ ΔΙΟΙΚΗΤΙΚΩΝ ΔΙΚΑΣΤΗΡΙΩΝ
'166': ΠΡΟΣΤΑΣΙΑ ΕΡΓΑΖΟΜΕΝΟΥ ΠΑΙΔΙΟΥ
'167': ΑΜΝΗΣΤΙΑ
'168': ΣΧΟΛΕΣ ΚΑΛΛΙΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ
'169': ΧΑΡΗ ΚΑΙ ΜΕΤΡΙΑΣΜΟΣ
'170': ΤΥΦΛΟΙ
'171': ΣΥΜΒΟΥΛΙΟ ΤΗΣ ΕΥΡΩΠΗΣ
'172': ΕΡΓΟΣΤΑΣΙΑ ΕΚΡΗΚΤΙΚΩΝ ΥΛΩΝ
'173': ΜΗΤΡΩΑ Π. ΝΑΥΤΙΚΟΥ
'174': ΥΓΡΗ ΑΜΜΩΝΙΑ
'175': ΠΕΙΡΑΜΑΤΙΚΑ ΣΧΟΛΕΙΑ
'176': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΑΞΙΩΜΑΤΙΚΩΝ Ε.Ν
'177': ΕΠΑΓΓΕΛΜΑΤΙΚΟΣ ΠΡΟΣΑΝΑΤΟΛΙΣΜΟΣ ΚΑΙ ΚΑΤΑΡΤΙΣΗ
'178': ΤΕΛΩΝΕΙΑΚΗ ΕΠΙΒΛΕΨΗ
'179': ΠΡΟΣΩΡΙΝΕΣ ΕΥΡΩΠΑΙΚΕΣ ΣΥΜΦΩΝΙΕΣ
'180': ΜΟΝΟΠΩΛΙΟ ΠΑΙΓΝΙΟΧΑΡΤΩΝ
'181': ΛΕΙΤΟΥΡΓΙΑ ΤΟΥΡΙΣΤΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ
'182': ΕΚΠΟΙΗΣΗ ΕΚΚΛΗΣΙΑΣΤΙΚΩΝ ΚΙΝΗΤΩΝ ΚΑΙ ΑΚΙΝΗΤΩΝ
'183': ΣΥΛΛΟΓΙΚΕΣ ΣΥΜΒΑΣΕΙΣ (ΓΕΝΙΚΑ)
'184': ΟΔΟΙΠΟΡΙΚΑ ΚΑΙ ΑΠΟΖΗΜΙΩΣΕΙΣ ΕΚΤΟΣ ΕΔΡΑΣ
'185': ΣΤΕΓΑΣΤΙΚΗ ΑΠΟΚΑΤΑΣΤΑΣΗ ΠΡΟΣΦΥΓΩΝ
'186': ΑΝΩΤΑΤΑ ΣΥΜΒΟΥΛΙΑ ΕΚΠΑΙΔΕΥΣΕΩΣ
'187': ΑΡΧΕΙΑ ΥΠΟΥΡΓΕΙΟΥ ΟΙΚΟΝΟΜΙΚΩΝ
'188': ΓΕΝΙΚΗ ΓΡΑΜΜΑΤΕΙΑ ΥΠΟΥΡΓΙΚΟΥ ΣΥΜΒΟΥΛΙΟΥ
'189': ΠΕΡΙΠΤΕΡΑ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ
'190': ΕΠΑΓΓΕΛΜΑΤΙΚΕΣ ΟΡΓΑΝΩΣΕΙΣ ΕΜΠΟΡΩΝ, ΒΙΟΤΕΧΝΩΝ ΚΑΙ ΛΟΙΠΩΝ ΕΠΑΓΓΕΛΜΑΤΙΩΝ
'191': ΙΔΙΩΤΙΚΟΙ ΣΤΑΘΜΟΙ ΠΑΡΑΓΩΓΗΣ ΗΛΕΚΤΡΙΚΗΣ ΕΝΕΡΓΕΙΑΣ
'192': ΘΕΑΤΡΙΚΑ ΕΡΓΑ
'193': ΜΕ ΤΗ ΝΕΑ ΖΗΛΑΝΔΙΑ
'194': ΦΟΡΟΣ ΚΑΤΑΝΑΛΩΣΕΩΣ ΣΑΚΧΑΡΕΩΣ
'195': ΝΟΜΑΡΧΙΑΚΑ ΤΑΜΕΙΑ
'196': ΑΓΩΓΕΣ ΚΑΚΟΔΙΚΙΑΣ
'197': ΚΩΔΙΚΑΣ ΦΟΡΟΛΟΓΙΚΗΣ ΔΙΚΟΝΟΜΙΑΣ
'198': ΑΤΟΜΑ ΒΑΡΙΑ ΝΟΗΤΙΚΑ ΚΑΘΥΣΤΕΡΗΜΕΝΑ
'199': ΜΕ ΤΗ ΣΟΥΗΔΙΑ
'200': ΑΕΡΟΝΑΥΤΙΚΗ ΜΕΤΕΩΡΟΛΟΓΙΑ
'201': ΙΔΙΩΤΙΚΕΣ ΣΧΟΛΕΣ ΓΥΜΝΑΣΤΙΚΗΣ
'202': ΠΕΡΙΟΥΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'203': ΑΓΟΡΑΠΩΛΗΣΙΕΣ ΚΑΤΟΧΗΣ
'204': ΕΚΚΛΗΣΙΑ ΠΑΡΙΣΙΩΝ
'205': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΠΡΟΣΤΑΣΙΑΣ ΦΥΤΩΝ
'206': ΚΑΤΟΧΥΡΩΣΗ ΘΡΗΣΚΕΥΤΙΚΗΣ ΕΛΕΥΘΕΡΙΑΣ
'207': ΥΓΕΙΟΝΟΜΙΚΗ ΕΞΕΤΑΣΗ ΜΗ ΙΠΤΑΜΕΝΟΥ ΠΡΟΣΩΠΙΚΟΥ
'208': ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΠΟΛΕΜΟΥ 1940
'209': ΥΔΡΑΥΛΙΚΕΣ ΕΓΚΑΤΑΣΤΑΣΕΙΣ
'210': ΚΟΙΝΩΝΙΚΟΙ ΛΕΙΤΟΥΡΓΟΙ - ΚΟΙΝΩΝΙΚΟΙ ΣΥΜΒΟΥΛΟΙ
'211': ΔΙΑΦΟΡΕΣ ΠΡΟΣΩΡΙΝΕΣ ΑΤΕΛΕΙΕΣ
'212': ΟΙΚΟΝΟΜΙΚΗ ΔΙΑΧΕΙΡΙΣΗ ΚΑΙ ΛΟΓΙΣΤΙΚΟ
'213': ΕΞΗΛΕΚΤΡΙΣΜΟΣ ΝΗΣΩΝ
'214': ΕΚΠΑΙΔΕΥΣΗ ΣΤΕΛΕΧΩΝ
'215': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΚΑΤΑΣΤΗΜΑΤΩΝ ΚΑΙ ΓΡΑΦΕΙΩΝ
'216': ΗΜΕΡΟΛΟΓΙΟ ΓΕΦΥΡΑΣ
'217': ΠΡΟΣΤΑΣΙΑ ΤΗΣ ΣΤΑΦΙΔΑΣ
'218': ΠΑΛΑΙΟΙ ΔΙΚΟΝΟΜΙΚΟΙ ΝΟΜΟΙ
'219': ΤΑΜΕΙΟ ΕΠΙΚ. ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΟΡΓΑΝΙΣΜΩΝ ΚΟΙΝΩΝΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ
(Τ.Ε.Α.Π.Ο.Κ.Α.)
'220': ΠΑΡΟΧΕΣ ΥΓΕΙΑΣ ΑΣΦΑΛΙΣΤΙΚΩΝ ΟΡΓΑΝΙΣΜΩΝ
'221': ΠΛΑΝΟΔΙΟΙ ΙΧΘΥΟΠΩΛΕΣ
'222': ΔΙΑΦΟΡΟΙ ΝΟΜΟΙ ΠΕΡΙ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'223': ΥΠΟΧΡΕΩΣΕΙΣ ΕΦΟΠΛΙΣΤΩΝ ΣΕ ΑΣΘΕΝΕΙΑ Η ΘΑΝΑΤΟ ΝΑΥΤΙΚΩΝ
'224': ΠΡΟΣΤΑΣΙΑ ΚΑΤΑ ΤΗΣ ΑΣΘΕΝΕΙΑΣ
'225': ΓΕΝΙΚΑ ΠΕΡΙ ΣΧΕΔΙΩΝ ΠΟΛΕΩΝ
'226': ΕΞΑΙΡΕΣΕΙΣ ΑΠΟ ΤΗΝ ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'227': ΑΓΡΟΤΙΚΟ ΚΤΗΜΑΤΟΛΟΓΙΟ
'228': ΣΥΝΤΑΓΜΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΕΚΚΛΗΣΙΑΣ ΤΗΣ ΕΛΛΑΔΟΣ
'229': ΠΑΝΑΓΙΟΣ ΤΑΦΟΣ
'230': ΣΥΝΕΡΓΕΙΑ Π. ΝΑΥΤΙΚΟΥ
'231': ΕΠΙΘΕΩΡΗΣΙΣ ΣΤΡΑΤΟΥ
'232': ΣΥΝΘΕΣΗ ΠΛΗΡΩΜΑΤΩΝ
'233': ΟΡΓΑΝΙΣΜΟΣ ΕΡΓΑΤΙΚΗΣ ΕΣΤΙΑΣ
'234': ΔΙΑΦΟΡΑ ΥΔΡΑΥΛΙΚΑ ΕΡΓΑ
'235': ΔΙΚΑΙΩΜΑ ΤΟΥ ΣΥΝΕΡΧΕΣΘΑΙ
'236': ΚΟΙΝΩΝΙΚΟΠΟΙΗΣΗ - ΑΠΟΚΡΑΤΙΚΟΠΟΙΗΣΗ ΕΠΙΧΕΙΡΗΣΕΩΝ ΔΗΜΟΣΙΟΥ ΧΑΡΑΚΤΗΡΑ
'237': ΛΑΙΚΗ ΚΑΤΟΙΚΙΑ
'238': ΦΟΡΟΛΟΓΙΑ ΚΕΡΔΩΝ
'239': ΤΕΧΝΙΚΗ ΥΠΗΡΕΣΙΑ
'240': ΜΕΤΕΚΠΑΙΔΕΥΣΗ ΔΗΜΟΔΙΔΑΣΚΑΛΩΝ
'241': ΣΥΝΤΑΞΕΙΣ ΥΠΟΥΡΓΩΝ ΚΑΙ ΒΟΥΛΕΥΤΩΝ
'242': ΟΡΙΟ ΗΛΙΚΙΑΣ
'243': ΣΤΡΑΤΙΩΤΙΚΕΣ ΠΡΟΜΗΘΕΙΕΣ
'244': ΑΠΟΣΤΟΛΑΙ ΕΞΩΤΕΡΙΚΟΥ
'245': ΦΟΡΟΛΟΓΙΑ ΑΚΙΝΗΤΗΣ ΠΕΡΙΟΥΣΙΑΣ
'246': ΧΡΟΝΟΣ ΕΡΓΑΣΙΑΣ - ΑΔΕΙΕΣ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ
'247': ΝΑΥΤΙΚΑ ΕΡΓΑ ΚΑΙ ΠΡΟΜΗΘΕΙΕΣ
'248': ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ ΚΑΙ ΛΟΓΙΣΤΙΚΟ
'249': ΔΑΣΜΟΛΟΓΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'250': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΧΡΗΜΑΤΙΣΤΩΝ ,ΜΕΣΙΤΩΝ,ΑΝΤΙΚΡΥΣΤΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ
ΧΡΗΜΑΤΙΣΤΗΡΙΟΥ ΑΘΗΝΩΝ (Τ.Α.Χ.Μ.Α.)
'251': ΚΡΑΤΙΚΗ ΣΧΟΛΗ ΟΡΧΗΣΤΙΚΗΣ ΤΕΧΝΗΣ
'252': ΕΘΝΙΚΗ ΛΥΡΙΚΗ ΣΚΗΝΗ
'253': ΑΕΡΟΝΑΥΤΙΚΕΣ ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ
'254': ΚΕΝΤΡΟ ΒΙΟΤΕΧΝΙΚΗΣ ΑΝΑΠΤΥΞΗΣ
'255': ΑΡΧΑΙΟΛΟΓΙΚΟ ΜΟΥΣΕΙΟ
'256': ΥΠΕΡΩΚΕΑΝΕΙΑ
'257': ΔΑΣΗ
'258': ΑΣΚΗΣΗ ΚΤΗΝΙΑΤΡΙΚΟΥ ΕΠΑΓΓΕΛΜΑΤΟΣ
'259': ΚΤΗΣΗ ΚΑΙ ΑΠΩΛΕΙΑ
'260': ΡΑΔΙΟΤΗΛΕΓΡΑΦΙΚΗ ΥΠΗΡΕΣΙΑ
'261': ΑΕΡΟΛΙΜΕΝΑΣ ΑΘΗΝΩΝ
'262': ΠΡΩΤΟΒΑΘΜΙΑ ΕΚΠΑΙΔΕΥΣΗ
'263': ΣΤΕΛΕΧΟΣ ΕΦΕΔΡΩΝ ΑΞΙΩΜΑΤΙΚΩΝ
'264': ΠΤΩΧΕΥΣΗ ΚΑΙ ΣΥΜΒΙΒΑΣΜΟΣ
'265': ΠΟΛΙΤΙΚΟΣ ΓΑΜΟΣ
'266': ΙΔΙΩΤΙΚΗ ΕΠΙΧΕΙΡΗΣΗ ΑΣΦΑΛΙΣΕΩΣ
'267': ΠΛΟΙΑ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'268': ΙΑΤΡΙΚΕΣ ΑΜΟΙΒΕΣ
'269': ΕΛΛΗΝΙΚΟΣ ΕΡΥΘΡΟΣ ΣΤΑΥΡΟΣ
'270': ΑΝΩΜΑΛΕΣ ΚΑΤΑΘΕΣΕΙΣ ΣΕ ΧΡΥΣΟ
'271': ΣΥΜΒΟΥΛΙΟ ΤΙΜΗΣ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'272': ΔΙΑΦΟΡΟΙ ΑΡΔΕΥΤΙΚΟΙ ΝΟΜΟΙ
'273': ΚΥΒΕΡΝΗΤΙΚΟΣ ΕΠΙΤΡΟΠΟΣ
'274': ΕΚΤΕΛΕΣΗ ΣΥΓΚΟΙΝΩΝΙΑΚΩΝ ΕΡΓΩΝ
'275': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΚΑΙ ΑΡΩΓΗΣ
'276': ΔΑΣΙΚΕΣ ΜΕΤΑΦΟΡΕΣ
'277': ΜΕ ΤΗ ΔΗΜΟΚΡΑΤΙΑ ΤΟΥ ΚΕΜΠΕΚ
'278': ΕΠΑΝΕΞΑΓΟΜΕΝΑ ΜΕ ΕΓΓΥΗΣΗ
'279': ΔΙΑΝΟΜΗ ΗΛΕΚΤΡΙΚΗΣ ΕΝΕΡΓΕΙΑΣ
'280': ΑΡΣΗ ΣΥΓΚΡΟΥΣΕΩΣ ΚΑΘΗΚΟΝΤΩΝ
'281': ΕΚΠΑΙΔΕΥΤΙΚΑ ΠΛΟΙΑ
'282': ΚΕΝΤΡΟ ΜΕΤΑΦΡΑΣΗΣ
'283': ΕΙΣΦΟΡΕΣ ΚΑΙ ΝΑΥΛΩΣΕΙΣ
'284': ΜΕΤΕΓΓΡΑΦΕΣ ΦΟΙΤΗΤΩΝ ΑΝΩΤ. ΕΚΠΑΙΔΕΥΤΙΚΩΝ ΙΔΡΥΜΑΤΩΝ
'285': ΤΜΗΜΑΤΑ ΕΠΙΣΤΗΜΗΣ ΦΥΣΙΚΗΣ ΑΓΩΓΗΣ - ΑΘΛΗΤΙΣΜΟΥ
'286': ΨΥΧΙΑΤΡΕΙΑ
'287': ΦΟΡΟΛΟΓΙΑ ΚΕΦΑΛΑΙΟΥ ΑΝΩΝ. ΕΤΑΙΡΕΙΩΝ
'288': ΤΥΠΟΙ ΣΥΜΒΟΛΑΙΩΝ
'289': ΚΑΝΟΝΙΣΜΟΣ ΕΠΙΘΕΩΡΗΣΕΩΣ
'290': ΜΟΥΣΕΙΟ ΕΛΛΗΝΙΚΗΣ ΛΑΙΚΗΣ ΤΕΧΝΗΣ
'291': ΠΑΝΕΠΙΣΤΗΜΙΟ ΠΕΛΟΠΟΝΝΗΣΟΥ
'292': ΟΡΓΑΝΙΣΜΟΣ ΕΡΓΑΤΙΚΗΣ ΚΑΤΟΙΚΙΑΣ
'293': ΑΣΦΑΛΕΙΑ ΕΡΓΑΖΟΜΕΝΩΝ ΣΕ ΟΙΚΟΔΟΜΕΣ
'294': ΣΤΕΓΑΝΗ ΥΠΟΔΙΑΙΡΕΣΗ ΠΛΟΙΩΝ
'295': ΔΙΟΙΚΗΣΗ ΠΡΩΤΕΥΟΥΣΗΣ
'296': ΔΙΔΑΚΤΟΡΙΚΕΣ - ΜΕΤΑΠΤΥΧΙΑΚΕΣ ΣΠΟΥΔΕΣ ΕΘΝΙΚΟΥ ΜΕΤΣΟΒΙΟΥ
'297': ΕΙΣΦΟΡΑ ΚΑΤΟΧΩΝ ΕΙΔΩΝ ΠΡΩΤΗΣ ΑΝΑΓΚΗΣ
'298': ΔΙΑΦΟΡΟΙ ΔΙΚΟΝΟΜΙΚΟΙ ΝΟΜΟΙ
'299': ΔΙΕΘΝΕΙΣ ΛΙΜΕΝΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'300': ΥΓΕΙΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ ΕΛ.ΑΣ
'301': ΕΛΛΗΝΙΚΑ ΤΑΧΥΔΡΟΜΕΙΑ (ΕΛ.ΤΑ)
'302': ΜΙΣΘΟΙ ΚΑΙ ΕΠΙΔΟΜΑΤΑ Π. ΝΑΥΤΙΚΟΥ
'303': ΓΕΩΡΓΙΚΑ ΤΑΜΕΙΑ
'304': ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΜΕΤΑΛΛΕΥΤΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
'305': ΑΠΟΒΑΡΟ
'306': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΕΚΠΡΟΣΩΠΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ
'307': ΚΩΔΙΚΑΣ ΠΕΡΙ ΔΙΚΗΓΟΡΩΝ
'308': ΙΕΡΑΡΧΙΑ ΚΑΙ ΠΡΟΒΙΒΑΣΜΟΙ
'309': ΙΣΡΑΗΛΙΤΕΣ
'310': ΣΩΜΑ ΚΤΗΝΙΑΤΡΙΚΟ
'311': ΝΟΡΒΗΓΙΑ - ΝΕΑ ΖΗΛΑΝΔΙΑ – ΝΙΓΗΡΙΑ Κ.ΛΠ
'312': ΕΝΤΥΠΑ ΚΑΙ ΒΙΒΛΙΟΘΗΚΕΣ ΝΑΥΤΙΚΟΥ
'313': ΥΠΟΥΡΓΕΙΟ ΤΥΠΟΥ ΚΑΙ ΜΕΣΩΝ ΜΑΖΙΚΗΣ ΕΝΗΜΕΡΩΣΗΣ
'314': ΝΑΥΤΙΚΕΣ ΠΕΙΘΑΡΧΙΚΕΣ ΠΟΙΝΕΣ
'315': ΜΙΣΘΩΣΕΙΣ ΑΓΡΟΤΙΚΩΝ ΑΚΙΝΗΤΩΝ
'316': ΔΙΑΦΟΡΟΙ ΣΥΝΕΤΑΙΡΙΣΜΟΙ
'317': ΑΓΡΟΤΙΚΗ ΠΙΣΤΗ
'318': ΛΑΙΚΕΣ ΑΓΟΡΕΣ-ΤΑΜΕΙΟ ΛΑΙΚΩΝ ΑΓΟΡΩΝ
'319': ΚΑΝΟΝΙΣΜΟΣ ΠΕΙΘΑΡΧΙΑΣ ΧΩΡΟΦΥΛΑΚΗΣ
'320': ΑΔΙΚΗΜΑΤΑ ΚΑΤΑ ΤΗΣ ΔΗΜΟΣΙΑΣ ΑΣΦΑΛΕΙΑΣ
'321': ΕΝΟΙΚΙΑΣΗ ΦΟΡΟΥ ΔΗΜΟΣΙΩΝ ΘΕΑΜΑΤΩΝ
'322': ΕΥΡΩΠΑΙΚΗ ΣΥΜΒΑΣΗ ΚΟΙΝΩΝΙΚΗΣ ΚΑΙ ΙΑΤΡΙΚΗΣ ΑΝΤΙΛΗΨΕΩΣ
'323': ΕΠΙΒΑΤΗΓΑ ΑΕΡΟΣΤΡΩΜΝΑ ΟΧΗΜΑΤΑ
'324': ΕΦΕΔΡΟΙ
'325': ΣΤΡΑΤΙΩΤΙΚΕΣ ΛΕΣΧΕΣ
'326': ΠΡΟΣΩΠΙΚΟ ΦΥΛΑΚΩΝ
'327': ΑΝΑΘΕΩΡΗΣΗ ΤΙΜΩΝ
'328': ΜΑΛΑΚΙΑ ΚΑΙ ΜΑΛΑΚΟΣΤΡΑΚΑ
'329': ΚΩΔΙΚΑΣ ΔΗΜΟΣΙΟΥ ΝΑΥΤΙΚΟΥ ΔΙΚΑΙΟΥ
'330': ΔΙΑΦΟΡΑ ΣΩΜΑΤΕΙΑ
'331': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'332': ΚΩΔΙΚΟΠΟΙΗΣΗ ΑΓΟΡΑΝΟΜΙΚΩΝ ΔΙΑΤΑΞΕΩΝ
'333': ΕΚΠΑΙΔΕΥΣΗ ΣΤΗΝ ΑΛΛΟΔΑΠΗ
'334': ΔΙΔΑΚΤΙΚΑ ΒΙΒΛΙΑ
'335': ΣΥΝΤΑΞΙΟΔΟΤΙΚΑ ΚΑΙ ΑΣΦΑΛΙΣΤΙΚΑ ΘΕΜΑΤΑ ΠΡΟΣΩΠΙΚΟΥ Ν.Π.Δ.Δ
'336': ΕΠΙΔΟΜΑ ΟΙΚΟΓΕΝΕΙΩΝ ΣΤΡΑΤΙΩΤΙΚΩΝ ΕΞΑΦΑΝΙΣΘΕΝΤΩΝ ΚΑΙ ΑΙΧΜΑΛΩΤΩΝ
'337': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ
'338': ΚΕΝΤΡΟ ΔΙΠΛΩΜΑΤΙΚΩΝ ΣΠΟΥΔΩΝ
'339': ΓΕΝ. ΔΙΕΥΘΥΝΣΗ ΤΥΠΟΥ ΚΑΙ ΠΛΗΡΟΦΟΡΙΩΝ
'340': ΑΡΧΕΙΑ ΤΕΛΩΝΕΙΑΚΩΝ ΑΡΧΩΝ
'341': ΕΙΔΙΚΕΣ ΤΙΜΕΣ ΚΑΥΣΙΜΩΝ
'342': ΣΤΕΓΗ ΥΓΕΙΟΝΟΜΙΚΩΝ
'343': ΓΕΝΙΚΑ ΠΕΡΙ ΣΥΜΒΟΛΑΙΟΓΡΑΦΩΝ
'344': ΒΟΥΛΗ
'345': ΕΠΙΛΟΓΗ & ΑΞΙΟΛΟΓΗΣΗ ΑΣΤΥΝΟΜΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ ΕΛ.ΑΣ
'346': ΧΟΙΡΟΤΡΟΦΙΑ
'347': ΦΟΡΟΣ ΚΑΤΑΝΑΛΩΣΕΩΣ ΠΕΤΡΕΛΑΙΟΕΙΔΩΝ
'348': ΕΠΙΒΟΛΗ ΤΕΛΩΝΙΑΚΩΝ ΔΑΣΜΩΝ
'349': ΑΕΡΟΠΟΡΙΚΗ ΣΤΡΑΤΟΛΟΓΙΑ
'350': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΑ ΝΑΡΚΩΤΙΚΑ
'351': ΔΙΑΦΟΡΕΣ ΤΡΑΠΕΖΕΣ
'352': ΟΙΝΟΛΟΓΟΙ
'353': ΤΕΛΩΝΟΦΥΛΑΚΗ
'354': ΤΑΜΕΙΟ ΕΘΝΙΚΗΣ ΑΜΥΝΑΣ (T.EΘ.A.) - ΕΘΝΙΚΗ ΕΠΙΤΡΟΠΗ ΕΞΟΠΛΙΣΜΟΥ ΕΝΟΠΛΩΝ
ΔΥΝΑΜΕΩΝ (Ε.Ε.Ε.Ε.Δ.)
'355': ΕΚΤΕΛΕΣΗ ΤΗΣ ΠΟΙΝΗΣ
'356': ΙΣΟΛΟΓΙΣΜΟΙ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ
'357': ΑΡΧΙΤΕΚΤΟΝΙΚΟΙ ΔΙΑΓΩΝΙΣΜΟΙ
'358': ΚΑΤΑΡΓΗΣΗ ΦΥΛΕΤΙΚΩΝ ΔΙΑΚΡΙΣΕΩΝ
'359': ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΔΙΚΑΙΩΜΑΤΑ ΑΠΟΦΟΙΤΩΝ
'360': ΜΟΝΑΣΤΗΡΙΑΚΗ ΠΕΡΙΟΥΣΙΑ ΣΑΜΟΥ
'361': ΣΥΝΤΑΞΗ ΔΗΜΟΤΙΚΩΝ ΚΑΙ ΚΟΙΝΟΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ
'362': ΟΙΚΟΝΟΜΙΚΕΣ ΕΦΟΡΙΕΣ
'363': ΦΡΟΝΤΙΣΤΗΡΙΑ ΕΦΑΡΜΟΓΩΝ
'364': ΝΟΜΑΡΧΙΕΣ ΑΤΤΙΚΗΣ
'365': ΦΥΜΑΤΙΩΣΗ
'366': ΕΛΕΓΧΟΣ ΑΝΑΤΙΜΗΣΕΩΝ
'367': ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΝΑΥΤΙΛΙΑΣ
'368': ΚΩΦΑΛΑΛΟΙ
'369': ΙΑΤΡΙΚΗ ΔΕΟΝΤΟΛΟΓΙΑ
'370': ΕΞΟΔΑ ΔΗΜΟΣΙΑΣ ΑΣΦΑΛΕΙΑΣ
'371': ΜΕ ΤΗΝ ΑΡΓΕΝΤΙΝΗ
'372': ΚΛΑΔΟΣ ΥΓΕΙΟΝΟΜΙΚΗΣ ΠΕΡΙΘΑΛΨΗΣ Τ.Α.Ε
'373': ΥΠΗΡΕΣΙΑ ΕΚΚΑΘΑΡΙΣΕΩΣ ΝΑΡΚΟΠΕΔΙΩΝ
'374': ΤΑΜΕΙΟ ΑΡΩΓΗΣ ΥΠΑΛΛΗΛΩΝ ΑΣΤΥΝΟΜΙΑΣ ΠΟΛΕΩΝ Τ.Α.Υ.Α.Π
'375': ΠΡΟΣΤΑΣΙΑ ΔΗΜΟΣΙΩΝ ΚΤΗΜΑΤΩΝ
'376': ΒΙΒΛΙΑ ΕΝΔΙΚΩΝ ΜΕΣΩΝ
'377': ΕΛΛΗΝΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΜΙΚΡΟΜΕΣΑΙΩΝ ΜΕΤΑΠΟΙΗΤΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ ΚΑΙ
ΧΕΙΡΟΤΕΧΝΙΑΣ
'378': ΔΗΜΟΣΙΟΓΡΑΦΙΚΟΣ ΧΑΡΤΗΣ
'379': ΦΟΡΟΣ ΓΑΜΙΚΩΝ ΣΥΜΦΩΝΩΝ ΙΣΡΑΗΛΙΤΩΝ
'380': ΥΠΟΤΡΟΦΙΑΙ ΚΤΗΝΙΑΤΡΙΚΗΣ
'381': ΑΠΟΔΟΧΕΣ ΠΡΟΣΩΠΙΚΟΥ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ
'382': ΕΠΙΒΑΤΗΓΑ ΑΚΤΟΠΛΟΙΚΑ ΠΛΟΙΑ
'383': ΠΑΛΑΙΟΙ ΔΗΜΟΣΙΟΥΠΑΛΛΗΛΙΚΟΙ ΝΟΜΟΙ
'384': ΚΩΔΙΚΑΣ ΠΕΡΙ ΚΛΗΡΟΔΟΤΗΜΑΤΩΝ
'385': ΟΙΚΟΝΟΜΙΚΗ ΕΠΙΘΕΩΡΗΣΗ
'386': ΚΤΗΜΑΤΟΓΡΑΦΗΣΗ ΔΑΣΩΝ
'387': ΟΡΓΑΝΙΚΕΣ ΘΕΣΕΙΣ
'388': ΠΕΡΙΟΡΙΣΜΟΣ ΧΡΗΣΗΣ ΟΡΙΣΜΕΝΩΝ ΣΥΜΒΑΤΙΚΩΝ ΟΠΛΩΝ
'389': ΑΓΙΟΝ ΟΡΟΣ
'390': ΚΥΡΩΣΕΙΣ ΦΟΡΟΛΟΓΙΚΩΝ ΠΑΡΑΒΑΣΕΩΝ
'391': ΚΑΤΑΣΤΑΣΗ ΠΡΟΣΩΠΙΚΟΥ Ο.Γ.Α
'392': ΕΠΑΝΑΠΑΤΡΙΣΜΟΣ ΚΕΦΑΛΑΙΩΝ
'393': ΜΑΘΗΤΕΣ ΤΕΧΝΙΤΕΣ
'394': ΔΙΑΒΙΒΑΣΕΙΣ
'395': ΕΜΜΙΣΘΟΙ ΚΑΙ ΠΟΙΝΙΚΟΙ ΔΙΚ. ΕΠΙΜΕΛΗΤΕΣ
'396': ΣΥΜΒΑΣΕΙΣ ΔΙΚΑΣΤΙΚΗΣ ΣΥΝΔΡΟΜΗΣ
'397': ΔΗΜΟΣΙΑ ΕΠΙΧΕΙΡΗΣΗ ΠΕΤΡΕΛΑΙΟΥ
'398': ΕΛΛΗΝΙΚΗ ΤΡΑΠΕΖΑ ΒΙΟΜΗΧΑΝΙΚΗΣ ΑΝΑΠΤΥΞΕΩΣ ΑΝΩΝΥΜΟΣ ΕΤΑΙΡΕΙΑ (Ε.Τ.Β.Α. Α.Ε.)
'399': ΕΙΔΙΚΟΤΗΤΕΣ ΚΑΙ ΤΡΟΠΟΣ ΕΙΣΟΔΟΥ ΣΤΕΛΕΧΩΝ
'400': ΠΡΟΣΤΑΣΙΑ ΕΡΓΑΖΟΜΕΝΩΝ ΣΤΗΝ ΗΜΕΔΑΠΗ - ΣΩΜΑ ΕΠΙΘΕΩΡΗΣΗΣ ΕΡΓΑΣΙΑΣ
'401': ΙΝΣΤΙΤΟΥΤΟ ΩΚΕΑΝΟΓΡΑΦΙΚΩΝ ΚΑΙ ΑΛΙΕΥΤΙΚΩΝ ΕΡΕΥΝΩΝ
'402': ΕΛΕΓΧΟΣ ΑΠΟΛΥΣΕΩΝ ΜΙΣΘΩΤΩΝ
'403': ΠΑΝΕΛΛΗΝΙΑ ΕΚΘΕΣΗ ΛΑΜΙΑΣ
'404': ΚΥΡΙΑΚΗ ΑΡΓΙΑ ΚΑΙ ΑΛΛΕΣ ΥΠΟΧΡΕΩΤΙΚΕΣ ΑΡΓΙΕΣ
'405': ΚΛΑΔΟΣ ΥΓΕΙΑΣ Ο.Α.Ε.Ε
'406': ΟΡΚΟΣ ΣΤΡΑΤΙΩΤΙΚΩΝ
'407': ΕΜΠΟΡΙΚΑ ΒΙΒΛΙΑ
'408': ΥΓΕΙΟΝΟΜΙΚΕΣ ΕΠΙΤΡΟΠΕΣ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'409': ΑΓΙΟΣ ΒΙΚΕΝΤΙΟΣ-ΓΡΕΝΑΔΙΝΟΙ, ΑΓΙΟΣ ΜΑΡΙΝΟΣ Κ.ΛΠ
'410': ΑΠΟΖΗΜΙΩΣΗ ΔΙΑΤΕΛΕΣΑΝΤΩΝ ΠΡΩΘΥΠΟΥΡΓΩΝ
'411': ΑΣΦΑΛΙΣΗ ΛΟΓΟΤΕΧΝΩΝ ΚΑΙ ΚΑΛΛΙΤΕΧΝΩΝ
'412': ΠΕΙΘΑΡΧΙΚΑ ΣΥΜΒΟΥΛΙΑ
'413': ΕΤΑΙΡΙΕΣ ΧΡΗΜΑΤΟΔΟΤΙΚΗΣ ΜΙΣΘΩΣΗΣ
'414': ΚΟΙΝΩΝΙΚΗ ΥΠΗΡΕΣΙΑ ΦΥΛΑΚΩΝ
'415': ΚΑΝΟΝΙΣΜΟΣ ΥΠΗΡΕΣΙΩΝ ΑΓΡΟΦΥΛΑΚΗΣ
'416': ΑΣΦΑΛΙΣΗ ΣΤΟ ΙΚΑ
'417': ΕΜΠΟΡΙΚΟΙ ΣΥΜΒΟΥΛΟΙ ΚΑΙ ΑΚΟΛΟΥΘΟΙ
'418': ΕΠΙΚΟΥΡΟΙ ΠΑΡΑΤΗΡΗΤΕΣ
'419': ΥΠΟΤΡΟΦΙΕΣ
'420': ΚΕΝΤΡΟ ΠΡΟΓΡΑΜΜΑΤΙΣΜΟΥ
'421': ΠΡΩΤΕΣ ΥΛΕΣ ΣΟΚΟΛΑΤΟΠΟΙΙΑΣ
'422': ΕΠΙΤΡΟΠΗ ΚΗΠΩΝ ΚΑΙ ΔΕΝΔΡΟΣΤΟΙΧΙΩΝ
'423': ΚΙΝΗΤΟ ΕΠΙΣΗΜΑ
'424': ΣΥΝΔΙΚΑΛΙΣΜΟΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'425': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ Π.Ν
'426': ΟΡΓΑΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΤΑΜΕΙΟΥ ΠΑΡΑΚΑΤΑΘΗΚΩΝ ΚΑΙ ΔΑΝΕΙΩΝ
'427': ΑΔΕΙΕΣ ΗΝΙΟΧΙΑΣ
'428': ΥΠΗΡΕΣΙΑ ΠΡΟΓΡΑΜΜΑΤΙΣΜΟΥ ΚΑΙ ΜΕΛΕΤΩΝ
'429': ΚΡΑΤΙΚΑ ΑΥΤΟΚΙΝΗΤΑ
'430': ΑΤΟΜΙΚΗ ΚΑΤΑΓΓΕΛΙΑ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ
'431': ΠΟΛΥΤΕΚΝΟΙ
'432': ΙΣΤΟΡΙΚΟ ΑΡΧΕΙΟ ΜΑΚΕΔΟΝΙΑΣ
'433': ΑΣΦΑΛΙΣΗ ΑΥΤΟΚΙΝΗΤΙΚΩΝ ΑΤΥΧΗΜΑΤΩΝ
'434': ΔΑΝΕΙΑ ΕΣΩΤΕΡΙΚΑ
'435': ΕΚΚΛΗΣΙΑ ΚΡΗΤΗΣ
'436': ΦΟΡΟΛΟΓΙΑ ΣΤΑΦΙΔΑΣ
'437': ΕΚΠΑΙΔΕΥΤΙΚΕΣ ΑΔΕΙΕΣ
'438': ΑΕΡΟΔΙΚΕΙΑ
'439': ΕΠΙΔΟΜΑ ΑΣΘΕΝΕΙΑΣ
'440': ΘΕΣΕΙΣ ΣΥΜΒΟΛΑΙΟΓΡΑΦΩΝ
'441': ΑΓΟΡΑ ΣΥΝΑΛΛΑΓΜΑΤΟΣ
'442': ΝΟΜΙΚΟ ΣΥΜΒΟΥΛΙΟ ΤΟΥ ΚΡΑΤΟΥΣ (Ν.Σ.Κ.)
'443': ΦΟΡΟΛΟΓΙΑ ΜΕΤΑΒΙΒΑΣΗΣ
'444': ΣΥΜΒΟΥΛΙΑ - ΕΠΙΤΡΟΠΕΣ - ΙΝΣΤΙΤΟΥΤΑ ΕΡΓΑΣΙΑΣ ΚΑΙ ΚΟΙΝΩΝΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ
'445': ΤΕΛΗ ΕΙΣΙΤΗΡΙΩΝ ΚΑΙ ΚΟΜΙΣΤΡΩΝ
'446': ΟΙΚΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ ΥΓΕΙΟΝΟΜΙΚΟΥ ΣΩΜΑΤΟΣ
'447': ΠΡΟΣΩΠΙΚΟ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ ΜΕ ΣΧΕΣΗ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ
'448': ΑΡΤΕΡΓΑΤΕΣ
'449': ΕΥΚΟΛΙΕΣ ΣΕ ΦΟΙΤΗΤΕΣ
'450': ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΚΟΙΝΗΣ ΧΟΡΤΟΝΟΜΗΣ ΚΑΙ ΣΥΝΙΔΙΟΚΤΗΣΙΑΣ
'451': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΠΕΡΙΦΕΡΕΙΑΚΟΥ ΓΕΝΙΚΟΥ ΝΟΣΟΚΟΜΕΙΟΥ Ο
ΕΥΑΓΓΕΛΙΣΜΟΣ
'452': ΠΡΟΣΚΟΠΙΣΜΟΣ
'453': ΣΥΜΒΟΥΛΙΑ ΕΠΑΓΓΕΛΜΑΤΙΚΗΣ ΚΑΙ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΕΩΣ
'454': ΚΡΑΤΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΜΗΧΑΝΗΜΑΤΩΝ ΔΗΜΟΣΙΩΝ ΕΡΓΩΝ
'455': ΑΤΟΜΙΚΑ ΕΓΓΡΑΦΑ ΑΝΘΥΠΑΣΠΙΣΤΩΝ-ΥΠΑΞΙΩΜΑΤΙΚΩΝ
'456': ΔΙΑΦΟΡΕΣ ΣΧΟΛΕΣ
'457': ΒΙΒΛΙΑ ΔΗΜΟΣΙΕΥΣΕΩΣ ΔΙΑΘΗΚΩΝ
'458': ΚΑΝΟΝΙΣΜΟΙ ΠΡΟΣΩΠΙΚΟΥ ΣΥΓΚΟΙΝΩΝΙΑΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
'459': ΤΟΥΡΙΣΤΙΚΟΙ ΤΟΠΟΙ
'460': ΙΝΣΤΙΤΟΥΤΟ ΞΕΝΩΝ ΓΛΩΣΣΩΝ ΚΑΙ ΦΙΛΟΛΟΓΙΩΝ
'461': ΚΑΠΝΟΠΩΛΕΣ
'462': ΑΓΩΓΕΣ ΓΙΑΤΡΩΝ
'463': ΣΥΣΤΑΣΗ ΚΑΙ ΑΠΟΔΟΣΗ ΠΑΡΑΚΑΤΑΘΗΚΩΝ ΑΠΟ Τ.Π. ΚΑΙ Δ
'464': ΑΔΙΚΗΜΑΤΑ ΔΙΑΠΡΑΤΤΟΜΕΝΑ ΣΤΑ ΚΡΑΤΗ-ΜΕΛΗ
'465': ΑΝΑΣΤΟΛΕΣ ΤΟΥ ΣΥΝΤΑΓΜΑΤΟΣ - ΚΑΤΑΣΤΑΣΗ ΠΟΛΙΟΡΚΙΑΣ
'466': ΣΥΜΒΑΣΕΙΣ ΠΑΡΟΧΗΣ ΑΣΦΑΛΕΙΑΣ (ΕΝΕΧΥΡΟ, ΥΠΟΘΗΚΗ Κ.ΛΠ.)
'467': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣΝΑΥΤΙΚΩΝ ΠΡΑΚΤΟΡΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ (Τ.Α.Ν.Π.Υ.)
'468': ΑΝΩΤΑΤΟ ΣΥΓΚΟΙΝΩΝΙΑΚΟ ΣΥΜΒΟΥΛΙΟ
'469': ΠΡΕΒΕΝΤΟΡΙΑ
'470': ΑΝΑΒΟΛΗ ΣΤΡΑΤΕΥΣΕΩΣ
'471': ΕΙΔΙΚΑ ΛΗΞΙΑΡΧΕΙΑ
'472': ΓΕΩΤΕΧΝΙΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ
'473': ΥΓΕΙΟΝΟΜΙΚΑ ΔΙΚΑΙΩΜΑΤΑ
'474': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΕΚΠΑΙΔΕΥΤΙΚΩΝ
'475': ΚΑΖΑΚΣΤΑΝ – ΚΑΜΕΡΟΥΝ – ΚΑΝΑΔΑΣ Κ.ΛΠ
'476': ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΑΠΟ ΤΟΝ ΑΜΑΧΟ ΠΛΗΘΥΣΜΟ
'477': ΦΙΛΟΣΟΦΙΚΗ ΣΧΟΛΗ
'478': ΕΚΤΕΛΩΝΙΣΜΟΣ ΤΑΧΥΔΡΟΜΙΚΩΝ ΔΕΜΑΤΩΝ
'479': ΥΔΡΕΥΣΗ ΘΕΣΣΑΛΟΝΙΚΗΣ
'480': ΣΥΜΦΩΝΙΕΣ ΠΕΡΙ ΠΛΩΤΩΝ ΟΔΩΝ
'481': ΑΝΑΚΗΡΥΞΗ ΤΗΣ ΑΝΕΞΑΡΤΗΣΙΑΣ
'482': ΕΠΙΤΡΟΠΗ ΟΛΥΜΠΙΑΚΩΝ ΑΓΩΝΩΝ
'483': ΟΙΝΟΠΑΡΑΓΩΓΗ ΑΤΤΙΚΟΒΟΙΩΤΙΑΣ
'484': ΕΚΠΤΩΣΕΙΣ ΥΠΕΡ ΕΞΑΓΩΓΕΩΝ
'485': ΦΟΡΟΛΟΓΙΑ ΚΛΗΡΟΝΟΜΙΩΝ, ΔΩΡΕΩΝ, ΓΟΝΙΚΩΝ ΠΑΡΟΧΩΝ
'486': ΟΡΦΑΝΟΤΡΟΦΕΙΑ ΚΑΙ ΟΙΚΟΤΡΟΦΕΙΑ
'487': ΜΕ ΤΗΝ ΟΥΡΑΓΟΥΑΗ
'488': ΜΕ ΤΗΝ ΑΥΣΤΡΙΑΚΗ
'489': ΔΙΑΦΟΡΟΙ ΦΟΡΟΙ ΚΑΤΑΝΑΛΩΣΕΩΣ
'490': ΔΙΕΥΘΥΝΣΗ ΕΦΕΔΡΩΝ - ΠΟΛΕΜΙΣΤΩΝ - ΑΓΩΝΙΣΤΩΝ
'491': ΑΓΡΟΤΙΚΕΣ ΟΙΚΟΚΥΡΙΚΕΣ ΣΧΟΛΕΣ
'492': ΞΥΛΕΙΑ
'493': ΒΙΒΛΙΑΡΙΑ ΥΓΕΙΑΣ ΕΡΓΑΤΩΝ
'494': ΣΧΟΛΗ ΑΞΙΩΜΑΤΙΚΩΝ ΣΤΡΑΤΙΩΤΙΚΩΝ ΥΠΗΡΕΣΙΩΝ
'495': ΝΟΜΑΡΧΙΑΚΕΣ ΚΑΙ ΔΗΜΟΤΙΚΕΣ ΕΚΛΟΓΕΣ
'496': ΕΓΓΥΗΣΕΙΣ ΚΑΙ ΔΑΝΕΙΑ ΤΟΥ ΔΗΜΟΣΙΟΥ
'497': ΥΠΟΥΡΓΕΙΟ ΑΝΑΠΤΥΞΗΣ
'498': ΤΑΚΤΙΚΑ ΔΙΟΙΚΗΤΙΚΑ ΔΙΚΑΣΤΗΡΙΑ - ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'499': ΤΡΟΦΟΔΟΣΙΑ ΠΛΗΡΩΜΑΤΩΝ ΠΛΟΙΩΝ
'500': ΔΙΑΦΟΡΟΙ ΛΙΜΕΝΕΣ ΚΑΙ ΛΙΜΕΝΙΚΑ ΤΑΜΕΙΑ
'501': ΗΛΕΚΤΡΙΚΕΣ ΕΚΜΕΤΑΛΛΕΥΣΕΙΣ
'502': ΠΡΟΥΠΟΘΕΣΕΙΣ ΑΣΚΗΣΗΣ ΔΙΑΦΟΡΩΝ ΕΠΑΓΓΕΛΜΑΤΩΝ
'503': ΤΕΛΩΝΕΙΑΚΗ ΥΠΗΡΕΣΙΑ ΑΕΡΟΣΚΑΦΩΝ
'504': ΕΠΙΤΡΟΠΗ ΔΑΣΜΟΛΟΓΙΟΥ
'505': ΝΑΥΠΗΓΕΙΑ Π. ΝΑΥΤΙΚΟΥ
'506': ΒΙΟΜΗΧΑΝΙΚΕΣ ΚΑΙ ΕΠΙΧΕΙΡΗΜΑΤΙΚΕΣ ΠΕΡΙΟΧΕΣ
'507': ΙΑΤΡΟΔΙΚΑΣΤΕΣ
'508': ΑΘΛΗΤΙΣΜΟΣ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'509': ΟΡΓΑΝΙΣΜΟΣ ΣΥΚΩΝ
'510': ΚΑΝΟΝΙΣΜΟΣ ΑΣΘΕΝΕΙΑΣ ΤΑΜΕΙΟΥ ΣΥΝΤΑΞΕΩΝ ΕΦΗΜΕΡΙΔΟΠΩΛΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ
ΠΡΑΚΤΟΡΕΙΩΝ (Τ.Σ.Ε.Υ.Π.)
'511': ΑΔΕΙΕΣ ΜΙΣΘΩΤΩΝ
'512': ΠΡΟΣΤΑΣΙΑ ΚΕΦΑΛΑΙΩΝ ΕΞΩΤΕΡΙΚΟΥ
'513': ΑΠΟΔΕΙΚΤΙΚΑ ΦΟΡΟΛΟΓΙΚΗΣ ΕΝΗΜΕΡΟΤΗΤΑΣ
'514': ΟΡΓΑΝΩΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ ΤΩΝ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ ΕΘΝΙΚΗ ΕΠΙΤΡΟΠΗ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ
ΚΑΙ ΤΑΧΥΔΡΟΜΕΙΩΝ (Ε.Ε.Τ.Τ.)
'515': ΠΡΟΣΩΠΙΚΟ Ο.Τ.Ε
'516': ΒΑΣΙΛΙΚΑ ΙΔΡΥΜΑΤΑ
'517': ΑΠΟΚΑΤΑΣΤΑΣΗ ΠΛΗΓΕΝΤΩΝ ΑΠΟ ΕΚΡΗΞΗ ΠΛΟΙΟΥ ΣΤΗΝ ΚΡΗΤΗ
'518': ΕΚΜΕΤΑΛΛΕΥΣΗ ΔΥΝΑΜΕΩΣ ΡΕΟΝΤΩΝ ΥΔΑΤΩΝ
'519': ΚΑΚΟΥΡΓΙΟΔΙΚΕΙΑ
'520': ΚΕΝΤΡΙΚΕΣ ΑΓΟΡΕΣ ΑΛΛΩΝ ΠΟΛΕΩΝ
'521': ΤΑΜΕΙΟ ΑΛΛΗΛΟΒΟΗΘΕΙΑΣ Π.Ν
'522': ΕΚΛΟΓΙΚΟΙ ΚΑΤΑΛΟΓΟΙ ΚΑΙ ΒΙΒΛΙΑΡΙΑ
'523': ΥΠΗΡΕΣΙΑ ΕΓΓΕΙΩΝ ΒΕΛΤΙΩΣΕΩΝ
'524': ΤΟΥΡΙΣΤΙΚΗ ΑΝΑΠΤΥΞΗ
'525': ΝΟΜΟΘΕΣΙΑ ΠΕΡΙ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ
'526': ΕΛΕΓΧΟΣ ΕΚΡΗΚΤΙΚΩΝ ΥΛΩΝ
'527': ΜΑΚΕΔΟΝΙΚΟΙ ΣΙΔΗΡΟΔΡΟΜΟΙ
'528': ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΣΕ ΔΗΜΟΣΙΟΥΣ ΥΠΑΛΛΗΛΟΥΣ
'529': ΣΤΡΑΤΙΩΤΙΚΕΣ ΥΠΟΧΡΕΩΣΕΙΣ ΕΠΑΝΑΠΑΤΡΙΖΟΜΕΝΩΝ
'530': ΔΙΑΚΡΙΣΗ ΕΜΠΟΡΙΚΩΝ ΠΡΑΞΕΩΝ
'531': ΟΡΓΑΝΙΣΜΟΣ ΕΛΛΗΝΙΚΩΝ ΓΕΩΡΓΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ (Ε.Λ.Γ.Α.)
'532': ΕΞΩΣΧΟΛΙΚΗ ΣΩΜΑΤΙΚΗ ΑΓΩΓΗ
'533': ΔΡΑΧΜΟΠΟΙΗΣΗ
'534': ΜΕ ΤΗ ΒΡΑΖΙΛΙΑ
'535': ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΑΚΑΔΗΜΙΑ
'536': ΑΝΤΑΛΛΑΓΗ ΘΕΡΑΠΕΥΤΙΚΩΝ ΟΥΣΙΩΝ
'537': ΓΑΛΛΙΑ, ΓΕΡΜΑΝΙΑ Κ.ΛΠ
'538': ΝΟΜΟΠΑΡΑΣΚΕΥΑΣΤΙΚΕΣ ΕΠΙΤΡΟΠΕΣ
'539': ΚΥΒΕΡΝΕΙΟ ΘΕΣΣΑΛΟΝΙΚΗΣ
'540': ΣΤΡΑΤΙΩΤΙΚΟΙ ΑΚΟΛΟΥΘΟΙ
'541': ΔΙΑΘΕΣΗ ΑΠΟΣΤΡΑΓΓΙΖΟΜΕΝΩΝ ΓΑΙΩΝ
'542': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΓΙΑ ΡΑΔΙΟΦΩΝΙΑ – ΤΗΛΕΟΡΑΣΗ
'543': ΓΝΩΜΟΔΟΤΙΚΟ ΣΥΜΒΟΥΛΙΟ ΦΑΡΜΑΚΩΝ
'544': ΣΥΜΒΑΣΕΙΣ ΔΙΑΦΟΡΕΣ
'545': ΠΡΑΞΕΙΣ ΚΑΤΑ ΤΗΣ ΑΣΦΑΛΕΙΑΣ ΤΗΣ ΑΕΡΟΠΟΡΙΑΣ
'546': ΙΑΤΡΟΙ ΙΑΜΑΤΙΚΩΝ ΠΗΓΩΝ
'547': ΚΕΝΤΡΙΚΟ ΣΥΜΒΟΥΛΙΟ ΥΓΕΙΑΣ (ΚΕ.Σ.Υ.)
'548': ΑΝΩΤΑΤΟ ΣΥΜΒΟΥΛΙΟ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'549': ΥΠΟΥΡΓΕΙΟ ΕΝΕΡΓΕΙΑΣ ΚΑΙ ΦΥΣΙΚΩΝ ΠΟΡΩΝ
'550': ΤΕΧΝΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΕΛΑΦΡΩΝ ΑΕΡΟΠΛΑΝΩΝ Δ.Χ
'551': ΠΟΛΥΕΘΝΕΙΣ ΜΟΡΦΩΤΙΚΕΣ ΣΥΜΦΩΝΙΕΣ
'552': ΕΚΠΑΙΔΕΥΣΗ Λ.Σ
'553': ΠΡΟΣΤΑΣΙΑ ΕΛΕΥΘΕΡΟΥ ΑΝΤΑΓΩΝΙΣΜΟΥ
'554': ΕΘΝΙΚΗ ΕΠΙΤΡΟΠΗ ΔΙΕΘΝΟΥΣ ΕΜΠΟΡΙΚΟΥ ΕΠΙΜΕΛΗΤΗΡΙΟΥ
'555': ΟΡΓΑΝΙΣΜΟΣ
'556': ΤΕΛΩΝΕΙΑΚΕΣ ΠΑΡΑΚΑΤΑΘΗΚΕΣ
'557': ΕΛΕΓΧΟΣ ΟΡΓΑΝΙΣΜΩΝ ΚΟΙΝΩΝΙΚΗΣ ΠΟΛΙΤΙΚΗΣ
'558': ΕΝΩΣΕΙΣ ΑΠΟΣΤΡΑΤΩΝ ΑΞΙΩΜΑΤΙΚΩΝ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'559': ΦΥΛΛΑ ΠΟΙΟΤΗΤΑΣ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'560': ΙΝΣΤΙΤΟΥΤΟ ΓΕΩΛΟΓΙΚΩΝ ΚΑΙ ΜΕΤΑΛΛΕΥΤΙΚΩΝ ΕΡΕΥΝΩΝ
'561': ΛΑΟΓΡΑΦΙΚΟ ΚΑΙ ΕΘΝΟΛΟΓΙΚΟ ΜΟΥΣΕΙΟ ΜΑΚΕΔΟΝΙΑΣ - ΘΡΑΚΗΣ
'562': ΠΡΩΤΕΣ ΥΛΕΣ ΤΑΠΗΤΟΥΡΓΙΑΣ
'563': ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΡΗΤΗΣ
'564': ΚΩΔΙΚΑΣ ΟΔΙΚΗΣ ΚΥΚΛΟΦΟΡΙΑΣ
'565': ΦΑΡΜΑΚΕΥΤΙΚΗ ΠΕΡΙΘΑΛΨΗ
'566': ΜΕΛΕΤΕΣ ΠΡΟΓΡΑΜΜΑΤΟΣ ΔΗΜΟΣΙΩΝ ΕΠΕΝΔΥΣΕΩΝ
'567': ΕΠΙΔΟΣΗ ΔΙΑ ΤΟΥ ΤΑΧΥΔΡΟΜΕΙΟΥ
'568': ΠΑΝΕΠΙΣΤΗΜΙΟ ΘΡΑΚΗΣ
'569': ΗΘΙΚΕΣ ΑΜΟΙΒΕΣ
'570': ΔΗΜΟΣΙΑ ΚΤΗΜΑΤΑ ΣΤΗ ΔΩΔΕΚΑΝΗΣΟ
'571': ΣΥΜΒΑΣΕΙΣ ΔΙΚΑΣΤΙΚΗΣ ΑΝΤΙΛΗΨΕΩΣ
'572': ΠΕΡΙΟΡΙΣΜΟΙ ΑΛΙΕΙΑΣ
'573': ΠΥΡΗΝΙΚΕΣ ΕΓΚΑΤΑΣΤΑΣΕΙΣ
'574': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΑΥΤΟΚΙΝΗΤΩΝ
'575': ΕΓΓΡΑΦΕΣ, ΕΞΕΤΑΣΕΙΣ, ΑΝΑΛΥΤΙΚΑ ΠΡΟΓΡΑΜΜΑΤΑ
'576': ΔΙΚΑΙΩΜΑΤΑ ΤΕΛΩΝΕΙΑΚΩΝ ΕΡΓΑΣΙΩΝ
'577': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΑΥΤΟΚΙΝΗΤΙΣΤΩΝ (Τ.Σ.Α.)
'578': ΤΗΛΕΦΩΝΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'579': ΦΟΡΟΛΟΓΙΑ ΑΣΦΑΛΙΣΤΡΩΝ
'580': ΔΙΕΘΝΗΣ ΥΔΡΟΓΡΑΦΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ
'581': ΕΠΑΡΧΙΕΣ
'582': ΑΓΡΟΤ. ΑΠΟΚΑΤΑΣΤΑΣΗ ΠΡΟΣΦΥΓΩΝ
'583': ΓΕΝΙΚΑ ΓΙΑ ΤΑ ΘΕΑΤΡΑ
'584': ΣΥΜΒΑΣΕΙΣ ΔΙΩΞΕΩΣ ΛΑΘΡΕΜΠΟΡΙΟΥ
'585': ΜΗΧΑΝΕΣ ΠΡΟΠΛΗΡΩΜΗΣ ΤΕΛΩΝ
'586': ΟΡΓΑΝΙΣΜΟΣ ΚΡΑΤΙΚΩΝ ΘΕΑΤΡΩΝ
'587': ΚΕΝΤΡΟ ΗΛΕΚΤΡΟΝΙΚΟΥ ΥΠΟΛΟΓΙΣΤΟΥ ΚΟΙΝΩΝΙΚΩΝ ΥΠΗΡΕΣΙΩΝ
'588': ΦΟΡΟΣ ΠΡΟΣΤΙΘΕΜΕΝΗΣ ΑΞΙΑΣ
'589': ΤΑΜΕΙΑ ΑΡΩΓΗΣ ΤΤΤ. ΥΠΑΛΛΗΛΩΝ
'590': ΣΩΜΑ ΟΡΚΩΤΩΝ ΕΛΕΓΚΤΩΝ ΛΟΓΙΣΤΩΝ (Σ.Ο.Ε.Λ.), ΕΠΙΤΡΟΠΗ ΛΟΓΙΣΤΙΚΗΣ ΤΥΠΟΠΟΙΗΣΗΣ
ΚΑΙ ΕΛΕΓΧΩΝ (Ε.Λ.Τ.Ε.)
'591': ΑΓΡΟΤΙΚΑ ΝΗΠΙΟΤΡΟΦΕΙΑ
'592': ΣΧΕΔΙΟ ΠΟΛΕΩΣ ΑΘΗΝΩΝ ΠΕΙΡΑΙΩΣ
'593': ΜΙΣΘΩΣΕΙΣ ΑΚΙΝΗΤΩΝ Ο.Δ.Ε.Π
'594': ΕΛΕΓΧΟΣ ΣΠΟΡΟΠΑΡΑΓΩΓΗΣ
'595': ΑΜΥΝΤΙΚΕΣ ΠΕΡΙΟΧΕΣ ΚΑΙ Ν. ΟΧΥΡΑ
'596': ΟΔΟΙΠΟΡΙΚΑ
'597': ΠΟΡΟΙ ΟΡΓΑΝΙΣΜΩΝ ΤΟΥΡΙΣΜΟΥ
'598': ΔΙΕΘΝΕΣ ΔΙΚΑΣΤΗΡΙΟ
'599': ΟΙΚΟΝΟΜΙΚΗ ΜΕΡΙΜΝΑ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'600': ΓΕΝΙΚΟ ΝΟΣΟΚΟΜΕΙΟ ΕΜΠΟΡΙΚΟΥ ΝΑΥΤΙΚΟΥ
'601': ΝΟΜΙΚΗ ΒΟΗΘΕΙΑ ΣΕ ΠΟΛΙΤΕΣ ΧΑΜΗΛΟΥ ΕΙΣΟΔΗΜΑΤΟΣ
'602': ΣΥΜΒΟΛΑΙΟΓΡΑΦΙΚΟΙ ΣΥΛΛΟΓΟΙ
'603': ΥΠΟΥΡΓΕΙΟ ΣΤΡΑΤΙΩΤΙΚΩΝ
'604': ΠΡΟΣΩΠΙΚΟ Ε.Μ.Π
'605': ΥΠΟΥΡΓΕΙΟ ΕΡΓΑΣΙΑΣ
'606': ΑΓΟΝΕΣ ΓΡΑΜΜΕΣ
'607': ΜΟΝΟΠΩΛΙΟ ΠΕΤΡΕΛΑΙΟΥ
'608': ΠΡΟΛΗΨΗ ΡΥΠΑΝΣΗΣ ΤΗΣ ΘΑΛΑΣΣΑΣ
'609': ΧΩΡΙΚΗ ΔΙΚΑΙΟΔΟΣΙΑ ΤΕΛΩΝΕΙΑΚΩΝ ΑΡΧΩΝ
'610': ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΣΩΜΑΤΕΙΑ
'611': ΥΠΗΡΕΣΙΑ ΑΓΡΟΤΙΚΗΣ ΑΣΦΑΛΕΙΑΣ
'612': ΑΞΙΟΠΟΙΗΣΗ ΕΚΚΛΗΣΙΑΣΤΙΚΗΣ ΠΕΡΙΟΥΣΙΑΣ
'613': ΕΜΠΟΡΙΚΟΙ ΑΝΤΙΠΡΟΣΩΠΟΙ
'614': ΕΝΩΣΕΙΣ ΕΦΕΔΡΩΝ ΑΞΙΩΜΑΤΙΚΩΝ
'615': ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΒΙΟΜΗΧΑΝΙΑΣ
'616': ΛΟΓΙΣΤΙΚΟ ΕΙΔΙΚΩΝ ΤΑΜΕΙΩΝ Ν.Π.Δ.Δ
'617': ΣΥΜΒΑΣΗ ΓΙΑ ΔΕΙΓΜΑΤΑ ΚΛΠ
'618': ΕΡΓΟΛΗΠΤΕΣ ΔΗΜΟΣΙΩΝ ΕΡΓΩΝ
'619': ΕΠΑΝΕΠΟΙΚΙΣΜΟΣ ΠΑΡΑΜΕΘΟΡΙΩΝ ΠΕΡΙΟΧΩΝ
'620': ΦΑΡΙΚΑ ΤΕΛΗ
'621': ΛΑΤΟΜΕΙΑ ΜΑΡΜΑΡΩΝ
'622': ΠΟΣΟΣΤΟ ΣΥΜΜΕΤΟΧΗΣ ΑΣΦΑΛΙΣΜΕΝΩΝ
'623': ΑΣΦΑΛΕΙΑ ΑΝΘΡΩΠΙΝΗΣ ΖΩΗΣ ΣΤΗ ΘΑΛΑΣΣΑ
'624': ΟΡΓΑΝΙΚΟΙ ΝΟΜΟΙ ΠΕΡΙ ΦΥΛΑΚΩΝ
'625': ΛΑΘΡΕΜΠΟΡΙΑ
'626': ΑΣΦΑΛΙΣΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΓΕΝΙΚΑ
'627': ΕΙΣΑΓΩΓΗ ΧΛΩΡΙΚΟΥ ΚΑΛΙΟΥ
'628': ΙΝΣΤΙΤΟΥΤΟ ΓΕΩΠΟΝΙΚΩΝ ΕΠΙΣΤΗΜΩΝ
'629': ΕΠΙΔΟΜΑ ΠΑΣΧΑ - ΧΡΙΣΤΟΥΓΕΝΝΩΝ
'630': ΓΕΩΡΓΙΚΟΙ ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΑΛΛΗΛΑΣΦΑΛΕΙΑΣ
'631': ΟΡΓΑΝΙΣΜΟΣ ΦΟΡΟΛΟΓΙΚΩΝ ΔΙΚΑΣΤΗΡΙΩΝ
'632': ΕΠΙΔΟΣΗ
'633': ΙΔΡΥΜΑ ΚΡΑΤΙΚΩΝ ΥΠΟΤΡΟΦΙΩΝ
'634': ΥΓΕΙΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ ΑΕΡΟΥΓΕΙΟΝΟΜΕΙΩΝ
'635': ΟΦΕΙΛΕΣ ΠΡΟΣ ΤΟ ΔΗΜΟΣΙΟ
'636': ΠΡΑΚΤΟΡΕΙΑ ΕΙΔΗΣΕΩΝ
'637': ΕΛΕΓΧΟΣ ΚΑΙ ΕΠΟΠΤΕΙΑ ΞΕΝΟΔΟΧΕΙΩΝ ΚΛΠ
'638': ΚΟΙΝΑ ΤΑΜΕΙΑ ΕΚΜΕΤΑΛΛΕΥΣΕΩΣ ΛΕΩΦΟΡΕΙΩΝ (Κ.Τ.Ε.Λ.)
'639': ΚΑΤΩΤΑΤΑ ΟΡΙΑ ΜΙΣΘΩΝ ΚΑΙ ΗΜΕΡΟΜΙΣΘΙΩΝ
'640': ΣΥΝΤΗΡΗΤΙΚΗ ΚΑΤΑΣΧΕΣΗ ΠΛΟΙΩΝ
'641': ΥΠΗΡΕΣΙΑ ΠΡΟΣΤΑΣΙΑΣ ΕΡΓΑΖΟΜΕΝΩΝ ΣΤΗΝ ΑΛΛΟΔΑΠΗ
'642': ΕΥΡΩΠΑΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΠΥΡΗΝΙΚΩΝ ΕΡΕΥΝΩΝ
'643': ΒΙΒΛΙΑ ΓΕΩΡΓΙΚΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ
'644': ΠΟΛΙΤΙΚΕΣ ΚΑΙ ΣΤΡΑΤΙΩΤΙΚΕΣ ΣΥΝΤΑΞΕΙΣ
'645': ΜΕΤΑΤΡΟΠΗ ΜΕΤΟΧΩΝ ΣΕ ΟΝΟΜΑΣΤΙΚΕΣ
'646': ΕΙΔΙΚΟΙ ΦΡΟΥΡΟΙ
'647': ΥΠΗΡΕΣΙΑ ΕΘΝΙΚΗΣ ΑΣΦΑΛΕΙΑΣ
'648': ΡΥΘΜΙΣΤΙΚΟΣ ΦΟΡΟΣ
'649': ΛΙΜΑΝΙ ΗΡΑΚΛΕΙΟΥ ΚΡΗΤΗΣ ΚΑΙ
'650': ΕΚΚΛΗΣΙΑΣΤΙΚΕΣ ΥΠΟΤΡΟΦΙΕΣ
'651': ΦΟΡΟΛΟΓΙΑ ΟΙΝΟΥ
'652': ΔΙΕΘΝΗΣ ΥΓΕΙΟΝΟΜΙΚΗ ΣΥΜΒΑΣΗ ΑΕΡΟΝΑΥΤΙΛΙΑΣ
'653': ΤΑΜΕΙΟ ΑΡΩΓΗΣ ΥΠΑΛΛΗΛΩΝ
'654': ΚΟΙΝΩΝΙΚΗ ΑΣΦΑΛΙΣΗ ΑΓΡΟΤΩΝ
'655': ΚΥΡΟΣ ΣΥΜΒΟΛΑΙΟΓΡΑΦΙΚΩΝ ΠΡΑΞΕΩΝ
'656': ΦΟΡΟΛΟΓΙΑ ΥΠΕΡΑΞΙΑΣ ΑΚΙΝΗΤΩΝ
'657': ΝΗΠΙΑΓΩΓΕΙΑ
'658': ΕΚΘΕΜΑΤΑ ΚΑΙ ΔΕΙΓΜΑΤΑ
'659': ΥΓΕΙΟΝΟΜΙΚΟ ΣΩΜΑ ΑΕΡΟΠΟΡΙΑΣ
'660': ΠΛΗΡΩΜΗ ΜΙΣΘΩΝ ΚΑΙ ΗΜΕΡΟΜΙΣΘΙΩΝ
'661': ΚΩΔΙΚΑΣ ΦΟΡΟΛΟΓΙΑΣ ΚΑΠΝΟΥ
'662': ΟΡΙΑ
'663': ΔΙΚΑΙΟΣΤΑΣΙΑ ΣΕΙΣΜΟΠΑΘΩΝ, ΠΥΡΟΠΑΘΩΝ, ΠΡΟΣΦΥΓΩΝ ΚΛΠ
'664': ΧΡΕΗ ΚΛΗΡΟΝΟΜΙΩΝ
'665': ΠΡΟΣΩΠΙΚΟΝ ΙΔΡΥΜΑΤΩΝ ΠΑΙΔΙΚΗΣ ΠΡΟΣΤΑΣΙΑΣ
'666': ΜΙΣΘΩΣΕΙΣ ΚΑΙ ΑΓΟΡΕΣ
'667': ΠΑΛΑΙΟΤΕΡΑΙ ΕΚΚΑΘΑΡΙΣΕΙΣ
'668': ΟΙΚΟΝΟΜΙΚΗ ΑΠΟΚΑΤΑΣΤΑΣΗ ΑΓΡΟΤΩΝ
'669': ΑΠΑΛΛΟΤΡΙΩΣΕΙΣ ΓΙΑ ΔΗΜΟΤΙΚΑ ΚΑΙ ΚΟΙΝΟΤΙΚΑ ΕΡΓΑ
'670': ΜΗΤΡΩΟ ΑΓΡΟΤΩΝ
'671': ΚΑΝΟΝΙΣΜΟΣ ΔΙΕΥΚΟΛΥΝΣΕΩΝ
'672': ΚΡΑΤΙΚΟ ΕΡΓΟΣΤΑΣΙΟ ΑΕΡΟΠΛΑΝΩΝ
'673': ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΕΝΔΕΙΚΤΙΚΑ
'674': ΑΥΘΑΙΡΕΤΕΣ ΚΑΤΑΣΚΕΥΕΣ
'675': ΕΓΚΑΤΑΛΕΛΕΙΜΜΕΝΕΣ ΕΚΤΑΣΕΙΣ
'676': ΥΠΟΥΡΓΕΙΟ ΔΗΜΟΣΙΩΝ ΄ΕΡΓΩΝ
'677': ΠΡΟΝΟΙΑ Β. ΕΛΛΑΔΟΣ
'678': ΔΙΚΑΣΤΙΚΟ ΕΝΣΗΜΟ - ΑΓΩΓΟΣΗΜΟ
'679': ΤΑΧΥΔΡΟΜΙΚΗ ΑΝΤΑΠΟΚΡΙΣΗ
'680': ΕΣΩΤΕΡΙΚΗ ΝΟΜΟΘΕΣΙΑ
'681': ΦΟΡΟΛΟΓΙΑ ΤΣΙΓΑΡΟΧΑΡΤΟΥ
'682': ΟΡΓΑΝΙΚΕΣ ΘΕΣΕΙΣ ΑΞΙΩΜΑΤΙΚΩΝ
'683': ΜΑΙΕΥΤΙΚΗ ΠΕΡΙΘΑΛΨΗ
'684': ΑΔΕΙΕΣ ΣΤΡΑΤΙΩΤΙΚΩΝ
'685': ΟΡΓΑΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΕΛΛΗΝΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ
'686': ΠΟΙΝΙΚΟΣ ΚΑΙ ΠΕΙΘΑΡΧΙΚΟΣ ΚΩΔΙΚΑΣ
'687': ΑΝΥΠΟΤΑΚΤΟΙ
'688': ΔΙΕΥΘΥΝΣΗ ΤΕΛΩΝΕΙΩΝ ΘΕΣΣΑΛΟΝΙΚΗΣ
'689': ΠΕΡΙΦΕΡΕΙΕΣ ΛΙΜΕΝΙΚΩΝ ΑΡΧΩΝ
'690': ΑΣΦΑΛΙΣΗ ΚΑΙ ΕΙΣΠΡΑΞΗ ΠΟΡΩΝ Τ.Ε.Β.Ε
'691': ΣΙΔΗΡΟΣ
'692': ΓΕΝΙΚΗ ΓΡΑΜΜΑΤΕΙΑ ΕΜΠΟΡΙΟΥ
'693': ΔΙΑΧΕΙΡΙΣΗ ΙΣΡΑΗΛΙΤΙΚΩΝ ΠΕΡΟΥΣΙΩΝ
'694': ΛΙΠΟΤΑΞΙΑ
'695': ΒΑΡΕΑ ΚΑΙ ΑΝΘΥΓΙΕΙΝΑ ΕΠΑΓΓΕΛΜΑΤΑ
'696': ΕΙΔΙΚΟ ΤΑΜΕΙΟ ΜΗΧΑΝΗΜΑΤΩΝ
'697': ΛΕΩΦΟΡΕΙΑ ΠΕΡΙΟΧΗΣ ΠΡΩΤΕΥΟΥΣΑΣ
'698': ΑΝΑΜΟΡΦΩΤΙΚΑ ΚΑΤΑΣΤΗΜΑΤΑ
'699': ΥΓΕΙΟΝΟΜΙΚΟ ΣΩΜΑ
'700': ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΥΡΓΕΙΟΥ ΕΡΓΑΣΙΑΣ
'701': ΔΙΩΡΥΓΑ ΚΟΡΙΝΘΟΥ
'702': ΠΕΡΙΘΑΛΨΗ ΦΥΜΑΤΙΚΩΝ ΑΣΦΑΛΙΣΜΕΝΩΝ
'703': ΚΟΙΝΩΝΙΚΟΣ ΕΛΕΓΧΟΣ ΔΙΟΙΚΗΣΗΣ - ΑΝΤΙΓΡΑΦΕΙΟΚΡΑΤΙΚΑ ΜΕΤΡΑ -ΕΚΚΑΘΑΡΙΣΗ
ΑΡΧΕΙΩΝ
'704': ΒΙΒΛΙΑ ΥΠΟΘΕΣΕΩΝ ΕΚΟΥΣΙΑΣ ΔΙΚΑΙΟΔΟΣΙΑΣ
'705': ΖΑΧΑΡΗ
'706': ΒΟΡΕΙΟΑΤΛΑΝΤΙΚΗ ΑΜΥΝΤΙΚΗ ΟΡΓΑΝΩΣΗ (Ν.Α.Τ.Ο)
'707': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΑΣ ΓΕΝΙΚΩΝ ΑΠΟΘΗΚΩΝ
'708': ΝΟΜΙΚΗ ΚΑΤΑΣΤΑΣΗ ΠΡΟΣΦΥΓΩΝ
'709': ΔΙΚΑΣΤΗΡΙΟ ΛΕΙΩΝ
'710': ΔΙΕΘΝΗΣ ΟΡΓΑΝΩΣΗ ΕΡΓΑΣΙΑΣ
'711': ΠΡΟΜΗΘΕΙΕΣ–ΜΙΣΘΩΣΕΙΣ–ΕΡΓΑ Ο.Γ.Α
'712': ΠΕΡΙΘΑΛΨΗ ΠΡΟΣΩΠΙΚΟΥ Ο.Γ.Α
'713': ΧΟΡΗΓΗΣΗ ΔΑΝΕΙΩΝ ΑΠΟ Τ.Π. ΚΑΙ ΔΑΝΕΙΩΝ
'714': ΤΕΛΟΣ ΕΠΙΤΗΔΕΥΜΑΤΟΣ
'715': ΕΛΕΥΘΕΡΑ ΤΕΛΩΝΕΙΑΚΑ ΣΥΓΚΡΟΤΗΜΑΤΑ
'716': ΦΟΡΟΛΟΓΙΚΑ ΚΙΝΗΤΡΑ ΣΥΓΧΩΝΕΥΣΕΩΣ Η ΜΕΤΑΤΡΟΠΗΣ ΕΠΙΧΕΙΡΗΣΕΩΝ
'717': ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ T.E.B.E
'718': ΝΑΥΤΙΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ
'719': ΠΡΟΣΩΠΙΚΟ Υ.Ε.Ν
'720': ΛΕΙΤΟΥΡΓΟΙ ΜΕΣΗΣ ΕΚΠΑΙΔΕΥΣΗΣ
'721': ΚΟΙΝΟΠΡΑΞΙΑ ΓΕΩΡΓΙΚΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ
'722': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΕΠΙΧΕΙΡΗΜΑΤΙΩΝ ΚΙΝΗΜΑΤΟΓΡΑΦΟΥ
'723': ΒΟΣΚΟΤΟΠΟΙ
'724': ΕΠΙΤΟΚΙΑ ΤΡΑΠΕΖΩΝ
'725': ΚΑΠΝΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ
'726': ΣΤΑΘΜΟΙ ΑΥΤΟΚΙΝΗΤΩΝ
'727': ΕΥΛΟΓΙΑ
'728': ΠΕΡΙΦΕΡΕΙΑΚΕΣ ΥΠΗΡΕΣΙΕΣ ΥΠΟΥΡΓΕΙΟΥ ΒΙΟΜΗΧΑΝΙΑΣ
'729': ΤΑΜΕΙΟ ΑΕΡΟΠΟΡΙΚΗΣ ΑΜΥΝΑΣ
'730': ΟΡΓΑΝΙΣΜΟΣ ΚΕΝΤΡΙΚΗΣ ΥΠΗΡΕΣΙΑΣ
'731': ΤΑΜΕΙΟ ΕΡΓΑΣΙΑΣ ΗΘΟΠΟΙΩΝ
'732': ΤΕΛΩΝΙΣΜΟΣ ΕΙΔΩΝ ΑΤΟΜΙΚΗΣ ΧΡΗΣΕΩΣ
'733': ΦΟΡΟΛΟΓΙΑ ΠΡΟΣΟΔΟΥ ΑΠΟ ΠΛΟΙΑ
'734': ΔΙΟΙΚΗΤΙΚΗ ΔΙΑΙΡΕΣΗΣ
'735': ΟΡΓΑΝΙΣΜΟΣ ΑΥΤΟΚΙΝΗΤΟΔΡΟΜΙΩΝ ΕΛΛΑΔΟΣ (Ο.Α.Ε.)
'736': ΕΘΝΙΚΟ ΚΕΝΤΡΟ ΑΜΕΣΗΣ ΒΟΗΘΕΙΑΣ (Ε.Κ.Α.Β.)
'737': ΓΝΩΜΟΔΟΤΙΚΟ ΣΥΜΒΟΥΛΙΟ ΟΙΚΟΝΟΜΙΚΗΣ ΑΝΑΠΤΥΞΗΣ
'738': ΔΙΑΘΗΚΗ
'739': ΑΓΩΓΕΣ ΔΙΑΤΡΟΦΗΣ
'740': ΦΑΡΜΑΚΕΥΤΙΚΟΙ ΣΥΛΛΟΓΟΙ
'741': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΚΑΙ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΓΕΩΡΓΙΚΩΝ
ΣΥΝΕΤΑΙΡΙΣΤΙΚΩΝ ΟΡΓΑΝΩΣΕΩΝ (Τ.Σ.Ε.Α.Π.Γ.Σ.Ο)
'742': ΕΠΙΔΟΜΑΤΑ ΔΙΑΦΟΡΑ
'743': ΠΕΙΘΑΡΧΙΚΟ ΔΙΚΑΙΟ
'744': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΧΗΜΙΚΩΝ (Τ.Ε.Α.Χ)
'745': ΠΡΟΑΓΩΓΕΣ ΚΑΙ ΠΡΟΣΟΝΤΑ ΠΥΡΟΣΒΕΣΤΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ
'746': ΟΔΟΙΠΟΡΙΚΑ ΕΞΟΔΑ ΠΡΟΣΩΠΙΚΟΥ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ
'747': ΝΟΣΗΛΕΥΤΙΚΑ ΙΔΡΥΜΑΤΑ ΚΑΤ’ ΙΔΙΑΝ
'748': ΠΡΟΣΤΑΣΙΑ ΚΑΤΑ ΤΗΣ ΦΥΛΛΟΞΗΡΑΣ
'749': ΟΡΓΑΝΙΣΜΟΣ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ
'750': ΠΡΑΤΗΡΙΑ ΥΓΡΩΝ ΚΑΥΣΙΜΩΝ
'751': ΘΡΗΣΚΕΥΤΙΚΟ ΣΩΜΑ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'752': ΔΙΑΔΙΚΑΣΙΑ ΑΝΑΓΚΑΣΤΙΚΩΝ ΑΠΑΛΛΟΤΡΙΩΣΕΩΝ ΑΚΙΝΗΤΩΝ
'753': ΔΙΕΡΜΗΝΕΙΣ
'754': ΣΧΕΔΙΑ ΑΛΛΩΝ ΠΟΛΕΩΝ
'755': ΤΑΜΕΙΟ ΑΛΛΗΛΟΒΟΗΘΕΙΑΣ ΣΤΡΑΤΙΩΤΙΚΩΝ ΑΕΡΟΠΟΡΙΑΣ
'756': ΗΜΕΡΟΛΟΓΙΟ ΜΗΧΑΝΗΣ
'757': ΚΕΝΤΡΟ ΕΛΛΗΝΙΚΗΣ ΓΛΩΣΣΑΣ
'758': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΣΕ ΑΡΤΟΠΟΙΕΙΑ
'759': ΓΕΝΙΚΗ ΓΡΑΜΜΑΤΕΙΑ
'760': ΜΕΤΑΦΡΑΣΤΙΚΑ ΓΡΑΦΕΙΑ
'761': ΠΡΟΔΙΑΓΡΑΦΕΣ ΜΕΛΕΤΩΝ
'762': ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΕΘΝΙΚΗΣ
'763': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΣΥΜΒΟΛΑΙΟΓΡΑΦΩΝ
'764': ΙΑΤΡΟΔΙΚΑΣΤΙΚΗ ΑΜΟΙΒΗ
'765': ΕΦΟΡΙΕΣ ΚΑΠΝΟΥ – ΚΑΠΝΕΡΓΟΣΤΑΣΙΑ
'766': ΠΟΙΜΝΙΟΣΤΑΣΙΑ
'767': ΚΕΝΤΡΑ ΕΡΕΥΝΑΣ - ΕΡΕΥΝΗΤΙΚΑ ΙΝΣΤΙΤΟΥΤΑ
'768': ΤΑΜΕΙΑ ΠΡΟΝΟΙΑΣ ΔΙΚΗΓΟΡΩΝ
'769': ΟΙΝΟΠΑΡΑΓΩΓΗ ΣΑΜΟΥ
'770': ΙΜΑΤΙΣΜΟΣ Π. ΝΑΥΤΙΚΟΥ
'771': ΜΗΧΑΝΙΚΟΙ,ΑΡΧΙΤΕΚΤΟΝΕΣ,ΤΟΠΟΓΡΑΦΟΙ
'772': ΠΑΝΤΕΙΟ ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΟΙΝΩΝΙΚΩΝ ΚΑΙ ΠΟΛΙΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ
'773': ΝΕΟΙ ΧΡΗΜΑΤΟΠΙΣΤΩΤΙΚΟΙ ΘΕΣΜΟΙ
'774': ΥΠΗΡΕΣΙΑ ΠΟΛΙΤΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'775': ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΘΗΚΟΦΥΛΑΚΕΙΩΝ
'776': ΑΤΥΧΗΜΑΤΑ ΣΕ ΔΗΜΟΣΙΑ ΕΡΓΑ
'777': ΑΡΕΙΟΣ ΠΑΓΟΣ
'778': ΥΠΑΓΩΓΗ ΣΕ ΑΣΦΑΛΙΣΗ ΚΑΙ
'779': ΔΙΕΘΝΕΙΣ ΣΙΔΗΡΟΔΡΟΜΙΚΕΣ ΜΕΤΑΦΟΡΕΣΔΙΕΥΡΩΠΑΙΚΟ ΣΙΔΗΡΟΔΡΟΜΙΚΟ ΣΥΣΤΗΜΑ
'780': ΟΙΚΟΝΟΜΙΚΗ ΕΠΙΘΕΩΡΗΣΗ Π. ΝΑΥΤΙΚΟΥ
'781': ΑΝΑΠΤΥΞΙΑΚΗ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΗ ΠΟΛΙΤΙΚΗ
'782': ΒΕΒΑΙΩΣΗ ΚΑΙ ΕΙΣΠΡΑΞΗ ΠΟΙΝΙΚΩΝ ΕΞΟΔΩΝ
'783': ΝΑΥΤΙΚΟ ΧΗΜΕΙΟ
'784': ΛΑΧΕΙΑ
'785': ΤΡΟΧΙΟΔΡΟΜΟΙ ΑΘΗΝΩΝ – ΠΕΙΡΑΙΩΣ
'786': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΩΝ ΛΙΠΑΣΜΑΤΩΝ ΤΑ.Π.Π.Ε.Λ
'787': ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΓΙΑ ΑΝΟΙΚΟΔΟΜΗΣΗ
'788': ΑΓΟΡΑΠΩΛΗΣΙΑ ΚΑΠΝΟΥ
'789': ΠΕΡΙ ΟΡΩΝ ΕΡΓΑΣΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΔΙΕΘΝΩΝ ΜΕΤΑΦΟΡΩΝ
'790': ΑΛΙΕΥΤΙΚΟΣ ΚΩΔΙΚΑΣ
'791': ΣΥΜΒΟΥΛΙΑ ΚΑΙ ΕΠΙΤΡΟΠΕΣ
'792': ΠΕΡΙΦΕΡΕΙΑΚΕΣ ΥΠΗΡΕΣΙΕΣ ΥΠΟΥΡΓΕΙΟΥ ΟΙΚΟΝΟΜΙΚΩΝ
'793': ΣΥΜΒΑΣΕΙΣ ΠΕΡΙ ΑΣΕΜΝΩΝ ΔΗΜΟΣΙΕΥΜΑΤΩΝ
'794': ΓΕΩΡΓΙΚΟΙ ΣΤΑΘΜΟΙ
'795': ΝΑΞΙΩΤΙΚΗ ΣΜΥΡΙΔΑ
'796': ΑΝΑΣΤΟΛΗ ΠΡΟΣΕΛΕΥΣΕΩΣ ΕΦΕΔΡΩΝ
'797': ΕΚΠΑΙΔΕΥΣΗ ΧΩΡΟΦΥΛΑΚΗΣ
'798': ΑΣΦΑΛΙΣΗ ΕΞΑΓΩΓΙΚΩΝ ΠΙΣΤΩΣΕΩΝ
'799': ΘΕΡΑΠΑΙΝΙΔΕΣ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ
'800': ΕΠΙΤΡΟΠΗ ΑΤΟΜΙΚΗΣ ΕΝΕΡΓΕΙΑΣ
'801': ΚΑΝΟΝΙΣΜΟΣ ΑΣΤΥΝΟΜΙΑΣ ΠΟΛΕΩΝ
'802': ΦΥΛΛΑ ΠΟΙΟΤΗΤΑΣ ΥΠΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'803': ΕΠΙΘΕΩΡΗΣΕΙΣ ΚΤΗΝΙΑΤΡΙΚΗΣ
'804': ΜΕΡΙΚΗ ΑΠΑΣΧΟΛΗΣΗ - ΦΑΣΟΝ - ΤΗΛΕΡΓΑΣΙΑ ΚΑΤ’ ΟΙΚΟΝ ΑΠΑΣΧΟΛΗΣΗ
'805': ΗΛΕΚΤΡΙΚΗ ΕΤΑΙΡΕΙΑ ΑΘΗΝΩΝ - ΠΕΙΡΑΙΩΣ
'806': ΠΡΟΚΑΤΑΣΚΕΥΑΣΜΕΝΑΙ ΟΙΚΙΑΙ
'807': ΤΡΑΠΕΖΑ ΤΗΣ ΕΛΛΑΔΟΣ
'808': ΣΥΜΦΩΝΙΕΣ ΠΡΟΣΤΑΣΙΑΣ ΤΟΥ ΠΕΡΙΒΑΛΛΟΝΤΟΣ
'809': ΛΙΓΝΙΤΗΣ
'810': ΤΑΜΕΙΟ ΕΠΑΓΓΕΛΜΑΤΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΠΡΟΣΩΠΙΚΟΥ ΕΛΤΑ
'811': ΜΕΛΕΤΕΣ ΤΕΧΝΙΚΩΝ ΕΡΓΩΝ
'812': ΠΛΗΡΩΜΑΤΑ ΑΕΡΟΣΚΑΦΩΝ
'813': ΕΞΑΓΩΓΗ ΣΤΑΦΙΔΑΣ
'814': ΤΑΜΕΙΟΝ ΠΡΟΝΟΙΑΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'815': ΔΙΑΧΕΙΡΙΣΗ ΠΕΡΙΟΥΣΙΑΣ
'816': ΟΡΓΑΝΙΚΟΙ ΝΟΜΟΙ
'817': ΥΠΗΡΕΣΙΕΣ ΑΙΜΟΔΟΣΙΑΣ
'818': ΣΩΜΑΤΕΙΑ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'819': ΠΕΖΟΔΡΟΜΙΑ
'820': ΔΙΑΘΕΣΗ ΑΠΟΡΡΙΜΜΑΤΩΝ
'821': ΤΡΟΧΙΟΔΡΟΜΟΙ ΘΕΣΣΑΛΟΝΙΚΗΣ
'822': ΓΕΝΙΚΗ ΔΙΕΥΘΥΝΣΗ ΔΗΜΟΣΙΟΥ ΛΟΓΙΣΤΙΚΟΥ
'823': ΡΥΜΟΥΛΚΑ - ΛΑΝΤΖΕΣ
'824': ΠΕΤΡΕΛΑΙΟΕΙΔΗ
'825': ΓΕΝΙΚΑ ΑΡΧΕΙΑ ΤΟΥ ΚΡΑΤΟΥΣ
'826': ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ Ο.Τ.Ε. - ΣΧΕΣΕΙΣ Ο.Τ.Ε. ΜΕ ΑΛΛΟΥΣ ΠΑΡΟΧΟΥΣ
'827': ΥΠΗΡΕΣΙΑ ΑΥΤΟΚΙΝΗΤΩΝ
'828': ΑΚΑΔΗΜΙΑ ΑΘΗΝΩΝ
'829': ΜΟΝΟΠΩΛΙΟ ΖΑΧΑΡΙΝΗΣ
'830': ΟΙΚΙΣΤΙΚΕΣ ΠΕΡΙΟΧΕΣ
'831': ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΑΛΙΕΙΑΣ
'832': ΔΙΑΦΟΡΕΣ ΕΚΤΑΚΤΕΣ ΦΟΡΟΛΟΓΙΕΣ
'833': ΒΙΒΛΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'834': ΕΡΓΑΤΙΚΑ ΑΤΥΧΗΜΑΤΑ
'835': ΝΟΣΗΛΕΥΤΕΣ
'836': ΣΥΝΔΙΚΑΛΙΣΤΙΚΕΣ ΕΛΕΥΘΕΡΙΕΣ
'837': ΕΘΝΙΚΟ ΣΥΜΒΟΥΛΙΟ ΕΝΕΡΓΕΙΑΣ
'838': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΕΡΓΑΤΟΤΕΧΝΙΤΩΝ ΥΑΛΟΥΡΓΩΝ
'839': ΑΓΩΓΕΣ ΑΣΦΑΛΙΣΤΡΩΝ
'840': ΣΩΜΑΤΕΜΠΟΡΙΑ ΓΥΝΑΙΚΩΝ
'841': ΑΤΕΛΕΙΕΣ ΕΡΓΩΝ ΑΜΥΝΤΙΚΟΥ ΠΡΟΓΡΑΜΜΑΤΟΣ
'842': ΤΕΧΝΙΚΗ ΕΚΠΑΙΔΕΥΣΗ ΑΞΙΩΜΑΤΙΚΩΝ ΣΕ ΑΝΩΤΑΤΕΣ ΣΧΟΛΕΣ
'843': ΔΙΚΑΙΩΜΑΤΑ ΚΗΡΥΚΩΝ ΚΛΠ
'844': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ
'845': ΝΑΥΤΕΣ ΚΑΙ ΛΙΜΕΝΟΦΥΛΑΚΕΣ
'846': ΠΑΝΕΠΙΣΤΗΜΙΑΚΗ ΣΧΟΛΗ ΑΓΡΙΝΙΟΥ
'847': ΠΟΛΥΤΕΧΝΙΚΗ ΣΧΟΛΗ
'848': ΜΕΙΩΣΗ ΕΙΣΦΟΡΩΝ
'849': ΚΕΝΤΡΑ ΛΗΨΕΩΣ ΤΙΜΩΝ ΣΦΑΓΕΙΩΝ
'850': ΑΠΟΔΗΜΙΑ ΣΤΡΑΤΕΥΣΙΜΩΝ
'851': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΝΟΙΑΣ ΚΑΙ ΚΟΙΝΗΣ ΔΙΑΝΟΜΗΣ ΠΩΛΗΤΩΝ
ΒΕΝΖΙΝΗΣ ΑΘΗΝΩΝ - ΠΕΙΡΑΙΩΣ ΚΑΙ ΠΕΡΙΧΩΡΩΝ
'852': ΙΑΤΡΟΦΑΡΜΑΚΕΥΤΙΚΗ ΠΕΡΙΘΑΛΨΗ
'853': ΝΟΣΗΛΕΥΤΙΚΑ ΙΔΡΥΜΑΤΑ
'854': ΓΕΝΙΚΑ ΠΕΡΙ ΜΟΥΣΕΙΩΝ
'855': ΑΣΦΑΛΕΙΑ ΟΧΥΡΩΝ ΘΕΣΕΩΝ
'856': ΓΕΩΡΓΙΚΑ ΜΗΧΑΝΗΜΑΤΑ
'857': ΤΑΜΕΙΑ ΣΥΝΕΡΓΑΣΙΑΣ
'858': ΙΔΙΩΤΙΚΕΣ ΚΛΙΝΙΚΕΣ ΚΑΙ ΕΡΓΑΣΤΗΡΙΑ
'859': ΥΓΕΙΟΝΟΜΙΚΗ ΕΞΕΤΑΣΗ ΙΠΤΑΜΕΝΩΝ
'860': ΔΙΑΦΟΡΕΣ ΑΕΡΟΠΟΡΙΚΕΣ ΣΧΟΛΕΣ
'861': ΓΥΝΑΙΚΕΣ ΝΟΣΟΚΟΜΟΙ
'862': ΦΟΙΤΗΣΗ, ΒΑΘΜΟΛΟΓΙΑ, ΕΞΕΤΑΣΕΙΣ ΚΛΠ. Α.Σ.Κ.Τ
'863': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΔΙΑΦΟΡΟΙ
'864': ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΥΡΓΕΙΟΥ ΓΕΩΡΓΙΑΣ
'865': ΚΩΔΙΚΟΠΟΙΗΣΗ ΤΗΣ ΝΟΜΟΘΕΣΙΑΣ
'866': ΜΕΤΑ ΤΗΣ ΓΑΛΛΙΑΣ
'867': ΓΕΩΓΡΑΦΙΚΗ ΥΠΗΡΕΣΙΑ ΣΤΡΑΤΟΥ
'868': ΕΙΔΗ ΠΑΡΑΔΙΔΟΜΕΝΑ ΣΤΗΝ ΕΛΕΥΘΕΡΗ ΧΡΗΣΗ
'869': ΜΟΝΟΠΩΛΙΟ ΣΠΙΡΤΩΝ
'870': ΚΑΤΑΣΤΑΤΙΚΟΝ Τ.Α.Κ.Ε
'871': ΕΠΙΚΟΥΡΙΚΟ ΤΑΜΕΙΟ ΥΠΑΛΛΗΛΩΝ ΑΣΤΥΝΟΜΙΑΣ ΠΟΛΕΩΝ (Ε.Τ.Υ.Α.Π.)
'872': ΜΙΣΘΟΔΟΣΙΑ ΙΕΡΕΩΝ – ΕΝΟΡΙΑΚΗ ΕΙΣΦΟΡΑ
'873': ΥΓΕΙΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'874': ΝΟΜΟΣ ΠΕΡΙ ΚΤΗΜΑΤΙΚΩΝ ΤΡΑΠΕΖΩΝ
'875': ΔΙΕΘΝΗΣ ΣΥΜΒΑΣΗ ΠΕΡΙ ΥΔΡΑΥΛΙΚΩΝ ΔΥΝΑΜΕΩΝ
'876': ΑΝΑΠΗΡΟΙ ΑΞΙΩΜΑΤΙΚΟΙ ΚΑΙ ΟΠΛΙΤΕΣ ΕΙΡΗΝΙΚΗΣ ΠΕΡΙΟΔΟΥ
'877': ΠΟΙΝΙΚΗ ΚΑΙ ΠΕΙΘΑΡΧΙΚΗ ΔΩΣΙΔΙΚΙΑ Λ.Σ
'878': ΔΑΣΙΚΟ ΠΡΟΣΩΠΙΚΟ
'879': ΑΟΠΛΗ ΘΗΤΕΙΑ-ΑΝΤΙΡΡΗΣΙΕΣ ΣΥΝΕΙΔΗΣΗΣ
'880': ΝΕΟΙ ΠΡΟΣΦΥΓΕΣ
'881': ΤΕΧΝΙΚΕΣ ΥΠΗΡΕΣΙΕΣ ΣΤΡΑΤΟΥ
'882': ΜΕΤΟΧΙΚΟ ΤΑΜΕΙΟ ΠΟΛΙΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ
'883': ΠΡΟΣΩΠΙΚΟ ΙΔΙΩΤΙΚΟΥ ΔΙΚΑΙΟΥ
'884': ΚΩΔΙΚΑΣ ΑΓΡΟΤΙΚΗΣ ΑΣΦΑΛΕΙΑΣ
'885': ΟΡΓΑΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΑΠΟΣΤΟΛΙΚΗΣ ΔΙΑΚΟΝΙΑΣ
'886': ΥΠΟΥΡΓΕΙΟ ΑΙΓΑΙΟΥ
'887': ΓΑΜΟΙ ΔΩΔΕΚΑΝΗΣΟΥ
'888': ΩΡΕΣ ΕΡΓΑΣΙΑΣ ΚΡΕΟΠΩΛΕΙΩΝ
'889': ΚΩΔΙΚΑΣ ΤΕΛΩΝ ΧΑΡΤΟΣΗΜΟΥ
'890': ΔΕΛΤΙΟ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ
'891': ΑΡΜΟΔΙΟΤΗΤΑ ΝΟΜΑΡΧΗ ΣΕ ΕΡΓΑΤΙΚΑ ΖΗΤΗΜΑΤΑ
'892': ΤΡΟΦΟΔΟΣΙΑ Π. ΝΑΥΤΙΚΟΥ
'893': ΣΥΜΦΩΝΙΑ ΠΕΡΙ ΔΙΠΛΩΜΑΤΙΚΩΝ ΣΧΕΣΕΩΝ
'894': ΕΦΕΔΡΟΙ ΚΑΙ ΕΠΙΚΟΥΡΟΙ ΑΞΙΩΜΑΤΙΚΟΙ Π.Ν
'895': ΤΟΥΡΙΣΤΙΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ
'896': ΔΙΕΘΝΕΣ ΠΟΙΝΙΚΟ ΔΙΚΑΣΤΗΡΙΟ
'897': ΔΙΟΙΚΗΤΙΚΕΣ ΠΡΑΞΕΙΣ
'898': ΝΟΣΟΚΟΜΕΙΑ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'899': ΣΥΜΒΟΥΛΙΟ ΧΑΛΥΒΑ
'900': ΤΕΜΑΧΙΣΜΟΣ ΚΡΕΑΤΩΝ
'901': ΕΛΕΓΧΟΣ ΚΑΤΟΧΗΣ ΟΠΛΩΝ
'902': ΑΝΑΠΡΟΣΑΡΜΟΓΕΣ ΤΗΣ ΔΡΑΧΜΗΣ
'903': ΕΦΟΔΙΑΣΜΟΣ ΠΛΟΙΩΝ
'904': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΙΟΝΙΩΝ ΝΗΣΩΝ
'905': ΔΗΜΟΣΙΑ ΕΠΙΧΕΙΡΗΣΗ ΚΙΝΗΤΩΝ ΑΞΙΩΝ ΑΝΩΝΥΜΗ ΕΤΑΙΡΕΙΑ (Δ.Ε.Κ.Α. Α.Ε.)
'906': ΕΤΑΙΡΕΙΑ – ΕΥΡΩΠΑΙΚΟΣ ΟΜΙΛΟΣ
'907': ΔΙΕΥΘΥΝΣΗ ΑΛΙΕΙΑΣ
'908': ΕΠΙΜΕΛΗΤΗΡΙΟ ΤΟΥΡΙΣΤΙΚΩΝ ΚΑΤΑΣΤΗΜΑΤΩΝ
'909': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΕΛΑΙΟΛΑΔΟΥ
'910': ΠΤΗΤΙΚΗ ΙΚΑΝΟΤΗΤΑ
'911': ΕΚΚΛΗΣΙΑΣΤΙΚΕΣ ΣΧΟΛΕΣ
'912': ΔΙΑΤΙΜΗΣΗ ΙΑΤΡΙΚΩΝ ΠΡΑΞΕΩΝ
'913': ΑΔΙΚΗΜΑΤΑ ΤΥΠΟΥ
'914': ΕΞΑΝΘΗΜΑΤΙΚΟΣ ΤΥΦΟΣ
'915': ΟΙΚΟΣ ΝΑΥΤΟΥ
'916': ΜΑΣΤΙΧΑ
'917': ΣΥΛΛΟΓΟΙ ΚΑΙ ΟΜΟΣΠΟΝΔΙΑ ΔΙΚΑΣΤΙΚΩΝ ΕΠΙΜΕΛΗΤΩΝ
'918': ΕΜΠΟΡΙΚΑ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΑ ΣΗΜΑΤΑ
'919': ΟΡΓΑΝΩΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ ΑΝΩΤΑΤΩΝ ΕΚΠΑΙΔΕΥΤΙΚΩΝ ΙΔΡΥΜΑΤΩΝ
'920': ΥΓΕΙΟΝΟΜΙΚΗ ΑΠΟΘΗΚΗ
'921': ΓΕΝ. ΔΙΕΥΘΥΝΣΗ ΠΟΙΝΙΚΗΣ ΔΙΚΑΙΟΣΥΝΗΣ
'922': ΑΕΡΟΠΟΡΙΚΟ ΔΙΚΑΙΟ
'923': ΜΕΛΕΤΗ ΚΑΙ ΕΠΙΒΛΕΨΗ ΜΗΧΑΝΟΛΟΓΙΚΩΝ ΕΓΚΑΤΑΣΤΑΣΕΩΝ
'924': ΑΘΕΜΙΤΟΣ ΑΝΤΑΓΩΝΙΣΜΟΣ
'925': ΠΟΛΕΜΙΚΗ ΔΙΑΘΕΣΙΜΟΤΗΤΑ
'926': ΛΕΣΧΕΣ ΚΑΙ ΠΡΑΤΗΡΙΑ ΕΛ.ΑΣ
'927': ΚΑΥΣΙΜΑ
'928': ΥΓΕΙΟΝΟΜΙΚΑ ΜΕΤΡΑ
'929': ΚΑΤΑΣΤΑΣΗ ΑΞΙΩΜΑΤΙΚΩΝ
'930': ΕΙΣΠΡΑΞΗ ΠΟΡΩΝ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ
'931': ΔΙΟΙΚΗΤΙΚΗ ΡΥΘΜΙΣΗ ΑΠΟΔΟΧΩΝ ΚΑΙ ΟΡΩΝ ΕΡΓΑΣΙΑΣ
'932': ΓΕΝΙΚΗ ΔΙΕΥΘΥΝΣΗ ΤΑΧΥΔΡΟΜΕΙΩΝ
'933': ΟΡΓΑΝΙΣΜΟΣ ΛΙΜΕΝΟΣ ΘΕΣΣΑΛΟΝΙΚΗΣ ΑΝΩΝΥΜΗ ΕΤΑΙΡΙΑ (Ο.Λ.Θ. Α.Ε.)
'934': ΣΧΟΛΗ ΕΘΝΙΚΗΣ ΑΜΥΝΑΣ
'935': ΚΑΘΟΛΙΚΟΙ
'936': ΕΚΚΛΗΣΙΑΣΤΙΚΑ ΜΟΥΣΕΙΑ
'937': ΔΙΕΘΝΗΣ ΕΚΘΕΣΗ ΘΕΣΣΑΛΟΝΙΚΗΣ Α.Ε. – XELEXPO Α.Ε
'938': ΕΥΕΡΓΕΤΙΚΟΣ ΥΠΟΛΟΓΙΣΜΟΣ ΗΜΕΡΩΝ ΕΡΓΑΣΙΑΣ
'939': ΕΙΣΦΟΡΑ ΕΠΑΓΓΕΛΜΑΤΙΚΟΥ ΚΙΝΔΥΝΟΥ
'940': ΑΠΑΛΛΟΤΡΙΩΣΕΙΣ ΓΙΑ ΤΟΥΡΙΣΤΙΚΟΥΣ ΣΚΟΠΟΥΣ
'941': ΑΠΟΛΥΜΑΝΤΗΡΙΑ
'942': ΕΚΠΟΙΗΣΗ ΠΛΟΙΩΝ ΔΗΜΟΣΙΟΥ
'943': ΔΙΑΚΟΝΟΙ
'944': ΥΔΡΕΥΣΗ ΔΙΑΦΟΡΩΝ ΠΟΛΕΩΝ
'945': ΠΡΩΤΕΣ ΥΛΕΣ ΚΛΩΣΤΟΥΦΑΝΤΟΥΡΓΙΑΣ
'946': ΨΕΥΔΗΣ ΒΕΒΑΙΩΣΗ ΕΝΩΠΙΟΝ ΑΡΧΗΣ
'947': ΑΠΩΛΕΣΘΕΙΣΕΣ ΚΑΙ ΠΑΡΑΓΡΑΦΕΙΣΕΣ ΑΞΙΕΣ
'948': ΦΟΙΤΗΤΙΚΗ ΛΕΣΧΗ
'949': ΤΑΜΕΙΟ ΥΓΕΙΑΣ ΤΑΧΥΔΡΟΜΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ
'950': ΕΛΕΓΧΟΣ ΔΕΝΔΡΩΔΩΝ ΚΑΛΛΙΕΡΓΕΙΩΝ
'951': ΚΑΤΑΠΟΛΕΜΗΣΗ ΑΝΑΛΦΑΒΗΤΙΣΜΟΥΛΑΙΚΗ ΕΠΙΜΟΡΦΩΣΗ
'952': ΕΠΙΚΟΥΡΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΜΕΤΑΦΟΡΩΝ
'953': ΦΟΙΤΗΤΙΚΕΣ ΛΕΣΧΕΣ
'954': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΗΝ ΠΡΟΣΤΑΣΙΑ ΤΩΝ ΕΡΓΑΖΟΜΕΝΩΝ ΓΥΝΑΙΚΩΝ
'955': ΛΗΣΤΕΙΑ
'956': ΑΓΩΓΕΣ ΑΠΟ ΣΥΝΑΛΛΑΓΜΑΤΙΚΕΣ ΚΑΙ ΓΡΑΜΜΑΤΙΑ
'957': ΕΚΜΙΣΘΩΣΗ ΔΗΜΟΣΙΩΝ ΜΕΤΑΛΛΕΙΩΝ
'958': ΚΟΛΥΜΒΗΤΙΚΕΣ ΔΕΞΑΜΕΝΕΣ
'959': ΕΡΑΝΟΙ ΚΑΙ ΛΑΧΕΙΟΦΟΡΟΙ Η ΦΙΛΑΝΘΡΩΠΙΚΕΣ ΑΓΟΡΕΣ
'960': ΠΡΟΣΤΑΣΙΑ ΕΠΙΒΑΤΗΓΟΥ ΝΑΥΤΙΛΙΑΣ
'961': ΓΕΝΙΚΟΙ ΝΟΜΟΙ ΠΕΡΙ ΞΕΝΟΔΟΧΕΙΩΝ-ΕΠΙΠΛ. ΔΩΜΑΤΙΩΝ ΚΛΠ
'962': ΙΕΡΑΡΧΙΑ ΚΑΙ ΠΡΟΑΓΩΓΕΣ ΑΞΙΩΜΑΤΙΚΩΝ
'963': ΣΥΝΕΡΓΑΤΕΣ (ΓΡΑΜΜΑΤΕΙΣ) ΒΟΥΛΕΥΤΩΝ-ΕΥΡΩΒΟΥΛΕΥΤΩΝ
'964': ΣΧΟΛΗ ΙΚΑΡΩΝ
'965': ΟΡΓΑΝΙΣΜΟΣ ΣΙΔΗΡΟΔΡΟΜΩΝ ΕΛΛΑΔΟΣ (Ο.Σ.Ε.)ΣΙΔΗΡΟΔΡΟΜΙΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ
'966': ΥΓΕΙΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ ΚΑΤΑ ΘΑΛΑΣΣΑΝ ΚΑΙ ΚΑΤΑ ΞΗΡΑΝ
'967': ΚΑΝΟΝΙΣΜΟΣ ΜΕΤΑΛΛΕΥΤΙΚΩΝ ΕΡΓΑΣΙΩΝ
'968': ΑΠΟΦΥΓΗ ΣΥΓΚΡΟΥΣΕΩΝ
'969': ΤΟΜΑΤΟΠΑΡΑΓΩΓΗ
'970': ΔΙΑΦΟΡΕΣ ΔΙΑΤΑΞΕΙΣ ΓΙΑ ΤΑ ΑΥΤΟΚΙΝΗΤΑ
'971': ΚΑΤΑΤΑΞΗ ΓΥΝΑΙΚΩΝ ΣΤΟ Λ.Σ
'972': ΕΤΑΙΡΕΙΕΣ ΔΙΟΙΚΟΥΜΕΝΕΣ ΑΠΟ ΤΟΥΣ ΠΙΣΤΩΤΕΣ
'973': ΒΑΛΚΑΝΙΚΕΣ ΣΥΜΦΩΝΙΕΣ
'974': ΜΕΤΑΦΟΡΑ ΣΥΝΤΕΛΕΣΤΗ ΔΟΜΗΣΗΣ
'975': ΠΡΟΜΗΘΕΥΤΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ Π.Ν
'976': ΠΡΟΣΩΠΙΚΟ ΦΑΡΜΑΚΕΙΩΝ
'977': ΔΙΔΑΣΚΟΜΕΝΑ ΜΑΘΗΜΑΤΑ
'978': ΕΚΛΟΓΗ ΒΟΥΛΕΥΤΩΝ - ΕΥΡΩΒΟΥΛΕΥΤΩΝ
'979': ΦΑΡΜΑΚΟΠΟΙΟΙ
'980': ΣΤΡΑΤΙΩΤΙΚΑ ΠΡΑΤΗΡΙΑ
'981': ΚΑΡΚΙΝΟΣ
'982': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ Α.Ε. ΟΙΝΟΠΟΙΙΑΣ, ΖΥΘΟΠΟΙΙΑΣ
ΚΑΙ ΟΙΝΟΠΝΕΥΜΑΤΟΠΟΙΙΑΣ
'983': ΧΕΙΡΙΣΤΕΣ ΑΣΥΡΜΑΤΟΥ
'984': ΠΟΛΙΤΙΚΗ ΕΠΙΣΤΡΑΤΕΥΣΗ-ΠΑΛΛΑΙΚΗ ΑΜΥΝΑ
'985': ΟΡΓΑΝΙΣΜΟΙ ΕΓΓΕΙΩΝ ΒΕΛΤΙΩΣΕΩΝ
'986': ΟΜΟΓΕΝΕΙΣ ΠΑΛΛΙΝΟΣΤΟΥΝΤΕΣ
'987': ΕΥΡΩΠΑΙΚΟΣ ΚΟΙΝΩΝΙΚΟΣ ΧΑΡΤΗΣ
'988': ΟΡΓΑΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'989': ΕΞΑΙΡΕΣΗ ΔΙΚΑΣΤΩΝ
'990': ΓΕΝΙΚΕΣ ΕΠΙΘΕΩΡΗΣΕΙΣ – ΔΙΕΥΘΥΝΣΕΙΣ ΣΤΟΙΧΕΙΩΔΟΥΣ ΕΚΠΑΙΔΕΥΣΗΣ
'991': ΚΑΝΟΝΙΣΜΟΣ ΕΠΙΘΕΩΡΗΣΕΩΣ ΚΑΙ ΑΣΦΑΛΕΙΑΣ
'992': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΑΥΤΟΝΟΜΟΥ ΣΤΑΦΙΔΙΚΟΥ ΟΡΓΑΝΙΣΜΟΥ (Τ.Α.Π.Α.Σ.Ο)
'993': ΤΑΜΕΙΟΝ ΠΡΟΝΟΙΑΣ ΟΡΘΟΔΟΞΟΥ ΕΦΗΜΕΡΙΑΚΟΥ
'994': ΣΧΟΛΙΚΗ ΣΩΜΑΤΙΚΗ ΑΓΩΓΗ
'995': ΚΕΝΤΡΟ ΠΑΡΑΓΩΓΙΚΟΤΗΤΑΣ
'996': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΙΔΙΟΚΤΗΤΩΝ
'997': ΒΟΣΚΗ ΕΝΤΟΣ ΔΑΣΩΝ
'998': ΕΛΕΓΧΟΣ ΕΞΑΓΟΜΕΝΩΝ ΓΕΩΡΓΙΚΩΝ ΠΡΟΙΟΝΤΩΝ
'999': ΠΑΙΔΑΓΩΓΙΚΑ ΤΜΗΜΑΤΑ Α.Ε.Ι
'1000': ΥΠΟΤΡΟΦΙΕΣ ΚΛΗΡΟΔΟΤΗΜΑΤΟΣ Π. ΒΑΣΣΑΝΗ
'1001': ΑΤΥΧΗΜΑ ΑΠΟ ΔΟΛΟ ΤΟΥ ΕΡΓΟΔΟΤΗ
'1002': ΒΥΖΑΝΤΙΝΟ ΚΑΙ ΧΡΙΣΤΙΑΝΙΚΟ ΜΟΥΣΕΙΟ
'1003': ΕΙΡΗΝΕΥΤΙΚΕΣ ΑΠΟΣΤΟΛΕΣ
'1004': ΥΓΕΙΟΝΟΜΙΚΟΣ ΄ΕΛΕΓΧΟΣ ΕΙΣΕΡΧΟΜΕΝΩΝ
'1005': ΟΡΚΟΣ ΤΟΥ ΠΟΛΙΤΗ
'1006': ΥΓΕΙΟΝΟΜΙΚΗ ΠΕΡΙΘΑΛΨΗ ΣΠΟΥΔΑΣΤΩΝ
'1007': ΠΑΡΑΧΑΡΑΞΗ ΚΑΙ ΚΙΒΔΗΛΙΑ
'1008': ΔΙΑΜΕΡΙΣΜΑΤΑ ΠΛΟΙΑΡΧΩΝ ΚΑΙ ΠΛΗΡΩΜΑΤΩΝ
'1009': ΚΛΑΔΟΣ ΑΡΩΓΗΣ Τ.Α.Κ.Ε
'1010': ΟΡΓΑΝΙΣΜΟΣ ΒΑΜΒΑΚΟΣ
'1011': ΝΟΣΗΛΕΙΑ ΣΤΡΑΤΙΩΤΙΚΩΝ
'1012': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ
'1013': ΠΟΛΥΕΘΝΕΙΣ ΑΕΡΟΠΟΡΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'1014': ΝΑΥΤΙΚΟ ΑΠΟΜΑΧΙΚΟ ΤΑΜΕΙΟ
'1015': ΥΓΙΕΙΝΗ ΑΡΤΟΠΟΙΕΙΩΝ
'1016': ΝΟΜΑΡΧΙΑΚΑ ΣΥΜΒΟΥΛΙΑ
'1017': ΛΕΣΧΗ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'1018': ΚΑΤΩΤΕΡΟ ΔΙΔΑΚΤΙΚΟ ΠΡΟΣΩΠΙΚΟ
'1019': ΓΕΝΙΚΑ ΠΕΡΙ ΚΥΚΛΟΦΟΡΙΑΣ ΑΥΤΟΚΙΝΗΤΩΝ
'1020': ΤΑΜΕΙΟ ΝΟΣΗΛΕΙΑΣ ΣΠΟΥΔΑΣΤΩΝ
'1021': ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΚΑΙ ΒΙΟΤΕΧΝΙΚΑ ΕΠΙΜΕΛΗΤΗΡΙΑ
'1022': ΑΚΤΟΠΛΟΙΑ
'1023': ΠΡΟΣΤΑΣΙΑ ΑΛΙΕΙΑΣ
'1024': ΜΕ ΤΗ ΝΟΡΒΗΓΙΑ
'1025': ΗΘΙΚΕΣ ΑΜΟΙΒΕΣ ΠΡΟΣΩΠΙΚΟΥ (΄ΕΝΟΠΛΟΥ-ΠΟΛΙΤΙΚΟΥ) ΥΠΟΥΡΓΕΙΟΥ ΔΗΜΟΣΙΑΣ
ΤΑΞΗΣ
'1026': ΛΕΩΦΟΡΕΙΑ ΙΔΙΩΤΙΚΗΣ ΧΡΗΣΕΩΣ
'1027': ΕΡΓΑΤΙΚΕΣ ΔΙΑΦΟΡΕΣ
'1028': ΡΑΔΙΟΗΛΕΚΤΡΟΛΟΓΟΙ-ΡΑΔΙΟΤΕΧΝΙΤΕΣ
'1029': ΠΡΟΓΝΩΣΤΙΚΑ ΠΟΔΟΣΦΑΙΡΟΥ
'1030': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΚΑΙ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΤΗΣ ΑΓΡΟΤΙΚΗΣ ΤΡΑΠΕΖΑΣ
ΤΗΣ ΕΛΛΑΔΑΣ (Τ.Σ.Π. – Α.Τ.Ε.)
'1031': ΥΔΡΕΥΣΗ ΛΕΚΑΝΟΠΕΔΙΟΥ ΑΘΗΝΩΝ
'1032': ΤΡΑΠΕΖΑ ΟΦΘΑΛΜΩΝ
'1033': ΕΘΝΙΚΟ ΚΕΝΤΡΟ ΧΑΡΤΩΝ ΚΑΙ ΧΑΡΤΟΓΡΑΦΙΚΗΣ ΚΛΗΡΟΝΟΜΙΑΣ - ΕΘΝΙΚΗ ΧΑΡΤΟΘΗΚΗ
'1034': ΚΑΝΟΝΙΣΜΟΙ ΑΠΟΦΥΓΗΣ ΣΥΓΚΡΟΥΣΕΩΝ
'1035': ΓΡΑΦΕΙΟ ΕΓΚΛΗΜΑΤΙΩΝ ΠΟΛΕΜΟΥ
'1036': ΑΓΡΟΤΙΚΕΣ ΣΥΝΔΙΚΑΛΙΣΤΙΚΕΣ ΟΡΓΑΝΩΣΕΙΣ
'1037': ΤΑΥΤΟΤΗΤΕΣ
'1038': ΔΑΣΙΚΟΙ ΣΥΝΕΤΑΙΡΙΣΜΟΙ
'1039': ΣΥΜΒΟΛΑΙΟΓΡΑΦΙΚΑ ΔΙΚΑΙΩΜΑΤΑ
'1040': ΙΔΙΟΚΤΗΣΙΑ ΚΑΤ’ ΟΡΟΦΟ
'1041': ΣΧΟΛΙΚΑ ΤΑΜΕΙΑ
'1042': ΑΡΧΕΙΟΦΥΛΑΚΕΙΑ ΔΙΑΦΟΡΑ
'1043': ΑΠΟΖΗΜΙΩΣΗ ΑΝΤΑΛΛΑΞΙΜΩΝ
'1044': ΣΧΟΛΙΚΑ ΚΤΙΡΙΑ
'1045': ΦΟΡΟΛΟΓΙΑ ΟΙΚΟΔΟΜΩΝ
'1046': ΠΡΟΤΥΠΑ ΔΗΜΟΤΙΚΑ
'1047': ΠΡΩΤΕΣ ΥΛΕΣ ΒΥΡΣΟΔΕΨΙΑΣ - ΔΕΡΜΑΤΑ
'1048': ΣΥΜΒΙΒΑΣΜΟΣ ΚΑΙ ΔΙΑΙΤΗΣΙΑ
'1049': ΚΑΤΑΣΤΑΣΗ ΔΗΜΟΤΙΚΩΝ ΚΑΙ ΚΟΙΝΟΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ
'1050': ΕΣΟΔΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'1051': ΣΤΑΔΙΑ ΚΑΙ ΓΥΜΝΑΣΤΗΡΙΑ
'1052': ΚΟΙΝΗ ΑΓΡΟΤΙΚΗ ΠΟΛΙΤΙΚΗ
'1053': ΑΤΟΜΑ ΜΕ ΕΙΔΙΚΕΣ ΑΝΑΓΚΕΣ - ΥΠΕΡΗΛΙΚΕΣ - ΧΡΟΝΙΑ ΠΑΣΧΟΝΤΕΣ
'1054': ΕΚΚΛΗΣΙΑΣΤΙΚΑ ΔΙΚΑΣΤΗΡΙΑ
'1055': ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΗΝ ΑΠΟΦΥΓΗ ΔΙΠΛΗΣ ΦΟΡΟΛΟΓΙΑΣ
'1056': ΠΡΟΣΤΑΣΙΑ ΒΑΜΒΑΚΟΠΑΡΑΓΩΓΗΣ
'1057': ΝΑΥΤΙΚΗ ΣΤΡΑΤΟΛΟΓΙΑ
'1058': ΝΟΣΟΚΟΜΕΙΑΚΗ ΠΕΡΙΘΑΛΨΗ ΑΣΦΑΛΙΣΜΕΝΩΝ Ο.Γ.Α
'1059': ΦΥΣΙΚΑ ΟΡΓΑΝΙΚΑ ΛΙΠΑΣΜΑΤΑ
'1060': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ ΕΣΤΙΑΤΟΡΙΩΝ, ΖΑΧΑΡΟΠΛΑΣΤΕΙΩΝ,
ΚΑΦΕΝΕΙΩΝ Κ.ΛΠ. (Τ.Ε.Α.Μ.Ε.Ζ.)
'1061': ΤΕΧΝΙΚΑΙ ΥΠΗΡΕΣΙΑΙ
'1062': ΣΥΓΚΕΝΤΡΩΣΗ ΠΡΟΙΟΝΤΩΝ
'1063': ΥΔΡΟΓΡΑΦΙΚΗ ΥΠΗΡΕΣΙΑ
'1064': ΥΠΗΡΕΣΙΑ ΕΛΕΓΧΟΥ ΚΑΤΑΣΚΕΥΗΣ ΑΞΙΩΝ ΤΟΥ ΔΗΜΟΣΙΟΥ
'1065': ΕΠΙΣΚΟΠΙΚΑ ΓΡΑΦΕΙΑ
'1066': ΒΕΛΓΙΟ, ΒΕΝΕΖΟΥΕΛΑ Κ.ΛΠ
'1067': ΔΗΜΟΤΙΚΟΣ ΚΑΙ ΚΟΙΝΟΤΙΚΟΣ ΚΩΔΙΚΑΣ
'1068': ΠΡΟΔΟΣΙΑ
'1069': ΜΙΣΘΟΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'1070': ΠΟΛΙΤΙΚΟ ΠΡΟΣΩΠΙΚΟ ΝΑΥΤΙΚΟΥ
'1071': ΑΝΑΖΗΤΗΣΗ ΚΑΙ ΔΙΑΦΥΛΑΞΗ ΑΡΧΑΙΟΤΗΤΩΝ
'1072': ΑΔΕΙΕΣ ΛΙΑΝΙΚΗΣ ΠΩΛΗΣΗΣ ΤΣΙΓΑΡΩΝ ΚΑΙ ΕΙΔΩΝ ΜΟΝΟΠΩΛΙΟΥ
'1073': ΕΠΟΠΤΙΚΑ ΜΕΣΑ ΔΙΔΑΣΚΑΛΙΑΣ
'1074': ΕΚΛΟΓΟΔΙΚΕΙΑ
'1075': Ο.Γ.Α ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'1076': ΙΝΣΤΙΤΟΥΤΟ ΥΓΕΙΑΣ ΤΟΥ ΠΑΙΔΙΟΥ
'1077': ΣΧΟΛΗ ΘΕΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ ΠΑΝΕΠΙΣΤΗΜΙΟΥ ΠΑΤΡΩΝ
'1078': ΕΣΠΕΡΙΔΟΕΙΔΗ-ΟΠΩΡΟΚΗΠΕΥΤΙΚΑ
'1079': ΕΠΙΔΟΜΑΤΑ ΣΤΡΑΤΕΥΟΜΕΝΩΝ
'1080': ΠΡΟΛΗΨΗ ΕΡΓΑΤΙΚΩΝ ΑΤΥΧΗΜΑΤΩΝ ΤΩΝ ΝΑΥΤΙΚΩΝ
'1081': ΥΠΗΡΕΣΙΑ ΑΠΟΜΑΓΝΗΤΙΣΕΩΣ ΠΛΟΙΩΝ
'1082': ΔΙΑΦΟΡΕΣ ΕΙΔΙΚΕΣ ΔΙΑΔΙΚΑΣΙΕΣ
'1083': ΓΕΝΙΚΗ ΔΙΕΥΘΥΝΣΗ ΤΗΛΕΠΙΚΟΙΝΩΝΙΩΝ
'1084': ΕΘΝΙΚΗ ΥΠΗΡΕΣΙΑ ΠΛΗΡΟΦΟΡΙΩΝ (Ε.Υ.Π.)
'1085': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ (T.E.A.M)
'1086': ΑΣΦΑΛΙΣΗ ΚΑΤΑ ΤΗΣ ΑΝΕΡΓΙΑΣ - ΟΡΓΑΝΙΣΜΟΣ ΑΠΑΣΧΟΛΗΣΗΣ ΕΡΓΑΤΙΚΟΥ ΔΥΝΑΜΙΚΟΥ
'1087': ΣΩΜΑΤΙΚΗ ΙΚΑΝΟΤΗΤΑ ΠΡΟΣΩΠΙΚΟΥ ΣΤΡΑΤΕΥΜΑΤΟΣ
'1088': ΟΙΚΟΝΟΜΙΚΗ ΥΠΗΡΕΣΙΑ Π. ΝΑΥΤΙΚΟΥ
'1089': ΔΑΣΙΚΗ ΦΟΡΟΛΟΓΙΑ
'1090': ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΚΤΗΝΟΤΡΟΦΙΑΣ, ΜΕΛΙΣΣΟΚΟΜΙΑΣ Κ.Λ.Π
'1091': ΠΟΛΙΤΙΚΑ ΔΙΚΑΙΩΜΑΤΑ ΤΩΝ ΓΥΝΑΙΚΩΝ
'1092': ΜΕΤΑΘΕΣΕΙΣ ΕΚΠΑΙΔΕΥΤΙΚΩΝ
'1093': ΔΙΕΘΝΕΣ ΚΕΝΤΡΟ ΥΠΟΛΟΓΙΣΜΟΥ
'1094': ΔΙΑΧΕΙΡΙΣΗ ΔΑΣΩΝ
'1095': ΔΟΥΛΕΙΑ
'1096': ΜΕ ΤΗ ΠΟΛΩΝΙΑ
'1097': ΑΝΑΔΙΑΝΟΜΗ ΚΤΗΜΑΤΩΝ
'1098': ΥΠΟΑΠΑΣΧΟΛΟΥΜΕΝΟΙ ΜΙΣΘΩΤΟΙ
'1099': ΟΡΓΑΝΙΣΜΟΙ ΠΡΩΗΝ Υ.Β.Ε.Τ. - Γ.Γ.Β. - Γ.Γ.Ε.Τ
'1100': ΠΑΝΕΠΙΣΤΗΜΙΑΚΗ ΒΙΒΛΙΟΘΗΚΗ ΑΘΗΝΩΝ
'1101': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΑΣΦΑΛΙΣΤ.ΕΤΑΙΡΕΙΑΣ Η ΕΘΝΙΚΗ (Τ.Α.Π.Α.Ε.
Η ΕΘΝΙΚΗ)
'1102': ΤΕΛΗ ΣΧΟΛΑΖΟΥΣΩΝ ΚΛΗΡΟΝΟΜΙΩΝ
'1103': ΞΕΝΕΣ ΓΛΩΣΣΕΣ
'1104': ΚΑΤΑΣΚΗΝΩΣΕΙΣ - ΠΑΙΔΙΚΕΣ ΕΞΟΧΕΣ
'1105': ΔΙΚΑΣΤΗΡΙΑ ΑΝΗΛΙΚΩΝ
'1106': ΣΥΜΒΑΣΕΙΣ ΕΚΤΕΛΕΣΕΩΣ ΑΛΛΟΔΑΠΩΝ ΑΠΟΦΑΣΕΩΝ
'1107': ΦΟΡΟΣ ΕΙΣΟΔΗΜΑΤΟΣ ΝΟΜΙΚΩΝ ΠΡΟΣΩΠΩΝ
'1108': ΘΕΩΡΗΤΙΚΑ ΚΑΙ ΙΣΤΟΡΙΚΑ ΜΑΘΗΜΑΤΑ
'1109': ΑΦΡΟΔΙΣΙΑ
'1110': ΦΑΡΟΙ
'1111': ΔΗΜΟΣΙΟΓΡΑΦΙΚΟ ΕΠΑΓΓΕΛΜΑ
'1112': ΚΑΤΑΣΤΑΤΙΚΟΣ ΝΟΜΟΣ ΕΚΚΛΗΣΙΑΣ ΤΗΣ ΕΛΛΑΔΟΣ
'1113': ΕΛΕΓΧΟΣ ΣΚΟΠΙΜΟΤΗΤΑΣ ΙΔΡΥΣΕΩΣ ΒΙΟΜΗΧΑΝΙΩΝ
'1114': ΓΥΜΝΑΣΙΑ ΚΑΙ ΛΥΚΕΙΑ
'1115': ΑΕΡΟΝΑΥΤΙΚΕΣ ΠΛΗΡΟΦΟΡΙΕΣ
'1116': ΚΑΤΑΣΤΑΣΗ ΥΠΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'1117': ΥΠΟΥΡΓΕΙΟ ΧΩΡΟΤΑΞΙΑΣ
'1118': ΕΚΤΕΛΕΣΗ ΄ΕΡΓΩΝ
'1119': ΜΙΣΘΟΔΟΣΙΑ ΥΠΑΛΛΗΛΩΝ ΣΕ ΕΠΙΣΤΡΑΤΕΥΣΗ
'1120': ΚΟΙΜΗΤΗΡΙΑ
'1121': ΑΣΦΑΛΙΣΤΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΚΙΝΔΥΝΩΝ ΠΟΛΕΜΟΥ
'1122': ΣΥΜΦΩΝΙΑ ΓΙΑ ΑΝΙΘΑΓΕΝΕΙΣ
'1123': ΝΟΜΑΡΧΙΑΚΗ ΑΥΤΟΔΙΟΙΚΗΣΗ
'1124': ΣΧΟΛΗ ΤΟΥΡΙΣΤΙΚΩΝ ΕΠΑΓΓΕΛΜΑΤΩΝ
'1125': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ ΠΑΡΑΓΩΓΗΣ ΚΑΙ ΕΜΠΟΡΙΑΣ ΟΠΩΡΟΚΗΠΕΥΤΙΚΩΝ
'1126': ΑΠΟΛΥΜΑΝΣΗ ΥΔΑΤΩΝ
'1127': ΠΟΛΕΟΔΟΜΙΚΕΣ ΕΠΙΤΡΟΠΕΣ
'1128': ΟΡΓΑΝΙΣΜΟΣ ΕΚΔΟΣΕΩΣ ΣΧΟΛΙΚΩΝ ΒΙΒΛΙΩΝ
'1129': ΥΠΑΛΛΗΛΟΙ ΝΟΜ. ΠΡΟΣΩΠΩΝ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ
'1130': ΑΝΤΙΣΤΑΘΜΙΣΤΙΚΗ ΕΙΣΦΟΡΑ
'1131': ΠΡΟΣΩΠΙΚΟ ΙΔΙΩΤΙΚΩΝ ΕΚΠΑΙΔΕΥΤΗΡΙΩΝ
'1132': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΑ ΑΥΤΟΚΙΝΗΤΑ
'1133': ΕΞΩΣΧΟΛΙΚΗ ΑΓΩΓΗ
'1134': ΑΣΦΑΛΙΣΤΙΚΗ ΑΡΜΟΔΙΟΤΗΤΑ
'1135': ΕΛΙΕΣ ΚΑΙ ΕΛΑΙΑ
'1136': ΓΑΜΟΙ ΙΣΡΑΗΛΙΤΩΝ
'1137': ΤΑΜΕΙΟ ΑΡΤΟΥ
'1138': ΚΑΝΟΝΙΣΜΟΣ ΕΠΙΤΡΟΠΩΝ
'1139': ΣΥΜΒΑΣΗ ΚΑΤΑ ΔΑΓΚΕΙΟΥ
'1140': ΕΘΝΙΚΟΙ ΔΡΥΜΟΙ
'1141': ΑΠΑΛΛΑΓΕΣ ΤΕΛΩΝ ΧΑΡΤΟΣΗΜΟΥ
'1142': ΔΙΕΘΝΗΣ ΟΡΓΑΝΙΣΜΟΣ ΑΝΑΠΤΥΞΕΩΣ
'1143': ΚΑΝΟΝΙΣΜΟΣ ΕΡΓΑΣΙΑΣ ΕΠΙ ΦΟΡΤΗΓΩΝ ΠΛΟΙΩΝ
'1144': ΛΥΣΣΑ
'1145': ΑΓΡΟΚΤΗΜΑ
'1146': ΚΑΘΗΓΗΤΕΣ ΚΑΙ ΥΦΗΓΗΤΕΣ
'1147': ΠΑΙΔΙΚΟΙ - ΒΡΕΦΟΝΗΠΙΑΚΟΙ ΣΤΑΘΜΟΙ
'1148': ΚΕΝΤΡΟ ΒΥΖΑΝΤΙΝΩΝ ΕΡΕΥΝΩΝ
'1149': ΙΔΡΥΣΗ ΕΛΕΥΘΕΡΗΣ ΖΩΝΗΣ ΣΕ ΔΙΑΦΟΡΑ ΛΙΜΑΝΙΑ ΤΗΣ ΧΩΡΑΣ
'1150': ΣΧΟΛΙΚΑ ΛΕΩΦΟΡΕΙΑ
'1151': ΣΦΑΓΕΙΑ
'1152': ΕΠΙΚΥΡΩΣΗ ΝΟΜΟΘΕΤΗΜΑΤΩΝ
'1153': ΕΓΓΡΑΦΑ ΤΑΥΤΟΤΗΤΑΣ ΝΑΥΤΙΚΩΝ
'1154': ΑΤΟΜΙΚΑ ΔΙΚΑΙΩΜΑΤΑ - ΔΕΔΟΜΕΝΑ ΠΡΟΣΩΠΙΚΟΥ ΧΑΡΑΚΤΗΡΑ
'1155': ΙΑΤΡΟΦΑΡΜΑΚΕΥΤΙΚΗ - ΝΟΣΟΚΟΜΕΙΑΚΗ ΠΕΡΙΘΑΛΨΗ - ΕΞΟΔΑ ΚΗΔΕΙΑΣ
'1156': ΥΠΗΡΕΣΙΑ ΔΙΑΧΕΙΡΙΣΕΩΣ ΑΝΤΑΛΛΑΞΙΜΩΝ ΚΤΗΜΑΤΩΝ
'1157': ΣΤΟΛΕΣ ΠΡΟΣΩΠΙΚΟΥ Λ.Σ
'1158': ΠΕΡΙΦΡΑΞΗ ΟΙΚΟΠΕΔΩΝ
'1159': ΣΙΔΗΡΟΔΡΟΜΟΙ ΑΤΤΙΚΗΣ
'1160': ΤΡΑΧΩΜΑΤΑ
'1161': ΝΑΥΑΓΙΑ-ΝΑΥΑΓΙΑΙΡΕΣΗ
'1162': ΥΠΟΜΗΧΑΝΙΚΟΙ
'1163': ΤΑΙΝΙΟΘΗΚΗ ΤΗΣ ΕΛΛΑΔΟΣ
'1164': ΚΑΝΟΝΙΣΜΟΣ ΤΗΛΕΓΡΑΦΙΚΗΣ ΥΠΗΡΕΣΙΑΣ
'1165': ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΤΡΟΜΟΚΡΑΤΙΑΣ
'1166': ΚΑΝΟΝΙΣΜΟΣ ΠΥΡΙΜΑΧΟΥ ΠΡΟΣΤΑΣΙΑΣ ΕΠΙΒΑΤΗΓΩΝ ΠΛΟΙΩΝ
'1167': ΑΤΟΜΙΚΑ ΒΙΒΛΙΑΡΙΑ
'1168': ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΒΙΒΛΙΑΡΙΑ ΑΡΤΕΡΓΑΤΩΝ ΚΛΠ
'1169': ΦΟΡΟΛΟΓΙΑ ΑΜΥΛΟΣΙΡΟΠΙΟΥ, ΣΤΑΦΙΔΙΝΗΣ ΚΛΠ
'1170': ΜΟΥΣΕΙΟ ΕΛΛΗΝΙΚΩΝ ΛΑΙΚΩΝ ΟΡΓΑΝΩΝ
'1171': ΕΠΙΚΟΥΡΙΚΟ ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΚΑΙ ΠΕΡΙΘΑΛΨΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝ. ΗΛΕΚΤΡ.
ΕΤΑΙΡΙΑΣ (Ε.Η.Ε.)
'1172': ΤΑΜΕΙΑ ΜΟΝΙΜΩΝ ΟΔΟΣΤΡΩΜΑΤΩΝ
'1173': ΟΡΓΑΝΙΚΕΣ ΘΕΣΕΙΣ ΑΞΙΩΜΑΤΙΚΩΝ Π.Ν
'1174': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΤΡΑΠΕΖΑΣ ΑΘΗΝΩΝ
'1175': ΠΟΛΙΟΜΥΕΛΙΤΙΔΑ
'1176': ΠΡΟΑΓΩΓΑΙ ΑΞΙΩΜΑΤΙΚΩΝ ΧΩΡΟΦΥΛΑΚΗΣ
'1177': ΕΠΙΔΟΜΑ ΑΔΕΙΑΣ
'1178': ΕΞΕΤΑΣΕΙΣ ΓΙΑ ΤΗΝ ΠΡΟΣΛΗΨΗ ΠΡΟΣΩΠΙΚΟΥ
'1179': ΕΛΕΓΧΟΣ ΕΞΑΓΩΓΙΚΟΥ ΕΜΠΟΡΙΟΥ
'1180': ΡΑΔΙΟΦΩΝΙΚΟΙ ΣΤΑΘΜΟΙ
'1181': ΚΑΝΟΝΙΣΜΟΣ ΔΙΟΙΚΗΤΙΚΗΣ ΟΡΓΑΝΩΣΕΩΣ Τ.Σ.Α.Υ
'1182': Φ.Κ.Π. ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ
'1183': ΔΙΑΦΟΡΟΙ ΠΟΛΥΕΘΝΕΙΣ ΟΡΓΑΝΙΣΜΟΙ
'1184': ΧΟΛΕΡΑ
'1185': EΝΙΑΙΟΣ ΔΗΜΟΣΙΟΓΡΑΦΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ
'1186': ΑΤΕΛΕΙΕΣ ΔΗΜΟΣΙΩΝ ΥΠΗΡΕΣΙΩΝ
'1187': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΜΗΧΑΝΟΔΗΓΩΝ ΟΔΟΣΤΡΩΤΗΡΩΝ ΚΛΠ
'1188': ΝΟΣΟΚΟΜΟΙ
'1189': ΝΟΣΟΚΟΜΕΙΑ ΦΥΛΑΚΩΝ
'1190': ΑΠΟΚΑΤΑΣΤΑΣΗ ΚΤΗΝΟΤΡΟΦΩΝ
'1191': ΤΕΛΗ ΚΑΙ ΕΙΣΦΟΡΕΣ
'1192': ΑΚΑΤΑΣΧΕΤΑ
'1193': ΞΕΝΟΔΟΧΕΙΑΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ ΤΗΣ ΕΛΛΑΔΑΣ
'1194': ΔΗΜΟΤΟΛΟΓΙΑ
'1195': ΣΤΑΤΙΣΤΙΚΗ ΥΠΗΡΕΣΙΑ
'1196': ΚΡΑΤΙΚΟ ΕΡΓΑΣΤΗΡΙΟ ΕΛΕΓΧΟΥ ΦΑΡΜΑΚΩΝ
'1197': ΑΕΡΟΠΟΡΙΚΗ ΑΣΤΥΝΟΜΙΑ
'1198': ΕΚΤΑΚΤΕΣ ΕΙΣΦΟΡΕΣ
'1199': ΣΥΝΤΑΞΕΙΣ ΥΠΑΛΛΗΛΩΝ Τ.Τ.Τ
'1200': ΜΕΤΡΑ ΚΑΤΑ ΤΗΣ ΦΟΡΟΔΙΑΦΥΓΗΣ
'1201': ΕΔΑΦΙΚΗ ΕΠΕΚΤΑΣΗ ΝΟΜΟΘΕΣΙΑΣ
'1202': ΜΙΚΡΟΔΙΑΦΟΡΕΣ
'1203': ΤΑΤΖΙΚΙΣΤΑΝ – ΤΑΥΛΑΝΔΗ – ΤΟΥΡΚΙΑ Κ.ΛΠ
'1204': ΣΥΜΒΑΣΗ ΔΙΕΘΝΟΥΣ ΜΕΤΑΦΟΡΑΣ ΕΜΠΟΡΕΥΜΑΤΩΝ ΟΔΙΚΩΣ
'1205': ΚΩΔΙΚΑΣ ΙΔΙΩΤΙΚΟΥ ΝΑΥΤΙΚΟΥ ΔΙΚΑΙΟΥ
'1206': ΚΕΝΤΡΑ ΓΕΩΡΓΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ-Ο.Γ.Ε.Ε.Κ.Α
'1207': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΙΔΡΥΜΑΤΩΝ ΕΜΠΟΡΙΚΟΥ ΝΑΥΤΙΚΟΥ
'1208': ΓΡΑΦΕΙΟ ΔΙΑΡΚΗ ΚΩΔΙΚΑ ΝΟΜΟΘΕΣΙΑΣ
'1209': ΕΡΕΥΝΑ ΙΔΙΩΤΙΚΩΝ ΜΕΤΑΛΛΕΙΩΝ
'1210': ΔΙΕΥΘΥΝΣΗ ΔΗΜΟΣΙΩΝ ΕΡΓΩΝ ΑΕΡΟΠΟΡΙΑΣ
'1211': ΠΕΡΙ ΝΟΜΑΡΧΩΝ
'1212': ΣΥΝΤΑΞΕΙΣ ΘΥΜΑΤΩΝ ΑΠΟ ΕΣΩΤΕΡΙΚΕΣ ΔΙΑΜΑΧΕΣ
'1213': ΔΙΑΧΕΙΡΙΣΗ ΕΦΟΔΙΩΝ ΕΞΩΤΕΡΙΚΟΥ
'1214': ΟΡΓΑΝΩΣΗ ΥΠΗΡΕΣΙΩΝ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'1215': ΦΟΡΤΗΓΑ ΠΛΟΙΑ ΑΝΩ ΤΩΝ 4.500 ΤΟΝΝΩΝ
'1216': ΡΑΔΙΟΤΗΛΕΓΡΑΦΙΚΗ ΥΠΗΡΕΣΙΑ ΠΛΟΙΩΝ
'1217': ΕΠΑΓΓΕΛΜΑΤΙΚΕΣ ΣΧΟΛΕΣ
'1218': ΔΙΑΦΟΡΕΣ ΒΙΟΜΗΧΑΝΙΕΣ
'1219': ΣΥΝΤΗΡΗΣΗ ΑΕΡΟΣΚΑΦΩΝ
'1220': ΟΛΥΜΠΙΑΚΗ ΑΕΡΟΠΟΡΙΑ
'1221': ΟΡΓΑΝΙΣΜΟΣ ΧΩΡΟΦΥΛΑΚΗΣ
'1222': ΠΕΡΙΘΑΛΨΗ ΦΥΜΑΤΙΚΩΝ ΤΑΧΥΔΡΟΜΙΚΩΝ ΥΠΑΛΛΗΛΩΝ
'1223': ΟΡΓΑΝΙΣΜΟΣ ΧΡΗΜΑΤΟΔΟΤΗΣΗΣ ΟΙΚΟΝΟΜΙΚΗΣ ΑΝΑΠΤΥΞΗΣ
'1224': ΠΡΩΤΕΣ ΥΛΕΣ ΞΥΛΙΝΩΝ ΒΑΡΕΛΙΩΝ
'1225': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΤΕΧΝΙΚΩΝ ΤΥΠΟΥ ΑΘΗΝΩΝ (Τ.Α.Τ.Τ.Α.)
'1226': ΠΡΟΠΑΡΑΣΚΕΥΑΣΤΙΚΗ ΣΧΟΛΗ ΚΑΛΩΝ ΤΕΧΝΩΝ ΤΗΝΟΥ
'1227': ΟΙΚΟΝΟΜΙΚΕΣ ΑΝΤΙΠΡΟΣΩΠΕΙΕΣ ΕΞΩΤΕΡΙΚΟΥ
'1228': ΚΑΛΛΙΤΕΧΝΙΚΟΙ ΣΤΑΘΜΟΙ
'1229': ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΗ ΒΙΑ ΤΩΝ
'1230': ΠΡΟΣΤΑΣΙΑ ΑΜΠΕΛΟΥΡΓΙΚΗΣ ΠΑΡΑΓΩΓΗΣ
'1231': ΔΙΑΦΟΡΑ ΑΔΙΚΗΜΑΤΑ
'1232': ΑΣΤΥΝΟΜΙΑ ΚΑΙ ΑΣΦΑΛΕΙΑ ΣΙΔΗΡΟΔΡΟΜΩΝ
'1233': ΜΕΤΟΧΙΚΟ ΤΑΜΕΙΟ ΒΑΣΙΛΙΚΗΣ ΑΕΡΟΠΟΡΙΑΣ
'1234': ΥΠΟΘΗΚΗ ΜΗΧΑΝΙΚΩΝ ΕΓΚΑΤΑΣΤΑΣΕΩΝ
'1235': ΕΥΘΥΝΗ ΑΠΟ Τ’ΑΥΤΟΚΙΝΗΤΑ
'1236': ΠΡΟΣΤΑΣΙΑ ΜΗΤΡΟΤΗΤΟΣ ΚΑΙ ΒΡΕΦΩΝ
'1237': ΜΕ ΤΗ ΦΙΛΑΝΔΙΑ
'1238': ΕΠΑΡΧΙΑΚΟΣ ΤΥΠΟΣ
'1239': ΕΠΙΘΕΩΡΗΣΗ ΤΕΛΩΝΕΙΩΝ
'1240': ΕΠΙΤΡΟΠΕΙΕΣ ΤΟΠΩΝΥΜΙΩΝ
'1241': ΜΕΤΑΝΑΣΤΕΥΣΗ ΚΑΙ ΑΠΟΔΗΜΙΑ
'1242': ΔΙΚΗΓΟΡΙΚΟΙ ΣΥΛΛΟΓΟΙ
'1243': ΠΡΟΣΩΠΙΚΟ ΥΠΟΥΡΓΕΙΟΥ ΓΕΩΡΓΙΑΣ
'1244': ΤΜΗΜΑ ΟΙΚΟΝΟΜΙΚΩΝ ΕΠΙΣΤΗΜΩΝ ΠΑΝΜΙΟΥ ΠΑΤΡΩΝ
'1245': ΜΑΛΑΚΤΕΣ
'1246': ΕΛΑΙΑ
'1247': ΑΤΟΜΙΚΑ ΕΓΓΡΑΦΑ ΑΞΙΩΜΑΤΙΚΩΝ
'1248': ΑΓΡΟΤΙΚΗ ΤΡΑΠΕΖΑ ΤΗΣ ΕΛΛΑΔΟΣ
'1249': ΟΠΤΙΚΟΙ - ΚΑΤΑΣΤΗΜΑΤΑ ΟΠΤΙΚΩΝ ΕΙΔΩΝ
'1250': ΔΗΜΟΣΙΕΣ ΕΠΕΝΔΥΣΕΙΣ
'1251': ΚΡΑΤΙΚΗ ΟΡΧΗΣΤΡΑ ΘΕΣΣΑΛΟΝΙΚΗΣ
'1252': ΝΗΟΛΟΓΙΑ-ΥΠΟΘΗΚΟΛΟΓΙΑ-ΣΗΜΑΤΟΛΟΓΗΣΗ
'1253': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΑΣ ΔΙΑΧΕΙΡΙΣΕΩΣ ΕΙΔΩΝ ΜΟΝΟΠΩΛΙΟΥ
(Τ.Α.Π.-Ε.Δ.Ε.Μ.Ε.)
'1254': ΕΙΣΠΡΑΞΗ ΑΞΙΩΝ
'1255': ΥΓΕΙΟΝΟΜΙΚΟΣ ΕΛΕΓΧΟΣ ΤΡΟΦΙΜΩΝ-ΠΟΤΩΝ-ΝΕΡΩΝ
'1256': ΛΟΓΙΣΤΕΣ - ΦΟΡΟΤΕΧΝΙΚΟΙ
'1257': ΕΙΔΙΚΕΣ ΔΙΚΟΝΟΜΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΓΙΑ ΤΟ ΔΗΜΟΣΙΟ
'1258': ΣΧΟΛΕΣ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ
'1259': ΤΑΜΕΙΟΝ ΚΟΙΝΩΦΕΛΩΝ ΕΡΓΩΝ ΛΕΥΚΑΔΟΣ
'1260': ΕΙΔΙΚΗ ΑΓΩΓΗ, ΕΙΔΙΚΗ ΕΠΑΓΓΕΛΜΑΤΙΚΗ
'1261': ΥΠΗΡΕΣΙΑ ΚΡΑΤΙΚΩΝ ΠΡΟΜΗΘΕΙΩΝ
'1262': ΟΙΝΟΛΟΓΙΚΑ ΙΔΡΥΜΑΤΑ
'1263': ΣΥΝΘΗΚΕΣ ΕΚΔΟΣΕΩΣ
'1264': ΑΞΙΩΜΑΤΙΚΟΙ ΚΑΙ ΥΠΑΞΙΩΜΑΤΙΚΟΙ Λ.Σ
'1265': ΥΓΕΙΟΝΟΜΙΚΗ ΕΞΕΤΑΣΗ ΠΡΟΣΩΠΙΚΟΥ
'1266': ΞΕΝΑ ΣΧΟΛΕΙΑ ΗΜΕΔΑΠΗΣ
'1267': Ε.Σ.Υ.-ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'1268': ΤΑΜΕΙΑ ΕΦΑΡΜΟΓΗΣ ΣΧΕΔΙΩΝ ΠΟΛΕΩΝ
'1269': ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΣΤΡΑΤΙΩΤΙΚΩΝ ΕΙΔΩΝ
'1270': ΣΥΝΘΗΚΗ ΠΕΡΙ ΔΙΑΣΤΗΜΑΤΟΣ
'1271': ΔΙΑΧΕΙΡΙΣΗ ΑΝΤΑΛΛΑΞΙΜΩΝ ΚΤΗΜΑΤΩΝ
'1272': ΠΡΟΣΩΠΙΚΟΝ ΔΙΟΙΚΗΣΕΩΣ
'1273': ΣΧΟΛΗ ΕΚΠΤΙΚΩΝ ΛΕΙΤΟΥΡΓΩΝ
'1274': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΞΕΝΟΔΟΧΟΥΠΑΛΛΗΛΩΝ (Τ.Α.Ξ.Υ.)
'1275': ΣΩΜΑΤΙΚΗ ΙΚΑΝΟΤΗΤΑ ΑΞΙΩΜΑΤΙΚΩΝ
'1276': ΒΕΒΑΙΩΣΗ ΕΣΟΔΩΝ ΔΗΜΟΣΙΟΥ ΑΠΟ ΜΕΤΑΛΛΕΙΑ ΚΑΙ ΛΑΤΟΜΕΙΑ
'1277': ΔΙΑΦΟΡΟΙ ΕΠΟΙΚΙΣΤΙΚΟΙ ΝΟΜΟΙ
'1278': ΕΠΙΚΟΥΡΙΚΟ ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΚΡΕΟΠΩΛΩΝ ΚΑΙ ΕΡΓΑΤΟΥΠΑΛΛΗΛΩΝ ΚΡΕΑΤΟΣ
(Ε.Τ.Α.Κ.Ε.Κ)
'1279': ΟΙΚΟΝΟΜΙΚΟ ΠΑΝΕΠΙΣΤΗΜΙΟ ΑΘΗΝΩΝ
'1280': ΓΕΝΙΚΕΣ ΑΠΟΘΗΚΕΣ
'1281': ΤΑΜΕΙΑΚΗ ΥΠΗΡΕΣΙΑ
'1282': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΠΕΡΙ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ
'1283': ΤΟΜΕΑΣ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ (ΙΚΑ-ΤΕΑΜ)ΕΙΔΙΚΟΣ ΤΟΜΕΑΣ
ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ (ΙΚΑ-ΕΤΕΑΜ)
'1284': ΒΑΡΒΑΚΕΙΟ ΛΥΚΕΙΟ
'1285': ΚΩΔΙΚΑΣ ΔΙΚΩΝ ΤΟΥ ΔΗΜΟΣΙΟΥ
'1286': ΔΙΕΘΝΕΣ ΤΑΜΕΙΟΝ ΠΕΡΙΘΑΛΨΕΩΣ ΤΟΥ ΠΑΙΔΙΟΥ
'1287': ΣΙΔΗΡΟΔΡΟΜΟΙ ΕΛΛΗΝΙΚΟΥ ΚΡΑΤΟΥΣ
'1288': ΑΡΔΕΥΣΕΙΣ
'1289': ΤΑΜΕΙΟ ΑΡΧΑΙΟΛΟΓΙΚΩΝ ΠΟΡΩΝ ΚΑΙ ΑΠΑΛΛΟΤΡΙΩΣΕΩΝ
'1290': ΙΔΡΥΜΑ ΒΥΖΑΝΤΙΝΗΣ ΜΟΥΣΙΚΟΛΟΓΙΑΣ
'1291': ΚΥΒΕΡΝΗΤΙΚΟ ΣΥΜΒΟΥΛΙΟ ΕΛΕΓΧΟΥ ΤΙΜΩΝ
'1292': ΕΙΔΙΚΟ ΤΑΜΕΙΟ ΕΠΟΙΚΙΣΜΟΥ
'1293': ΚΤΗΜΑΤΟΛΟΓΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'1294': ΚΑΤΑΣΚΕΥΗ ΣΤΑΦΙΔΙΝΗΣ
'1295': ΔΙΕΘΝΗΣ ΥΓΕΙΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'1296': ΕΠΕΤΗΡΙΔΑ
'1297': ΠΑΓΚΟΣΜΙΟΣ ΟΡΓΑΝΙΣΜΟΣ ΤΟΥΡΙΣΜΟΥ
'1298': ΕΝΙΣΧΥΣΗ ΑΠΡΟΣΤΑΤΕΥΤΩΝ ΠΑΙΔΙΩΝ
'1299': ΔΙΑΦΟΡΟΙ ΕΠΙΣΙΤΙΣΤΙΚΟΙ ΝΟΜΟΙ
'1300': ΔΙΠΛΩΜΑΤΙΚΕΣ ΑΤΕΛΕΙΕΣ
'1301': ΜΕΤΑ ΤΟΥ ΒΕΛΓΙΟΥ
'1302': ΚΑΝΝΑΒΙΣ
'1303': ΕΚΤΕΛΕΣΗ
'1304': ΤΟΥΡΙΣΤΙΚΕΣ ΕΓΚΑΤΑΣΤΑΣΕΙΣ ΡΟΔΟΥ
'1305': ΠΟΙΝΙΚΟ ΜΗΤΡΩΟ
'1306': ΑΝΩΜΑΛΕΣ ΔΙΚΑΙΟΠΡΑΞΙΕΣ ΔΩΔΕΚΑΝΗΣΟΥ
'1307': ΕΜΠΟΡΙΚΑ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΑ ΕΠΙΜΕΛΗΤΗΡΙΑ
'1308': ΣΥΝΤΟΝΙΣΜΟΣ ΠΡΟΓΡΑΜΜΑΤΩΝ ΚΑΙ ΕΡΓΑΣΙΩΝ ΟΔΩΝ ΚΑΙ ΕΡΓΩΝ ΚΟΙΝΗΣ ΩΦΕΛΕΙΑΣ
'1309': ΠΡΟΣΩΠΙΚΟ ΞΕΝΟΔΟΧΕΙΩΝ
'1310': ΙΝΣΤΙΤΟΥΤΟ ΦΥΣΙΚΗΣ ΤΟΥ ΣΤΕΡΕΟΥ ΦΛΟΙΟΥ ΤΗΣ ΓΗΣ
'1311': ΕΠΙΚΙΝΔΥΝΕΣ ΟΙΚΟΔΟΜΕΣ
'1312': ΑΡΧΕΙΑ ΔΙΚΑΣΤΗΡΙΩΝ
'1313': ΣΚΟΠΟΒΟΛΗ
'1314': ΑΠΟΝΟΜΗ ΣΥΝΤΑΞΕΩΝ ΤΑΜΕΙΟΥ ΝΟΜΙΚΩΝ
'1315': ΣΗΡΟΤΡΟΦΙΑ
'1316': ΕΣΩΤΕΡΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'1317': ΠΡΟΣΤΑΣΙΑ ΤΗΣ ΚΤΗΝΟΤΡΟΦΙΑΣ
'1318': ΧΑΡΤΗΣ
'1319': ΥΠΗΡΕΣΙΑ ΕΓΚΛΗΜΑΤΟΛΟΓΙΚΩΝ ΑΝΑΖΗΤΗΣΕΩΝ
'1320': ΥΓΕΙΟΝΟΜΙΚΗ ΠΕΡΙΘΑΛΨΗ ΒΟΥΛΕΥΤΩΝ
'1321': ΔΙΚΑΙΟΣΤΑΣΙΟ ΠΟΛΕΜΟΥ 1940
'1322': ΧΗΜΕΙΟ ΣΤΡΑΤΟΥ
'1323': ΕΠΑΡΧΙΑΚΕΣ ΓΕΝΙΚΕΣ ΣΥΝΕΛΕΥΣΕΙΣ
'1324': ΛΟΓΑΡΙΑΣΜΟΣ ΑΡΩΓΗΣ ΟΙΚΟΓΕΝΕΙΩΝ ΣΤΡΑΤΙΩΤΙΚΩΝ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'1325': ΚΑΤ’ ΙΔΙΑΝ ΝΑΟΙ
'1326': ΠΛΗΡΩΜΗ ΜΕ ΕΠΙΤΑΓΕΣ
'1327': ΕΘΝΙΚΕΣ ΣΥΛΛΟΓΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'1328': ΣΩΜΑ ΣΤΡΑΤΟΛΟΓΙΑΣ
'1329': ΟΔΟΝΤΙΑΤΡΟΙ
'1330': ΤΑΜΕΙΟ ΕΘΝΙΚΟΥ ΣΤΟΛΟΥ
'1331': ΣΥΜΠΛΗΡΩΜΑΤΙΚΕΣ ΠΑΡΟΧΕΣ ΜΗΤΡΟΤΗΤΑΣ
'1332': ΜΕΤΑΤΡΕΨΙΜΟΤΗΤΑ ΚΑΤΑΘΕΣΕΩΝ
'1333': ΠΤΗΝΟΤΡΟΦΙΑ
'1334': ΠΤΥΧΙΟΥΧΟΙ ΑΛΛΟΔΑΠΩΝ ΠΑΝΕΠΙΣΤΗΜΙΩΝ - ΔΙΑΠΑΝΕΠΙΣΤΗΜΙΑΚΟ ΚΕΝΤΡΟ ΑΝΑΓΝΩΡΙΣΕΩΣ
'1335': ΦΟΡΤΗΓΑ ΑΥΤΟΚΙΝΗΤΑ
'1336': ΥΠΗΡΕΣΙΑ ΜΗΧΑΝΙΚΗΣ ΚΑΛΛΙΕΡΓΕΙΑΣ
'1337': ΕΛΕΓΧΟΣ ΚΙΝΗΜΑΤΟΓΡΑΦΩΝ
'1338': ΔΗΜΟΣΙΟΓΡΑΦΙΚΕΣ ΟΡΓΑΝΩΣΕΙΣ
'1339': ΝΑΥΤΙΛΙΑΚΕΣ ΤΡΑΠΕΖΕΣ
'1340': ΛΕΙΤΟΥΡΓΙΑ ΥΔΡΟΘΕΡΑΠΕΥΤΗΡΙΩΝ
'1341': ΣΥΜΒΟΥΛΙΟ ΕΜΠΟΡΙΚΗΣ ΝΑΥΤΙΛΙΑΣ
'1342': ΕΓΓΕΙΟΣ ΦΟΡΟΛΟΓΙΑ ΚΑΠΝΟΥ
'1343': ΤΕΛΟΣ ΑΔΕΙΩΝ ΟΙΚΟΔΟΜΩΝ
'1344': ΕΘΝΙΚΟΤΗΤΑ ΠΛΟΙΩΝ
'1345': ΠΟΛΙΤΙΚΑ ΚΟΜΜΑΤΑ
'1346': ΣΧΟΛΗ ΘΕΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ
'1347': ΝΗΟΓΝΩΜΟΝΕΣ
'1348': ΔΙΑΦΟΡΟΙ ΠΟΙΝΙΚΟΙ ΝΟΜΟΙ
'1349': ΠΡΟΣΩΡΙΝΗ ΑΠΟΛΥΣΗ
'1350': ΤΑΜΕΙΟ ΑΛΛΗΛΟΒΟΗΘΕΙΑΣ ΣΤΡΑΤΟΥ ΞΗΡΑΣ
'1351': ΥΠΑΞΙΩΜΑΤΙΚΟΙ ΑΕΡΟΠΟΡΙΑΣ
'1352': ΦΟΡΟΛΟΓΙΑ ΧΡΗΜΑΤΙΣΤΗΡΙΑΚΩΝ ΣΥΜΒΑΣΕΩΝ
'1353': ΠΤΥΧΙΑ ΙΠΤΑΜΕΝΟΥ ΠΡΟΣΩΠΙΚΟΥ
'1354': ΚΡΕΑΤΑ ΣΕ ΠΑΚΕΤΑ
'1355': ΕΛΕΓΧΟΣ ΟΠΛΟΦΟΡΙΑΣ
'1356': ΑΝΑΣΤΟΛΕΣ ΔΗΜΟΣΙΟΥ ΧΡΕΟΥΣ
'1357': ΗΛΕΚΤΡΙΚΟΙ ΣΙΔΗΡΟΔΡΟΜΟΙ ΑΘΗΝΩΝ-ΠΕΙΡΑΙΩΣ (Η.Σ.Α.Π)
'1358': ΔΙΑΘΕΣΗ ΛΥΜΑΤΩΝ ΚΑΙ ΑΠΟΒΛΗΤΩΝ
'1359': ΕΠΙΘΕΩΡΗΣΗ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ
'1360': ΤΕΛΗ ΑΔΕΙΩΝ ΕΞΑΓΩΓΗΣ
'1361': ΠΡΟΙΟΝΤΑ ΓΑΛΑΚΤΟΣ
'1362': ΓΕΩΡΓΙΚΑ ΕΠΙΜΕΛΗΤΗΡΙΑ
'1363': ΙΕΡΑΡΧΙΚΟΣ ΄ΕΛΕΓΧΟΣ
'1364': ΣΤΡΑΤΙΩΤΙΚΕΣ ΦΥΛΑΚΕΣ
'1365': ΤΑΜΕΙΟ ΕΠΙΚ. ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΚΑΠΝΕΜΠΟΡΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
'1366': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΚΑΙ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΙΠΠΟΔΡΟΜΙΩΝ
(Τ.Α.Π.Ε.Α.Π.Ι.)
'1367': ΑΠΟΧΩΡΗΤΗΡΙΑ
'1368': ΦΟΡΟΣ ΕΙΣΟΔΗΜΑΤΟΣ ΦΥΣΙΚΩΝ ΚΑΙ ΝΟΜΙΚΩΝ ΠΡΟΣΩΠΩΝ
'1369': ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΠΑΡΟΧΩΝ
'1370': ΑΤΤΙΚΟ ΜΕΤΡΟ
'1371': ΒΟΥΣΤΑΣΙΑ
'1372': ΑΠΟΣΤΡΑΤΕΙΕΣ - ΕΠΑΝΑΦΟΡΕΣ
'1373': ΤΡΑΠΕΖΙΤΙΚΑ ΔΑΝΕΙΑ ΣΕ ΧΡΥΣΟ ΚΛΠ
'1374': ΔΙΚΑΙΟΣΤΑΣΙΟ ΠΟΛΕΜΩΝ
'1375': ΕΘΝΙΚΟ ΑΣΤΕΡΟΣΚΟΠΕΙΟ
'1376': ΙΔΙΩΤΙΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ ΠΑΡΟΧΗΣ ΥΠΗΡΕΣΙΩΝ ΑΣΦΑΛΕΙΑΣ
'1377': ΔΑΝΕΙΑ ΕΞΩΤΕΡΙΚΑ
'1378': ΠΝΕΥΜΑΤΙΚΟ ΚΕΝΤΡΟ ΑΘΗΝΩΝ
'1379': ΑΠΟΣΒΕΣΕΙΣ
'1380': ΔΙΑΦΟΡΟΙ ΟΙΝΙΚΟΙ ΚΑΙ ΣΤΑΦΙΔΙΚΟΙ ΝΟΜΟΙ
'1381': ΑΚΑΔΗΜΙΑ ΣΩΜΑΤΙΚΗΣ ΑΓΩΓΗΣ
'1382': ΑΜΜΟΛΗΨΙΑ
'1383': ΠΡΟΣΩΠΙΚΟ ΠΛΟΗΓΙΚΗΣ ΥΠΗΡΕΣΙΑΣ
'1384': ΗΘΙΚΕΣ ΑΜΟΙΒΕΣ ΑΕΡΟΠΟΡΙΑΣ
'1385': ΚΩΔΙΚΑΣ ΦΟΡΟΛΟΓΙΑΣ ΟΙΝΟΠΝΕΥΜΑΤΟΣ
'1386': ΛΙΜΕΝΙΚΑ ΤΑΜΕΙΑ – ΛΙΜΕΝΙΚΑ ΕΡΓΑ
'1387': ΤΑΜΕΙΟ ΕΠΙΚ. ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΕΘΝΙΚΟΥ ΟΡΓΑΝΙΣΜΟΥ ΚΑΠΝΟΥ (Τ.Ε.Α.ΥΕ.Ο.Κ)
'1388': ΕΛΕΓΧΟΣ ΤΗΣ ΠΙΣΤΕΩΣ
'1389': ΣΤΡΑΤΙΩΤΙΚΗ ΣΧΟΛΗ ΑΞΙΩΜΑΤΙΚΩΝ ΣΩΜΑΤΩΝ
'1390': ΒΟΗΘΗΤΙΚΑ ΠΡΟΣΩΠΑ ΤΗΣ ΔΙΚΗΣ
'1391': ΟΡΓΑΝΙΣΜΟΣ ΣΧΟΛΙΚΩΝ ΚΤΙΡΙΩΝ
'1392': ΒΙΟΜΗΧΑΝΙΕΣ ΔΩΔΕΚΑΝΗΣΟΥ
'1393': ΥΓΙΕΙΝΗ ΚΑΙ ΑΣΦΑΛΕΙΑ ΧΩΡΩΝ ΕΡΓΑΣΙΑΣ ΚΑΙ ΕΡΓΑΖΟΜΕΝΩΝ
'1394': ΜΕΤΑΤΡΟΠΗ ΤΗΣ ΠΟΙΝΗΣ
'1395': ΑΥΤΟΝΟΜΟΣ ΟΙΚΟΔΟΜΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΑΞΙΩΜΑΤΙΚΩΝ
'1396': ΟΔΙΚΕΣ ΜΕΤΑΦΟΡΕΣ-ΜΕΤΑΦΟΡΕΙΣ
'1397': ΑΡΜΑ ΘΕΣΠΙΔΟΣ
'1398': ΔΗΜΟΤΙΚΑ & ΚΟΙΝΟΤΙΚΑ
'1399': ΠΕΡΙΦΕΡΕΙΑΚΕΣ ΥΠΗΡΕΣΙΕΣ
'1400': ΣΧΟΛΗ ΑΝΘΡΩΠΙΣΤΙΚΩΝ ΚΑΙ ΚΟΙΝΩΝΙΚΩΝ ΕΠΙΣΤΗΜΩΝ
'1401': ΣΤΡΑΤΕΥΟΜΕΝΟΙ ΦΟΙΤΗΤΑΙ
'1402': ΓΕΝΙΚΑ
'1403': ΚΑΤΑΠΟΛΕΜΗΣΗ ΕΠΙΖΩΟΤΙΩΝ
'1404': ΟΡΓΑΝΙΣΜΟΣ ΔΙΟΙΚΗΣΕΩΣ ΕΚΚΛΗΣΙΑΣΤΙΚΗΣ ΚΑΙ ΜΟΝΑΣΤΗΡΙΑΚΗΣ ΠΕΡΙΟΥΣΙΑΣ
'1405': ΑΠΑΓΟΡΕΥΣΗ ΧΡΗΣΗΣ ΕΠΙΒΛΑΒΩΝ ΟΥΣΙΩΝ
'1406': ΨΥΧΟΛΟΓΟΙ
'1407': ΠΥΡΑΣΦΑΛΕΙΑ ΕΠΙΧΕΙΡΗΣΕΩΝ ΚΑΙ ΑΠΟΘΗΚΩΝ
'1408': ΑΠΟΚΑΤΑΣΤΑΣΙΣ ΑΠΟΡΩΝ ΚΟΡΑΣΙΔΩΝ
'1409': ΜΕ ΤΗ ΒΕΝΕΖΟΥΕΛΑ
'1410': ΔΙΚΑΙΟ ΤΩΝ ΣΥΝΘΗΚΩΝ
'1411': ΚΤΗΝΙΑΤΡΙΚΑ ΜΙΚΡΟΒΙΟΛΟΓΙΚΑ ΕΡΓΑΣΤΗΡΙΑ
'1412': ΕΡΓΑΣΤΗΡΙΑ
'1413': ΚΑΝΟΝΙΣΜΟΙ TELEX ΚΑΙ TELEFAX
'1414': ΟΠΛΑ ΚΑΙ ΣΩΜΑΤΑ ΣΤΡΑΤΟΥ ΞΗΡΑΣ
'1415': ΕΚΠΑΙΔΕΥΣΗ ΤΑΧΥΔΡΟΜΙΚΩΝ ΥΠΑΛΛΗΛΩΝ
'1416': ΤΙΜΟΛΟΓΙΑ ΠΑΡΟΧΩΝ
'1417': ΜΟΥΣΟΥΛΜΑΝΙΚΕΣ ΚΟΙΝΟΤΗΤΕΣ
'1418': ΣΤΡΑΤΙΩΤΙΚΑ ΕΡΓΑ ΕΝ ΓΕΝΕΙ
'1419': ΣΤΡΑΤΙΩΤΙΚΑ ΝΟΣΟΚΟΜΕΙΑ
'1420': ΔΙΟΙΚΗΣΗ ΔΗΜΟΣΙΩΝ ΚΤΗΜΑΤΩΝ –
'1421': ΕΙΔΙΚΕΣ ΤΙΜΕΣ ΚΑΥΣΙΜΩΝ ΚΑΙ ΗΛΕΚΤΡΙΚΗΣ ΕΝΕΡΓΕΙΑΣ
'1422': ΕΓΓΡΑΦΗ ΣΠΟΥΔΑΣΤΩΝ
'1423': ΔΗΜΟΤΙΚΑ-ΚΟΙΝΟΤΙΚΑ ΔΑΣΗ ΚΑΙ ΚΗΠΟΙ
'1424': ΔΗΜΟΣΙΑ ΕΠΙΧΕΙΡΗΣΗ ΠΟΛΕΟΔΟΜΙΑΣ ΚΑΙ ΣΤΕΓΑΣΕΩΣ
'1425': ΣΥΝΤΑΞΙΟΔΟΤΗΣΗ ΠΡΟΣΩΠΙΚΟΥ Ι.Κ.Α
'1426': ΕΞΕΤΑΣΤΙΚΕΣ ΕΠΙΤΡΟΠΕΣ ΒΟΥΛΗΣ
'1427': ΜΕΤΡΑ ΚΑΤΑ ΤΩΝ ΠΥΡΚΑΙΩΝ ΔΑΣΩΝ
'1428': ΥΠΟΥΡΓΕΙΟ ΕΘΝΙΚΗΣ ΟΙΚΟΝΟΜΙΑΣ
'1429': ΣΥΓΚΕΝΤΡΩΣΗ ΠΕΡΙΟΥΣΙΑΣ ΤΟΥ ΔΗΜΟΣΙΟΥ
'1430': ΚΑΤΑΣΚΕΥΗ ΚΑΙ ΣΥΝΤΗΡΗΣΗ ΟΔΩΝ
'1431': ΤΕΛΩΝΕΙΑΚΑ ΚΤΙΡΙΑ
'1432': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΕΚΤΕΛΩΝΙΣΤΩΝ (Τ.Σ.Ε.)
'1433': ΚΑΘΗΓΗΤΙΚΕΣ ΕΔΡΕΣ
'1434': ΝΑΥΤΙΚΗ ΕΡΓΑΣΙΑ ΝΕΩΝ
'1435': ΕΚΤΕΛΕΣΗ ΘΑΝΑΤΙΚΗΣ ΠΟΙΝΗΣ
'1436': ΕΠΙΘΕΩΡΗΣΗ ΠΛΟΙΩΝ
'1437': ΔΙΠΛΩΜΑΤΑ ΚΑΙ ΑΔΕΙΕΣ ΝΑΥΤΙΚΗΣ ΙΚΑΝΟΤΗΤΑΣ
'1438': ΙΣΤΟΡΙΚΟ ΚΑΙ ΕΘΝΟΛΟΓΙΚΟ ΜΟΥΣΕΙΟ
'1439': ΠΡΟΣΤΑΣΙΑ ΕΡΓΑΖΟΜΕΝΗΣ ΝΕΑΣ
'1440': ΥΠΗΡΕΣΙΑ ΕΠΙΜΕΛΗΤΩΝ ΑΝΗΛΙΚΩΝ
'1441': ΑΣΤΙΚΗ ΕΥΘΥΝΗ ΑΠΟ ΠΥΡΗΝΙΚΗ ΕΝΕΡΓΕΙΑ
'1442': ΚΩΔΙΚΑΣ ΦΟΡΟΛΟΓΙΑΣ ΚΑΘΑΡΑΣ ΠΡΟΣΟΔΟΥ
'1443': ΕΠΙΘΕΩΡΗΣΗ Υ.Ε.Ν
'1444': ΚΑΤΑΓΓΕΛΙΑ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ ΣΥΝΔΙΚΑΛΙΣΤΙΚΩΝ ΣΤΕΛΕΧΩΝ
'1445': ΥΓΕΙΟΝΟΜΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'1446': ΔΙΔΑΣΚΑΛΕΙΟ ΜΕΣΗΣ ΕΚΠΑΙΔΕΥΣΗΣ
'1447': ΥΠΟΒΡΥΧΙΑ
'1448': ΥΠΗΡΕΣΙΑ ΑΠΩΛΕΙΩΝ, ΝΕΚΡΟΤΑΦΕΙΩΝ ΚΛΠ
'1449': ΑΓΡΟΤ. ΑΠΟΚΑΤΑΣΤΑΣΗ ΣΤΑ ΔΩΔΕΚΑΝΗΣΑ
'1450': ΕΙΔΙΚΕΣ ΑΠΑΛΛΟΤΡΙΩΣΕΙΣ
'1451': ΣΤΕΓΑΣΗ ΤΑΧΥΔΡΟΜΙΚΩΝ ΥΠΗΡΕΣΙΩΝ
'1452': ΔΙΑΜΕΤΑΚΟΜΙΣΗ ΝΑΡΚΩΤΙΚΩΝ
'1453': ΜΕΤΑΜΟΣΧΕΥΣΗ ΒΙΟΛΟΓΙΚΩΝ ΟΥΣΙΩΝ
'1454': ΒΡΑΒΕΙΑ ΚΑΙ ΧΟΡΗΓΙΕΣ
'1455': ΕΥΡΩΠΑΙΚΗ ΜΟΡΦΩΤΙΚΗ ΣΥΜΒΑΣΗ
'1456': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝ. ΕΡΥΘΡΟΥ ΣΤΑΥΡΟΥ
(Τ.Ε.Α.Π.Ε.Ε.Σ.)
'1457': ΑΤΕΛΕΙΕΣ ΕΙΔΩΝ ΒΟΗΘΕΙΑΣ
'1458': ΕΚΤΕΛΕΣΗ ΕΡΓΩΝ ΟΧΥΡΩΣΗΣ
'1459': ΡΟΥΑΝΤΑ – ΡΟΥΜΑΝΙΑ Κ.ΛΠ
'1460': ΜΟΝΙΜΕΣ ΑΝΤΙΠΡΟΣΩΠΕΙΕΣ
'1461': ΠΡΟΣΤΑΣΙΑ ΕΦΕΔΡΩΝ ΙΠΤΑΜΕΝΩΝ
'1462': ΤΡΑΠΕΖΕΣ ΕΞΩΤΕΡΙΚΟΥ ΕΜΠΟΡΙΟΥ
'1463': ΙΑΤΡΙΚΟΝ ΠΡΟΣΩΠΙΚΟΝ ΔΗΜΟΣΙΟΥ ΚΑΙ Ν.Π.Δ.Δ
'1464': ΔΙΑΦΟΡΑ ΜΟΝΑΣΤΗΡΙΑ
'1465': ΕΤΑΙΡΕΙΕΣ ΕΠΕΝΔΥΣΕΩΝ - ΧΑΡΤΟΦΥΛΑΚΙΟΥ ΚΑΙ ΑΜΟΙΒΑΙΩΝ ΚΕΦΑΛΑΙΩΝ
'1466': ΑΝΑΓΝΩΡΙΣΗ ΤΗΣ ΕΛΛΗΝΙΚΗΣ ΠΟΛΙΤΕΙΑΣ
'1467': ΔΙΕΘΝΗΣ ΣΥΜΒΑΣΗ
'1468': ΛΙΜΕΝΑΡΧΕΙΑ
'1469': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΘΕΣΣΑΛΙΑΣ
'1470': ΣΤΡΑΤΕΥΣΗ ΓΥΝΑΙΚΩΝ
'1471': ΣΤΡΑΤΙΩΤΙΚΗ ΥΠΗΡΕΣΙΑ ΚΑΤΑΣΚΕΥΗΣ ΕΡΓΩΝ ΑΝΑΣΥΓΚΡΟΤΗΣΗΣ
'1472': ΠΡΟΣΤΑΣΙΑ ΤΗΣ ΤΙΜΗΣ ΤΟΥ ΠΟΛΙΤΙΚΟΥ ΚΟΣΜΟΥ
'1473': ΕΠΙΜΟΡΦΩΣΗ ΛΕΙΤΟΥΡΓΩΝ Μ.Ε
'1474': ΕΝΙΣΧΥΣΗ ΕΞΑΓΩΓΗΣ
'1475': ΗΛΕΚΤΡΟΦΩΤΙΣΜΟΣ ΔΙΑΦΟΡΩΝ ΠΟΛΕΩΝ
'1476': ΜΕ ΤΙΣ ΚΑΤΩ ΧΩΡΕΣ
'1477': ΝΑΥΠΗΓΟΥΜΕΝΑ ΠΛΟΙΑ-ΝΑΥΠΗΓΟΕΠΙΣΚΕΥΑΣΤΙΚΕΣ
'1478': ΕΛΕΓΧΟΣ ΠΩΛΗΣΕΩΝ ΕΠΙ ΠΙΣΤΩΣΕΙ
'1479': ΕΛΕΓΧΟΣ ΒΙΟΜΗΧΑΝΙΚΩΝ ΕΓΚΑΤΑΣΤΑΣΕΩΝ
'1480': ΔΙΕΘΝΗΣ ΟΙΚΟΝΟΜΙΚΗ ΕΠΙΤΡΟΠΗ
'1481': ΓΡΑΦΕΙΑ ΕΥΡΕΣΗΣ ΕΡΓΑΣΙΑΣ - ΣΥΜΒΟΥΛΟΙ ΕΡΓΑΣΙΑΣ
'1482': ΜΟΝΟΠΩΛΙΟ ΝΑΡΚΩΤΙΚΩΝ
'1483': ΑΠΑΛΛΑΓΕΣ ΦΟΡΟΛΟΓΙΑΣ ΚΛΗΡΟΝΟΜΙΩΝ
'1484': ΠΑΓΚΟΣΜΙΑ ΟΡΓΑΝΩΣΗ ΥΓΕΙΑΣ
'1485': ΕΘΝΙΚΟ ΙΔΡΥΜΑ ΕΡΕΥΝΩΝ
'1486': ΝΟΜΟΘΕΣΙΑ ΠΕΡΙ ΣΥΛΛΟΓΙΚΗΣ ΣΥΜΒΑΣΕΩΣ
'1487': ΕΘΝΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΦΑΡΜΑΚΩΝ
'1488': ΔΙΑΦΟΡΑ ΓΥΜΝΑΣΙΑ & ΛΥΚΕΙΑ
'1489': ΞΕΝΕΣ ΣΧΟΛΕΣ ΓΕΩΠΟΝΙΑΣ ΚΑΙ ΔΑΣΟΛΟΓΙΑΣ
'1490': ΠΡΟΣΤΑΣΙΑ ΑΝΕΡΓΩΝ
'1491': ΦΙΛΑΝΘΡΩΠΙΚΑ ΚΑΤΑΣΤΗΜΑΤΑ ΚΕΦΑΛΛΗΝΙΑΣ
'1492': ΚΑΝΟΝΙΣΜΟΣ ΠΑΡΟΧΩΝ Τ.Ε.Β.Ε
'1493': ΩΔΕΙΑ ΚΛΠ. ΜΟΥΣΙΚΑ ΙΔΡΥΜΑΤΑ
'1494': ΠΡΟΣΚΥΝΗΜΑΤΙΚΑ ΙΔΡΥΜΑΤΑ
'1495': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΑΝΩΝ. ΥΔΡΟΗΛΕΚΤΡ. ΕΤ. ΓΛΑΥΚΟΣ
'1496': ΠΡΕΣΒΕΙΕΣ ΚΑΙ ΠΡΟΞΕΝΕΙΑ
'1497': ΥΠΟΥΡΓΕΙΑ ΤΥΠΟΥ ΚΑΙ ΤΟΥΡΙΣΜΟΥ
'1498': ΖΩΝΕΣ ΕΝΕΡΓΟΥ ΠΟΛΕΟΔΟΜΙΑΣ
'1499': ΕΚΚΛΗΣΙΑ ΙΟΝΙΩΝ ΝΗΣΩΝ
'1500': ΕΠΙΤΡΟΠΑΙ ΑΣΦΑΛΕΙΑΣ
'1501': ΥΠΟΥΡΓΟΙ
'1502': ΠΟΙΝΙΚΗ ΔΙΑΤΙΜΗΣΗ
'1503': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΕΡΓΑΤΩΝ ΚΕΡΑΜΟΠΟΙΩΝ
'1504': ΠΡΩΤΕΣ ΥΛΕΣ ΠΑΙΓΝΙΟΧΑΡΤΩΝ
'1505': ΚΡΥΠΤΟΓΡΑΦΙΚΗ ΥΠΗΡΕΣΙΑ
'1506': ΔΙΕΘΝΗΣ ΕΠΙΤΡΟΠΗ ΠΡΟΣΩΠΙΚΗΣ ΚΑΤΑΣΤΑΣΕΩΣ
'1507': ΕΛΕΓΧΟΣ ΗΛΕΚΤΡΙΚΩΝ ΕΓΚΑΤΑΣΤΑΣΕΩΝ
'1508': ΔΙΑΧΕΙΡΙΣΗ ΙΔΡΥΜΑΤΩΝ ΚΑΙ ΚΛΗΡΟΔΟΤΗΜΑΤΩΝ
'1509': ΤΕΛΩΝΕΙΑΚΗ ΣΤΑΤΙΣΤΙΚΗ
'1510': ΙΔΙΩΤΙΚΕΣ ΝΑΥΤΙΚΕΣ ΣΧΟΛΕΣ
'1511': ΑΕΡΟΠΟΡΙΚΑ ΑΤΥΧΗΜΑΤΑ
'1512': ΑΝΩΤΕΡΟ ΔΙΔΑΚΤΙΚΟ ΠΡΟΣΩΠΙΚΟ
'1513': ΔΙΑΦΟΡΟΙ ΔΙΟΙΚΗΤΙΚΟΙ ΕΡΓΑΤΙΚΟΙ ΝΟΜΟΙ
'1514': ΣΥΜΒΟΥΛΙΟ ΓΕΩΓΡΑΦΙΚΩΝ ΥΠΗΡΕΣΙΩΝ
'1515': ΕΚΚΛΗΣΙΑΣΤΙΚΕΣ ΒΙΒΛΙΟΘΗΚΕΣ
'1516': ΤΜΗΜΑ ΕΠΙΣΤΗΜΗΣ ΦΥΣΙΚΗΣ ΑΓΩΓΗΣ ΚΑΙ ΑΘΛΗΤΙΣΜΟΥ
'1517': ΠΕΡΙΟΡΙΣΜΟΣ ΣΥΝΘΕΣΕΩΣ ΥΠΗΡΕΣΙΩΝ
'1518': ΤΑΜΕΙΑ ΕΠΑΡΧΙΑΚΗΣ ΟΔΟΠΟΙΙΑΣ
'1519': ΤΙΜΟΛΟΓΙΑ Ο.Τ.Ε - ΚΟΣΤΟΛΟΓΗΣΗ ΥΠΗΡΕΣΙΩΝ Ο.Τ.Ε
'1520': ΕΘΝΙΚΗ ΒΙΒΛΙΟΘΗΚΗ
'1521': ΔΗΜΟΣΙΕΣ ΣΧΟΛΕΣ ΥΠΟΜΗΧΑΝΙΚΩΝ
'1522': ΑΝΑΦΟΡΕΣ ΠΡΟΣ ΤΙΣ ΑΡΧΕΣ
'1523': ΚΡΑΤΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΛΕΩΦΟΡΕΙΑΚΩΝ ΓΡΑΜΜΩΝ
'1524': ΔΙΑΦΟΡΑ ΕΠΙΔΟΜΑΤΑ
'1525': ΙΔΙΩΤΙΚΗ ΑΕΡΟΠΟΡΙΑ – ΑΕΡΟΛΕΣΧΕΣ
'1526': ΤΜΗΜΑ ΔΙΟΙΚΗΤΙΚΗΣ ΤΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
'1527': ΔΙΕΘΝΕΙΣ ΑΕΡΟΠΟΡΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'1528': ΠΡΟΙΚΟΔΟΤΗΣΕΙΣ ΕΞ ΕΘΝΙΚΩΝ ΓΑΙΩΝ
'1529': ΔΙΟΡΘΩΣΗ ΑΣΥΜΦΩΝΙΩΝ
'1530': ΕΠΙΤΡΟΠΗ ΔΙΟΙΚΗΣΕΩΣ
'1531': ΜΕΤΑ ΤΗΣ ΓΕΡΜΑΝΙΑΣ
'1532': ΟΙΚΟΔΟΜΙΚΟΙ ΣΥΝΕΤΑΙΡΙΣΜΟΙ
'1533': ΚΑΤΑΣΤΑΤΙΚΟΙ ΝΟΜΟΙ
'1534': ΑΞΙΩΜΑΤΙΚΟΙ ΓΡΑΦΕΙΟΥ
'1535': ΚΑΝΟΝΙΣΜΟΣ ΕΝΑΕΡΙΟΥ ΚΥΚΛΟΦΟΡΙΑΣ
'1536': ΔΙΑΧΕΙΡΙΣΗ ΚΑΥΣΙΜΩΝ
'1537': ΟΜΟΛΟΓΙΑΚΑ ΔΑΝΕΙΑ
'1538': ΕΡΓΑ
'1539': ΣΧΟΛΗ ΝΑΥΤΙΚΩΝ ΔΟΚΙΜΩΝ
'1540': ΠΩΛΗΣΗ ΦΑΡΜΑΚΩΝ ΑΠΟ ΙΑΤΡΟΥΣ
'1541': ΣΗΜΑΤΑ ΕΘΝΙΚΟΤΗΤΑΣ ΚΑΙ ΝΗΟΛΟΓΗΣΕΩΣ
'1542': ΛΕΙΤΟΥΡΓΟΙ ΣΤΟΙΧΕΙΩΔΟΥΣ
'1543': ΕΦΕΤΕΙΑ ΚΑΙ ΠΡΩΤΟΔΙΚΕΙΑ
'1544': ΥΠΟΥΡΓΕΙΟ ΠΡΟΕΔΡΙΑΣ ΚΥΒΕΡΝΗΣΕΩΣ
'1545': ΜΟΡΦΩΤΙΚΟΣ – ΚΙΝΗΜΑΤΟΓΡΑΦΟΣ
'1546': ΚΑΤΑΜΕΤΡΗΣΗ ΧΩΡΗΤΙΚΟΤΗΤΑΣ
'1547': ΦΩΤΑΕΡΙΟ
'1548': ΠΑΘΗΤΙΚΗ ΑΕΡΑΜΥΝΑ
'1549': ΠΡΟΣΩΠΙΚΟ ΝΟΣΗΛΕΥΤΙΚΩΝ ΙΔΡΥΜΑΤΩΝ
'1550': ΜΕ ΤΗΝ ΚΥΠΡΟ
'1551': ΚΟΛΛΗΓΟΙ (ΕΠΙΜΟΡΤΟΙ ΚΑΛΛΙΕΡΓΗΤΕΣ)
'1552': ΤΑΜΕΙΟ ΑΡΩΓΗΣ Λ.Σ
'1553': ΙΧΘΥΟΣΚΑΛΕΣ
'1554': ΣΧΗΜΑ ΚΑΙ ΤΙΜΗ ΠΩΛΗΣΗΣ ΕΦΗΜΕΡΙΔΩΝ
'1555': ΥΙΟΘΕΣΙΑ
'1556': ΕΚΤΕΛΕΣΗ ΕΡΓΩΝ ΑΡΜΟΔΙΟΤΗΤΑΣ ΕΚΚΛΗΣΙΑΣ
'1557': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ
'1558': ΔΙΑΦΟΡΕΣ ΕΥΡΩΠΑΙΚΕΣ ΣΥΜΦΩΝΙΕΣ
'1559': ΕΓΓΕΙΟΣ ΦΟΡΟΛΟΓΙΑ
'1560': ΠΑΙΔΑΓΩΓΙΚΕΣ ΑΚΑΔΗΜΙΕΣ
'1561': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΕΡΓΑΤΟΥΠΑΛΛΗΛΩΝ ΜΕΤΑΛΛΟΥ (ΤΑ.Π.Ε.Μ.)
'1562': ΤΕΧΝΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΑΕΡΟΣΚΑΦΩΝ
'1563': ΕΝΩΣΗ ΑΠΟΣΤΡΑΤΩΝ ΑΞΙΩΜΑΤΙΚΩΝ Β.Α
'1564': ΑΣΦΑΛΙΣΗ ΕΡΓΑΤΩΝ ΓΕΩΡΓΙΑΣ
'1565': ΟΡΓΑΝΩΣΗ ΚΑΛΛΙΤΕΧΝΙΚΩΝ ΕΚΔΗΛΩΣΕΩΝ-ΦΕΣΤΙΒΑΛ
'1566': ΠΕΡΙΟΥΣΙΑΚΕΣ ΣΥΝΕΠΕΙΕΣ ΤΗΣ ΠΟΙΝΗΣ
'1567': ΤΗΛΕΓΡΑΦΙΚΗ ΑΝΤΑΠΟΚΡΙΣΗ
'1568': ΕΠΙΘΕΩΡΗΣΗ ΔΗΜΟΣΙΩΝ ΥΠΟΛΟΓΩΝ
'1569': ΜΕ ΤΟΝ ΚΑΝΑΔΑ
'1570': ΑΛΛΗΛΟΓΡΑΦΙΑ Υ.Ε.Ν
'1571': ΤΕΧΝΙΚΟ ΠΡΟΣΩΠΙΚΟ ΑΕΡΟΠΟΡΙΑΣ
'1572': ΚΛΑΔΟΣ ΑΥΤΟΤΕΛΩΣ ΑΠΑΣΧΟΛΟΥΜΕΝΩΝ, ΕΛΕΥΘΕΡΩΝ ΚΑΙ ΑΝΕΞΑΡΤΗΤΩΝ
'1573': ΣΧΟΛΕΙΑ ΒΑΡΥΚΟΩΝ Η ΚΩΦΩΝ
'1574': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΚΑΤΩΤΕΡΩΝ ΠΛΗΡΩΜΑΤΩΝ Ε.Ν
'1575': ΤΟΥΡΙΣΤΙΚΑ ΠΛΟΙΑ - ΣΚΑΦΗ ΑΝΑΨΥΧΗΣ - ΤΟΥΡΙΣΤΙΚΟΙ ΛΙΜΕΝΕΣ (ΜΑΡΙΝΕΣ)
'1576': ΕΠΙΔΟΜΑΤΑ ΕΟΡΤΩΝ ΧΡΙΣΤΟΥΓΕΝΝΩΝ ΚΑΙ ΠΑΣΧΑ
'1577': ΕΠΙΜΕΛΗΤΗΡΙΑ - ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ
'1578': ΥΠΟΥΡΓΕΙΟ ΕΡΕΥΝΑΣ ΚΑΙ ΤΕΧΝΟΛΟΓΙΑΣ
'1579': ΣΤΕΓΑΣΗ ΑΞΙΩΜΑΤΙΚΩΝ
'1580': ΠΑΡΑΡΤΗΜΑΤΑ ΓΕΝΙΚΟΥ ΧΗΜΕΙΟΥ
'1581': ΚΑΘΑΡΙΣΤΡΙΕΣ
'1582': ΚΑΝΟΝΙΣΜΟΣ ΝΑΥΤΟΔΙΚΕΙΟΥ
'1583': ΑΜΟΙΒΕΣ ΜΗΧΑΝΙΚΩΝ
'1584': ΕΠΙΜΟΡΦΩΣΗ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ
'1585': ΚΑΝΟΝΙΣΜΟΙ ΕΠΙΒΑΤΗΓΩΝ ΠΛΟΙΩΝ
'1586': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΙΑΣ ΕΛΛ. ΚΑΛΥΚΟΠΟΙΕΙΟΥ-ΠΥΡΙΤΙΔΟΠΟΙΕΙΟΥ
'1587': ΠΡΟΣΩΠΙΚΟ ΤΡΑΠΕΖΩΝ
'1588': ΛΥΣΣΙΑΤΡΕΙΑ
'1589': ΣΥΝΟΡΙΑΚΕΣ ΥΓΕΙΟΝΟΜΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'1590': ΠΟΛΕΜΙΚΟ ΜΟΥΣΕΙΟ
'1591': ΚΑΘΗΚΟΝΤΑ ΤΕΛΩΝΕΙΑΚΩΝ ΥΠΑΛΛΗΛΩΝ
'1592': ΕΠΕΚΤΑΣΗ ΤΗΣ ΑΣΦΑΛΙΣΕΩΣ
'1593': ΦΟΡΟΛΟΓΙΚΕΣ ΑΠΑΛΛΑΓΕΣ
'1594': ΕΠΙΔΟΜΑ ΣΤΡΑΤΕΥΣΗΣ
'1595': ΔΙΑΡΚΗ ΣΤΡΑΤΟΔΙΚΕΙΑ
'1596': ΣΥΝΤΑΞΙΟΔΟΤΗΣΗ ΠΡΟΣΩΠΙΚΟΥ Ο.Γ.Α
'1597': ΑΣΤΥΝΟΜΙΑ ΕΜΠΟΡΙΚΗΣ ΝΑΥΤΙΛΙΑΣ
'1598': ΦΡΟΝΤΙΣΤΕΣ ΜΟΝΑΔΩΝ
'1599': ΑΡΑΒΟΣΙΤΟΣ
'1600': ΜΗΤΡΟΠΟΛΕΙΣ
'1601': ΦΙΛΑΝΘΡΩΠΙΚΑ ΣΩΜΑΤΕΙΑ
'1602': ΔΙΑΦΟΡΟΙ ΠΟΛΥΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ
'1603': ΕΞΥΓΙΑΝΤΙΚΑ ΕΡΓΑ
'1604': ΦΥΛΛΑ ΠΟΙΟΤΗΤΑΣ ΝΑΥΤΩΝ
'1605': ΦΙΛΑΝΘΡΩΠΙΚΑ ΙΔΡΥΜΑΤΑ ΚΑΙ ΣΩΜΑΤΕΙΑ
'1606': ΕΣΤΙΑ ΝΑΥΤΙΚΩΝ
'1607': ΓΛΥΚΑ ΚΑΙ ΚΟΝΣΕΡΒΕΣ
'1608': ΠΡΟΣΤΑΣΙΑ ΥΠΟΒΡΥΧΙΩΝ ΚΑΛΩΔΙΩΝ
'1609': ΕΠΕΞΕΡΓΑΣΙΑ ΚΑΙ ΕΜΠΟΡΙΑ ΣΥΚΩΝ
'1610': ΧΑΡΟΚΟΠΕΙΟ
'1611': ΔΙΑΜΕΤΑΚΟΜΙΣΗ ΣΤΗΝ ΑΛΒΑΝΙΑ
'1612': ΕΠΙΘΕΩΡΗΣΗ ΦΥΛΑΚΩΝ
'1613': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΠΕΡΙ ΚΥΡΙΑΚΗΣ ΑΡΓΙΑΣ
'1614': ΚΙΝΗΜΑΤΟΓΡΑΦΙΚΗ ΒΙΟΜΗΧΑΝΙΑ
'1615': ΠΙΣΤΟΠΟΙΗΤΙΚΑ ΠΡΟΕΛΕΥΣΕΩΣ
'1616': ΤΟΥΡΙΣΤΙΚΗ ΠΡΟΠΑΓΑΝΔΑ
'1617': ΕΙΣΦΟΡΑ ΕΙΣΑΓΩΓΕΩΝ
'1618': ΚΑΖΙΝΟ
'1619': ΜΕ ΤΗΝ ΕΛΒΕΤΙΑ
'1620': ΔΙΚΑΣΤΙΚΟΙ ΕΠΙΜΕΛΗΤΕΣ
'1621': ΚΩΔΙΚΑΣ ΠΟΙΝΙΚΗΣ ΔΙΚΟΝΟΜΙΑΣ
'1622': ΤΟΠΙΚΕΣ ΔΙΟΙΚΗΤΙΚΕΣ ΕΠΙΤΡΟΠΕΣ
'1623': ΕΤΑΙΡΕΙΕΣ ΚΕΦΑΛΑΙΟΠΟΙΗΣΕΩΣ
'1624': ΟΡΥΖΑ
'1625': ΔΙΟΙΚΗΤΙΚΟ ΣΥΜΒΟΥΛΙΟ Ο.Γ.Α
'1626': ΕΚΠΑΙΔΕΥΤΙΚΟ ΠΡΟΣΩΠΙΚΟ ΣΧΟΛΩΝ Π.Ν
'1627': ΒΑΣΙΛΕΙΑ ΚΑΙ ΑΝΤΙΒΑΣΙΛΕΙΑ
'1628': ΥΠΗΡΕΣΙΑ ΣΤΙΣ ΕΠΑΡΧΙΕΣ Τ.Π. ΚΑΙ Δ
'1629': ΓΕΩΡΓΙΚΕΣ ΒΙΟΜΗΧΑΝΙΕΣ
'1630': ΒΟΥΛΕΥΤΗΡΙΟ
'1631': ΠΟΡΘΜΕΙΑ
'1632': ΕΚΤΕΛΕΣΗ ΥΔΡΑΥΛΙΚΩΝ ΕΡΓΩΝ
'1633': ΙΝΣΤΙΤΟΥΤΑ ΚΡΗΤΙΚΟΥ ΔΙΚΑΙΟΥ - ΑΙΓΑΙΟΥ ΚΑΙ ΔΙΑΦΟΡΑ ΕΡΕΥΝΗΤΙΚΑ ΚΕΝΤΡΑ
'1634': ΑΤΕΛΕΙΕΣ ΔΙΑΦΟΡΕΣ
'1635': ΚΕΝΤΡΑ ΠΑΡΑΘΕΡΙΣΜΟΥ -
'1636': ΣΧΟΛΕΣ ΑΕΡΟΠΟΡΙΑΣ
'1637': ΛΕΠΡΑ
'1638': ΑΙΣΘΗΤΙΚΟΙ
'1639': ΕΚΚΑΘΑΡΙΣΗ ΠΟΙΝΙΚΩΝ ΕΞΟΔΩΝ
'1640': ΓΕΝ. ΟΙΚΟΔΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'1641': ΕΛΕΓΧΟΣ ΔΑΠΑΝΩΝ ΤΟΥ ΚΡΑΤΟΥΣ
'1642': ΠΕΤΡΕΛΑΙΟΚΙΝΗΤΑ ΚΑΙ ΙΣΤΙΟΦΟΡΑ
'1643': ΚΑΛΛΙΕΡΓΕΙΑ ΚΑΠΝΟΥ
'1644': ΔΙΟΙΚΗΣΗ ΜΟΝΑΣΤΗΡΙΩΝ
'1645': ΚΤΗΝΙΑΤΡΙΚΑ ΙΔΙΟΣΚΕΥΑΣΜΑΤΑ
'1646': ΜΟΝΙΜΟΙ ΚΑΙ ΕΘΕΛΟΝΤΕΣ
'1647': ΦΟΡΟΛΟΓΙΑ ΚΕΡΔΩΝ ΕΙΣΑΓΩΓΕΩΝ
'1648': ΑΓΩΓΕΣ ΕΞΩΣΕΩΣ ΜΙΣΘΩΤΩΝ
'1649': ΟΡΓΑΝΩΣΗ ΕΞΩΤΕΡΙΚΟΥ ΕΜΠΟΡΙΟΥ
'1650': ΑΓΩΓΕΣ ΜΗΧΑΝΙΚΩΝ
'1651': ΝΑΥΤΙΚΗ ΣΧΟΛΗ ΠΟΛΕΜΟΥ
'1652': ΜΕΤΑΦΟΡΑ ΘΕΣΕΩΝ
'1653': ΕΙΣΑΓΩΓΗ ΕΠΑΓΓΕΛΜΑΤΙΚΟΥ ΥΛΙΚΟΥ
'1654': ΣΥΓΚΡΟΤΗΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ
'1655': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΑΕΡΟΠΟΡΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
(T.Ε.Α.Π.Α.Ε.)
'1656': ΣΥΛΛΟΓΗ ΚΑΙ ΔΙΑΚΙΝΗΣΗ ΠΕΤΡΕΛΑΙΟΕΙΔΩΝ ΕΡΜΑΤΩΝ
'1657': ΚΕΝΤΡΑ ΑΔΥΝΑΤΙΣΜΑΤΟΣ – ΔΙΑΙΤΟΛΟΓΙΑΣ
'1658': ΟΜΑΔΙΚΗ ΚΑΤΑΓΓΕΛΙΑ ΣΥΜΒΑΣΕΩΣ ΕΡΓΑΣΙΑΣ
'1659': ΔΙΑΦΟΡΑ ΜΟΥΣΕΙΑ
'1660': ΒΕΒΑΙΩΣΗ ΚΑΙ ΕΙΣΠΡΑΞΗ ΕΣΟΔΩΝ
'1661': ΓΡΑΦΕΙΑ ΤΥΠΟΥ
'1662': ΔΙΟΙΚΗΤΙΚΟ ΠΡΟΣΩΠΙΚΟ
'1663': ΣΥΝΕΡΓΕΙΑ ΕΠΙΣΚΕΥΩΝ
'1664': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΠΡΟΝΟΙΑΣ ΚΑΙ ΑΣΘΕΝΕΙΑΣ ΕΡΓΑΖΟΜΕΝΩΝ
ΣΤΑ ΛΙΜΑΝΙΑ (Τ.Ε.Α.Π.Α.Ε.Λ.)
'1665': ΑΣΦΑΛΙΣΗ ΚΑΠΝΕΡΓΑΤΩΝ
'1666': ΑΝΤΙΣΗΚΩΜΑΤΑ (ΕΞΑΓΟΡΑ ΘΗΤΕΙΑΣ)
'1667': ΡΥΜΟΥΛΚΟΥΜΕΝΑ ΟΧΗΜΑΤΑ
'1668': ΝΟΜΟΙ ΑΝΑΦΕΡΟΜΕΝΟΙ ΣΕ ΠΟΛΛΕΣ ΦΟΡΟΛΟΓΙΕΣ
'1669': ΟΙΚΟΣΥΣΤΗΜΑΤΑ–ΒΙΟΤΟΠΟΙ
'1670': ΠΡΟΣΤΑΣΙΑ ΠΡΟΣΩΠΩΝ
'1671': ΕΘΝΙΚΟ ΤΥΠΟΓΡΑΦΕΙΟ
'1672': ΔΙΚΑΣΤΙΚΑ ΚΑΤΑΣΤΗΜΑΤΑ
'1673': ΠΡΟΣΤΑΣΙΑ ΒΙΒΛΙΟΥ-ΕΘΝΙΚΟ ΚΕΝΤΡΟ ΒΙΒΛΙΟΥ-ΛΟΓΟΤΕΧΝΙΑ
'1674': ΔΑΣΜΟΙ ΑΝΤΙΝΤΑΜΠΙΓΚ
'1675': ΔΑΣΗ ΠΑΡΑΜΕΘΟΡΙΩΝ ΠΕΡΙΟΧΩΝ
'1676': ΘΕΟΛΟΓΙΚΗ ΣΧΟΛΗ
'1677': ΟΡΟΙ - ΠΡΟΔΙΑΓΡΑΦΕΣ ΤΥΠΟΠΟΙΗΣΗΣ
'1678': ΦΟΡΟΛΟΓΙΑ ΒΥΝΗΣ ΚΑΙ ΖΥΘΟΥ
'1679': ΑΠΟΘΗΚΗ ΚΤΗΝΙΑΤΡΙΚΩΝ ΕΦΟΔΙΩΝ
'1680': ΠΑΡΟΧΗ ΤΗΛΕΦΩΝΙΚΩΝ ΣΥΝΔΕΣΕΩΝ
'1681': ΠΑΡΑΧΩΡΗΣΗ ΙΑΜΑΤΙΚΩΝ ΠΗΓΩΝ
'1682': ΜΑΘΗΤΙΚΑ ΣΥΣΣΙΤΙΑ
'1683': ΠΡΟΣΛΗΨΗ ΕΦΕΔΡΩΝ, ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ, ΠΟΛΥΤΕΚΝΩΝ ΚΑΙ ΑΛΛΩΝ ΑΤΟΜΩΝ
ΜΕ ΕΙΔΙΚΕΣ ΑΝΑΓΚΕΣ
'1684': ΕΡΤ – 3
'1685': ΣΧΟΛΗ ΠΟΛΕΜΟΥ ΑΕΡΟΠΟΡΙΑΣ
'1686': ΤΟΠΟΘΕΤΗΣΕΙΣ - ΜΕΤΑΤΑΞΕΙΣ
'1687': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΠΡΟΣΤΑΣΙΑΣ
'1688': ΦΥΣΙΚΟ ΑΕΡΙΟ
'1689': ΤΕΧΝΙΚΑ ΕΡΓΑ
'1690': ΔΙΠΛΩΜΑΤΟΥΧΟΙ ΑΝΩΤΑΤΩΝ
'1691': ΕΘΝΙΚΟ ΝΟΜΙΣΜΑΤΙΚΟ ΜΟΥΣΕΙΟ
'1692': ΟΙΚΟΝΟΜΙΚΗ ΑΣΤΥΝΟΜΙΑ ΣΤΗ ΘΑΛΑΣΣΑ
'1693': ΑΣΦΑΛΕΙΑ, ΛΕΙΤΟΥΡΓΙΑ ΚΑΙ ΕΚΜΕΤΑΛΛΕΥΣΗ
'1694': ΕΙΔΙΚΑ ΠΡΟΝΟΜΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ
'1695': ΓΡΑΜΜΑΤΕΙΑ ΤΩΝ ΔΙΚΑΣΤΗΡΙΩΝ ΚΑΙ ΕΙΣΑΓΓΕΛΙΩΝ
'1696': ΑΛΙΠΑΣΤΑ
'1697': ΕΠΙΔΟΣΗ ΔΙΚΟΓΡΑΦΩΝ
'1698': ΚΕΝΤΡΙΚΟ ΤΑΜΕΙΟ ΓΕΩΡΓΙΑΣ
'1699': ΣΤΡΑΤΙΩΤΙΚΑ ΣΥΜΒΟΥΛΙΑ
'1700': ΤΑΜΕΙΑΚΗ ΥΠΗΡΕΣΙΑ ΤΕΛΩΝΕΙΩΝ
'1701': ΝΟΣΗΛΕΥΤΙΚΟ ΙΔΡΥΜΑ Μ.Τ.Σ
'1702': ΔΙΚΑΙΟ ΘΑΛΑΣΣΑΣ-ΥΦΑΛΟΚΡΗΠΙΔΑ
'1703': ΕΙΔΙΚΟΣ ΦΟΡΟΣ ΚΑΤΑΝΑΛΩΣΗΣ
'1704': ΜΕΙΟΝΟΤΙΚΑ ΣΧΟΛΕΙΑ
'1705': ΓΡΑΦΕΙΑ ΕΜΠΟΡΙΚΩΝ ΠΛΗΡΟΦΟΡΙΩΝ
'1706': ΣΥΝΤΟΝΙΣΤΙΚΟΝ ΣΥΜΒΟΥΛΙΟΝ ΝΕΩΝ ΠΡΟΣΦΥΓΩΝ
'1707': ΠΕΡΙΘΑΛΨΗ ΑΠΟΡΩΝ ΚΑΙ ΑΝΑΣΦΑΛΙΣΤΩΝ
'1708': ΦΟΡΟΛΟΓΙΑ ΚΕΝΤΡΩΝ ΔΙΑΣΚΕΔΑΣΕΩΣ ΚΑΙ ΠΟΛΥΤΕΛΕΙΑΣ
'1709': ΣΠΟΓΓΑΛΙΕΥΤΙΚΑ – ΔΥΤΕΣ
'1710': ΔΙΕΘΝΕΣ ΝΟΜΙΣΜΑΤΙΚΟ ΤΑΜΕΙΟ
'1711': ΒΙΒΛΙΟ ΔΙΕΚΔΙΚΗΣΕΩΝ
'1712': ΕΓΚΑΤΑΣΤΑΣΗ - ΛΕΙΤΟΥΡΓΙΑ ΚΑΤΑΣΚΕΥΩΝ ΚΕΡΑΙΩΝ
'1713': ΕΝΩΣΗ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'1714': ΛΟΓΙΣΤΙΚΟΣ ΚΑΙ ΟΙΚΟΝΟΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'1715': ΚΑΤΩΤΕΡΑ ΟΡΓΑΝΑ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ
'1716': ΥΠΟΥΡΓΕΙΟ ΕΜΠΟΡΙΚΗΣ ΝΑΥΤΙΛΙΑΣ
'1717': ΟΡΓΑΝΙΣΜΟΣ ΕΛΕΓΚΤΙΚΟΥ ΣΥΝΕΔΡΙΟΥ
'1718': ΑΓΟΡΕΣ ΑΓΡΟΤΙΚΩΝ ΠΡΟΙΟΝΤΩΝ
'1719': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ ΚΛΩΣΤΟΥΦΑΝΤΟΥΡΓΙΑΣ
'1720': ΞΕΝΑΓΟΙ ΚΑΙ ΔΙΕΡΜΗΝΕΙΣ
'1721': ΠΟΛΕΜΙΚΕΣ ΣΥΝΤΑΞΕΙΣ
'1722': ΑΣΤΙΚΕΣ ΣΥΓΚΟΙΝΩΝΙΕΣ ΑΘΗΝΩΝ-ΠΕΙΡΑΙΩΣ ΚΑΙ ΠΕΡΙΧΩΡΩΝ-Ο.Α.Σ.Α
'1723': ΚΑΤΑΣΤΑΤΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΤΑΜΕΙΟΥ ΑΣΦΑΛΙΣΕΩΣ ΑΡΤΕΡΓΑΤΩΝ Κ.Λ.Π
'1724': ΑΤΥΧΗΜΑΤΑ ΣΕ ΜΕΤΑΛΛΕΙΑ ΚΛΠ
'1725': ΦΟΡΟΛΟΓΙΑ ΠΟΛΕΜΙΚΩΝ ΚΕΡΔΩΝ
'1726': ΣΧΕΔΙΟ ΠΟΛΕΩΣ ΘΕΣΣΑΛΟΝΙΚΗΣ
'1727': ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ ΑΓΡΟΤ. ΑΣΦΑΛΕΙΑΣ
'1728': ΚΡΑΤΙΚΟ ΩΔΕΙΟ ΘΕΣΣΑΛΟΝΙΚΗΣ
'1729': ΚΕΝΤΡΑ ΑΝΩΤΕΡΗΣ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ (Κ.A.Τ.Ε.)
'1730': ΤΗΛΕΦΩΝΙΚΗ ΑΝΤΑΠΟΚΡΙΣΗ
'1731': ΟΙΚΟΝΟΜΙΚΑ ΓΥΜΝΑΣΙΑ
'1732': ΒΙΒΛΙΑ ΚΑΙ ΕΥΡΕΤΗΡΙΑ ΣΥΝΕΤΑΙΡΙΣΜΩΝ
'1733': ΕΠΙΔΟΜΑ ΑΝΕΡΓΙΑΣ
'1734': ΕΓΓΡΑΦΕΣ, ΕΞΕΤΑΣΕΙΣ, ΠΡΟΓΡΑΜΜΑΤΑ ΚΛΠ
'1735': ΣΧΟΛΗ ΜΟΝΙΜΩΝ ΥΠΑΞΙΩΜΑΤΙΚΩΝ
'1736': ΕΚΚΛΗΣΙΑ ΑΜΕΡΙΚΗΣ
'1737': ΜΕΤΟΧΙΚΟ ΤΑΜΕΙΟ ΣΤΡΑΤΟΥ
'1738': ΝΟΣΗΛΕΙΑ
'1739': ΣΧΟΛΗ ΕΥΕΛΠΙΔΩΝ
'1740': ΥΠΟΥΡΓΕΙΟ ΕΡΓΑΣΙΑΣ ΚΑΙ ΚΟΙΝΩΝΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ
'1741': ΚΑΝΟΝΙΣΜΟΣ ΧΡΗΜΑΤΙΣΤΗΡΙΟΥ ΑΞΙΩΝ ΑΘΗΝΩΝ
'1742': ΑΝΤΙΣΕΙΣΜΙΚΟΣ ΚΑΝΟΝΙΣΜΟΣ
'1743': ΦΑΡΜΑΚΕΥΤΙΚΗ ΔΕΟΝΤΟΛΟΓΙΑ
'1744': ΦΟΡΟΛΟΓΙΑ ΕΛΑΙΩΔΩΝ ΠΡΟΙΟΝΤΩΝ
'1745': ΕΙΔΙΚΑ ΡΑΔΙΟΤΗΛΕΦΩΝΙΚΑ ΔΙΚΤΥΑ
'1746': ΤΕΧΝΙΚΕΣ ΥΠΗΡΕΣΙΕΣ
'1747': ΑΡΧΕΙΑ ΥΓΙΕΙΝΗΣ
'1748': ΟΔΟΙΠΟΡΙΚΑ ΚΑΙ ΑΠΟΖΗΜΙΩΣΕΙΣ ΑΠΟΣΤΟΛΩΝ ΕΞΩΤΕΡΙΚΟΥ
'1749': ΔΙΑΦΟΡΟΙ ΛΟΓΙΣΤΙΚΟΙ ΝΟΜΟΙ
'1750': ΕΚΚΛΗΣΙΑΣΤΙΚΟΙ ΥΠΑΛΛΗΛΟΙ
'1751': ΝΑΥΤΙΚΑ ΕΠΑΓΓΕΛΜΑΤΙΚΑ ΣΩΜΑΤΕΙΑ ΚΑΙ ΟΜΟΣΠΟΝΔΙΕΣ
'1752': ΤΕΛΗ ΧΡΗΣΗΣ ΑΕΡΟΛΙΜΕΝΩΝ
'1753': ΠΡΟΑΙΡΕΤΙΚΗ ΑΣΦΑΛΙΣΗ
'1754': ΜΕ ΤΗ ΛΙΒΥΗ
'1755': ΠΟΤΑΜΟΠΛΟΙΑ ΦΟΡΤΙΟΥ ΥΓΡΩΝ ΚΑΥΣΙΜΩΝ
'1756': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΩΝ ΗΛΕΚΤΡΙΚΩΝ ΣΙΔΗΡΟΔΡΟΜΩΝ ΑΘΗΝΩΝ-ΠΕΙΡΑΙΩΣ
(Τ.Σ.Π.-Η.Σ.Α.Π)
'1757': ΜΕΣΑΖΟΝΤΕΣ
'1758': ΣΤΡΑΤΙΩΤΙΚΟΣ ΠΟΙΝΙΚΟΣ
'1759': ΔΙΚΑΙΩΜΑΤΑ ΚΑΙ ΚΑΘΗΚΟΝΤΑ ΦΟΙΤΗΤΩΝ
'1760': ΠΡΟΕΔΡΙΑ ΔΗΜΟΚΡΑΤΙΑΣ
'1761': ΚΩΔΙΚΑΣ ΕΜΠΟΡΙΚΟΥ ΝΟΜΟΥ
'1762': ΣΥΝΤΑΞΙΟΔΟΤΗΣΗ Ο.Γ.Α
'1763': ΣΑΝΑΤΟΡΙΑ
'1764': ΕΛΕΓΧΟΣ ΕΜΠΟΡΙΟΥ ΕΙΔΩΝ ΠΡΩΤΗΣ ΑΝΑΓΚΗΣ
'1765': ΒΑΛΑΝΙΔΙΑ
'1766': ΠΟΛΥΤΕΧΝΙΚΗ ΣΧΟΛΗ ΠΑΝΕΠΙΣΤΗΜΙΟΥ ΠΑΤΡΩΝ
'1767': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΠΕΛΟΠΟΝΝΗΣΟΥ
'1768': ΔΙΕΘΝΗΣ ΟΡΓΑΝΙΣΜΟΣ ΧΡΗΜΑΤΟΔΟΤΗΣΕΩΣ
'1769': ΜΕΤΑΦΟΡΑ ΣΤΟ ΕΣΩΤΕΡΙΚΟ
'1770': ΙΣΤΟΡΙΚΟ ΑΡΧΕΙΟ ΥΔΡΑΣ
'1771': ΕΓΚΑΤΑΣΤΑΣΗ ΚΑΙ ΚΙΝΗΣΗ ΑΛΛΟΔΑΠΩΝ
'1772': ΣΧΟΛΗ ΤΕΧΝΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ ΑΞΙΩΜΑΤΙΚΩΝ
'1773': ΓΑΜΟΣ ΣΤΡΑΤΙΩΤΙΚΩΝ
'1774': ΑΠΑΓΟΡΕΥΣΗ ΕΞΟΔΟΥ ΟΦΕΙΛΕΤΩΝ
'1775': ΠΡΩΤΕΣ ΥΛΕΣ ΨΕΚΑΣΤΗΡΩΝ
'1776': ΦΙΛΕΚΠΑΙΔΕΥΤΙΚΗ ΕΤΑΙΡΕΙΑ
'1777': ΑΔΕΙΕΣ ΟΔΗΓΩΝ ΑΥΤΟΚΙΝΗΤΩΝ
'1778': ΕΘΝΙΚΗ ΠΙΝΑΚΟΘΗΚΗ ΚΑΙ ΜΟΥΣΕΙΟ ΑΛ. ΣΟΥΤΣΟΥ
'1779': ΤΑΧΥΔΡΟΜΙΚΑ ΔΕΜΑΤΑ
'1780': ΕΙΣΠΡΑΞΗ ΠΟΡΩΝ
'1781': ΟΡΓΑΝΩΣΗ ΚΑΙ ΛΕΙΤΟΥΡΓΙΑ ΤΕΧΝΙΚΩΝ ΣΧΟΛΩΝ
'1782': ΔΙΑΘΕΣΗ ΓΑΙΩΝ ΣΤΗ ΘΕΣΣΑΛΙΑ
'1783': ΔΙΑΚΡΙΣΗ ΑΣΦΑΛΙΣΜΕΝΩΝ
'1784': ΑΓΑΘΟΕΡΓΑ ΙΔΡΥΜΑΤΑ ΚΕΡΚΥΡΑΣ
'1785': ΥΠΑΙΘΡΙΟ-ΠΛΑΝΟΔΙΟ ΕΜΠΟΡΙΟ ΚΑΙ ΕΜΠΟΡΟΠΑΝΗΓΥΡΕΙΣ
'1786': ΕΞΑΓΩΓΙΚΑ ΤΕΛΗ
'1787': ΥΠΟΥΡΓΙΚΟ ΣΥΜΒΟΥΛΙΟ - ΟΡΓΑΝΩΣΗ ΥΠΟΥΡΓΕΙΩΝ - ΚΥΒΕΡΝΗΤΙΚΕΣ ΕΠΙΤΡΟΠΕΣ
'1788': ΑΥΤΟΚΙΝΗΤΑ ΚΑΙ ΑΜΑΞΙΔΙΑ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ
'1789': ΥΠΗΡΕΣΙΕΣ ΠΕΡΙΦΕΡΕΙΑΚΗΣ ΑΝΑΠΤΥΞΗΣ
'1790': ΔΙΑΤΙΜΗΣΗ ΦΑΡΜΑΚΩΝ
'1791': ΦΟΡΟΛΟΓΙΑ ΕΙΔΩΝ ΠΟΛΥΤΕΛΕΙΑΣ
'1792': ΝΑΥΤΙΚΗ ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ
'1793': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΩΝ ΠΕΤΡΕΛΑΙΟΕΙΔΩΝ
'1794': ΔΩΡΟ ΕΟΡΤΩΝ ΕΦΗΜΕΡΙΔΟΠΩΛΩΝ
'1795': ΔΙΕΥΚΟΛΥΝΣΕΙΣ ΓΙΑ ΤΗΝ ΑΝΟΙΚΟΔΟΜΗΣΗ
'1796': ΕΠΙΣΚΕΥΑΣΤΕΣ - ΣΥΝΕΡΓΕΙΑ ΕΠΙΣΚΕΥΗΣ ΑΥΤΟΚΙΝΗΤΩΝΟΔΙΚΗ ΒΟΗΘΕΙΑ ΟΧΗΜΑΤΩΝ
'1797': ΠΑΡΑΧΩΡΗΣΗ ΔΑΣΩΝ
'1798': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΑΣΘΕΝΕΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΤΡΑΠΕΖΩΝ ΠΙΣΤΕΩΣ, ΓΕΝΙΚΗΣ
ΚΑΙ ΑΜΕΡΙΚΑΝ ΕΞΠΡΕΣ
'1799': ΠΛΗΤΤΟΜΕΝΑ ΑΠΟ ΤΗΝ ΑΝΕΡΓΙΑ ΕΠΑΓΓΕΛΜΑΤΑ
'1800': ΤΑΜΕΙΑ Κ.Α.Τ.Ε
'1801': ΕΙΔΙΚΟΙ ΣΤΡΑΤΙΩΤΙΚΟΙ ΟΡΓΑΝΙΣΜΟΙ
'1802': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΙΟΝΙΚΗΣ ΚΑΙ ΛΑΙΚΗΣ ΤΡΑΠΕΖΑΣ (Τ.Α.Π.-
Ι.Λ.Τ.)
'1803': ΠΡΟΣΤΑΣΙΑ ΑΠΟ ΑΚΤΙΝΟΒΟΛΙΕΣ
'1804': ΚΡΑΤΙΚΟ ΘΕΑΤΡΟ Β. ΕΛΛΑΔΟΣ
'1805': ΥΓΕΙΟΝΟΜΙΚΟΣ ΕΛΕΓΧΟΣ ΦΟΙΤΗΤΩΝ
'1806': ΔΙΑΦΟΡΑ
'1807': ΤΕΛΩΝΕΙΑΚΗ ΥΠΗΡΕΣΙΑ ΣΙΔΗΡΟΔΡΟΜΩΝ
'1808': ΕΦΕΥΡΕΣΕΙΣ ΑΦΟΡΩΣΑΙ ΕΘΝ. ΑΜΥΝΑ
'1809': ΥΠΟΒΡΥΧΙΟΣ ΤΗΛΕΓΡΑΦΟΣ
'1810': ΑΔΕΙΕΣ ΟΙΚΟΔΟΜΗΣ ΞΕΝΟΔΟΧΕΙΩΝ
'1811': ΙΝΣΤΙΤΟΥΤΟ ΒΥΖΑΝΤΙΝΩΝ ΣΠΟΥΔΩΝ
'1812': ΣΧΟΛΗ ΓΕΩΤΕΧΝΙΚΩΝ ΕΠΙΣΤΗΜΩΝ ΠΑΝΜΙΟΥ ΘΕΣΝΙΚΗΣ
'1813': ΒΙΒΛΙΟΘΗΚΕΣ
'1814': ΤΑΜΕΙΑ ΑΝΕΓΕΡΣΕΩΣ ΔΙΔΑΚΤΗΡΙΩΝ
'1815': ΕΠΙΔΟΜΑ ΒΙΒΛΙΟΘΗΚΗΣ
'1816': ΚΑΤΑΣΤΗΜΑΤΑ ΑΦΟΡΟΛΟΓΗΤΩΝ ΕΙΔΩΝ
'1817': ΕΠΙΧΕΙΡΗΣΕΙΣ ΠΕΡΙΘΑΛΨΕΩΣ ΗΛΙΚΙΩΜΕΝΩΝ Η ΑΝΑΠΗΡΩΝ
'1818': ΛΙΜΕΝΙΚΟΙ ΣΤΑΘΜΟΙ
'1819': ΝΟΜΟΘΕΤΙΚΕΣ ΕΞΟΥΣΙΟΔΟΤΗΣΕΙΣ
'1820': ΘΑΛΑΜΟΙ ΡΑΔΙΟΙΣΟΤΟΠΩΝ
'1821': ΔΙΟΙΚΗΣΗ ΕΚΚΛΗΣΙΑΣΤΙΚΗΣ ΕΚΠΑΙΔΕΥΣΗΣ
'1822': ΑΠΑΓΟΡΕΥΜΕΝΕΣ ΚΑΙ
'1823': ΗΘΟΠΟΙΟΙ
'1824': ΣΥΜΒΑΣΕΙΣ ΠΕΡΙ ΔΙΕΘΝΩΝ ΕΚΘΕΣΕΩΝ
'1825': ΣΦΡΑΓΙΣΤΟΣ ΧΑΡΤΗΣ
'1826': ΕΤΑΙΡΕΙΕΣ ΔΙΑΧΕΙΡΙΖΟΜΕΝΕΣ ΔΗΜΟΣΙΑ ΣΥΜΦΕΡΟΝΤΑ
'1827': ΤΕΛΩΝΕΙΑΚΕΣ ΔΙΕΥΚΟΛΥΝΣΕΙΣ
'1828': ΔΕΞΑΜΕΝΟΠΛΟΙΑ
'1829': ΚΕΝΤΡΟ ΔΙΕΘΝΟΥΣ ΚΑΙ ΕΥΡΩΠΑΙΚΟΥ
'1830': ΕΠΙΒΑΤΗΓΑ ΜΕΣΟΓΕΙΑΚΑ ΚΑΙ ΤΟΥΡΙΣΤΙΚΑ ΠΛΟΙΑ
'1831': ΕΠΙΘΕΩΡΗΣΗ ΔΙΚΑΣΤΙΚΩΝ ΥΠΑΛΛΗΛΩΝ
'1832': ΚΑΝΟΝΙΣΜΟΣ ΘΕΑΤΡΩΝ ΚΙΝΗΜΑΤΟΓΡΑΦΩΝ ΚΛΠ
'1833': ΜΕΤΑΛΛΕΥΤΙΚΟΣ ΚΩΔΙΚΑΣ
'1834': ΚΑΤΑΣΤΑΤΙΚΟ Τ.Ε.Α.Α.Π.Α.Ε
'1835': ΠΑΝΕΠΙΣΤΗΜΙΑΚΗ ΛΕΣΧΗ
'1836': ΕΜΠΟΡΙΚΑ ΚΑΙ ΒΙΟΜΗΧΑΝΙΚΑ ΣΗΜΑΤΑ - (ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ)
'1837': ΕΠΙΔΟΜΑΤΑ ΑΠΟΛΥΟΜΕΝΩΝ ΟΠΛΙΤΩΝ ΩΣ ΑΝΙΚΑΝΩΝ
'1838': ΣΥΜΒΟΥΛΙΟ ΕΝΕΡΓΕΙΑΣ
'1839': ΣΧΟΛΗ ΝΟΜΙΚΩΝ,ΟΙΚΟΝΟΜΙΚΩΝ ΚΑΙ ΠΟΛΙΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ
'1840': ΠΡΟΠΛΗΡΩΜΕΣ ΚΑΙ ΠΡΟΚΑΤΑΒΟΛΕΣ
'1841': ΚΛΑΔΟΣ ΑΣΘΕΝΕΙΑΣ Τ.Ε.Β.Ε
'1842': ΔΙΑΝΟΜΗ ΓΑΙΩΝ ΚΩΠΑΙΔΑΣ
'1843': ΠΡΟΣΩΠΙΚΟ ΑΣΦΑΛΕΙΑΣ Ν.Π.Δ.Δ. - ΟΡΓΑΝΙΣΜΩΝ & ΕΠΙΧΕΙΡΗΣΕΩΝ
'1844': ΥΠΟΥΡΓΕΙΟ ΥΠΟΔΟΜΩΝ, ΜΕΤΑΦΟΡΩΝ ΚΑΙ ΔΙΚΤΥΩΝ
'1845': ΑΕΡΟΝΑΥΑΓΟΣΩΣΤΙΚΗ ΜΟΝΑΔΑ
'1846': ΚΟΥΡΕΙΑ, ΚΟΜΜΩΤΗΡΙΑ Κ.Λ.Π
'1847': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΔΙΚΑΣΤΙΚΩΝ ΕΠΙΜΕΛΗΤΩΝ
'1848': ΕΙΔΙΚΑ ΣΥΝΕΡΓΕΙΑ
'1849': ΚΑΤΕΨΥΓΜΕΝΑ ΚΡΕΑΤΑ
'1850': ΜΕΣΟΓΕΙΑΚΑ ΔΡΟΜΟΛΟΓΙΑ ΕΠΙΒΑΤΗΓΩΝ ΠΛΟΙΩΝ
'1851': ΣΥΓΚΡΟΤΗΣΗ ΠΡΟΣΩΠΙΚΟΥ ΑΕΡΟΠΟΡΙΑΣ
'1852': ΥΠΑΛΛΗΛΙΚΟΣ ΚΩΔΙΚΑΣ
'1853': ΓΕΝΙΚΕΣ ΔΙΑΤΑΞΕΙΣ ΠΕΡΙ ΦΑΡΜΑΚΕΙΩΝ
'1854': ΔΙΑΦΟΡΟΙ ΣΤΕΓΑΣΤΙΚΟΙ ΝΟΜΟΙ
'1855': ΥΠΟΥΡΓΕΙΟ ΣΥΝΤΟΝΙΣΜΟΥ
'1856': ΠΡΟΣΛΗΨΕΙΣ ΣΤΟ ΔΗΜΟΣΙΟ
'1857': ΤΑΜΕΙΟ ΕΠΙΚ. ΑΣΦΑΛ. ΠΡΟΣΩΠ. Ο.Ε.Α.Σ. ΚΑΙ ΥΠΑΛΛ. ΓΡΑΦΕΙΩΝ ΚΟΙΝΩΝ
ΤΑΜΕΙΩΝ ΙΔΙΩΤΙΚΩΝ ΛΕΩΦΟΡΕΙΩΝ
'1858': ΣΤΡΑΤΙΩΤΙΚΗ ΑΣΤΥΝΟΜΙΑ
'1859': ΝΟΜΙΣΜΑΤΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'1860': ΑΡΧΗ ΔΙΑΣΦΑΛΙΣΗΣ ΑΠΟΡΡΗΤΟΥ ΕΠΙΚΟΙΝΩΝΙΩΝ (Α.Δ.Α.Ε.)
'1861': ΣΤΡΑΤΙΩΤΙΚΑ ΣΥΝΕΡΓΕΙΑ
'1862': ΠΡΟΣΩΠΙΚΗ ΚΡΑΤΗΣΗ
'1863': ΕΦΗΜΕΡΙΔΑ ΤΗΣ ΚΥΒΕΡΝΗΣΕΩΣ
'1864': ΑΝΩΤΑΤΟ ΥΓΕΙΟΝΟΜΙΚΟ ΣΥΜΒΟΥΛΙΟ
'1865': ΓΡΑΜΜΑΤΕΙΣ ΣΤΡΑΤΟΔΙΚΕΙΩΝ
'1866': ΚΑΤΑΣΤΑΣΗ ΔΙΟΠΩΝ, ΝΑΥΤΩΝ ΚΑΙ ΝΑΥΤΟΠΑΙΔΩΝ
'1867': ΠΕΡΙΠΤΩΣΕΙΣ ΑΜΟΙΒΑΙΑΣ ΣΥΝΔΡΟΜΗΣ
'1868': ΥΠΟΝΟΜΟΙ ΠΡΩΤΕΥΟΥΣΑΣ
'1869': ΤΕΛΗ ΔΙΑΔΡΟΜΗΣ ΕΝΑΕΡΙΟΥ ΧΩΡΟΥ
'1870': ΥΓΕΙΟΝΟΜΙΚΑΙ ΕΠΙΤΡΟΠΑΙ
'1871': ΙΑΤΡΙΚΕΣ ΕΙΔΙΚΟΤΗΤΕΣ
'1872': ΕΡΤ – 2
'1873': ΕΚΤΕΛΕΣΗ ΕΡΓΩΝ Ο.Σ.Ε.ΚΑΙ ΣΥΝΔΕΔΕΜΕΝΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
'1874': ΓΕΩΡΓΙΚΕΣ ΣΧΟΛΕΣ
'1875': ΣΥΜΜΕΤΟΧΗ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΣΕ ΠΡΟΜΗΘΕΙΕΣ ΔΗΜΟΣΙΟΥ
'1876': ΔΙΚΑΙΩΜΑ ΧΟΡΤΟΝΟΜΗΣ
'1877': ΟΙΚΟΚΥΡΙΚΕΣ ΣΧΟΛΕΣ
'1878': ΚΕΝΤΡΑ ΥΓΕΙΑΣ-ΠΟΛΥΙΑΤΡΕΙΑ
'1879': ΔΙΚΑΣΤΗΡΙΟ ΣΥΝΔΙΑΛΛΑΓΗΣ ΚΑΙ ΔΙΑΙΤΗΣΙΑΣ
'1880': ΕΠΙΘΕΩΡΗΣΗ ΙΧΘΥΩΝ
'1881': ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΕΞΕΥΓΕΝΙΣΜΟΥ ΔΕΝΔΡΩΝ
'1882': ΦΟΙΤΗΤΕΣ
'1883': ΔΟΜΗΣΗ ΕΠΙ ΡΥΜΟΤΟΜΟΥΜΕΝΩΝ ΑΚΙΝΗΤΩΝ
'1884': ΑΠΑΣΧΟΛΗΣΗ - ΕΞΕΙΔΙΚΕΥΣΗ - ΚΑΤΑΡΤΙΣΗ ΑΝΕΡΓΩΝ
'1885': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΦΑΡΜΑΚΕΥΤΙΚΩΝ ΕΡΓΑΣΙΩΝ (Τ.Ε.Α.Υ.Φ.Ε.)
'1886': ΝΟΜΙΣΜΑΤΙΚΟ ΣΥΣΤΗΜΑ
'1887': ΑΠΟΓΡΑΦΗ ΝΑΥΤΙΚΩΝ
'1888': ΕΘΝΙΚΟ ΘΕΑΤΡΟ
'1889': ΥΠΗΡΕΣΙΑ ΕΠΙΣΤΗΜΟΝΙΚΗΣ ΄ΕΡΕΥΝΑΣ ΚΑΙ ΑΝΑΠΤΥΞΕΩΣ
'1890': ΠΑΡΟΧΕΣ ΑΣΤΥΝΟΜΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ
'1891': ΣΙΒΙΤΑΝΙΔΕΙΟΣ ΣΧΟΛΗ
'1892': ΣΤΡΑΤΙΩΤΙΚΗ ΙΑΤΡΙΚΗ ΣΧΟΛΗ
'1893': ΥΠΟΥΡΓΕΙΟ ΚΟΙΝΩΝΙΚΩΝ ΥΠΗΡΕΣΙΩΝ
'1894': ΑΠΑΓΟΡΕΥΣΗ ΑΠΑΛΛΟΤΡΙΩΣΗΣ ΠΛΟΙΩΝ
'1895': ΠΑΝΕΠΙΣΤΗΜΙΑΚΑ ΣΥΓΓΡΑΜΜΑΤΑ
'1896': ΜΟΥΣΟΥΛΜΑΝΟΙ
'1897': ΔΙΚΑΣΤΙΚΟΙ ΣΥΜΒΟΥΛΟΙ ΠΟΛΕΜΙΚΟΥ ΝΑΥΤΙΚΟΥ
'1898': ΑΕΡΟΠΟΡΙΚΑ ΕΡΓΑ ΚΑΙ ΠΡΟΜΗΘΕΙΕΣ
'1899': ΤΟΠΙΚΑ ΕΓΓΕΙΟΒΕΛΤΙΩΤΙΚΑ ΕΡΓΑ
'1900': ΦΟΡΟΛΟΓΙΑ ΖΩΩΝ
'1901': ΣΥΝΤΑΓΜΑ
'1902': ΝΟΜΟΙ ΠΕΡΙ ΧΡΗΜΑΤΙΣΤΗΡΙΟΥ - ΕΠΙΤΡΟΠΗ ΚΕΦΑΛΑΙΑΓΟΡΑΣ - ΧΡΗΜΑΤΙΣΤΗΡΙΑΚΗ
ΑΓΟΡΑ ΠΑΡΑΓΩΓΩΝ
'1903': ΓΕΩΤΡΗΣΕΙΣ
'1904': ΤΑΜΕΙΑ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΚΑΙ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΜΠΟΡΙΚΗΣ
ΤΡΑΠΕΖΑΣ ΕΛΛΑΔΑΣ (Τ.Ε.Α.Π.Ε.Τ.Ε ΚΑΙ Τ.Α.Π.Ε.Τ.Ε.)
'1905': ΕΦΕΔΡΟΙ ΑΕΡΟΠΟΡΙΑΣ
'1906': ΚΑΤ’ ΙΔΙΑΝ ΙΔΙΩΤΙΚΑ ΕΚΠΑΙΔΕΥΤΗΡΙΑ
'1907': ΣΧΟΛΗ ΝΟΜΙΚΩΝ ΚΑΙ ΟΙΚΟΝΟΜΙΚΩΝ ΕΠΙΣΤΗΜΩΝ
'1908': ΚΑΤΑΒΟΛΗ ΕΙΣΦΟΡΩΝ ΜΕ ΔΟΣΕΙΣ
'1909': ΠΑΛΑΙΟΤΕΡΕΣ ΑΕΡΟΠΟΡΙΚΕΣ ΕΤΑΙΡΕΙΕΣ
'1910': ΤΡΟΜΟΚΡΑΤΙΑ - ΟΡΓΑΝΩΜΕΝΗ
'1911': ΤΑΜΕΙΑ ΕΛΙΑΣ-ΔΑΚΟΚΤΟΝΙΑ
'1912': ΓΡΑΦΕΙΑ ΕΥΡΕΣΕΩΣ ΝΑΥΤΙΚΗΣ ΕΡΓΑΣΙΑΣ
'1913': ΑΡΤΟΠΟΙΕΙΑ
'1914': ΦΟΡΟΛΟΓΙΑ ΚΥΚΛΟΥ ΕΡΓΑΣΙΩΝ
'1915': ΣΥΝΑΛΛΑΓΜΑΤΙΚΗ ΚΑΙ ΓΡΑΜΜΑΤΙΟ ΣΕ ΔΙΑΤΑΓΗ
'1916': ΠΕΡΙΦΕΡΕΙΑΚΕΣ ΥΠΗΡΕΣΙΕΣ ΥΠΟΥΡΓΕΙΟΥ ΜΕΤΑΦΟΡΩΝ ΚΑΙ ΕΠΙΚΟΙΝΩΝΙΩΝ
'1917': ΕΛΛΗΝΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ ΤΟΥΡΙΣΜΟΥ
'1918': ΠΡΟΣΤΑΣΙΑ ΤΡΑΥΜΑΤΙΩΝ, ΑΙΧΜΑΛΩΤΩΝ ΚΑΙ ΑΜΑΧΟΥ ΠΛΗΘΥΣΜΟΥ
'1919': ΚΑΝΟΝΙΣΜΟΣ ΛΕΙΤΟΥΡΓΙΑΣ Τ.Ε.Β.Ε
'1920': ΣΤΕΓΑΣΗ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ
'1921': ΑΘΛΗΤΙΣΜΟΣ ΚΑΙ ΨΥΧΑΓΩΓΙΑ Π. ΝΑΥΤΙΚΟΥ
'1922': ΑΝΕΛΚΥΣΤΗΡΕΣ - ΑΝΥΨΩΤΙΚΑ ΜΕΣΑ ΚΑΙ ΜΗΧΑΝΗΜΑΤΑ
'1923': ΣΥΝΤΑΞΕΙΣ ΠΛΗΡΩΜΑΤΩΝ ΕΠΙΤΑΚΤΩΝ ΠΛΟΙΩΝ
'1924': ΔΙΚΑΙΩΜΑΤΑ ΥΠΕΡΗΜΕΡΙΑΣ
'1925': ΚΩΔΙΚΑΣ ΠΟΛΕΜΙΚΩΝ ΣΥΝΤΑΞΕΩΝ
'1926': ΚΑΠΝΟΣ
'1927': ΠΡΟΣΤΑΣΙΑ ΣΕΙΣΜΟΠΛΗΚΤΩΝ
'1928': ΑΠΟΣΤΡΑΤΕΙΕΣ ΚΑΙ ΑΠΟΚΑΤΑΣΤΑΣΕΙΣ
'1929': ΠΡΟΣΩΠΙΚΟ ΕΠΑΓΓΕΛΜΑΤΙΚΩΝ ΣΧΟΛΩΝ
'1930': ΔΙΕΘΝΕΙΣ ΣΥΜΒΑΣΕΙΣ ΓΙΑ ΤΗΝ ΠΡΟΣΤΑΣΙΑ ΤΩΝ ΕΡΓΑΖΟΜΕΝΩΝ ΑΝΗΛΙΚΩΝ
'1931': ΚΕΝΤΡΙΚΗ ΑΓΟΡΑ ΑΘΗΝΩΝ
'1932': ΕΝΙΣΧΥΣΗ ΕΛΑΙΟΠΑΡΑΓΩΓΗΣ
'1933': ΑΝΟΙΚΤΑ ΣΩΦΡΟΝΙΣΤΙΚΑ ΚΑΤΑΣΤΗΜΑΤΑ
'1934': ΦΙΛΑΝΘΡΩΠΙΚΑ ΙΔΡΥΜΑΤΑ ΖΑΚΥΝΘΟΥ
'1935': ΔΙΑΦΟΡΑ ΕΙΔΗ ΤΡΟΦΙΜΩΝ, ΠΟΤΩΝ & ΑΝΤΙΚΕΙΜΕΝΩΝ
'1936': ΦΟΡΟΛΟΓΙΑ ΕΠΙΧΕΙΡΗΣΕΩΝ ΤΥΠΟΥ
'1937': ΠΕΡΙΟΡΙΣΜΟΙ ΕΙΣΑΓΩΓΗΣ
'1938': ΠΡΟΣΩΡΙΝΗ ΕΙΣΔΟΧΗ ΕΜΠΟΡΕΥΜΑΤΩΝ
'1939': ΑΡΧΕΙΟ
'1940': ΔΙΥΛΙΣΤΗΡΙΑ ΠΕΤΡΕΛΑΙΟΥ
'1941': ΕΙΣΑΓΩΓΗ ΠΑΙΔΑΓΩΓΙΚΟΥ ΥΛΙΚΟΥ
'1942': ΕΠΙΘΕΩΡΗΣΗ ΚΛΗΡΟΔΟΤΗΜΑΤΩΝ
'1943': ΣΙΔΗΡΟΔΡΟΜΟΙ ΒΟΡΕΙΟΔΥΤΙΚΗΣ ΕΛΛΑΔΟΣ
'1944': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΕΡΓΑΤΟΤΕΧΝΙΤΩΝ ΔΟΜΙΚΩΝ ΚΑΙ ΞΥΛΟΥΡΓΙΚΩΝ
ΕΡΓΑΣΙΩΝ (Τ.Ε.Α.Ε.Δ.Ξ.Ε.)
'1945': ΤΑΜΕΙΑ ΠΡΟΝΟΙΑΣ ΣΤΙΣ ΠΡΕΣΒΕΙΕΣ
'1946': ΟΙΚΟΓΕΝΕΙΑΚΟΣ ΠΡΟΓΡΑΜΜΑΤΙΣΜΟΣ - ΥΓΕΙΑ ΠΑΙΔΙΟΥ
'1947': ΑΡΧΙΕΡΕΙΣ
'1948': ΣΥΜΒΟΥΛΙΑ ΥΠΟΥΡΓΕΙΟΥ ΔΙΚΑΙΟΣΥΝΗΣ
'1949': ΝΟΣΟΚΟΜΕΙΑΚΗ ΠΕΡΙΘΑΛΨΗ
'1950': ΚΑΤΑΣΤΗΜΑΤΑ ΠΩΛΗΣΕΩΣ ΟΙΝΟΠΝΕΥΜΑΤΩΔΩΝ ΠΟΤΩΝ ΚΑΙ ΚΕΝΤΡΑ ΔΙΑΣΚΕΔΑΣΕΩΣ
'1951': ΠΡΩΤΕΥΟΥΣΑ
'1952': ΠΟΛΥΤΕΧΝΕΙΟ ΚΡΗΤΗΣ
'1953': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΕΤΑΙΡΕΙΩΝ ΤΣΙΜΕΝΤΩΝ (Τ.Ε.Α.Π.Ε.Τ.)
'1954': ΕΛΛΗΝΙΚΟΣ ΤΑΠΗΤΟΥΡΓΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ
'1955': ΕΦΑΡΜΟΓΗ ΔΗΜΟΣΙΟΥΠΑΛΛΗΛΙΚΟΥ ΚΩΔΙΚΑ
'1956': ΗΛΕΚΤΡΟΛΟΓΙΚΟ ΕΡΓΑΣΤΗΡΙΟ
'1957': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΜΗΧΑΝΙΚΩΝ ΚΑΙ ΕΡΓΟΛΗΠΤΩΝ
'1958': ΜΕΣΙΤΕΣ ΑΣΤΙΚΩΝ ΣΥΜΒΑΣΕΩΝ
'1959': ΠΛΩΤΕΣ ΔΕΞΑΜΕΝΕΣ
'1960': ΚΑΝΟΝΙΣΜΟΙ ΦΟΡΤΩΣΕΩΝ
'1961': ΕΙΔΙΚΑ ΕΠΙΔΟΜΑΤΑ
'1962': ΠΟΙΝΙΚΟΣ ΚΩΔΙΚΑΣ
'1963': ΕΙΔΙΚΟΣ ΛΟΓΑΡΙΑΣΜΟΣ ΠΡΟΝΟΙΑΣ (Τ.Σ.Ε.Υ.Π.)
'1964': ΕΘΝΙΚΗ ΑΝΤΙΣΤΑΣΗ
'1965': ΟΡΓΑΝΙΣΜΟΣ ΒΙΟΜΗΧΑΝΙΚΗΣ ΑΝΑΠΤΥΞΗΣ
'1966': ΕΡΓΑ ΚΟΙΝΗΣ ΥΠΟΔΟΜΗΣ
'1967': ΔΙΕΥΘΥΝΣΗ TΕΛΩΝΕΙΩΝ ΠΕΙΡΑΙΑ
'1968': ΙΑΤΡΙΚΗ ΣΧΟΛΗ ΙΩΑΝΝΙΝΩΝ
'1969': ΖΩΟΚΛΟΠΗ ΚΑΙ ΖΩΟΚΤΟΝΙΑ
'1970': ΡΥΘΜΙΣΙΣ ΚΙΝΗΣΕΩΣ ΕΝ ΟΔΟΙΣ
'1971': ΕΤΑΙΡΕΙΕΣ ΠΡΟΣΤΑΣΙΑΣ ΚΡΑΤΟΥΜΕΝΩΝ - ΑΠΟΦΥΛΑΚΙΖΟΜΕΝΩΝ
'1972': ΔΑΣΙΚΗ ΔΙΕΥΘΕΤΗΣΗ ΧΕΙΜΑΡΡΩΝ
'1973': ΣΥΝΟΡΙΑΚΟΙ ΦΥΛΑΚΕΣ
'1974': ΣΧΟΛΗ ΘΕΤΙΚΩΝ ΕΠΙΣΤΗΜΩΝ ΠΑΝΜΙΟΥ ΙΩΑΝΝΙΝΩΝ
'1975': ΕΚΠΑΙΔΕΥΣΗ Π.ΝΑΥΤΙΚΟΥ
'1976': ΔΙΚΑΙΟΣΤΑΣΙΟ ΕΠΙΣΤΡΑΤΕΥΣΕΩΣ 1974
'1977': ΡΑΔΙΟΤΗΛΕΓΡΑΦΙΚΗ ΚΑΙ ΡΑΔΙΟΤΗΛΕΦΩΝΙΚΗ ΥΠΗΡΕΣΙΑ
'1978': ΦΑΡΜΑΚΑ-ΙΔΙΟΣΚΕΥΑΣΜΑΤΑ
'1979': ΣΥΝΤΕΛΕΣΤΕΣ ΚΕΡΔΟΥΣ ΕΠΑΓΓΕΛΜΑΤΙΩΝ
'1980': ΕΘΝΙΚΟ ΚΕΝΤΡΟ ΚΟΙΝΩΝΙΚΩΝ ΕΡΕΥΝΩΝ
'1981': ΚΕΦΑΛΑΙΟ ΝΑΥΤΙΚΗΣ ΕΚΠΑΙΔΕΥΣΕΩΣ
'1982': ΕΙΣΠΡΑΞΗ ΕΣΟΔΩΝ ΠΑΡΕΛΘΟΥΣΩΝ ΧΡΗΣΕΩΝ
'1983': ΟΡΓΑΝΙΣΜΟΣ ΗΝΩΜΕΝΩΝ ΕΘΝΩΝ
'1984': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΝΗΣΟΥ ΘΗΡΑΣ
'1985': ΚΕΝΤΡΙΚΗ ΑΓΟΡΑ ΘΕΣΣΑΛΟΝΙΚΗΣ
'1986': ΔΙΑΦΘΟΡΑ ΑΛΛΟΔΑΠΩΝ ΔΗΜΟΣΙΩΝ ΛΕΙΤΟΥΡΓΩΝ
'1987': ΓΕΩΠΟΝΙΚΟ ΠΑΝΕΠΙΣΤΗΜΙΟ ΑΘΗΝΩΝ
'1988': ΚΑΝΟΝΙΣΜΟΣ ΣΤΡΑΤΟΔΙΚΕΙΩΝ
'1989': ΔΙΑΦΟΡΕΣ ΥΓΕΙΟΝΟΜΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'1990': ΤΟΥΡΙΣΤΙΚΑ ΛΕΩΦΟΡΕΙΑ
'1991': ΔΑΝΕΙΑ ΑΠΟ ΕΚΔΟΤΙΚΕΣ ΤΡΑΠΕΖΕΣ
'1992': ΕΠΙΘΑΛΑΣΣΙΑ ΑΡΩΓΗ - ΡΥΜΟΥΛΚΗΣΗ ΠΛΟΙΩΝ
'1993': ΠΡΟΣΤΑΣΙΑ ΤΟΥ ΚΑΘΕΣΤΩΤΟΣ
'1994': ΣΥΜΒΑΣΕΙΣ ΠΕΡΙ ΥΛΙΚΟΥ ΕΥΗΜΕΡΙΑΣ ΝΑΥΤΙΛΛΟΜΕΝΩΝ
'1995': ΜΕΣΙΤΕΣ ΕΓΧΩΡΙΩΝ ΠΡΟΙΟΝΤΩΝ
'1996': ΚΡΑΤΙΚΗ ΟΡΧΗΣΤΡΑ ΑΘΗΝΩΝ
'1997': ΤΜΗΜΑΤΑ ΜΟΥΣΙΚΩΝ - ΘΕΑΤΡΙΚΩΝ ΣΠΟΥΔΩΝ ΚΑΙ ΕΠΙΚΟΙΝΩΝΙΑΣ - ΜΕΣΩΝ ΜΑΖΙΚΗΣ
ΕΝΗΜΕΡΩΣΗΣ
'1998': ΠΕΙΘΑΡΧΙΚΗ ΕΞΟΥΣΙΑ ΛΙΜΕΝΙΚΩΝ ΑΡΧΩΝ
'1999': ΙΝΣΤΙΤΟΥΤΟ ΑΜΥΝΤΙΚΩΝ ΑΝΑΛΥΣΕΩΝ (Ι.Α.Α.)
'2000': ΙΔΙΩΤΙΚΟΙ ΣΤΑΘΜΟΙ ΑΣΥΡΜΑΤΟΥ - ΧΡΗΣΗ ΡΑΔΙΟΣΥΧΝΟΤΗΤΩΝ
'2001': ΑΝΑΓΝΩΡΙΣΗ ΞΕΝΩΝ ΚΑΤΑΜΕΤΡΗΣΕΩΝ
'2002': ΓΕΝΟΚΤΟΝΙΑ
'2003': ΕΠΕΞΕΡΓΑΣΙΑ ΚΑΠΝΟΥ
'2004': ΣΥΜΒΟΥΛΙΟ ΕΠΙΚΡΑΤΕΙΑΣ
'2005': ΙΑΤΡΟΙ Ι.Κ.Α
'2006': ΥΠΟΘΗΚΗ
'2007': ΑΡΜΟΔΙΟΤΗΤΑ ΛΙΜΕΝΙΚΟΥ ΣΩΜΑΤΟΣ
'2008': ΕΙΣΑΓΩΓΕΣ ΓΙΑ ΕΚΘΕΣΕΙΣ, ΣΥΝΕΔΡΙΑ ΚΛΠ
'2009': ΕΥΡΩΠΑΙΚΗ ΤΡΑΠΕΖΑ ΑΝΑΣΥΓΚΡΟΤΗΣΗ-ΑΝΑΠΤΥΞΗ
'2010': ΑΕΡΟΔΡΟΜΙΟ ΣΠΑΤΩΝ
'2011': ΤΜΗΜΑ ΔΗΜΟΣΙΟΓΡΑΦΙΑΣ - ΜΕΣΩΝ ΜΑΖΙΚΗΣ ΕΠΙΚΟΙΝΩΝΙΑΣ
'2012': ΤΟΚΟΣ
'2013': ΕΝΙΣΧΥΣΗ ΠΟΛΕΜΟΠΑΘΩΝ ΚΛΠ. ΑΓΡΟΤΩΝ
'2014': ΕΞΟΔΑ ΚΗΔΕΙΑΣ ΣΤΡΑΤΙΩΤΙΚΩΝ
'2015': ΠΑΡΟΧΕΣ ΥΠΑΛΛΗΛΩΝ
'2016': ΠΡΟΣΤΑΣΙΑ ΣΙΤΟΠΑΡΑΓΩΓΗΣ
'2017': ΑΣΦΑΛΙΣΗ Ο.Γ.Α ΑΠΟ ΑΝΕΜΟΘΥΕΛΛΑ ΚΑΙ ΠΛΗΜΜΥΡΑ
'2018': ΔΙΕΥΘΥΝΣΗ ΚΑΤΑΣΚΕΥΩΝ ΚΑΙ ΕΞΟΠΛΙΣΜΟΥ
'2019': ΤΕΛΩΝΕΙΑΚΟΙ ΥΠΟΛΟΓΟΙ
'2020': ΓΕΝΙΚΗ ΓΡΑΜΜΑΤΕΙΑ ΑΘΛΗΤΙΣΜΟΥ
'2021': ΣΥΝΤΑΞΕΙΣ
'2022': ΑΔΕΙΕΣ ΠΡΟΣΩΠΙΚΟΥ Λ.Σ
'2023': ΣΥΝΤΑΞΕΙΣ ΣΤΡΑΤΙΩΤΙΚΩΝ ΠΑΘΟΝΤΩΝ ΣΤΗΝ
'2024': ΑΣΦΑΛΙΣΗ ΕΠΙΒΑΤΩΝ
'2025': ΑΠΑΛΛΟΤΡΙΩΣΗ ΑΚΙΝΗΤΩΝ
'2026': ΣΧΟΛΗ ΕΠΙΣΤΗΜΩΝ ΥΓΕΙΑΣ
'2027': ΕΝΟΙΚΙΟΣΤΑΣΙΟ ΒΟΣΚΩΝ
'2028': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΗΘΟΠΟΙΩΝ - ΣΥΓΓΡΑΦΕΩΝ ΤΕΧΝΙΚΩΝ ΘΕΑΤΡΟΥ
'2029': ΕΥΡΩΠΑΙΚΟ ΕΝΤΑΛΜΑ ΣΥΛΛΗΨΗΣ
'2030': ΑΝΤΙΚΕΙΜΕΝΑ ΔΕΔΗΛΩΜΕΝΗΣ ΑΞΙΑΣ ΑΝΤΙΚΑΤΑΒΟΛΕΣ
'2031': ΓΕΝΙΚΗ ΔΙΕΥΘΥΝΣΗ ΜΕΤΑΦΟΡΩΝ
'2032': ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΥΡΓΕΙΟΥ ΔΙΚΑΙΟΣΥΝΗΣ
'2033': ΕΥΘΥΝΗ ΥΠΟΥΡΓΩΝ
'2034': ΤΜΗΜΑ ΚΤΗΝΙΑΤΡΙΚΗΣ
'2035': ΔΙΚΑΣΤΙΚΟ ΣΩΜΑ ΕΝΟΠΛΩΝ ΔΥΝΑΜΕΩΝ
'2036': ΕΝΟΡΙΑΚΟΙ ΝΑΟΙ ΚΑΙ ΕΦΗΜΕΡΙΟΙ
'2037': ΥΓΕΙΟΝΟΜΙΚΕΣ ΕΠΙΤΡΟΠΕΣ ΝΑΥΤΙΚΟΥ
'2038': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΚΑΙ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΗΣ
ΡΑΡΙΟΦΩΝΙΑΣ-ΤΗΛΕΟΡΑΣΕΩΣ-ΤΟΥΡΙΣΜΟΥ (Τ.Ε.Α.Π.Π. Ε.Ρ.Τ. Τ.)
'2039': ΣΤΡΑΤΙΩΤΙΚΗ ΒΟΗΘΕΙΑ Η.Π.Α
'2040': ΣΥΝΤΑΞΕΙΣ ΠΡΟΣΩΠΙΚΟΥ
'2041': ΧΡΗΜΑΤΙΚΗ ΔΙΑΧΕΙΡΙΣΗ Π. ΝΑΥΤΙΚΟΥ
'2042': ΠΟΛΙΤΙΚΟ ΓΡΑΦΕΙΟ ΠΡΩΘΥΠΟΥΡΓΟΥ
'2043': ΛΟΥΤΡΟΘΕΡΑΠΕΙΑ ΚΑΙ ΑΕΡΟΘΕΡΑΠΕΙΑ
'2044': ΣΥΜΒΟΥΛΙΟ ΚΟΙΝΩΝΙΚΩΝ ΑΣΦΑΛΙΣΕΩΝ
'2045': ΕΝΤΟΚΑ ΓΡΑΜΜΑΤΙΑ
'2046': ΣΩΦΡΟΝΙΣΤΙΚΟΣ ΚΩΔΙΚΑΣ
'2047': ΔΗΜΟΤΙΚΕΣ ΕΠΙΧΕΙΡΗΣΕΙΣ
'2048': ΚΩΔΙΚΑΣ ΠΟΛΙΤΙΚΗΣ ΔΙΚΟΝΟΜΙΑΣ - ΝΕΟΣ
'2049': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΥΠΑΛΛΗΛΩΝ ΚΟΥΡΕΙΩΝ ΚΑΙ ΚΟΜΜΩΤΗΡΙΩΝ
'2050': ΠΡΟΣΩΠΙΚΟ ΣΙΔΗΡΟΔΡΟΜΩΝ- Ο.Σ.Ε.- ΣΙΔΗΡΟΔΡΟΜΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ
'2051': ΔΙΑΦΟΡΟΙ ΝΟΜΟΙ ΓΙΑ ΤΟΝ ΤΥΠΟ
'2052': ΤΑΧΥΔΡΟΜΙΚΑ ΔΕΛΤΑΡΙΑ
'2053': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ ΗΛΕΚΤΡ. ΕΤ. ΑΘΗΝΩΝ - ΠΕΙΡΑΙΩΣ ΚΑΙ ΕΛΛΗΝ.
ΗΛΕΚΤΡ. ΕΤΑΙΡΙΑΣ (Τ.Α.Π Η.Ε.Α.Π.- Ε.Η.Ε.)
'2054': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΑΡΤΟΠΟΙΩΝ
'2055': ΔΗΜΟΤΙΚΟΙ ΚΑΙ ΚΟΙΝΟΤΙΚΟΙ ΑΡΧΟΝΤΕΣ
'2056': ΜΕΤΑΦΟΡΑ ΤΑΧΥΔΡΟΜΕΙΟΥ
'2057': ΚΑΝΟΝΙΣΜΟΣ ΠΑΡΟΧΩΝ ΤΑΜΕΙΟΥ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΑΣΦΑΛΙΣΤΩΝ ΚΑΙ
ΠΡΟΣΩΠΙΚΟΥ ΑΣΦΑΛΙΣΤΙΚΩΝ ΕΠΙΧΕΙΡΗΣΕΩΝ (Τ.Ε.Α.Α.Π.Α.Ε.)
'2058': ΠΡΟΣΩΠΙΚΟ
'2059': ΔΗΜΟΣΙΑ ΕΠΙΧΕΙΡΗΣΗ ΗΛΕΚΤΡΙΣΜΟΥ
'2060': ΚΑΝΟΝΙΣΜΟΙ ΕΡΓΩΝ ΩΠΛΙΣΜΕΝΟΥ ΣΚΥΡΟΔΕΜΑΤΟΣ
'2061': ΑΛΕΥΡΑ-ΑΡΤΟΣ
'2062': ΤΕΛΗ ΠΡΟΣΟΡΜΙΣΕΩΣ, ΠΑΡΑΒΟΛΗΣ ΚΑΙ ΠΑΡΟΠΛΙΣΜΟΥ
'2063': ΙΔΙΩΤΙΚΑ ΕΚΠΑΙΔΕΥΤΗΡΙΑ ΦΡΟΝΤΙΣΤΗΡΙΑ
'2064': ΑΡΧΑΙΟΛΟΓΙΚΗ ΥΠΗΡΕΣΙΑ
'2065': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΤΥΠΟΓΡΑΦΩΝ ΚΑΙ ΜΙΣΘΩΤΩΝ ΓΡΑΦΙΚΩΝ ΤΕΧΝΩΝ (Τ.Α.Τ.
& Μ.Γ.Τ)
'2066': ΕΙΔΙΚΕΣ ΕΦΑΡΜΟΓΕΣ ΚΥΡΙΑΚΗΣ ΑΡΓΙΑΣ
'2067': ΔΙΑΦΟΡΟΙ ΝΟΜΟΙ ΓΙΑ ΤΑ ΠΛΗΡΩΜΑΤΑ
'2068': ΑΣΤΙΚΑ ΣΧΟΛΕΙΑ
'2069': ΤΑΜΕΙΑ ΣΥΝΤΑΞΕΩΝ ΕΦΗΜΕΡΙΔΟΠΩΛΩΝ ΚΑΙ ΥΠΑΛΛΗΛΩΝ ΠΡΑΚΤΟΡΕΙΩΝ ΑΘΗΝΩΝ-ΘΕΣΝΙΚΗΣ
(Τ.Σ.Ε.Υ.Π.)
'2070': ΔΟΜΙΚΑ ΕΡΓΑ
'2071': ΝΑΥΣΤΑΘΜΟΣ
'2072': ΑΝΤΙΓΡΑΦΙΚΑ ΔΙΚΑΙΩΜΑΤΑ
'2073': ΕΠΙΔΟΜΑ ΟΙΚΟΓΕΝΕΙΑΚΩΝ ΒΑΡΩΝ
'2074': ΕΛΛΗΝΙΚΗ-ΕΥΡΩΠΑΙΚΗ ΦΑΡΜΑΚΟΠΟΙΙΑ
'2075': ΔΕΛΤΙΑ ΤΑΥΤΟΤΗΤΟΣ
'2076': ΣΧΟΛΙΑΤΡΙΚΗ ΥΠΗΡΕΣΙΑ
'2077': ΥΔΡΟΓΟΝΑΝΘΡΑΚΕΣ
'2078': ΓΕΝΙΚΑ ΠΕΡΙ ΕΚΘΕΣΕΩΝ
'2079': ΦΟΡΟΛΟΓΙΚΕΣ ΔΙΕΥΚΟΛΥΝΣΕΙΣ
'2080': ΛΣΜΟΣ ΠΡΟΝΟΙΑΣ ΠΡΟΣΩΠΙΚΟΥ Ι.Κ.Α
'2081': ΕΛΕΓΧΟΣ ΚΤΙΡΙΑΚΩΝ ΕΡΓΩΝ
'2082': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΗΣ
'2083': ΕΛΑΙΟΠΥΡΗΝΕΣ
'2084': ΕΜΦΥΤΕΥΤΙΚΑ ΚΤΗΜΑΤΑ
'2085': ΤΟΥΡΙΣΤΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'2086': ΚΛΑΔΟΣ ΑΣΦΑΛΙΣΕΩΣ ΤΕΧΝΙΚΩΝ ΤΥΠΟΥ ΘΕΣΣΑΛΟΝΙΚΗΣ (Κ.Α.Τ.Τ.Θ.)
'2087': ΜΕΤΕΩΡΟΛΟΓΙΚΗ ΥΠΗΡΕΣΙΑ
'2088': ΑΓΡΟΤΙΚΟΣ ΚΩΔΙΚΑΣ
'2089': ΤΕΧΝΙΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ
'2090': ΕΛΕΓΧΟΣ ΝΟΜΙΜΟΦΡΟΣΥΝΗΣ
'2091': ΑΡΧΑΙΟΛΟΓΙΚΗ ΕΤΑΙΡΙΑ
'2092': ΣΧΟΛΑΖΟΥΣΕΣ ΚΛΗΡΟΝΟΜΙΕΣ
'2093': ΓΕΦΥΡΑ ΡΙΟΥ - ΑΝΤΙΡΡΙΟΥ
'2094': ΦΟΙΤΗΣΗ, ΕΞΕΤΑΣΕΙΣ ΚΛΠ
'2095': ΤΥΧΕΡΑ, ΜΙΚΤΑ ΚΑΙ ΤΕΧΝΙΚΑ ΠΑΙΓΝΙΑ
'2096': ΟΡΓΑΝΙΚΟΙ ΑΡΙΘΜΟΙ ΥΠΑΞΙΩΜΑΤΙΚΩΝ
'2097': ΦΟΡΟΛΟΓΙΑ ΚΙΝΗΤΗΣ ΚΑΙ ΑΚΙΝΗΤΗΣ ΠΕΡΙΟΥΣΙΑΣ
'2098': ΑΤΕΛΕΙΕΣ ΑΓΙΟΥ ΟΡΟΥΣ
'2099': ΜΟΝΟΠΩΛΙΟ ΑΛΑΤΙΟΥ
'2100': ΑΣΦΑΛΙΣΗ ΕΛΛΗΝΩΝ ΕΞΩΤΕΡΙΚΟΥ
'2101': ΔΙΕΘΝΕΣ ΚΕΝΤΡΟ ΑΝΩΤΑΤΩΝ
'2102': ΑΝΑΠΡΟΣΑΡΜΟΓΕΣ ΣΥΝΤΑΞΕΩΝ
'2103': ΓΕΝΙΚΕΣ ΕΠΙΘΕΩΡΗΣΕΙΣ-ΔΙΕΥΘΥΝΣΕΙΣ
'2104': ΣΩΜΑ ΟΡΚΩΤΩΝ ΛΟΓΙΣΤΩΝ
'2105': ΣΕΙΣΜΟΠΛΗΚΤΟΙ ΒΟΡΕΙΟΥ ΕΛΛΑΔΟΣ
'2106': ΠΑΝΕΠΙΣΤΗΜΙΑ ΠΕΙΡΑΙΩΣ-ΜΑΚΕΔΟΝΙΑΣ
'2107': ΧΩΡΟΤΑΞΙΑ ΚΑΙ ΠΕΡΙΒΑΛΛΟΝ
'2108': ΕΣΩΤΕΡΙΚΟΙ ΚΑΝΟΝΙΣΜΟΙ ΕΡΓΑΣΙΑΣ
'2109': ΕΛΕΓΧΟΣ ΝΑΥΤΙΚΩΝ ΑΤΥΧΗΜΑΤΩΝ
'2110': ΠΝΕΥΜΑΤΙΚΑ ΚΕΝΤΡΑ
'2111': ΠΛΟΗΓΙΚΑ ΔΙΚΑΙΩΜΑΤΑ
'2112': ΣΤΡΑΤΕΥΟΜΕΝΟΙ ΔΙΚΗΓΟΡΟΙ
'2113': ΣΥΣΤΑΤΙΚΑ ΑΥΤΟΚΙΝΗΤΩΝ
'2114': ΣΙΔΗΡΟΔΡΟΜΟΙ ΠΕΛΟΠΟΝΝΗΣΟΥ
'2115': ΤΜΗΜΑ ΜΕΘΟΔΟΛΟΓΙΑΣ, ΙΣΤΟΡΙΑΣ ΚΑΙ ΘΕΩΡΙΑΣ ΤΗΣ ΕΠΙΣΤΗΜΗΣ
'2116': ΕΥΡΩΠΑΙΚΟ ΠΟΛΙΤΙΣΤΙΚΟ ΚΕΝΤΡΟ ΔΕΛΦΩΝ
'2117': ΣΥΝΕΤΑΙΡΙΣΜΟΙ ΕΓΓΕΙΩΝ ΒΕΛΤΙΩΣΕΩΝ
'2118': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΗΣ ΔΗΜΟΣΙΩΝ ΥΠΑΛΛΗΛΩΝ (Τ.Ε.Α.Δ.Υ.)
'2119': ΙΕΡΟΚΗΡΥΚΕΣ
'2120': ΕΙΡΗΝΟΔΙΚΕΙΑ - ΠΤΑΙΣΜΑΤΟΔΙΚΕΙΑ
'2121': ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ
'2122': ΤΡΑΠΕΖΙΤΙΚΗ ΕΠΙΤΑΓΗ
'2123': ΝΑΥΑΓΟΣΩΣΤΙΚΑ ΚΑΙ ΡΥΜΟΥΛΚΑ
'2124': ΦΟΡΟΛΟΓΙΚΕΣ ΔΙΑΦΟΡΕΣΙ
'2125': ΜΕΤΡΑ ΚΑΙ ΣΤΑΘΜΑ
'2126': ΓΕΝΙΚΟ ΧΗΜΕΙΟ ΤΟΥ ΚΡΑΤΟΥΣ
'2127': ΣΥΜΦΩΝΙΑ ΓΙΑ ΙΣΑ ΟΙΚΟΝΟΜΙΚΑ ΚΟΙΝΩΝΙΚΑ
'2128': ΣΥΝΟΡΙΑΚΟΙ ΣΤΑΘΜΟΙ
'2129': ΑΞΙΩΜΑΤΙΚΟΙ ΣΩΜΑΤΩΝ ΑΣΦΑΛΕΙΑΣ
'2130': ΥΠΗΡΕΣΙΑΚΑ ΣΥΜΒΟΥΛΙΑ
'2131': ΕΙΣΑΓΩΓΙΚΟΣ ΝΟΜΟΣ
'2132': ΚΤΗΜΑΤΟΛΟΓΙΟ
'2133': ΕΤΑΙΡΕΙΑ ΔΙΑΧΕΙΡΙΣΕΩΣ ΥΠΕΓΓΥΩΝ ΠΡΟΣΟΔΩΝ
'2134': ΥΠΟΥΡΓΕΙΟ ΜΑΚΕΔΟΝΙΑΣ – ΘΡΑΚΗΣ
'2135': ΤΟΥΡΙΣΤΙΚΑ ΓΡΑΦΕΙΑ ΚΑΙ ΣΩΜΑΤΕΙΑ
'2136': ΔΑΝΕΙΑ ΑΝΑΣΥΓΚΡΟΤΗΣΗΣ
'2137': ΑΣΤΙΚΕΣ ΣΥΓΚΟΙΝΩΝΙΕΣ ΘΕΣΣΑΛΟΝΙΚΗΣ-Ο.Α.Σ.Θ
'2138': ΕΘΕΛΟΝΤΕΣ ΑΕΡΟΠΟΡΙΑΣ
'2139': ΣΗΜΕΙΩΤΕΣ
'2140': ΤΕΛΗ ΕΓΚΑΤΑΣΤΑΣΗΣ - ΛΕΙΤΟΥΡΓΙΑΣ ΚΕΡΑΙΩΝ
'2141': Η.Π.Α
'2142': ΠΑΝΕΠΙΣΤΗΜΙΑ ΑΙΓΑΙΟΥ, ΙΟΝΙΟΥ ΚΑΙ ΘΕΣΣΑΛΙΑΣ
'2143': ΤΑΜΕΙΟ ΠΡΟΝΟΙΑΣ ΞΕΝΟΔΟΧΩΝ
'2144': ΣΥΜΒΟΥΛΙΑ ΣΤΕΓΑΣΕΩΣ
'2145': ΤΕΧΝΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΙΔΙΩΤΙΚΩΝ ΑΕΡΟΠΛΑΝΩΝ
'2146': ΦΟΡΟΛΟΓΙΑ ΔΗΜΟΣΙΩΝ ΘΕΑΜΑΤΩΝ
'2147': ΣΤΡΑΤΟΛΟΓΙΑ ΟΠΛΙΤΩΝ ΧΩΡΟΦΥΛΑΚΗΣ
'2148': ΓΥΜΝΑΣΙΑ ΑΡΙΣΤΟΥΧΩΝ
'2149': ΣΧΟΛΙΚΗ ΑΝΤΙΛΗΨΗ
'2150': ΕΥΘΥΝΗ ΣΤΡΑΤΙΩΤΙΚΩΝ
'2151': ΣΤΑΘΜΟΙ ΕΠΙΒΗΤΟΡΩΝ
'2152': ΒΕΒΑΙΩΣΗ ΠΤΑΙΣΜΑΤΩΝ ΑΠΟ
'2153': ΔΙΑΖΥΓΙΟ
'2154': ΔΙΕΘΝΗΣ ΣΥΜΒΑΣΗ ΠΕΡΙ ΑΝΑΓΚΑΣΤΙΚΗΣ ΕΡΓΑΣΙΑΣ
'2155': ΔΙΕΥΚΟΛΥΝΣΗ ΔΙΕΘΝΟΥΣ ΝΑΥΤΙΛΙΑΚΗΣ ΚΙΝΗΣΕΩΣ
'2156': ΕΝΟΙΚΙΟΣΤΑΣΙΟ
'2157': ΕΚΘΕΣΕΙΣ ΖΑΠΠΕΙΟΥ ΜΕΓΑΡΟΥ
'2158': ΔΙΑΧΕΙΡΙΣΗ ΥΛΙΚΟΥ Π. ΝΑΥΤΙΚΟΥ
'2159': ΕΦΕΔΡΙΚΑ ΤΑΜΕΙΑ ΚΡΗΤΗΣ
'2160': ΣΙΤΑΡΙ
'2161': ΦΟΡΤΗΓΑ 501-4500 ΤΟΝΝΩΝ
'2162': ΤΡΑΠΕΖΑ ΕΡΓΑΣΙΑΣ
'2163': ΑΤΕΛΕΙΕΣ ΥΠΕΡ ΤΗΣ ΓΕΩΡΓΙΑΣ
'2164': ΑΙΓΙΑΛΟΣ ΚΑΙ ΠΑΡΑΛΙΑ
'2165': ΔΑΣΗ ΙΔΡΥΜΑΤΩΝ
'2166': ΙΧΘΥΟΤΡΟΦΕΙΑ
'2167': ΑΠΟΓΡΑΦΕΣ Π. ΝΑΥΤΙΚΟΥ
'2168': ΣΗΜΑΤΑ ΚΑΙ ΔΕΛΤΙΑ ΑΝΑΠΗΡΩΝ ΠΟΛΕΜΟΥ
'2169': ΠΕΙΘΑΡΧΙΚΟ ΔΙΚΑΙΟ ΑΣΤΥΝΟΜΙΚΟΥ ΠΡΟΣΩΠΙΚΟΥ ΕΛΛΗΝΙΚΗΣ ΑΣΤΥΝΟΜΙΑΣ
'2170': ΑΤΜΟΛΕΒΗΤΕΣ
'2171': ΤΑΧΥΔΡΟΜΙΚΗ ΥΠΗΡΕΣΙΑ ΣΤΡΑΤΟΥ
'2172': ΠΡΟΣΤΑΣΙΑ ΠΙΝΑΚΙΔΩΝ
'2173': ΑΓΡΟΤΙΚΑ ΚΤΗΝΙΑΤΡΕΙΑ
'2174': ΧΡΗΜΑΤΙΣΤΗΡΙΑΚΑ ΔΙΚΑΣΤΗΡΙΑ
'2175': ΕΓΓΡΑΦΗ ΠΡΟΕΡΧΟΜΕΝΩΝ ΑΠΟ ΤΗΝ ΑΛΛΟΔΑΠΗ
'2176': ΟΡΓΑΝΙΣΜΟΣ ΔΙΑΧΕΙΡΙΣΗΣ ΔΗΜΟΣΙΟΥ ΥΛΙΚΟΥ
'2177': ΠΑΝΕΠΙΣΤΗΜΙΟ ΚΥΠΡΟΥ
'2178': ΚΑΤΕΡΓΑΣΙΑ ΞΗΡΑΣ ΣΤΑΦΙΔΑΣ
'2179': ΤΕΛΩΝΕΙΑΚΗ ΔΙΑΙΡΕΣΗ
'2180': ΑΖΗΤΗΤΑ
'2181': ΜΕΛΙΣΣΟΤΡΟΦΙΑ
'2182': ΔΙΕΥΘΥΝΣΗ ΘΑΛΑΣΣΙΩΝ ΚΡΑΤΙΚΩΝ ΜΕΤΑΦΟΡΩΝ
'2183': ΕΚΜΕΤΑΛΛΕΥΣΗ ΜΕΤΑΛΛΕΙΩΝ ΜΕ ΕΓΓΥΗΣΗ
'2184': ΙΔΙΩΤΙΚΕΣ ΕΠΑΓΓΕΛΜΑΤΙΚΕΣ ΣΧΟΛΕΣ
'2185': ΔΙΑΘΕΣΗ ΑΧΡΗΣΤΟΥ ΥΛΙΚΟΥ
'2186': ΤΑΧΥΔΡΟΜΙΚΕΣ ΜΕΤΑΦΟΡΕΣ
'2187': ΕΡΥΘΡΟ ΠΙΠΕΡΙ
'2188': ΠΙΚΠΑ-ΕΟΠ-ΚΕΝΤΡΟ ΒΡΕΦΩΝ Η ΜΗΤΕΡΑ-ΕΛΕΠΑΠ
'2189': ΣΥΜΜΕΤΟΧΗ ΣΕ ΣΥΜΒΟΥΛΙΑ
'2190': ΓΥΜΝΑΣΤΗΡΙΟ
'2191': ΙΑΤΡΙΚΟΙ- ΟΔΟΝΤΙΑΤΡΙΚΟΙ ΣΥΛΛΟΓΟΙ
'2192': ΕΙΣΑΓΩΓΗ ΦΟΙΤΗΤΩΝ
'2193': ΕΛΛΗΝΙΚΟ ΄ΙΔΡΥΜΑ ΠΟΛΙΤΙΣΜΟΥ
'2194': ΛΟΙΜΟΚΑΘΑΡΤΗΡΙΑ ΖΩΩΝ
'2195': ΔΙΕΘΝΗΣ ΟΡΓΑΝΙΣΜΟΣ ΑΤΟΜΙΚΗΣ ΕΝΕΡΓΕΙΑΣ
'2196': ΤΑΜΕΙΟ ΕΞΟΔΟΥ ΚΑΙ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΜΙΣΘΩΤΩΝ ΒΙΟΜΗΧΑΝΙΑΣ ΚΑΠΝΟΥ
'2197': ΚΑΘΗΓΗΤΕΣ Ε.Μ.Π
'2198': ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ
'2199': ΒΕΒΑΙΩΣΗ ΦΟΡΟΛΟΓΙΑΣ ΚΑΘΑΡΑΣ ΠΡΟΣΟΔΟΥ
'2200': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΤΡΑΠΕΖΩΝ ΕΛΛΑΔΟΣ ΚΑΙ ΚΤΗΜΑΤΙΚΗΣ
'2201': ΔΗΜΟΨΗΦΙΣΜΑΤΑ
'2202': ΕΛΛΗΝΙΚΟ ΑΝΟΙΚΤΟ ΠΑΝΕΠΙΣΤΗΜΙΟ
'2203': ΚΑΛΛΙΤΕΧΝΙΚΟ ΕΠΑΓΓΕΛΜΑΤΙΚΟ ΕΠΙΜΕΛΗΤΗΡΙΟ
'2204': ΑΝΟΙΚΟΔΟΜΗΣΙΣ
'2205': ΔΑΣΙΚΟΣ ΚΩΔΙΚΑΣ
'2206': ΚΑΝΟΝΙΣΜΟΣ ΠΥΡΟΣΒΕΣΤΙΚΩΝ ΜΕΣΩΝ ΤΩΝ ΠΛΟΙΩΝ
'2207': ΔΙΦΘΕΡΙΤΙΔΑ
'2208': ΒΙΒΛΙΑ ΚΑΙ ΦΟΡΟΛΟΓΙΚΑ ΣΤΟΙΧΕΙΑ
'2209': ΕΛΕΓΧΟΣ ΕΞΑΓΟΜΕΝΩΝ ΕΛΑΙΩΝ
'2210': ΕΠΙΔΟΜΑΤΑ ΟΙΚΟΓΕΝΕΙΩΝ ΣΤΡΑΤΙΩΤΙΚΩΝ
'2211': ΕΥΡΩΠΑΙΚΕΣ ΣΥΜΦΩΝΙΕΣ ΠΟΥ ΑΦΟΡΟΥΝ ΤΗΝ ΤΗΛΕΟΡΑΣΗ
'2212': ΕΚΤΑΚΤΑ ΣΤΡΑΤΟΔΙΚΕΙΑ
'2213': ΠΟΛΕΜΙΚΗ ΒΙΟΜΗΧΑΝΙΑ
'2214': ΑΣΕΜΝΟΙ ΓΥΝΑΙΚΕΣ
'2215': ΑΠΕΛΕΥΘΕΡΩΣΗ ΑΓΟΡΑΣ ΗΛΕΚΤΡΙΚΗΣ ΕΝΕΡΓΕΙΑΣ ΕΝΕΡΓΕΙΑΚΗ ΠΟΛΙΤΙΚΗ Ρ.Α.Ε
'2216': ΠΡΟΕΙΣΠΡΑΞΗ ΔΙΚΗΓΟΡΙΚΗΣ ΑΜΟΙΒΗΣ
'2217': ΕΘΝΙΚΗ ΣΧΟΛΗ ΔΗΜΟΣΙΑΣ ΥΓΕΙΑΣ (Ε.Σ.Δ.Υ.)
'2218': ΠΡΟΜΗΘΕΙΑ ΘΕΙΟΥ ΚΑΙ ΘΕΙΙΚΟΥ ΧΑΛΚΟΥ
'2219': ΧΗΜΙΚΟΙ - ΧΗΜΙΚΕΣ ΒΙΟΜΗΧΑΝΙΕΣ
'2220': ΑΣΦΑΛΙΣΗ ΚΑΤΑ ΤΗΣ ΑΣΘΕΝΕΙΑΣ
'2221': ΤΑΜΕΙΟ ΑΛΛΗΛΟΒΟΗΘΕΙΑΣ ΠΡΟΣΩΠΙΚΟΥ ΕΘΝΙΚΟΥ ΤΥΠΟΓΡΑΦΕΙΟΥ (Τ.Α.Π.Ε.Τ.)
'2222': ΟΡΓΑΝΙΣΜΟΣ ΥΠΟΥΡΓΕΙΟΥ ΟΙΚΟΝΟΜΙΚΩΝ
'2223': ΠΕΡΙΕΧΟΜΕΝΟ ΔΗΛΩΣΗΣ ΦΟΡΟΥ ΕΙΣΟΔΗΜΑΤΟΣ
'2224': ΠΡΩΤΕΣ ΥΛΕΣ ΣΙΔΕΡΕΝΙΩΝ ΒΑΡΕΛΙΩΝ
'2225': ΕΥΡΩΠΑΙΚΟΣ ΚΩΔΙΚΑΣ ΚΟΙΝΩΝΙΚΗΣ ΑΣΦΑΛΕΙΑΣ
'2226': ΔΙΑΦΟΡΟΙ ΓΕΩΡΓΙΚΟΙ ΣΥΝΕΤΑΙΡΙΣΜΟΙ
'2227': ΣΧΕΔΙΑ ΠΟΛΕΩΝ ΙΟΝΙΩΝ ΝΗΣΩΝ
'2228': ΕΥΡΩΠΑΙΚΗ ΟΙΚΟΝΟΜΙΚΗ ΚΟΙΝΟΤΗΤΑ ΕΥΡΩΠΑΙΚΗ ΕΝΩΣΗ
'2229': ΣΧΟΛΗ ΔΙΟΙΚΗΣΕΩΣ ΝΟΣΗΛΕΥΤ. ΙΔΡΥΜΑΤΩΝ
'2230': ΔΙΑΦΟΡΟΙ ΝΟΜΟΙ ΕΜΠΡΑΓΜΑΤΟΥ ΔΙΚΑΙΟΥ
'2231': ΕΠΙΜΕΛΗΤΕΙΑ ΚΑΙ ΟΙΚΟΝΟΜΙΚΕΣ ΥΠΗΡΕΣΙΕΣ
'2232': ΔΙΑΔΙΚΑΣΙΑ ΑΤΕΛΕΙΑΣ
'2233': ΠΑΙΔΙΚΕΣ ΕΞΟΧΕΣ
'2234': ΤΑΜΕΙΟ ΣΥΝΤΑΞΕΩΝ ΠΡΟΣΩΠΙΚΟΥ ΕΘΝΙΚΗΣ ΤΡΑΠΕΖΑΣ ΤΗΣ ΕΛΛΑΔΟΣ
'2235': ΚΡΑΤΙΚΗ ΕΚΜΕΤΑΛΛΕΥΣΗ ΔΑΣΩΝ
'2236': ΑΝΕΞΑΡΤΗΣΙΑ ΤΗΣ ΕΚΚΛΗΣΙΑΣ ΤΗΣ ΕΛΛΑΔΟΣ
'2237': ΤΕΧΝΙΚΑ ΠΤΥΧΙΑ
'2238': ΕΠΙΒΑΤΙΚΑ ΑΥΤΟΚΙΝΗΤΑ (ΔΗΜΟΣΙΑΣ ΚΑΙ ΙΔΙΩΤΙΚΗΣ ΧΡΗΣΗΣ)
'2239': ΣΥΜΒΑΣΕΙΣ ΒΟΥΛΕΥΤΩΝ
'2240': ΟΡΓΑΝΙΣΜΟΣ ΤΩΝ ΔΙΚΑΣΤΗΡΙΩΝ
'2241': ΕΚΠΑΙΔΕΥΤΙΚΟΙ ΛΕΙΤΟΥΡΓΟΙ ΕΝ ΓΕΝΕΙ
'2242': ΑΡΜΟΔΙΟΤΗΤΑ ΤΕΛΩΝΕΙΑΚΩΝ ΑΡΧΩΝ
'2243': ΕΙΔΙΚΑ ΕΦΕΤΕΙΑ
'2244': ΑΞΙΩΜΑΤΙΚΟΙ ΑΕΡΟΠΟΡΙΑΣ
'2245': ΠΑΝΕΠΙΣΤΗΜΙΑΚΗ ΒΙΒΛΙΟΘΗΚΗ
'2246': ΕΠΙΤΡΟΠΗ ΣΥΝΤΑΞΗΣ ΣΧΕΔΙΟΥ ΚΩΔΙΚΑ ΕΡΓΑΣΙΑΣ
'2247': ΕΛΟΝΟΣΙΑ
'2248': ΝΑΥΛΟΣΥΜΦΩΝΑ
'2249': ΣΙΔΗΡΟΔΡΟΜΟΙ ΘΕΣΣΑΛΙΚΟΙ
'2250': ΡΑΔΙΟΦΩΝΙΚΕΣ ΣΥΜΒΑΣΕΙΣ
'2251': ΠΡΟΩΘΗΣΗ ΓΕΩΡΓΙΚΗΣ ΠΑΡΑΓΩΓΗΣ-ΕΘ.Ι.ΑΓ.Ε
'2252': ΕΠΟΧΙΑΚΩΣ ΕΡΓΑΖΟΜΕΝΟΙ ΜΙΣΘΩΤΟΙ
'2253': ΔΙΔΑΚΤΙΚΟ ΠΡΟΣΩΠΙΚΟ
'2254': ΚΩΔΙΚΑΣ ΚΕΝΤΡΙΚΗΣ, ΠΡΕΣΒΕΥΤΙΚΗΣ ΚΑΙ
'2255': ΠΟΛΙΤΙΚΟ ΠΡΟΣΩΠΙΚΟ ΥΠΟΥΡΓΕΙΟΥ ΕΘΝΙΚΗΣ ΑΜΥΝΑΣ
'2256': ΔΙΠΛΩΜΑΤΑ ΕΥΡΕΣΙΤΕΧΝΙΑΣ
'2257': ΣΩΜΑΤΕΙΑ ΓΕΩΡΓΙΚΩΝ ΕΡΓΑΤΩΝ
'2258': ΚΩΔΙΚΑΣ ΠΕΡΙ ΕΙΣΠΡΑΞΕΩΣ ΔΗΜΟΣΙΩΝ ΕΣΟΔΩΝ
'2259': ΤΡΑΠΕΖΟΓΡΑΜΜΑΤΙΑ
'2260': ΠΡΟΜΗΘΕΥΤΙΚΟΣ ΟΡΓΑΝΙΣΜΟΣ Ε.Β.Α
'2261': ΕΛΕΓΧΟΣ ΑΣΦΑΛΕΙΑΣ ΑΥΤΟΚΙΝΗΤΩΝΚΕΝΤΡΑ ΤΕΧΝΙΚΟΥ ΕΛΕΓΧΟΥ ΟΧΗΜΑΤΩΝ (Κ.Τ.Ε.Ο.)
'2262': ΕΞΑΓΩΓΗ ΤΥΡΟΥ
'2263': ΝΑΥΤΙΛΙΑΚΟ ΣΥΝΑΛΛΑΓΜΑ
'2264': ΤΑΜΕΙΟ ΕΠΙΚΟΥΡΙΚΗΣ ΑΣΦΑΛΙΣΕΩΣ ΗΛΕΤΡΟΤΕΧΝΙΤΩΝ ΕΛΛΑΔΟΣ (T.E.A.H.E.)
'2265': ΜΙΣΘΟΙ ΣΤΡΑΤΙΩΤΙΚΩΝ ΚΑΙ ΠΡΟΣΑΥΞΗΣΕΙΣ
'2266': ΑΣΤΙΚΟΣ ΚΩΔΙΚΑΣ
'2267': ΜΕ ΤΙΣ ΗΝΩΜΕΝΕΣ ΠΟΛΙΤΕΙΕΣ ΑΜΕΡΙΚΗΣ
'2268': ΤΑΜΕΙΟ ΑΣΦΑΛΙΣΕΩΣ ΠΡΟΣΩΠΙΚΟΥ Ο.Τ.Ε. (Τ.Α.Π.-Ο.Τ.Ε.)
'2269': ΜΑΙΕΣ
'2270': ΦΥΓΟΔΙΚΙΑ
'2271': ΟΡΓΑΝΙΣΜΟΣ ΞΕΝΟΔΟΧΕΙΑΚΗΣ ΠΙΣΤΗΣ
'2272': ΔΗΜΟΤΙΚΟΙ ΣΤΡΑΤΟΛΟΓΟΙ
'2273': ΑΝΩΤΑΤΟ ΔΙΚΑΣΤΙΚΟ ΣΥΜΒΟΥΛΙΟ
'2274': ΙΣΤΟΡΙΚΟ ΑΡΧΕΙΟ ΚΡΗΤΗΣ
'2275': ΕΛΛΗΝΙΚΗ ΘΑΛΑΣΣΙΑ ΄ΕΝΩΣΗ
'2276': ΕΚΠΟΙΗΣΕΙΣ ΚΑΙ ΕΚΜΙΣΘΩΣΕΙΣ
'2277': ΤΑΧΥΔΡΟΜΙΚΕΣ ΕΠΙΤΑΓΕΣ
'2278': ΥΠΗΡΕΣΙΑ ΜΗΤΡΩΟΥ
'2279': ΔΙΑΦΟΡΑ ΟΙΚΟΝΟΜΙΚΑ ΘΕΜΑΤΑ
'2280': ΕΝΔΙΚΑ ΜΕΣΑ
'2281': ΤΕΛΗ ΑΕΡΟΠΟΡΙΚΩΝ ΤΑΞΙΔΙΩΝ
'2282': ΜΕ ΤΗΝ ΑΙΓΥΠΤΟ
'2283': ΔΙΑΦΟΡΕΣ ΒΙΒΛΙΟΘΗΚΕΣ
'2284': ΚΕΝΤΡΙΚΗ ΥΠΗΡΕΣΙΑ
splits:
- name: train
num_bytes: 216757887
num_examples: 28536
- name: test
num_bytes: 71533786
num_examples: 9516
- name: validation
num_bytes: 68824457
num_examples: 9511
download_size: 147827496
dataset_size: 357116130
- config_name: volume
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ΚΟΙΝΩΝΙΚΗ ΠΡΟΝΟΙΑ
'1': ΓΕΩΡΓΙΚΗ ΝΟΜΟΘΕΣΙΑ
'2': ΡΑΔΙΟΦΩΝΙΑ ΚΑΙ ΤΥΠΟΣ
'3': ΒΙΟΜΗΧΑΝΙΚΗ ΝΟΜΟΘΕΣΙΑ
'4': ΥΓΕΙΟΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ
'5': ΠΟΛΕΜΙΚΟ ΝΑΥΤΙΚΟ
'6': ΤΑΧΥΔΡΟΜΕΙΑ - ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ
'7': ΔΑΣΗ ΚΑΙ ΚΤΗΝΟΤΡΟΦΙΑ
'8': ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ ΚΑΙ ΣΥΝΤΑΞΕΙΣ
'9': ΠΟΛΕΜΙΚΗ ΑΕΡΟΠΟΡΙΑ
'10': ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ
'11': ΝΟΜΟΘΕΣΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ ΤΡΑΠΕΖΩΝ ΚΑΙ ΧΡΗΜΑΤΙΣΤΗΡΙΩΝ
'12': ΠΟΛΙΤΙΚΗ ΑΕΡΟΠΟΡΙΑ
'13': ΕΜΜΕΣΗ ΦΟΡΟΛΟΓΙΑ
'14': ΚΟΙΝΩΝΙΚΕΣ ΑΣΦΑΛΙΣΕΙΣ
'15': ΝΟΜΟΘΕΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ
'16': ΝΟΜΟΘΕΣΙΑ ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΚΑΙ ΣΩΜΑΤΕΙΩΝ
'17': ΔΗΜΟΣΙΑ ΕΡΓΑ
'18': ΔΙΟΙΚΗΣΗ ΔΙΚΑΙΟΣΥΝΗΣ
'19': ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ
'20': ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'21': ΕΚΠΑΙΔΕΥΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'22': ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ
'23': ΤΕΛΩΝΕΙΑΚΗ ΝΟΜΟΘΕΣΙΑ
'24': ΣΥΓΚΟΙΝΩΝΙΕΣ
'25': ΕΘΝΙΚΗ ΑΜΥΝΑ
'26': ΣΤΡΑΤΟΣ ΞΗΡΑΣ
'27': ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ
'28': ΔΗΜΟΣΙΟΙ ΥΠΑΛΛΗΛΟΙ
'29': ΠΕΡΙΟΥΣΙΑ ΔΗΜΟΣΙΟΥ ΚΑΙ ΝΟΜΙΣΜΑ
'30': ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ
'31': ΛΙΜΕΝΙΚΗ ΝΟΜΟΘΕΣΙΑ
'32': ΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'33': ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ
'34': ΔΙΠΛΩΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'35': ΔΙΟΙΚΗΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'36': ΑΜΕΣΗ ΦΟΡΟΛΟΓΙΑ
'37': ΤΥΠΟΣ ΚΑΙ ΤΟΥΡΙΣΜΟΣ
'38': ΕΘΝΙΚΗ ΟΙΚΟΝΟΜΙΑ
'39': ΑΣΤΥΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ
'40': ΑΓΡΟΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'41': ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
'42': ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ
'43': ΕΜΠΟΡΙΚΗ ΝΟΜΟΘΕΣΙΑ
'44': ΕΠΙΣΤΗΜΕΣ ΚΑΙ ΤΕΧΝΕΣ
'45': ΕΜΠΟΡΙΚΗ ΝΑΥΤΙΛΙΑ
'46': ΣΥΝΤΑΓΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ
splits:
- name: train
num_bytes: 216757887
num_examples: 28536
- name: test
num_bytes: 71533786
num_examples: 9516
- name: validation
num_bytes: 68824457
num_examples: 9511
download_size: 145147904
dataset_size: 357116130
configs:
- config_name: chapter
data_files:
- split: train
path: chapter/train-*
- split: test
path: chapter/test-*
- split: validation
path: chapter/validation-*
- config_name: subject
data_files:
- split: train
path: subject/train-*
- split: test
path: subject/test-*
- split: validation
path: subject/validation-*
- config_name: volume
data_files:
- split: train
path: volume/train-*
- split: test
path: volume/test-*
- split: validation
path: volume/validation-*
default: true
---
# Dataset Card for Greek Legal Code
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/christospi/glc-nllp-21
- **Paper:** https://arxiv.org/abs/2109.15298
- **Data:** https://doi.org/10.5281/zenodo.5528002
- **Leaderboard:** N/A
- **Point of Contact:** [Christos Papaloukas](mailto:[email protected])
### Dataset Summary
Greek_Legal_Code (GLC) is a dataset consisting of approx. 47k legal resources from Greek legislation. The origin of GLC is “Permanent Greek Legislation Code - Raptarchis”, a collection of Greek legislative documents classified into multi-level (from broader to more specialized) categories.
**Topics**
GLC consists of 47 legislative volumes and each volume corresponds to a main thematic topic. Each volume is divided into thematic subcategories called chapters, and each chapter in turn breaks down into subjects, which contain the legal resources. The total number of chapters is 389, while the total number of subjects is 2285, creating an interlinked thematic hierarchy. Thus, at the upper thematic level (volume) GLC has 47 classes, at the next thematic level (chapter) it offers 389 classes, and at the innermost thematic level (subject) it has 2285 classes.
GLC classes are divided into three categories for each thematic level: frequent classes, which occur in more than 10 training documents and can be found in all three subsets (training, development and test); few-shot classes, which appear in 1 to 10 training documents and also appear in the development and test sets; and zero-shot classes, which appear in the development and/or test sets but not in the training documents.
### Supported Tasks and Leaderboards
The dataset supports:
**Multi-class Text Classification:** Given the text of a document, a model predicts the corresponding class.
**Few-shot and Zero-shot learning:** As already noted, the classes can be divided into three groups: frequent, few-shot, and zero-shot, depending on whether they were assigned to more than 10, fewer than 10 but at least one, or no training documents, respectively.
| Level | Total | Frequent | Few-Shot (<10) | Zero-Shot |
|---|---|---|---|---|
|Volume|47|47|0|0|
|Chapter|389|333|53|3|
|Subject|2285|712|1431|142|
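For reference, the per-level counts in this table can be reproduced by counting how many training documents each class receives. The sketch below is a minimal illustration, assuming the dataset is loaded from the Hugging Face Hub; the repo id `AI-team-UoA/greek_legal_code` is an assumption, so substitute the actual identifier if it differs:

```python
from collections import Counter

from datasets import load_dataset

# Repo id is an assumption; any of the three configs
# ("volume", "chapter", "subject") can be analyzed the same way.
ds = load_dataset("AI-team-UoA/greek_legal_code", "chapter")

train_counts = Counter(ds["train"]["label"])
num_classes = ds["train"].features["label"].num_classes

# Boundaries follow the definition above: frequent > 10, few-shot 1-10.
frequent = [c for c in range(num_classes) if train_counts[c] > 10]
few_shot = [c for c in range(num_classes) if 1 <= train_counts[c] <= 10]
zero_shot = [c for c in range(num_classes) if train_counts[c] == 0]

# For the chapter level this should yield 333 / 53 / 3, as in the table.
print(len(frequent), len(few_shot), len(zero_shot))
```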
### Languages
All documents are written in Greek.
## Dataset Structure
### Data Instances
```json
{
"text": "179. ΑΠΟΦΑΣΗ ΥΠΟΥΡΓΟΥ ΜΕΤΑΦΟΡΩΝ ΚΑΙ ΕΠΙΚΟΙΝΩΝΙΩΝ Αριθ. Β-οικ. 68425/4765 της 2/17 Νοεμ. 2000 (ΦΕΚ Β΄ 1404) Τροποποίηση της 42000/2030/81 κοιν. απόφασης του Υπουργού Συγκοινωνιών «Κωδικοποίηση και συμπλήρωση καν. Αποφάσεων» που εκδόθηκαν κατ’ εξουσιοδότηση του Ν.Δ. 102/73 «περί οργανώσεως των δια λεωφορείων αυτοκινήτων εκτελουμένων επιβατικών συγκοινωνιών». ",
"volume": 24, # "ΣΥΓΚΟΙΝΩΝΙΕΣ"
}
```
### Data Fields
The following data fields are provided for documents (`train`, `dev`, `test`):
`text`: (**str**) The full content of each document, which is represented by its `header` and `articles` (i.e., the `main_body`).\
`label`: (**class label**) Depending on the configuration, the volume/chapter/subject class of the document. For the volume-level configuration, the label is specifically one of: ["ΚΟΙΝΩΝΙΚΗ ΠΡΟΝΟΙΑ",
"ΓΕΩΡΓΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΡΑΔΙΟΦΩΝΙΑ ΚΑΙ ΤΥΠΟΣ",
"ΒΙΟΜΗΧΑΝΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΥΓΕΙΟΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΠΟΛΕΜΙΚΟ ΝΑΥΤΙΚΟ",
"ΤΑΧΥΔΡΟΜΕΙΑ - ΤΗΛΕΠΙΚΟΙΝΩΝΙΕΣ",
"ΔΑΣΗ ΚΑΙ ΚΤΗΝΟΤΡΟΦΙΑ",
"ΕΛΕΓΚΤΙΚΟ ΣΥΝΕΔΡΙΟ ΚΑΙ ΣΥΝΤΑΞΕΙΣ",
"ΠΟΛΕΜΙΚΗ ΑΕΡΟΠΟΡΙΑ",
"ΝΟΜΙΚΑ ΠΡΟΣΩΠΑ ΔΗΜΟΣΙΟΥ ΔΙΚΑΙΟΥ",
"ΝΟΜΟΘΕΣΙΑ ΑΝΩΝΥΜΩΝ ΕΤΑΙΡΕΙΩΝ ΤΡΑΠΕΖΩΝ ΚΑΙ ΧΡΗΜΑΤΙΣΤΗΡΙΩΝ",
"ΠΟΛΙΤΙΚΗ ΑΕΡΟΠΟΡΙΑ",
"ΕΜΜΕΣΗ ΦΟΡΟΛΟΓΙΑ",
"ΚΟΙΝΩΝΙΚΕΣ ΑΣΦΑΛΙΣΕΙΣ",
"ΝΟΜΟΘΕΣΙΑ ΔΗΜΩΝ ΚΑΙ ΚΟΙΝΟΤΗΤΩΝ",
"ΝΟΜΟΘΕΣΙΑ ΕΠΙΜΕΛΗΤΗΡΙΩΝ ΣΥΝΕΤΑΙΡΙΣΜΩΝ ΚΑΙ ΣΩΜΑΤΕΙΩΝ",
"ΔΗΜΟΣΙΑ ΕΡΓΑ",
"ΔΙΟΙΚΗΣΗ ΔΙΚΑΙΟΣΥΝΗΣ",
"ΑΣΦΑΛΙΣΤΙΚΑ ΤΑΜΕΙΑ",
"ΕΚΚΛΗΣΙΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΕΚΠΑΙΔΕΥΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΔΗΜΟΣΙΟ ΛΟΓΙΣΤΙΚΟ",
"ΤΕΛΩΝΕΙΑΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΣΥΓΚΟΙΝΩΝΙΕΣ",
"ΕΘΝΙΚΗ ΑΜΥΝΑ",
"ΣΤΡΑΤΟΣ ΞΗΡΑΣ",
"ΑΓΟΡΑΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΔΗΜΟΣΙΟΙ ΥΠΑΛΛΗΛΟΙ",
"ΠΕΡΙΟΥΣΙΑ ΔΗΜΟΣΙΟΥ ΚΑΙ ΝΟΜΙΣΜΑ",
"ΟΙΚΟΝΟΜΙΚΗ ΔΙΟΙΚΗΣΗ",
"ΛΙΜΕΝΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΑΣΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΠΟΛΙΤΙΚΗ ΔΙΚΟΝΟΜΙΑ",
"ΔΙΠΛΩΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΔΙΟΙΚΗΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΑΜΕΣΗ ΦΟΡΟΛΟΓΙΑ",
"ΤΥΠΟΣ ΚΑΙ ΤΟΥΡΙΣΜΟΣ",
"ΕΘΝΙΚΗ ΟΙΚΟΝΟΜΙΑ",
"ΑΣΤΥΝΟΜΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΑΓΡΟΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΕΡΓΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΠΟΙΝΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΕΜΠΟΡΙΚΗ ΝΟΜΟΘΕΣΙΑ",
"ΕΠΙΣΤΗΜΕΣ ΚΑΙ ΤΕΧΝΕΣ",
"ΕΜΠΟΡΙΚΗ ΝΑΥΤΙΛΙΑ",
"ΣΥΝΤΑΓΜΑΤΙΚΗ ΝΟΜΟΘΕΣΙΑ"
] \
For the chapter-level and subject-level configurations, the label is instead the chapter or subject class the document belongs to. The full label lists are omitted here due to their size (389 chapter classes and 2285 subject classes).
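Because `label` is a `datasets.ClassLabel` feature, the integer values stored in each example can be mapped back and forth to the class names listed above. A minimal sketch (the repo id is again an assumption):

```python
from datasets import load_dataset

# Repo id is an assumption for illustration.
train = load_dataset("AI-team-UoA/greek_legal_code", "volume", split="train")

label_feature = train.features["label"]       # ClassLabel with the 47 volume names
print(label_feature.int2str(24))              # "ΣΥΓΚΟΙΝΩΝΙΕΣ"
print(label_feature.str2int("ΣΥΓΚΟΙΝΩΝΙΕΣ"))  # 24
```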
### Data Splits
| Split       | No of Documents | Avg. words |
| ----------- | --------------- | ---------- |
| Train       | 28,536          | 600        |
| Development | 9,511           | 574        |
| Test        | 9,516           | 595        |
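The document counts above match the split sizes declared in the metadata (the development set is exposed as the `validation` split). The average word counts can be approximated with simple whitespace tokenization; a minimal sketch, with the repo id again assumed:

```python
from datasets import load_dataset

# Repo id is an assumption for illustration.
ds = load_dataset("AI-team-UoA/greek_legal_code", "volume")

for split in ("train", "validation", "test"):
    texts = ds[split]["text"]
    avg_words = sum(len(t.split()) for t in texts) / len(texts)
    print(f"{split}: {len(texts)} documents, ~{avg_words:.0f} words on average")
```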
## Dataset Creation
### Curation Rationale
The dataset was curated by Papaloukas et al. (2021) in the hope of supporting and encouraging further research in NLP for the Greek language.
### Source Data
#### Initial Data Collection and Normalization
The ``Permanent Greek Legislation Code - Raptarchis`` is a thorough catalogue of Greek legislation from the creation of the Greek state in 1834 until 2015. It includes Laws, Royal and Presidential Decrees, Regulations and Decisions, retrieved from the Official Government Gazette, where Greek legislation is published. This collection is one of the official, publicly available sources of classified Greek legislation suitable for classification tasks.
Currently, the original catalogue is publicly offered in MS Word (.doc) format through the portal e-Themis, its legal database and management service, under the administration of the Ministry of the Interior (Affairs). E-Themis is primarily focused on providing legislation across a multitude of predefined thematic categories, as described in the catalogue. Its main goal is to help users find legislation of interest using the thematic index.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset does not include personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Papaloukas et al. (2021)
### Licensing Information
[More Information Needed]
### Citation Information
*Christos Papaloukas, Ilias Chalkidis, Konstantinos Athinaios, Despina-Athanasia Pantazi and Manolis Koubarakis.*
*Multi-granular Legal Topic Classification on Greek Legislation.*
*Proceedings of the 3rd Natural Legal Language Processing (NLLP) Workshop, Punta Cana, Dominican Republic, 2021*
```
@inproceedings{papaloukas-etal-2021-glc,
title = "Multi-granular Legal Topic Classification on Greek Legislation",
author = "Papaloukas, Christos and Chalkidis, Ilias and Athinaios, Konstantinos and Pantazi, Despina-Athanasia and Koubarakis, Manolis",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2021",
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2109.15298",
doi = "10.48550/arXiv.2109.15298",
pages = "63--75"
}
```
### Contributions
Thanks to [@christospi](https://github.com/christospi) for adding this dataset. |
google-research-datasets/go_emotions | google-research-datasets | 2024-01-04T11:56:51Z | 13,307 | 204 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2005.00547",
"region:us",
"emotion"
] | [
"text-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: goemotions
pretty_name: GoEmotions
config_names:
- raw
- simplified
tags:
- emotion
dataset_info:
- config_name: raw
features:
- name: text
dtype: string
- name: id
dtype: string
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: float32
- name: rater_id
dtype: int32
- name: example_very_unclear
dtype: bool
- name: admiration
dtype: int32
- name: amusement
dtype: int32
- name: anger
dtype: int32
- name: annoyance
dtype: int32
- name: approval
dtype: int32
- name: caring
dtype: int32
- name: confusion
dtype: int32
- name: curiosity
dtype: int32
- name: desire
dtype: int32
- name: disappointment
dtype: int32
- name: disapproval
dtype: int32
- name: disgust
dtype: int32
- name: embarrassment
dtype: int32
- name: excitement
dtype: int32
- name: fear
dtype: int32
- name: gratitude
dtype: int32
- name: grief
dtype: int32
- name: joy
dtype: int32
- name: love
dtype: int32
- name: nervousness
dtype: int32
- name: optimism
dtype: int32
- name: pride
dtype: int32
- name: realization
dtype: int32
- name: relief
dtype: int32
- name: remorse
dtype: int32
- name: sadness
dtype: int32
- name: surprise
dtype: int32
- name: neutral
dtype: int32
splits:
- name: train
num_bytes: 55343102
num_examples: 211225
download_size: 24828322
dataset_size: 55343102
- config_name: simplified
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': admiration
'1': amusement
'2': anger
'3': annoyance
'4': approval
'5': caring
'6': confusion
'7': curiosity
'8': desire
'9': disappointment
'10': disapproval
'11': disgust
'12': embarrassment
'13': excitement
'14': fear
'15': gratitude
'16': grief
'17': joy
'18': love
'19': nervousness
'20': optimism
'21': pride
'22': realization
'23': relief
'24': remorse
'25': sadness
'26': surprise
'27': neutral
- name: id
dtype: string
splits:
- name: train
num_bytes: 4224138
num_examples: 43410
- name: validation
num_bytes: 527119
num_examples: 5426
- name: test
num_bytes: 524443
num_examples: 5427
download_size: 3464371
dataset_size: 5275700
configs:
- config_name: raw
data_files:
- split: train
path: raw/train-*
- config_name: simplified
data_files:
- split: train
path: simplified/train-*
- split: validation
path: simplified/validation-*
- split: test
path: simplified/test-*
default: true
---
# Dataset Card for GoEmotions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/google-research/google-research/tree/master/goemotions
- **Repository:** https://github.com/google-research/google-research/tree/master/goemotions
- **Paper:** https://arxiv.org/abs/2005.00547
- **Leaderboard:**
- **Point of Contact:** [Dora Demszky](https://nlp.stanford.edu/~ddemszky/index.html)
### Dataset Summary
The GoEmotions dataset contains 58k carefully curated Reddit comments labeled for 27 emotion categories or Neutral.
The raw data is included as well as the smaller, simplified version of the dataset with predefined train/val/test
splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class, multi-label emotion classification.
### Languages
The data is in English.
## Dataset Structure
### Data Instances
Each instance is a Reddit comment with a corresponding ID and one or more emotion annotations (or neutral).
### Data Fields
The simplified configuration includes:
- `text`: the Reddit comment
- `labels`: the emotion annotations
- `id`: unique identifier of the comment (can be used to look up the entry in the raw dataset)
In addition to the above, the raw data includes:
* `author`: The Reddit username of the comment's author.
* `subreddit`: The subreddit that the comment belongs to.
* `link_id`: The link id of the comment.
* `parent_id`: The parent id of the comment.
* `created_utc`: The timestamp of the comment.
* `rater_id`: The unique id of the annotator.
* `example_very_unclear`: Whether the annotator marked the example as being very unclear or difficult to label (in this
case they did not choose any emotion labels).
In the raw data, labels are listed as their own columns with binary 0/1 entries rather than a list of ids as in the
simplified data.
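As a quick usage sketch (assuming the standard `datasets` API), the integer ids in `labels` can be mapped back to emotion names through the underlying `ClassLabel` feature:

```python
from datasets import load_dataset

ds = load_dataset("google-research-datasets/go_emotions", "simplified", split="train")

# `labels` is a Sequence(ClassLabel); int2str maps ids back to names.
emotion = ds.features["labels"].feature
example = ds[0]
print(example["text"])
print([emotion.int2str(i) for i in example["labels"]])
```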
### Data Splits
The simplified data includes a set of train/val/test splits with 43,410, 5426, and 5427 examples respectively.
## Dataset Creation
### Curation Rationale
From the paper abstract:
> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
fine-grained typology, adaptable to multiple downstream tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper.
#### Who are the source language producers?
English-speaking Reddit users.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Annotations were produced by 3 English-speaking crowdworkers in India.
### Personal and Sensitive Information
This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
are typically dissociated from personal real-world identities, this is not always the case. It may therefore be
possible to discover the identities of the individuals who created this content in some cases.
## Considerations for Using the Data
### Social Impact of Dataset
Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness (see
[this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).
### Discussion of Biases
From the authors' github page:
> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).
### Licensing Information
The GitHub repository which houses this dataset has an
[Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).
### Citation Information
```
@inproceedings{demszky2020goemotions,
 author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
 booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
 title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
 year = {2020}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
uoft-cs/cifar100 | uoft-cs | 2024-01-04T06:57:47Z | 9,673 | 46 | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-100
pretty_name: Cifar100
dataset_info:
config_name: cifar100
features:
- name: img
dtype: image
- name: fine_label
dtype:
class_label:
names:
'0': apple
'1': aquarium_fish
'2': baby
'3': bear
'4': beaver
'5': bed
'6': bee
'7': beetle
'8': bicycle
'9': bottle
'10': bowl
'11': boy
'12': bridge
'13': bus
'14': butterfly
'15': camel
'16': can
'17': castle
'18': caterpillar
'19': cattle
'20': chair
'21': chimpanzee
'22': clock
'23': cloud
'24': cockroach
'25': couch
'26': cra
'27': crocodile
'28': cup
'29': dinosaur
'30': dolphin
'31': elephant
'32': flatfish
'33': forest
'34': fox
'35': girl
'36': hamster
'37': house
'38': kangaroo
'39': keyboard
'40': lamp
'41': lawn_mower
'42': leopard
'43': lion
'44': lizard
'45': lobster
'46': man
'47': maple_tree
'48': motorcycle
'49': mountain
'50': mouse
'51': mushroom
'52': oak_tree
'53': orange
'54': orchid
'55': otter
'56': palm_tree
'57': pear
'58': pickup_truck
'59': pine_tree
'60': plain
'61': plate
'62': poppy
'63': porcupine
'64': possum
'65': rabbit
'66': raccoon
'67': ray
'68': road
'69': rocket
'70': rose
'71': sea
'72': seal
'73': shark
'74': shrew
'75': skunk
'76': skyscraper
'77': snail
'78': snake
'79': spider
'80': squirrel
'81': streetcar
'82': sunflower
'83': sweet_pepper
'84': table
'85': tank
'86': telephone
'87': television
'88': tiger
'89': tractor
'90': train
'91': trout
'92': tulip
'93': turtle
'94': wardrobe
'95': whale
'96': willow_tree
'97': wolf
'98': woman
'99': worm
- name: coarse_label
dtype:
class_label:
names:
'0': aquatic_mammals
'1': fish
'2': flowers
'3': food_containers
'4': fruit_and_vegetables
'5': household_electrical_devices
'6': household_furniture
'7': insects
'8': large_carnivores
'9': large_man-made_outdoor_things
'10': large_natural_outdoor_scenes
'11': large_omnivores_and_herbivores
'12': medium_mammals
'13': non-insect_invertebrates
'14': people
'15': reptiles
'16': small_mammals
'17': trees
'18': vehicles_1
'19': vehicles_2
splits:
- name: train
num_bytes: 112545106.0
num_examples: 50000
- name: test
num_bytes: 22564261.0
num_examples: 10000
download_size: 142291368
dataset_size: 135109367.0
configs:
- config_name: cifar100
data_files:
- split: train
path: cifar100/train-*
- split: test
path: cifar100/test-*
default: true
---
# Dataset Card for CIFAR-100
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Repository:**
- **Paper:** [Paper](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images
per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.
There are two labels per image - fine label (actual class) and coarse label (superclass).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'fine_label': 19,
'coarse_label': 11
}
```
### Data Fields
- `img`: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column (e.g. `dataset[0]["img"]`) the image file is automatically decoded, and decoding a large number of image files can take a significant amount of time. It is therefore important to query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]` (see the access-pattern sketch after the label listings below).
- `fine_label`: an `int` classification label with the following mapping:
  - `0`: apple
  - `1`: aquarium_fish
  - `2`: baby
  - `3`: bear
  - `4`: beaver
  - `5`: bed
  - `6`: bee
  - `7`: beetle
  - `8`: bicycle
  - `9`: bottle
  - `10`: bowl
  - `11`: boy
  - `12`: bridge
  - `13`: bus
  - `14`: butterfly
  - `15`: camel
  - `16`: can
  - `17`: castle
  - `18`: caterpillar
  - `19`: cattle
  - `20`: chair
  - `21`: chimpanzee
  - `22`: clock
  - `23`: cloud
  - `24`: cockroach
  - `25`: couch
  - `26`: cra (a typo for "crab" preserved in the released label names)
  - `27`: crocodile
  - `28`: cup
  - `29`: dinosaur
  - `30`: dolphin
  - `31`: elephant
  - `32`: flatfish
  - `33`: forest
  - `34`: fox
  - `35`: girl
  - `36`: hamster
  - `37`: house
  - `38`: kangaroo
  - `39`: keyboard
  - `40`: lamp
  - `41`: lawn_mower
  - `42`: leopard
  - `43`: lion
  - `44`: lizard
  - `45`: lobster
  - `46`: man
  - `47`: maple_tree
  - `48`: motorcycle
  - `49`: mountain
  - `50`: mouse
  - `51`: mushroom
  - `52`: oak_tree
  - `53`: orange
  - `54`: orchid
  - `55`: otter
  - `56`: palm_tree
  - `57`: pear
  - `58`: pickup_truck
  - `59`: pine_tree
  - `60`: plain
  - `61`: plate
  - `62`: poppy
  - `63`: porcupine
  - `64`: possum
  - `65`: rabbit
  - `66`: raccoon
  - `67`: ray
  - `68`: road
  - `69`: rocket
  - `70`: rose
  - `71`: sea
  - `72`: seal
  - `73`: shark
  - `74`: shrew
  - `75`: skunk
  - `76`: skyscraper
  - `77`: snail
  - `78`: snake
  - `79`: spider
  - `80`: squirrel
  - `81`: streetcar
  - `82`: sunflower
  - `83`: sweet_pepper
  - `84`: table
  - `85`: tank
  - `86`: telephone
  - `87`: television
  - `88`: tiger
  - `89`: tractor
  - `90`: train
  - `91`: trout
  - `92`: tulip
  - `93`: turtle
  - `94`: wardrobe
  - `95`: whale
  - `96`: willow_tree
  - `97`: wolf
  - `98`: woman
  - `99`: worm
- `coarse_label`: an `int` coarse classification label with the following mapping:
  - `0`: aquatic_mammals
  - `1`: fish
  - `2`: flowers
  - `3`: food_containers
  - `4`: fruit_and_vegetables
  - `5`: household_electrical_devices
  - `6`: household_furniture
  - `7`: insects
  - `8`: large_carnivores
  - `9`: large_man-made_outdoor_things
  - `10`: large_natural_outdoor_scenes
  - `11`: large_omnivores_and_herbivores
  - `12`: medium_mammals
  - `13`: non-insect_invertebrates
  - `14`: people
  - `15`: reptiles
  - `16`: small_mammals
  - `17`: trees
  - `18`: vehicles_1
  - `19`: vehicles_2
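A minimal sketch of the recommended access pattern (assuming the standard `datasets` API), decoding a single image and resolving both label ids to names:

```python
from datasets import load_dataset

ds = load_dataset("uoft-cs/cifar100", split="train")

# Query the row first, then the column: only this one image is decoded.
example = ds[0]
img = example["img"]  # PIL.Image.Image, 32x32 RGB
# ds["img"][0] would instead decode the whole 50,000-image column first.

fine = ds.features["fine_label"].int2str(example["fine_label"])
coarse = ds.features["coarse_label"].int2str(example["coarse_label"])
print(img.size, fine, coarse)
```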
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar100|50000| 10000|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
uoft-cs/cifar10 | uoft-cs | 2024-01-04T06:53:11Z | 52,877 | 73 | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-classification"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-80-Million-Tiny-Images
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-10
pretty_name: Cifar10
dataset_info:
config_name: plain_text
features:
- name: img
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
splits:
- name: train
num_bytes: 113648310.0
num_examples: 50000
- name: test
num_bytes: 22731580.0
num_examples: 10000
download_size: 143646105
dataset_size: 136379890.0
configs:
- config_name: plain_text
data_files:
- split: train
path: plain_text/train-*
- split: test
path: plain_text/test-*
default: true
---
# Dataset Card for CIFAR-10
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cs.toronto.edu/~kriz/cifar.html
- **Repository:**
- **Paper:** Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-10).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x201FA6EE748>,
'label': 0
}
```
### Data Fields
- img: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column (e.g. `dataset[0]["img"]`) the image file is automatically decoded, and decoding a large number of image files can take a significant amount of time. It is therefore important to query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`
- label: 0-9 with the following correspondence:
  - `0`: airplane
  - `1`: automobile
  - `2`: bird
  - `3`: cat
  - `4`: deer
  - `5`: dog
  - `6`: frog
  - `7`: horse
  - `8`: ship
  - `9`: truck
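As a small sanity-check sketch (assuming the standard `datasets` API), the per-class balance described above can be verified by counting labels on the train split:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("uoft-cs/cifar10", split="train")

# Label ids are cheap to materialize; no images are decoded here.
label = ds.features["label"]
counts = Counter(label.int2str(i) for i in ds["label"])
print(counts)  # expected: 5,000 examples for each of the 10 classes
```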
### Data Splits
The data is split into a training set (50,000 images) and a test set (10,000 images).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@czabo](https://github.com/czabo) for adding this dataset. |
openbmb/UltraFeedback | openbmb | 2023-12-29T14:11:19Z | 2,366 | 360 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.01377",
"region:us"
] | [
"text-generation"
] | 2023-09-23T15:41:04Z | null | ---
license: mit
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
## Introduction
- [GitHub Repo](https://github.com/thunlp/UltraFeedback)
- [UltraRM-13b](https://huggingface.co/openbmb/UltraRM-13b)
- [UltraCM-13b](https://huggingface.co/openbmb/UltraCM-13b)
UltraFeedback is a **large-scale, fine-grained, diverse preference dataset**, used for training powerful reward models and critic models. We collect about 64k prompts from diverse resources (including UltraChat, ShareGPT, Evol-Instruct, TruthfulQA, FalseQA, and FLAN). We then use these prompts to query multiple LLMs (see the Model Sampling section below for the model list) and generate 4 different responses for each prompt, resulting in a total of 256k samples.
To collect high-quality preference and textual feedback, we design a fine-grained annotation instruction, which contains 4 different aspects, namely **instruction-following**, **truthfulness**, **honesty** and **helpfulness**. We then ask GPT-4 to annotate the collected samples based on the instructions.
## Features
- 🆚 **Scale**: UltraFeedback consists of 64k prompts, 256k responses and 380k high-quality feedback. RLHF researchers could further construct around 1 million comparison pairs to train their reward models.
- 🌈 **Diversity**: As a preference dataset, diversity is the core requirement for UltraFeedback. We collect prompts from various sources and query a diverse set of state-of-the-art open-source and proprietary models. To further increase diversity, we intentionally selected different base models, i.e., LLaMA, Falcon, StarChat, MPT, GPT and Bard. We also apply various principles to stimulate models to complete instructions in different ways.
- 🤯 **High-density**: UltraFeedback provides both numerical and textual feedback. Moreover, we wrote fine-grained annotation documents to help rate responses in all dimensions.
## Dataset Construction
### Instruction Sampling
We sample 63,967 instructions from 6 publicly available, high-quality datasets. We include all instructions from TruthfulQA and FalseQA, randomly sampling 10k instructions from Evol-Instruct, 10k from UltraChat, and 20k from ShareGPT. For FLAN, we adopt a stratified sampling strategy, randomly sampling 3k instructions from the "Co" subset and sampling 10 instructions per task for the other three subsets, excluding those with overly long instructions.
```json
{
"evol_instruct": 10000,
"false_qa": 2339,
"flan": 20939,
"sharegpt": 19949,
"truthful_qa": 811,
"ultrachat": 9929
}
```
### Model Sampling
To prevent the reward model from overfitting to a certain text style or capturing spurious correlations between text style and rewards, we select base models of all levels, with varying sizes, architectures and training data, to complete the instructions. We set up a pool of 17 models:
- Commercial Models: GPT-4, GPT-3.5 Turbo, Bard
- LLaMA family:
1. LLaMA-2-7B-chat, LLaMA-2-13B-chat, LLaMA-2-70B-chat
2. UltraLM-13B, UltraLM-65B
3. WizardLM-7B, WizardLM-13B, WizardLM-70B
4. Vicuna-33B
5. Alpaca-7B
- Non-LLaMA series:
1. Falcon-40B-instruct
2. MPT-30B-chat
3. StarChat-Beta
4. Pythia-12B
### Principle Sampling
Following [1] and [2], we define a set of principles to explicitly align model behaviors from different aspects. We set up a pool of 5 principles: Helpfulness, Truthfulness, Honesty, Verbalized Calibration and Harmlessness. For each instruction, we randomly sample 4 models to complete the instruction, and for each completion, we sample a principle and add it to the system prompt to align the model behavior. Considering that different datasets have different characteristics, not all datasets are suitable for all principles. We provide the following table to show the principle distribution for each dataset.
| Dataset | Principle |
| ------------- | ------------------------------------------------------------ |
| Evol Instruct | 100% Helpful |
| FalseQA | 100% Truthful |
| Flan | 60% Helpful, 20% Truthful, 20% Verbalized Calibration |
| ShareGPT | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
| TruthfulQA | 100% Truthful |
| UltraChat | 60% Helpful, 20% Truthful, 18% Honesty, 2% Verbalized Calibration |
[1] Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. Sun et al.
[2] Orca: Progressive Learning from Complex Explanation Traces of GPT-4. Mukherjee et al.
## Dataset Format
```JSONC
{
"source": "sharegpt", // the dataset where the instruction comes from
"id": "D2TC9GP_0", // the id of the instruction in the source dataset
"instruction": "I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports",
"models": [ // four models that are randomly sampled from the model pool to complete the instruction
"falcon-40b-instruct",
"gpt-4",
"starchat",
"wizardlm-7b"
],
"completions": [ // four completions corresponding to the four models
{ // completion 1
"model": "falcon-40b-instruct",
"principle": "helpfulness", // priciple that used to align model behavior
"custom_system_prompt": "As an AI assistant, ensure your response offers the perfect blend of accuracy, positivity, and intrigue. Strive to be educational while keeping the user engaged.", // the system prompt corresponding to the principle, sampled from a pool of GPT-4 generated system prompts
"response": "Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!",
"annotations": {
"instruction_following": [
{
"Rating": "2",
"Rationale": "The text only partially addresses the task goal by providing a general list of activities in Cairo without considering the specific needs of the family members, such as limited walking ability and adventure preferences."
}
],
"honesty": [
{
"Rating": "3",
"Rationale": "The response is confident but contains subtle mistakes. It suggests whitewater rafting on the Nile and diving in the Red Sea, which are not feasible for a 4-5 day trip to Cairo. The rest of the suggestions are accurate and relevant to the family's preferences."
}
],
"truthfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The text suggests whitewater rafting on the Nile and diving in the Red Sea, which are not available in Cairo. These activities are factually incorrect and not aligned with the instruction.",
"Rating": "3",
"Rationale For Rating": "The text provides some truthful information, but the inclusion of whitewater rafting and diving in the Red Sea causes partial hallucination and misunderstanding."
}
],
"helpfulness": [
{
"Type": [
"1",
"2"
],
"Rationale": "The response is clear and relevant, providing useful information, but it does not address the limited walking ability of the grandparents or the adventure sports preferences of the sister, mom, and the user.",
"Rating": "3",
"Rationale For Rating": "The text is correct and provides useful information, but it does not fully address the specific needs and preferences of the family."
}
]
}
},
{ // completion 2
...
},
{ // completion 3
...
},
{ // completion 4
...
}
]
},
```
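The Features section above notes that around one million comparison pairs can be derived from these annotations. Below is a minimal sketch of one possible derivation based only on the format shown; the Hub loading call and the rating-averaging rule are illustrative assumptions, not the authors' official recipe.

```python
from datasets import load_dataset

ds = load_dataset("openbmb/UltraFeedback", split="train")

ASPECTS = ["instruction_following", "honesty", "truthfulness", "helpfulness"]

def mean_rating(completion):
    # Each aspect holds a list of annotation dicts with a string "Rating";
    # non-numeric ratings (e.g. "N/A") are skipped for simplicity.
    scores = []
    for aspect in ASPECTS:
        for ann in (completion["annotations"].get(aspect) or []):
            try:
                scores.append(float(ann["Rating"]))
            except (TypeError, ValueError):
                pass
    return sum(scores) / len(scores) if scores else 0.0

pairs = []
for ex in ds.select(range(100)):  # small sample for illustration
    ranked = sorted(ex["completions"], key=mean_rating, reverse=True)
    if len(ranked) >= 2 and mean_rating(ranked[0]) > mean_rating(ranked[-1]):
        pairs.append({
            "prompt": ex["instruction"],
            "chosen": ranked[0]["response"],
            "rejected": ranked[-1]["response"],
        })

print(f"built {len(pairs)} comparison pairs from 100 prompts")
```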
## Dataset Example
Here we present an example of UltraFeedback
> **User:** I am going to cairo in June of this year, thinking of 4-5 days, what are the best things I could do? Going with family of 5, granddad, grandmom, sister, mom and me. No days can have too much walking but sister, me and mom like adventure sports
> **Assistant 1 (falcon-40b-instruct):** Cairo is a city that has something for everyone. Some of the best things you can do are visiting the Pyramids of Giza and the museums housing the ancient Egyptian artifacts, taking a camel ride in the desert, visiting the medieval neighborhoods such as Islamic Cairo or Khan El-Khalili, trying local food in traditional restaurants like Ful medames, or venturing into one of the city's many souks to shop for authentic goods. For adventure sports, you can go whitewater rafting on the Nile, hot air ballooning over the desert, or diving in the Red Sea. Just be mindful of the scorching heat and ensure everyone wears appropriate clothing and drinks enough water. Above all, have fun and enjoy your trip!
> **Assistant 2 (gpt-4):** In Cairo for 4-5 days, the following activities cater to your family's preferences, with consideration for the limited walking ability of your grandparents:\n\n1. Visit the Pyramids of Giza and the Great Sphinx: A must-see in Cairo, it's an incredible experience for all ages. You can explore the premises using a hired camel, horse, or a horse-drawn carriage which accommodate the needs of your grandparents while keeping the experience enjoyable.\n\n2. Egyptian Museum: This museum in Tahrir Square contains over 120,000 artifacts of Egypt's history. A wheelchair can be arranged for the grandparents to comfortably explore the museum at their own pace.\n\n3. Khan el-Khalili Bazaar: A famous market in Cairo, suitable for souvenir shopping and experiencing the local culture. There are many cafes and restaurants your family can rest at during the visit.\n\n4. Nile River Cruise: Enjoy a dinner cruise with your family on the famous Nile River. It includes entertainment and a beautiful view of the city by night. Boats often have ramps and accessible facilities for limited mobility passangers.\n\n5. Felucca Ride: During the day, go for a more traditional sailboat ride called a Felucca. This can be done on an off-day or as a relaxation activity after an active day.\n\nFor adventure activities, you and your mom and sister can explore these options:\n\n1. Sandboarding: Head to the desert for a thrilling sandboarding experience on the dunes just outside Cairo.\n\n2. Hot Air Balloon Ride: Experience Egypt from above with a hot air balloon ride in Luxor. It would require a day trip from Cairo but is an incredible experience.\n\n3. Quad Biking: Go for a guided quad biking adventure in the desert, enjoying panoramic views of the iconic Pyramids.\n\nSpace the activities according to your family's preferences and ensure rest periods in the afternoon. Consider hiring a private guide or joining a wheelchair-accessible tour since you're concerned about your grandparents' walking ability.
> **Assistant 3 (starchat):** Here are some of the top things to do in Cairo, Egypt, with a family of 5:\n\n- Giza Pyramids: The Giza Pyramids, located just outside Cairo, are a must-see for anyone visiting the city. The Great Pyramid of Khufu, the Great Pyramid of Khafre, and the Great Pyramid of Menkaure make up one of the Seven Wonders of the Ancient World and are an impressive sight to behold.\n\n- Egyptian Museum: The Egyptian Museum, located in the heart of Cairo, is home to one of the world's largest and most impressive collections of ancient Egyptian art and artifacts. Visitors can explore the museum's extensive collection, which includes items such as mummies, sarcophagi, statues, paintings, and more.\n\n- Citadel of Salah El-Din: Located on a hill overlooking Cairo, the Citadel of Salah El-Din offers stunning views of the city and is an excellent place to get a sense of Cairo's rich history. Visitors can explore the various museums and mosques within the citadel, and also take a stroll through the beautiful gardens.\n\n- Khan El-Khalili Bazaar: Khan El-Khalili Bazaar is one of Cairo's most popular attractions, a bustling market where visitors can find everything from spices and pottery to clothes and jewelry. It's a great place to do some shopping and explore the local culture.\n\n- Alexandria: Just an hour's drive from Cairo, Alexandria is a historic city with a rich legacy. Visitors can explore the ancient Roman Amphitheater, the Qaitbay Citadel, and the Catacombs of Kom el-Shoqafa. They can also take a boat trip to the island of Montazah, where they can see the famous Graffiti Wall and other street art.\n\nAll of these attractions are family-friendly, with plenty of things to see and do. The Giza Pyramids in particular are very accessible and well-maintained, making them suitable for all ages.
> **Assistant 4 (wizardlm-7b):** If you are looking for adventure sports, you can explore the pyramids of Giza and go hot air ballooning over the desert. Another option could be to take a camel ride in the desert or go kayaking in the Nile River. For family-friendly activities, you can visit the Egyptian Museum, Cairo Tower, and Khan el-Khalili Bazaar. Additionally, you can take a day trip to the historical city of Alexandria and visit the Pompey's Pillar and the Roman Amphitheater. Another option could be to go on a guided tour of the ancient city of Petra in Jordan, which is a short flight from Cairo.
## Limitations
- Although GPT-4 can provide well-aligned annotation and textual feedback for most samples, we must note that GPT-4 also makes mistakes and provides inaccurate feedback.
## Citation
```bib
@misc{cui2023ultrafeedback,
title={UltraFeedback: Boosting Language Models with High-quality Feedback},
author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
year={2023},
eprint={2310.01377},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
ise-uiuc/Magicoder-Evol-Instruct-110K | ise-uiuc | 2023-12-28T03:23:17Z | 377 | 161 | [
"task_categories:text-generation",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"conversational"
] | 2023-12-03T20:05:56Z | null | ---
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- conversational
---
A decontaminated version of [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1). Decontamination is done in the same way as StarCoder ([bigcode decontamination process](https://github.com/bigcode-project/bigcode-dataset/tree/main/decontamination)). |
TwoAbove/midjourney-messages | TwoAbove | 2023-12-26T00:12:42Z | 14,307 | 2 | [
"license:apache-2.0",
"region:us"
] | [] | 2023-11-16T13:10:46Z | null | ---
license: apache-2.0
dataset_info:
features:
- name: id
dtype: string
- name: channel_id
dtype: string
- name: content
dtype: string
- name: timestamp
dtype: string
- name: image_id
dtype: string
- name: image
dtype: image
- name: url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: size
dtype: int64
splits:
- name: train
num_bytes: 0
num_examples: 0
configs:
- config_name: default
data_files:
- split: train
path: data/*
---
# midjourney-messages
## Description
This dataset contains the raw messages from Midjourney.
The initial dataset is https://huggingface.co/datasets/vivym/midjourney-messages; this version additionally includes the attached images.
|
jmhessel/newyorker_caption_contest | jmhessel | 2023-12-22T19:13:58Z | 12,134 | 65 | [
"task_categories:image-to-text",
"task_categories:multiple-choice",
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:visual-question-answering",
"task_categories:other",
"task_categories:text2text-generation",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"task_ids:visual-question-answering",
"task_ids:explanation-generation",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:found",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2209.06293",
"region:us",
"humor",
"caption contest",
"new yorker"
] | [
"image-to-text",
"multiple-choice",
"text-classification",
"text-generation",
"visual-question-answering",
"other",
"text2text-generation"
] | 2022-09-29T17:28:05Z | null | ---
annotations_creators:
- expert-generated
- crowdsourced
- found
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- image-to-text
- multiple-choice
- text-classification
- text-generation
- visual-question-answering
- other
- text2text-generation
task_ids:
- multi-class-classification
- language-modeling
- visual-question-answering
- explanation-generation
pretty_name: newyorker_caption_contest
tags:
- humor
- caption contest
- new yorker
dataset_info:
- config_name: explanation
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 133827514.64
num_examples: 2340
- name: validation
num_bytes: 8039885.0
num_examples: 130
- name: test
num_bytes: 6863533.0
num_examples: 131
download_size: 139737042
dataset_size: 148730932.64
- config_name: explanation_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 136614332.45999998
num_examples: 2358
- name: validation
num_bytes: 7911995.0
num_examples: 128
- name: test
num_bytes: 8039885.0
num_examples: 130
download_size: 134637839
dataset_size: 152566212.45999998
- config_name: explanation_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 138337491.342
num_examples: 2346
- name: validation
num_bytes: 7460490.0
num_examples: 132
- name: test
num_bytes: 7911995.0
num_examples: 128
download_size: 138271185
dataset_size: 153709976.342
- config_name: explanation_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 138247435.342
num_examples: 2334
- name: validation
num_bytes: 7911920.0
num_examples: 130
- name: test
num_bytes: 7460490.0
num_examples: 132
download_size: 136862726
dataset_size: 153619845.342
- config_name: explanation_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
dtype: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 141175335.3
num_examples: 2340
- name: validation
num_bytes: 6863533.0
num_examples: 131
- name: test
num_bytes: 7911920.0
num_examples: 130
download_size: 140501251
dataset_size: 155950788.3
- config_name: explanation_from_pixels
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 23039316.0
num_examples: 390
- name: validation
num_bytes: 7956182.0
num_examples: 130
- name: test
num_bytes: 6778892.0
num_examples: 131
download_size: 37552582
dataset_size: 37774390.0
- config_name: explanation_from_pixels_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 21986652.0
num_examples: 393
- name: validation
num_bytes: 7831556.0
num_examples: 128
- name: test
num_bytes: 7956182.0
num_examples: 130
download_size: 37534409
dataset_size: 37774390.0
- config_name: explanation_from_pixels_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 22566608.0
num_examples: 391
- name: validation
num_bytes: 7376225.0
num_examples: 132
- name: test
num_bytes: 7831556.0
num_examples: 128
download_size: 37544724
dataset_size: 37774389.0
- config_name: explanation_from_pixels_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 22566629.0
num_examples: 389
- name: validation
num_bytes: 7831536.0
num_examples: 130
- name: test
num_bytes: 7376225.0
num_examples: 132
download_size: 37573931
dataset_size: 37774390.0
- config_name: explanation_from_pixels_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 23163962.0
num_examples: 390
- name: validation
num_bytes: 6778892.0
num_examples: 131
- name: test
num_bytes: 7831536.0
num_examples: 130
download_size: 37582524
dataset_size: 37774390.0
- config_name: matching
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 618272766.36
num_examples: 9792
- name: validation
num_bytes: 34157757.0
num_examples: 531
- name: test
num_bytes: 29813118.0
num_examples: 528
download_size: 594460072
dataset_size: 682243641.36
- config_name: matching_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 593200158.116
num_examples: 9684
- name: validation
num_bytes: 36712942.0
num_examples: 546
- name: test
num_bytes: 34157757.0
num_examples: 531
download_size: 563587231
dataset_size: 664070857.116
- config_name: matching_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 591676321.09
num_examples: 9630
- name: validation
num_bytes: 33697178.0
num_examples: 540
- name: test
num_bytes: 36712942.0
num_examples: 546
download_size: 571864348
dataset_size: 662086441.09
- config_name: matching_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 615620189.53
num_examples: 9630
- name: validation
num_bytes: 34829502.0
num_examples: 546
- name: test
num_bytes: 33697178.0
num_examples: 540
download_size: 571744845
dataset_size: 684146869.53
- config_name: matching_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 609696610.648
num_examples: 9702
- name: validation
num_bytes: 29813118.0
num_examples: 528
- name: test
num_bytes: 34829502.0
num_examples: 546
download_size: 592174904
dataset_size: 674339230.648
- config_name: matching_from_pixels
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 101439044.384
num_examples: 1632
- name: validation
num_bytes: 33714551.0
num_examples: 531
- name: test
num_bytes: 29368704.0
num_examples: 528
download_size: 139733134
dataset_size: 164522299.384
- config_name: matching_from_pixels_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 94090646.83
num_examples: 1614
- name: validation
num_bytes: 36257141.0
num_examples: 546
- name: test
num_bytes: 33714551.0
num_examples: 531
download_size: 137278691
dataset_size: 164062338.82999998
- config_name: matching_from_pixels_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 96253584.505
num_examples: 1605
- name: validation
num_bytes: 33236000.0
num_examples: 540
- name: test
num_bytes: 36257141.0
num_examples: 546
download_size: 137890850
dataset_size: 165746725.505
- config_name: matching_from_pixels_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 99928910.28
num_examples: 1605
- name: validation
num_bytes: 34380303.0
num_examples: 546
- name: test
num_bytes: 33236000.0
num_examples: 540
download_size: 139585876
dataset_size: 167545213.28
- config_name: matching_from_pixels_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 102509197.79
num_examples: 1617
- name: validation
num_bytes: 29368704.0
num_examples: 528
- name: test
num_bytes: 34380303.0
num_examples: 546
download_size: 138725891
dataset_size: 166258204.79000002
- config_name: ranking
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 594615535.632
num_examples: 9576
- name: validation
num_bytes: 32624105.0
num_examples: 507
- name: test
num_bytes: 28907567.0
num_examples: 513
download_size: 571604579
dataset_size: 656147207.632
- config_name: ranking_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 580099188.9
num_examples: 9450
- name: validation
num_bytes: 35332200.0
num_examples: 534
- name: test
num_bytes: 32624105.0
num_examples: 507
download_size: 546559254
dataset_size: 648055493.9
- config_name: ranking_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 566811450.504
num_examples: 9306
- name: validation
num_bytes: 32519173.0
num_examples: 531
- name: test
num_bytes: 35332200.0
num_examples: 534
download_size: 544444097
dataset_size: 634662823.504
- config_name: ranking_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 577828323.272
num_examples: 9324
- name: validation
num_bytes: 34072817.0
num_examples: 531
- name: test
num_bytes: 32519173.0
num_examples: 531
download_size: 548880699
dataset_size: 644420313.272
- config_name: ranking_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: image_location
dtype: string
- name: image_description
dtype: string
- name: image_uncanny_description
dtype: string
- name: entities
sequence: string
- name: questions
sequence: string
- name: caption_choices
sequence: string
- name: from_description
dtype: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 593388719.232
num_examples: 9432
- name: validation
num_bytes: 28907567.0
num_examples: 513
- name: test
num_bytes: 34072817.0
num_examples: 531
download_size: 562902941
dataset_size: 656369103.232
- config_name: ranking_from_pixels
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 101282973.752
num_examples: 1596
- name: validation
num_bytes: 32072331.0
num_examples: 506
- name: test
num_bytes: 28550057.0
num_examples: 513
download_size: 134283256
dataset_size: 161905361.752
- config_name: ranking_from_pixels_1
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 93123370.15
num_examples: 1575
- name: validation
num_bytes: 34965110.0
num_examples: 534
- name: test
num_bytes: 32072331.0
num_examples: 506
download_size: 130879365
dataset_size: 160160811.15
- config_name: ranking_from_pixels_2
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 93496576.85
num_examples: 1550
- name: validation
num_bytes: 32145436.0
num_examples: 531
- name: test
num_bytes: 34965110.0
num_examples: 534
download_size: 131637359
dataset_size: 160607122.85
- config_name: ranking_from_pixels_3
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 93840620.26
num_examples: 1553
- name: validation
num_bytes: 33718821.0
num_examples: 531
- name: test
num_bytes: 32145436.0
num_examples: 531
download_size: 133214495
dataset_size: 159704877.26
- config_name: ranking_from_pixels_4
features:
- name: image
dtype: image
- name: contest_number
dtype: int32
- name: caption_choices
sequence: string
- name: winner_source
dtype: string
- name: label
dtype: string
- name: n_tokens_label
dtype: int32
- name: instance_id
dtype: string
splits:
- name: train
num_bytes: 99008131.43
num_examples: 1571
- name: validation
num_bytes: 28550057.0
num_examples: 513
- name: test
num_bytes: 33718821.0
num_examples: 531
download_size: 136230399
dataset_size: 161277009.43
configs:
- config_name: explanation
data_files:
- split: train
path: explanation/train-*
- split: validation
path: explanation/validation-*
- split: test
path: explanation/test-*
- config_name: explanation_1
data_files:
- split: train
path: explanation_1/train-*
- split: validation
path: explanation_1/validation-*
- split: test
path: explanation_1/test-*
- config_name: explanation_2
data_files:
- split: train
path: explanation_2/train-*
- split: validation
path: explanation_2/validation-*
- split: test
path: explanation_2/test-*
- config_name: explanation_3
data_files:
- split: train
path: explanation_3/train-*
- split: validation
path: explanation_3/validation-*
- split: test
path: explanation_3/test-*
- config_name: explanation_4
data_files:
- split: train
path: explanation_4/train-*
- split: validation
path: explanation_4/validation-*
- split: test
path: explanation_4/test-*
- config_name: explanation_from_pixels
data_files:
- split: train
path: explanation_from_pixels/train-*
- split: validation
path: explanation_from_pixels/validation-*
- split: test
path: explanation_from_pixels/test-*
- config_name: explanation_from_pixels_1
data_files:
- split: train
path: explanation_from_pixels_1/train-*
- split: validation
path: explanation_from_pixels_1/validation-*
- split: test
path: explanation_from_pixels_1/test-*
- config_name: explanation_from_pixels_2
data_files:
- split: train
path: explanation_from_pixels_2/train-*
- split: validation
path: explanation_from_pixels_2/validation-*
- split: test
path: explanation_from_pixels_2/test-*
- config_name: explanation_from_pixels_3
data_files:
- split: train
path: explanation_from_pixels_3/train-*
- split: validation
path: explanation_from_pixels_3/validation-*
- split: test
path: explanation_from_pixels_3/test-*
- config_name: explanation_from_pixels_4
data_files:
- split: train
path: explanation_from_pixels_4/train-*
- split: validation
path: explanation_from_pixels_4/validation-*
- split: test
path: explanation_from_pixels_4/test-*
- config_name: matching
data_files:
- split: train
path: matching/train-*
- split: validation
path: matching/validation-*
- split: test
path: matching/test-*
- config_name: matching_1
data_files:
- split: train
path: matching_1/train-*
- split: validation
path: matching_1/validation-*
- split: test
path: matching_1/test-*
- config_name: matching_2
data_files:
- split: train
path: matching_2/train-*
- split: validation
path: matching_2/validation-*
- split: test
path: matching_2/test-*
- config_name: matching_3
data_files:
- split: train
path: matching_3/train-*
- split: validation
path: matching_3/validation-*
- split: test
path: matching_3/test-*
- config_name: matching_4
data_files:
- split: train
path: matching_4/train-*
- split: validation
path: matching_4/validation-*
- split: test
path: matching_4/test-*
- config_name: matching_from_pixels
data_files:
- split: train
path: matching_from_pixels/train-*
- split: validation
path: matching_from_pixels/validation-*
- split: test
path: matching_from_pixels/test-*
- config_name: matching_from_pixels_1
data_files:
- split: train
path: matching_from_pixels_1/train-*
- split: validation
path: matching_from_pixels_1/validation-*
- split: test
path: matching_from_pixels_1/test-*
- config_name: matching_from_pixels_2
data_files:
- split: train
path: matching_from_pixels_2/train-*
- split: validation
path: matching_from_pixels_2/validation-*
- split: test
path: matching_from_pixels_2/test-*
- config_name: matching_from_pixels_3
data_files:
- split: train
path: matching_from_pixels_3/train-*
- split: validation
path: matching_from_pixels_3/validation-*
- split: test
path: matching_from_pixels_3/test-*
- config_name: matching_from_pixels_4
data_files:
- split: train
path: matching_from_pixels_4/train-*
- split: validation
path: matching_from_pixels_4/validation-*
- split: test
path: matching_from_pixels_4/test-*
- config_name: ranking
data_files:
- split: train
path: ranking/train-*
- split: validation
path: ranking/validation-*
- split: test
path: ranking/test-*
- config_name: ranking_1
data_files:
- split: train
path: ranking_1/train-*
- split: validation
path: ranking_1/validation-*
- split: test
path: ranking_1/test-*
- config_name: ranking_2
data_files:
- split: train
path: ranking_2/train-*
- split: validation
path: ranking_2/validation-*
- split: test
path: ranking_2/test-*
- config_name: ranking_3
data_files:
- split: train
path: ranking_3/train-*
- split: validation
path: ranking_3/validation-*
- split: test
path: ranking_3/test-*
- config_name: ranking_4
data_files:
- split: train
path: ranking_4/train-*
- split: validation
path: ranking_4/validation-*
- split: test
path: ranking_4/test-*
- config_name: ranking_from_pixels
data_files:
- split: train
path: ranking_from_pixels/train-*
- split: validation
path: ranking_from_pixels/validation-*
- split: test
path: ranking_from_pixels/test-*
- config_name: ranking_from_pixels_1
data_files:
- split: train
path: ranking_from_pixels_1/train-*
- split: validation
path: ranking_from_pixels_1/validation-*
- split: test
path: ranking_from_pixels_1/test-*
- config_name: ranking_from_pixels_2
data_files:
- split: train
path: ranking_from_pixels_2/train-*
- split: validation
path: ranking_from_pixels_2/validation-*
- split: test
path: ranking_from_pixels_2/test-*
- config_name: ranking_from_pixels_3
data_files:
- split: train
path: ranking_from_pixels_3/train-*
- split: validation
path: ranking_from_pixels_3/validation-*
- split: test
path: ranking_from_pixels_3/test-*
- config_name: ranking_from_pixels_4
data_files:
- split: train
path: ranking_from_pixels_4/train-*
- split: validation
path: ranking_from_pixels_4/validation-*
- split: test
path: ranking_from_pixels_4/test-*
---
# Dataset Card for New Yorker Caption Contest Benchmarks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [capcon.dev](https://www.capcon.dev)
- **Repository:** [https://github.com/jmhessel/caption_contest_corpus](https://github.com/jmhessel/caption_contest_corpus)
- **Paper:** [Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
- **Leaderboard:** https://leaderboard.allenai.org/nycc-matching/
- **Point of Contact:** [email protected]
### Dataset Summary
See [capcon.dev](https://www.capcon.dev) for more!
Data from:
[Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest](https://arxiv.org/abs/2209.06293)
```
@inproceedings{hessel2023androids,
title={Do Androids Laugh at Electric Sheep? {Humor} ``Understanding''
Benchmarks from {The New Yorker Caption Contest}},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D. and Lee, Lillian
and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
booktitle={Proceedings of the ACL},
year={2023}
}
```
If you use this dataset, we would appreciate you citing our work, as well as several other papers upon which this corpus builds. See [Citation Information](#citation-information).
We challenge AI models to "demonstrate understanding" of the
sophisticated multimodal humor of The New Yorker Caption Contest.
Concretely, we develop three carefully circumscribed tasks for which
it suffices (but is not necessary) to grasp potentially complex and
unexpected relationships between image and caption, and similarly
complex and unexpected allusions to the wide varieties of human
experience.
### Supported Tasks and Leaderboards
Three tasks are supported:
- "Matching:" a model must recognize a caption written about a cartoon (vs. options that were not);
- "Quality ranking:" a model must evaluate the quality of a caption by scoring it more highly than a lower quality option from the same contest;
- "Explanation:" a model must explain why a given joke is funny.
There are no official leaderboards (yet).
### Languages
English
## Dataset Structure
Here's an example instance from Matching:
```
{'caption_choices': ['Tell me about your childhood very quickly.',
"Believe me . . . it's what's UNDER the ground that's "
'most interesting.',
"Stop me if you've heard this one.",
'I have trouble saying no.',
'Yes, I see the train but I think we can beat it.'],
'contest_number': 49,
'entities': ['https://en.wikipedia.org/wiki/Rule_of_three_(writing)',
'https://en.wikipedia.org/wiki/Bar_joke',
'https://en.wikipedia.org/wiki/Religious_institute'],
'from_description': 'scene: a bar description: Two priests and a rabbi are '
'walking into a bar, as the bartender and another patron '
'look on. The bartender talks on the phone while looking '
'skeptically at the incoming crew. uncanny: The scene '
'depicts a very stereotypical "bar joke" that would be '
'unlikely to be encountered in real life; the skepticism '
'of the bartender suggests that he is aware he is seeing '
'this trope, and is explaining it to someone on the '
'phone. entities: Rule_of_three_(writing), Bar_joke, '
'Religious_institute. choices A: Tell me about your '
"childhood very quickly. B: Believe me . . . it's what's "
"UNDER the ground that's most interesting. C: Stop me if "
"you've heard this one. D: I have trouble saying no. E: "
'Yes, I see the train but I think we can beat it.',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=323x231 at 0x7F34F283E9D0>,
'image_description': 'Two priests and a rabbi are walking into a bar, as the '
'bartender and another patron look on. The bartender '
'talks on the phone while looking skeptically at the '
'incoming crew.',
'image_location': 'a bar',
'image_uncanny_description': 'The scene depicts a very stereotypical "bar '
'joke" that would be unlikely to be encountered '
'in real life; the skepticism of the bartender '
'suggests that he is aware he is seeing this '
'trope, and is explaining it to someone on the '
'phone.',
'instance_id': '21125bb8787b4e7e82aa3b0a1cba1571',
'label': 'C',
'n_tokens_label': 1,
'questions': ['What is the bartender saying on the phone in response to the '
'living, breathing, stereotypical bar joke that is unfolding?']}
```
The label "C" indicates that the 3rd choice in the `caption_choices` is correct.
Here's an example instance from Ranking (shown in the "from pixels" setting; the task is also available in the "from description" setting):
```
{'caption_choices': ['I guess I misunderstood when you said long bike ride.',
'Does your divorce lawyer have any other cool ideas?'],
'contest_number': 582,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=600x414 at 0x7F8FF9F96610>,
'instance_id': 'dd1c214a1ca3404aa4e582c9ce50795a',
'label': 'A',
'n_tokens_label': 1,
'winner_source': 'official_winner'}
```
The label indicates that the first caption choice ("A", here) in the `caption_choices` list was the more highly rated one.
Here's an example instance from Explanation:
```
{'caption_choices': 'The classics can be so intimidating.',
'contest_number': 752,
'entities': ['https://en.wikipedia.org/wiki/Literature',
'https://en.wikipedia.org/wiki/Solicitor'],
'from_description': 'scene: a road description: Two people are walking down a '
'path. A number of giant books have surrounded them. '
'uncanny: There are book people in this world. entities: '
'Literature, Solicitor. caption: The classics can be so '
'intimidating.',
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=800x706 at 0x7F90003D0BB0>,
'image_description': 'Two people are walking down a path. A number of giant '
'books have surrounded them.',
'image_location': 'a road',
'image_uncanny_description': 'There are book people in this world.',
'instance_id': 'eef9baf450e2fab19b96facc128adf80',
'label': 'A play on the word intimidating --- usually if the classics (i.e., '
'classic novels) were to be intimidating, this would mean that they '
'are intimidating to read due to their length, complexity, etc. But '
'here, they are surrounded by anthropomorphic books which look '
'physically intimidating, i.e., they are intimidating because they '
'may try to beat up these people.',
'n_tokens_label': 59,
'questions': ['What do the books want?']}
```
The label is an explanation of the joke, which serves as the autoregressive target.
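One straightforward way to use this (a sketch under the assumption of a standard sequence-to-sequence setup, not the paper's exact training recipe) is to pair `from_description` with the explanation:
```
def to_generation_pair(example):
    # `from_description` packs the scene, description, uncanny description,
    # entities, and caption into one string; `label` is the explanation.
    return {"source": example["from_description"], "target": example["label"]}
```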
### Data Instances
See above
### Data Fields
See above
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
dset = load_dataset("jmhessel/newyorker_caption_contest", "matching")
dset = load_dataset("jmhessel/newyorker_caption_contest", "ranking")
dset = load_dataset("jmhessel/newyorker_caption_contest", "explanation")
```
Or, in the from pixels setting, e.g.,
```
from datasets import load_dataset
dset = load_dataset("jmhessel/newyorker_caption_contest", "ranking_from_pixels")
```
Because the dataset is small, we initially reported results in a 5-fold cross-validation setting. The default splits are split 0. You can access the other splits, e.g.:
```
from datasets import load_dataset
# the 4th data split
dset = load_dataset("jmhessel/newyorker_caption_contest", "explanation_4")
```
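To aggregate over all five cross-validation splits, a sketch like the following works (config names as listed in the metadata above):
```
from datasets import load_dataset, concatenate_datasets

# "explanation" is split 0; "explanation_1" ... "explanation_4" are the rest.
configs = ["explanation"] + [f"explanation_{i}" for i in range(1, 5)]
val_sets = [
    load_dataset("jmhessel/newyorker_caption_contest", c)["validation"]
    for c in configs
]
all_val = concatenate_datasets(val_sets)
```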
## Dataset Creation
Full details are in the paper.
### Curation Rationale
See the paper for rationale/motivation.
### Source Data
See citation below. We combined 3 sources of data, and added significant annotations of our own.
#### Initial Data Collection and Normalization
Full details are in the paper.
#### Who are the source language producers?
We paid crowdworkers $15/hr to annotate the corpus.
In addition, significant annotation efforts were conducted by the authors of this work.
### Annotations
Full details are in the paper.
#### Annotation process
Full details are in the paper.
#### Who are the annotators?
A mix of crowdworkers and the authors of this paper.
### Personal and Sensitive Information
Personal and sensitive information has been redacted from the dataset. The images have already been published in The New Yorker.
## Considerations for Using the Data
### Social Impact of Dataset
It's plausible that humor could perpetuate negative stereotypes. The jokes in this corpus are a mix of highly rated crowdsourced entries and ones published in The New Yorker.
### Discussion of Biases
Humor is subjective, and some of the jokes may be considered offensive. The images may contain adult themes and minor cartoon nudity.
### Other Known Limitations
More details are in the paper.
## Additional Information
### Dataset Curators
The dataset was curated by researchers at AI2.
### Licensing Information
The annotations we provide are CC-BY-4.0. See www.capcon.dev for more info.
### Citation Information
```
@article{hessel2022androids,
title={Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest},
author={Hessel, Jack and Marasovi{\'c}, Ana and Hwang, Jena D and Lee, Lillian and Da, Jeff and Zellers, Rowan and Mankoff, Robert and Choi, Yejin},
journal={arXiv preprint arXiv:2209.06293},
year={2022}
}
```
Our data contributions are:
- The cartoon-level annotations;
- The joke explanations;
- The framing of the tasks.
We release the data we contribute under CC-BY (see DATASET_LICENSE). If you find this data useful in your work, in addition to citing our contributions, please also cite the following works, from which the cartoons/captions in our corpus are derived:
```
@misc{newyorkernextmldataset,
author={Jain, Lalit and Jamieson, Kevin and Mankoff, Robert and Nowak, Robert and Sievert, Scott},
title={The {N}ew {Y}orker Cartoon Caption Contest Dataset},
year={2020},
url={https://nextml.github.io/caption-contest-data/}
}
@inproceedings{radev-etal-2016-humor,
title = "Humor in Collective Discourse: Unsupervised Funniness Detection in The {New Yorker} Cartoon Caption Contest",
author = "Radev, Dragomir and
Stent, Amanda and
Tetreault, Joel and
Pappu, Aasish and
Iliakopoulou, Aikaterini and
Chanfreau, Agustin and
de Juan, Paloma and
Vallmitjana, Jordi and
Jaimes, Alejandro and
Jha, Rahul and
Mankoff, Robert",
booktitle = "LREC",
year = "2016",
}
@inproceedings{shahaf2015inside,
title={Inside jokes: Identifying humorous cartoon captions},
author={Shahaf, Dafna and Horvitz, Eric and Mankoff, Robert},
booktitle={KDD},
year={2015},
}
``` |
allenai/ai2_arc | allenai | 2023-12-21T15:09:48Z | 380,158 | 189 | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1803.05457",
"region:us"
] | [
"question-answering"
] | 2022-03-02T23:29:22Z | null | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- multiple-choice-qa
pretty_name: Ai2Arc
language_bcp47:
- en-US
dataset_info:
- config_name: ARC-Challenge
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 349760
num_examples: 1119
- name: test
num_bytes: 375511
num_examples: 1172
- name: validation
num_bytes: 96660
num_examples: 299
download_size: 449460
dataset_size: 821931
- config_name: ARC-Easy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 619000
num_examples: 2251
- name: test
num_bytes: 657514
num_examples: 2376
- name: validation
num_bytes: 157394
num_examples: 570
download_size: 762935
dataset_size: 1433908
configs:
- config_name: ARC-Challenge
data_files:
- split: train
path: ARC-Challenge/train-*
- split: test
path: ARC-Challenge/test-*
- split: validation
path: ARC-Challenge/validation-*
- config_name: ARC-Easy
data_files:
- split: train
path: ARC-Easy/train-*
- split: test
path: ARC-Easy/test-*
- split: validation
path: ARC-Easy/validation-*
---
# Dataset Card for "ai2_arc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/arc](https://allenai.org/data/arc)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1361.68 MB
- **Size of the generated dataset:** 2.28 MB
- **Total amount of disk used:** 1363.96 MB
### Dataset Summary
A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in
advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains
only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also
including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### ARC-Challenge
- **Size of downloaded dataset files:** 680.84 MB
- **Size of the generated dataset:** 0.83 MB
- **Total amount of disk used:** 681.67 MB
An example of 'train' looks as follows.
```
{
"answerKey": "B",
"choices": {
"label": ["A", "B", "C", "D"],
"text": ["Shady areas increased.", "Food sources increased.", "Oxygen levels increased.", "Available water increased."]
},
"id": "Mercury_SC_405487",
"question": "One year, the oak trees in a park began producing more acorns than usual. The next year, the population of chipmunks in the park also increased. Which best explains why there were more chipmunks the next year?"
}
```
#### ARC-Easy
- **Size of downloaded dataset files:** 680.84 MB
- **Size of the generated dataset:** 1.45 MB
- **Total amount of disk used:** 682.29 MB
An example of 'train' looks as follows.
```
{
"answerKey": "B",
"choices": {
"label": ["A", "B", "C", "D"],
"text": ["Shady areas increased.", "Food sources increased.", "Oxygen levels increased.", "Available water increased."]
},
"id": "Mercury_SC_405487",
"question": "One year, the oak trees in a park began producing more acorns than usual. The next year, the population of chipmunks in the park also increased. Which best explains why there were more chipmunks the next year?"
}
```
### Data Fields
The data fields are the same among all splits.
#### ARC-Challenge
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
#### ARC-Easy
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
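For reference, here is a minimal sketch that loads a configuration and recovers the correct answer text from these fields (the helper function is illustrative):
```
from datasets import load_dataset

ds = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="train")

def correct_answer(example):
    # `choices` holds parallel `label` and `text` lists; `answerKey` picks one.
    labels = example["choices"]["label"]
    return example["choices"]["text"][labels.index(example["answerKey"])]

print(correct_answer(ds[0]))
```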
### Data Splits
| name |train|validation|test|
|-------------|----:|---------:|---:|
|ARC-Challenge| 1119| 299|1172|
|ARC-Easy | 2251| 570|2376|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{allenai:arc,
author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and
Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
journal = {arXiv:1803.05457v1},
year = {2018},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
meta-math/MetaMathQA | meta-math | 2023-12-21T01:35:53Z | 8,037 | 382 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2309.12284",
"region:us",
"math",
"math-qa"
] | [] | 2023-09-21T17:22:46Z | null | ---
tags:
- math
- math-qa
license: mit
---
View the project page at https://meta-math.github.io/ and see our paper at https://arxiv.org/abs/2309.12284.
## Note
All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
You can check the `original_question` field in `meta-math/MetaMathQA`; each item comes from the GSM8K or MATH train set.
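For example, a quick sketch to inspect this (only the `original_question` field is assumed here):
```
from datasets import load_dataset

ds = load_dataset("meta-math/MetaMathQA", split="train")
print(ds[0]["original_question"])  # traces back to a GSM8K/MATH train item
```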
## Model Details
MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets and based on the powerful Mistral-7B model. We are glad to see that using the MetaMathQA datasets and changing the base model from LLaMA-2-7B to Mistral-7B boosts the GSM8K performance from 66.5 to **77.7**.
To fine-tune Mistral-7B, I would suggest using a smaller learning rate (usually 1/5 to 1/10 of the learning rate for LLaMA-2-7B) and keeping the other training arguments unchanged.
More training details and scripts can be seen at [https://github.com/meta-math/MetaMath](https://github.com/meta-math/MetaMath).
## Installation
```
pip install transformers==4.35.0
pip install torch==2.0.1
pip install sentencepiece==0.1.99
pip install tokenizers==0.13.3
pip install accelerate==0.21.0
pip install bitsandbytes==0.40.0
pip install vllm
pip install fraction
pip install protobuf
```
## Model Usage
prompting template:
```
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
```
where `{instruction}` should be replaced with your query question.
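For example, a minimal sketch of filling in the template (the helper name is illustrative):
```
PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response: Let's think step by step."
)

def build_prompt(question: str) -> str:
    return PROMPT.format(instruction=question)

print(build_prompt("What is 15% of 240?"))
```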
There is another interesting repo about Arithmo-Mistral-7B at [https://huggingface.co/akjindal53244/Arithmo-Mistral-7B](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B), where they combine our MetaMathQA dataset and the MathInstruct dataset to train a powerful model. Thanks again for their contributions.
We will also try training on the combination of the **MetaMathQA** and **MathInstruct** datasets, and will openly release all the results and training details.
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| 🔥 **MetaMath-Mistral-7B** | **77.7** | **28.2** |
We encourage anyone to use our MetaMathQA datasets. We are very happy to see the following models trained on MetaMathQA achieve very promising performance!
- OpenChat-3.5 (https://huggingface.co/openchat/openchat_3.5)
- CausalLM (https://huggingface.co/CausalLM/14B)
- zephyr (https://huggingface.co/qblocks/zephyr-7b-alpha_metamathqa)
- Ziya2 (https://huggingface.co/IDEA-CCNL/Ziya2-13B-Base)
# Citation
```bibtex
@article{yu2023metamath,
title={MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models},
author={Yu, Longhui and Jiang, Weisen and Shi, Han and Yu, Jincheng and Liu, Zhengying and Zhang, Yu and Kwok, James T and Li, Zhenguo and Weller, Adrian and Liu, Weiyang},
journal={arXiv preprint arXiv:2309.12284},
year={2023}
}
``` |
erbacher/PDEBench-1D | erbacher | 2023-12-20T21:36:56Z | 58,945 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-11-22T13:13:55Z | null | ---
dataset_info:
- config_name: Advection_Sols_beta0.1
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1030317301
dataset_size: 2079020000
- config_name: Advection_Sols_beta0.2
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1034054442
dataset_size: 2079020000
- config_name: Advection_Sols_beta0.4
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1037220772
dataset_size: 2079020000
- config_name: Advection_Sols_beta0.7
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1039496575
dataset_size: 2079020000
- config_name: Advection_Sols_beta1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041009183
dataset_size: 2079020000
- config_name: Advection_Sols_beta2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041263590
dataset_size: 2079020000
- config_name: Advection_Sols_beta4.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041302186
dataset_size: 2079020000
- config_name: Advection_Sols_beta7.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2079020000
num_examples: 10000
download_size: 1041314010
dataset_size: 2079020000
- config_name: Burgers_Sols_Nu0.001
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975050000.0
num_examples: 9500
- name: dev
num_bytes: 51975000.0
num_examples: 250
- name: test
num_bytes: 51975000.0
num_examples: 250
download_size: 1028326119
dataset_size: 2079000000.0
- config_name: Burgers_Sols_Nu0.002
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975050000.0
num_examples: 9500
- name: dev
num_bytes: 51975000.0
num_examples: 250
- name: test
num_bytes: 51975000.0
num_examples: 250
download_size: 1034543373
dataset_size: 2079000000.0
- config_name: Burgers_Sols_Nu0.004
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975050000.0
num_examples: 9500
- name: dev
num_bytes: 51975000.0
num_examples: 250
- name: test
num_bytes: 51975000.0
num_examples: 250
download_size: 1039636457
dataset_size: 2079000000.0
- config_name: Burgers_Sols_Nu0.01
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975040500.0
num_examples: 9500
- name: dev
num_bytes: 51974750.0
num_examples: 250
- name: test
num_bytes: 51974750.0
num_examples: 250
download_size: 1042820960
dataset_size: 2078990000.0
- config_name: Burgers_Sols_Nu0.02
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975040500.0
num_examples: 9500
- name: dev
num_bytes: 51974750.0
num_examples: 250
- name: test
num_bytes: 51974750.0
num_examples: 250
download_size: 1043138323
dataset_size: 2078990000.0
- config_name: Burgers_Sols_Nu0.04
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975040500.0
num_examples: 9500
- name: dev
num_bytes: 51974750.0
num_examples: 250
- name: test
num_bytes: 51974750.0
num_examples: 250
download_size: 1035623715
dataset_size: 2078990000.0
- config_name: Burgers_Sols_Nu0.1
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975031000.0
num_examples: 9500
- name: dev
num_bytes: 51974500.0
num_examples: 250
- name: test
num_bytes: 51974500.0
num_examples: 250
download_size: 995415792
dataset_size: 2078980000.0
- config_name: Burgers_Sols_Nu0.2
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975031000.0
num_examples: 9500
- name: dev
num_bytes: 51974500.0
num_examples: 250
- name: test
num_bytes: 51974500.0
num_examples: 250
download_size: 949166113
dataset_size: 2078980000.0
- config_name: Burgers_Sols_Nu0.4
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975031000.0
num_examples: 9500
- name: dev
num_bytes: 51974500.0
num_examples: 250
- name: test
num_bytes: 51974500.0
num_examples: 250
download_size: 847341109
dataset_size: 2078980000.0
- config_name: Burgers_Sols_Nu1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975031000.0
num_examples: 9500
- name: dev
num_bytes: 51974500.0
num_examples: 250
- name: test
num_bytes: 51974500.0
num_examples: 250
download_size: 573087335
dataset_size: 2078980000.0
- config_name: Burgers_Sols_Nu2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975031000.0
num_examples: 9500
- name: dev
num_bytes: 51974500.0
num_examples: 250
- name: test
num_bytes: 51974500.0
num_examples: 250
download_size: 315101631
dataset_size: 2078980000.0
- config_name: Burgers_Sols_Nu4.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1975031000.0
num_examples: 9500
- name: dev
num_bytes: 51974500.0
num_examples: 250
- name: test
num_bytes: 51974500.0
num_examples: 250
download_size: 189417705
dataset_size: 2078980000.0
- config_name: CFD_Rand_Eta0.01_Zeta0.01_periodic
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099620000
num_examples: 10000
download_size: 1576405761
dataset_size: 2099620000
- config_name: CFD_Rand_Eta0.1_Zeta0.1_periodic
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099600000
num_examples: 10000
download_size: 946984963
dataset_size: 2099600000
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099640000
num_examples: 10000
download_size: 1573309616
dataset_size: 2099640000
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_trans
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 2099610000
num_examples: 10000
download_size: 0
dataset_size: 2099610000
- config_name: ReacDiff_Nu0.5_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 103983829
dataset_size: 1055010000
- config_name: ReacDiff_Nu0.5_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 124933565
dataset_size: 1055020000
- config_name: ReacDiff_Nu0.5_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 193004745
dataset_size: 1055010000
- config_name: ReacDiff_Nu0.5_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 146090506
dataset_size: 1055010000
- config_name: ReacDiff_Nu1.0_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 217153008
dataset_size: 1055010000
- config_name: ReacDiff_Nu1.0_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 113039664
dataset_size: 1055020000
- config_name: ReacDiff_Nu1.0_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 139659779
dataset_size: 1055010000
- config_name: ReacDiff_Nu1.0_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 121216648
dataset_size: 1055010000
- config_name: ReacDiff_Nu2.0_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 57854036
dataset_size: 1055010000
- config_name: ReacDiff_Nu2.0_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 73754842
dataset_size: 1055020000
- config_name: ReacDiff_Nu2.0_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 122071454
dataset_size: 1055010000
- config_name: ReacDiff_Nu2.0_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 74329093
dataset_size: 1055010000
- config_name: ReacDiff_Nu5.0_Rho1.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 158789252
dataset_size: 1055010000
- config_name: ReacDiff_Nu5.0_Rho10.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055020000
num_examples: 10000
download_size: 55445429
dataset_size: 1055020000
- config_name: ReacDiff_Nu5.0_Rho2.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 58220881
dataset_size: 1055010000
- config_name: ReacDiff_Nu5.0_Rho5.0
features:
- name: parameters
dtype: string
- name: tensor
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1055010000
num_examples: 10000
download_size: 57392368
dataset_size: 1055010000
configs:
- config_name: Advection_Sols_beta0.1
data_files:
- split: train
path: Advection_Sols_beta0.1/train-*
- config_name: Advection_Sols_beta0.2
data_files:
- split: train
path: Advection_Sols_beta0.2/train-*
- config_name: Advection_Sols_beta0.4
data_files:
- split: train
path: Advection_Sols_beta0.4/train-*
- config_name: Advection_Sols_beta0.7
data_files:
- split: train
path: Advection_Sols_beta0.7/train-*
- config_name: Advection_Sols_beta1.0
data_files:
- split: train
path: Advection_Sols_beta1.0/train-*
- config_name: Advection_Sols_beta2.0
data_files:
- split: train
path: Advection_Sols_beta2.0/train-*
- config_name: Advection_Sols_beta4.0
data_files:
- split: train
path: Advection_Sols_beta4.0/train-*
- config_name: Advection_Sols_beta7.0
data_files:
- split: train
path: Advection_Sols_beta7.0/train-*
- config_name: Burgers_Sols_Nu0.001
data_files:
- split: train
path: Burgers_Sols_Nu0.001/train-*
- split: dev
path: Burgers_Sols_Nu0.001/dev-*
- split: test
path: Burgers_Sols_Nu0.001/test-*
- config_name: Burgers_Sols_Nu0.002
data_files:
- split: train
path: Burgers_Sols_Nu0.002/train-*
- split: dev
path: Burgers_Sols_Nu0.002/dev-*
- split: test
path: Burgers_Sols_Nu0.002/test-*
- config_name: Burgers_Sols_Nu0.004
data_files:
- split: train
path: Burgers_Sols_Nu0.004/train-*
- split: dev
path: Burgers_Sols_Nu0.004/dev-*
- split: test
path: Burgers_Sols_Nu0.004/test-*
- config_name: Burgers_Sols_Nu0.01
data_files:
- split: train
path: Burgers_Sols_Nu0.01/train-*
- split: dev
path: Burgers_Sols_Nu0.01/dev-*
- split: test
path: Burgers_Sols_Nu0.01/test-*
- config_name: Burgers_Sols_Nu0.02
data_files:
- split: train
path: Burgers_Sols_Nu0.02/train-*
- split: dev
path: Burgers_Sols_Nu0.02/dev-*
- split: test
path: Burgers_Sols_Nu0.02/test-*
- config_name: Burgers_Sols_Nu0.04
data_files:
- split: train
path: Burgers_Sols_Nu0.04/train-*
- split: dev
path: Burgers_Sols_Nu0.04/dev-*
- split: test
path: Burgers_Sols_Nu0.04/test-*
- config_name: Burgers_Sols_Nu0.1
data_files:
- split: train
path: Burgers_Sols_Nu0.1/train-*
- split: dev
path: Burgers_Sols_Nu0.1/dev-*
- split: test
path: Burgers_Sols_Nu0.1/test-*
- config_name: Burgers_Sols_Nu0.2
data_files:
- split: train
path: Burgers_Sols_Nu0.2/train-*
- split: dev
path: Burgers_Sols_Nu0.2/dev-*
- split: test
path: Burgers_Sols_Nu0.2/test-*
- config_name: Burgers_Sols_Nu0.4
data_files:
- split: train
path: Burgers_Sols_Nu0.4/train-*
- split: dev
path: Burgers_Sols_Nu0.4/dev-*
- split: test
path: Burgers_Sols_Nu0.4/test-*
- config_name: Burgers_Sols_Nu1.0
data_files:
- split: train
path: Burgers_Sols_Nu1.0/train-*
- split: dev
path: Burgers_Sols_Nu1.0/dev-*
- split: test
path: Burgers_Sols_Nu1.0/test-*
- config_name: Burgers_Sols_Nu2.0
data_files:
- split: train
path: Burgers_Sols_Nu2.0/train-*
- split: dev
path: Burgers_Sols_Nu2.0/dev-*
- split: test
path: Burgers_Sols_Nu2.0/test-*
- config_name: Burgers_Sols_Nu4.0
data_files:
- split: train
path: Burgers_Sols_Nu4.0/train-*
- split: dev
path: Burgers_Sols_Nu4.0/dev-*
- split: test
path: Burgers_Sols_Nu4.0/test-*
- config_name: CFD_Rand_Eta0.01_Zeta0.01_periodic
data_files:
- split: train
path: CFD_Rand_Eta0.01_Zeta0.01_periodic/train-*
- config_name: CFD_Rand_Eta0.1_Zeta0.1_periodic
data_files:
- split: train
path: CFD_Rand_Eta0.1_Zeta0.1_periodic/train-*
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic
data_files:
- split: train
path: CFD_Rand_Eta1.e-8_Zeta1.e-8_periodic/train-*
- config_name: CFD_Rand_Eta1.e-8_Zeta1.e-8_trans
data_files:
- split: train
path: CFD_Rand_Eta1.e-8_Zeta1.e-8_trans/train-*
- config_name: ReacDiff_Nu0.5_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho1.0/train-*
- config_name: ReacDiff_Nu0.5_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho10.0/train-*
- config_name: ReacDiff_Nu0.5_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho2.0/train-*
- config_name: ReacDiff_Nu0.5_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu0.5_Rho5.0/train-*
- config_name: ReacDiff_Nu1.0_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho1.0/train-*
- config_name: ReacDiff_Nu1.0_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho10.0/train-*
- config_name: ReacDiff_Nu1.0_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho2.0/train-*
- config_name: ReacDiff_Nu1.0_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu1.0_Rho5.0/train-*
- config_name: ReacDiff_Nu2.0_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho1.0/train-*
- config_name: ReacDiff_Nu2.0_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho10.0/train-*
- config_name: ReacDiff_Nu2.0_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho2.0/train-*
- config_name: ReacDiff_Nu2.0_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu2.0_Rho5.0/train-*
- config_name: ReacDiff_Nu5.0_Rho1.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho1.0/train-*
- config_name: ReacDiff_Nu5.0_Rho10.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho10.0/train-*
- config_name: ReacDiff_Nu5.0_Rho2.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho2.0/train-*
- config_name: ReacDiff_Nu5.0_Rho5.0
data_files:
- split: train
path: ReacDiff_Nu5.0_Rho5.0/train-*
---
# Dataset Card for "PDEBench-1D"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Antreas/TALI | Antreas | 2023-12-13T09:02:28Z | 44,830 | 14 | [
"task_categories:zero-shot-classification",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"modality:video",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"video",
"audio",
"text",
"image",
"tetramodal",
"multimodal",
"youtube",
"wikipedia"
] | [
"zero-shot-classification"
] | 2023-08-16T22:59:13Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: image_url
dtype: string
- name: item_idx
dtype: int64
- name: wit_features
struct:
- name: attribution_passes_lang_id
sequence: bool
- name: caption_alt_text_description
sequence: string
- name: caption_reference_description
sequence: string
- name: caption_title_and_reference_description
sequence: string
- name: context_page_description
sequence: string
- name: context_section_description
sequence: string
- name: hierarchical_section_title
sequence: string
- name: is_main_image
sequence: bool
- name: language
sequence: string
- name: page_changed_recently
sequence: bool
- name: page_title
sequence: string
- name: page_url
sequence: string
- name: section_title
sequence: string
- name: wit_idx
dtype: int64
- name: youtube_title_text
dtype: string
- name: youtube_description_text
dtype: string
- name: youtube_video_content
dtype: binary
- name: youtube_video_starting_time
dtype: string
- name: youtube_subtitle_text
dtype: string
- name: youtube_video_size
dtype: int64
- name: youtube_video_file_path
dtype: string
splits:
- name: train
num_bytes: 1902638101655.625
num_examples: 1052915
- name: val
num_bytes: 104485442867.25
num_examples: 57958
- name: test
num_bytes: 111107332347.375
num_examples: 61389
download_size: 2058391040534
dataset_size: 2118230876870.25
license: cc-by-4.0
task_categories:
- zero-shot-classification
tags:
- video
- audio
- text
- image
- tetramodal
- multimodal
- youtube
- wikipedia
pretty_name: TALI
size_categories:
- 1M<n<10M
---
# Dataset Card for "TALI"
## Table of Contents
1. Dataset Description
1. Abstract
2. Brief Description
2. Dataset Information
1. Modalities
2. Dataset Variants
3. Dataset Statistics
4. Data Fields
5. Data Splits
3. Dataset Creation
4. Dataset Use
5. Additional Information
## Dataset Description
### Abstract
TALI is a large-scale, tetramodal dataset designed to facilitate a shift from unimodal and duomodal to tetramodal research in deep learning. It aligns text, video, images, and audio, providing a rich resource for innovative self-supervised learning tasks and multimodal research. TALI enables exploration of how different modalities and data/model scaling affect downstream performance, with the aim of inspiring diverse research ideas and enhancing understanding of model capabilities and robustness in deep learning.
### Brief Description
TALI (Temporally and semantically Aligned Audio, Language and Images) is a dataset that uses the Wikipedia Image Text (WIT) captions and article titles to search Youtube for videos that match the captions. It then downloads the video, audio, and subtitles from these videos. The result is a rich multimodal dataset that has multiple caption types related to both the WiT Images, and the Youtube videos. This enables learning to take place between either temporally or semantically aligned text, images, audio and video.
## Dataset Information
### Modalities
The TALI dataset consists of the following modalities:
1. Image:
1. Wikipedia caption image
2. Randomly sampled image from youtube video
2. Text
1. Wikipedia Caption Text
2. Wikipedia Title Text
3. Wikipedia Main Body Text
4. YouTube Subtitle Text
5. YouTube Description Text
6. YouTube Title Text
3. Audio
1. YouTube Content Audio
4. Video
1. YouTube Content Video
## Usage:
To get started with TALI, you can load the dataset via Hugging Face's `datasets` library through our helper functions. The reason we don't use `datasets` directly is that we found `huggingface_hub` downloads to be much faster and more reliable. For a full set of possible configurations, look at [examples.py](examples.py). Here's a basic usage example:
First install the tali package:
### Installation
For the default install use:
```bash
pip install git+https://github.com/AntreasAntoniou/TALI
```
For the dev install use:
```bash
pip install git+https://github.com/AntreasAntoniou/TALI[dev]
```
Then use the dataset as follows:
### Examples
Import relevant helper functions
```python
import pathlib
from enum import Enum
import torch
from tqdm.auto import tqdm
from tali.data import (
SubModalityTypes,
TALIBaseTransform,
TALIBaseTransformConfig,
VideoFramesFormat,
default_transforms,
load_dataset_via_hub,
)
```
#### TALI with default transforms (CLIP and Whisper) and no streaming
```python
def tali_with_transforms_no_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI"
)["train"]
(
image_transforms,
text_transforms,
audio_transforms,
video_transforms,
) = default_transforms()
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=text_transforms,
image_tokenizer=image_transforms,
audio_tokenizer=audio_transforms,
video_tokenizer=video_transforms,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
for key, value in sample.items():
if hasattr(value, "shape"):
print(key, value.shape)
elif isinstance(value, torch.Tensor):
print(key, value.shape)
elif hasattr(value, "__len__"):
print(key, len(value))
print(key, type(value))
break
```
#### TALI with no transforms and no streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
```python
def tali_without_transforms_no_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI"
)["train"]
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=None,
image_tokenizer=None,
audio_tokenizer=None,
video_tokenizer=None,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
        for key, value in sample.items():
            if hasattr(value, "shape"):
                print(key, value.shape)  # tensors and arrays report their shape
            elif hasattr(value, "__len__"):
                print(key, len(value))  # sequences (e.g. PIL frame lists) report their length
            else:
                print(key, type(value))  # fall back to the value's type
break
```
#### TALI with default transforms and streaming
```python
def tali_with_transforms_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
)["train"]
(
image_transforms,
text_transforms,
audio_transforms,
video_transforms,
) = default_transforms()
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=text_transforms,
image_tokenizer=image_transforms,
audio_tokenizer=audio_transforms,
video_tokenizer=video_transforms,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
        for key, value in sample.items():
            if hasattr(value, "shape"):
                print(key, value.shape)  # tensors and arrays report their shape
            elif hasattr(value, "__len__"):
                print(key, len(value))  # sequences (e.g. PIL frame lists) report their length
            else:
                print(key, type(value))  # fall back to the value's type
break
```
#### TALI with no transforms and streaming, returning text as text, images as PIL images, videos as a list of PIL images, and audio as a sequence of floats
```python
def tali_without_transforms_streaming(
dataset_storage_path: pathlib.Path | str,
):
if isinstance(dataset_storage_path, str):
dataset_storage_path = pathlib.Path(dataset_storage_path)
dataset = load_dataset_via_hub(
dataset_storage_path, dataset_name="Antreas/TALI", streaming=True
)["train"]
preprocessing_transform = TALIBaseTransform(
cache_dir=dataset_storage_path / "cache",
text_tokenizer=None,
image_tokenizer=None,
audio_tokenizer=None,
video_tokenizer=None,
config=TALIBaseTransformConfig(
root_filepath=dataset_storage_path,
modality_list=[
SubModalityTypes.youtube_content_video,
SubModalityTypes.youtube_content_audio,
SubModalityTypes.youtube_random_video_frame,
SubModalityTypes.youtube_subtitle_text,
SubModalityTypes.youtube_description_text,
SubModalityTypes.youtube_title_text,
SubModalityTypes.wikipedia_caption_image,
SubModalityTypes.wikipedia_caption_text,
SubModalityTypes.wikipedia_main_body_text,
SubModalityTypes.wikipedia_title_text,
],
video_frames_format=VideoFramesFormat.PIL,
),
)
for sample in tqdm(dataset):
sample = preprocessing_transform(sample)
print(list(sample.keys()))
        for key, value in sample.items():
            if hasattr(value, "shape"):
                print(key, value.shape)  # tensors and arrays report their shape
            elif hasattr(value, "__len__"):
                print(key, len(value))  # sequences (e.g. PIL frame lists) report their length
            else:
                print(key, type(value))  # fall back to the value's type
break
```
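Any of the helper functions above can then be invoked with the local path where you want TALI stored; for example (the path below is a placeholder):
```python
# Placeholder path; point this at your own storage location for TALI.
tali_with_transforms_streaming("/data/tali")
```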
### Dataset Statistics
TBA
## Dataset Creation
The TALI dataset was created by starting from the WiT dataset and using either the context_page_description or page_title as a source query to search YouTube for videos that were Creative Commons opted-in and not age-restricted. The top 100 result titles were returned and compared with the source query using the text embeddings of the largest CLIP model available. The top-1 title's video under this CLIP ranking was chosen and downloaded. Each video was broken into 30-second segments, and the top-10 segments per video were chosen based on the distance between the CLIP image embedding of the first frame of each segment and the CLIP text embedding of the video's title. The image, audio, and subtitle frames were extracted from these segments. At sampling time, one of these 10 segments is randomly selected, and a 10-second sub-segment is chosen out of the 30-second clip. The result is 200 video frames (spread throughout the 10-second segment) and 160,000 audio frames (10 seconds).
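The title-ranking step described above can be sketched roughly as follows. This is an illustrative reconstruction using the `transformers` CLIP API, not the exact TALI pipeline code; the checkpoint name and the helper itself are assumptions:
```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Illustrative sketch of the CLIP-based title ranking; the checkpoint is an
# assumption standing in for "the largest CLIP model available".
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def rank_titles(source_query: str, candidate_titles: list[str]) -> list[int]:
    """Return candidate indices sorted by CLIP text similarity to the query."""
    inputs = processor(
        text=[source_query] + candidate_titles,
        return_tensors="pt",
        padding=True,
        truncation=True,
    )
    with torch.no_grad():
        embeddings = model.get_text_features(**inputs)
    embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)
    query, titles = embeddings[0], embeddings[1:]
    scores = titles @ query  # cosine similarity, since both sides are normalized
    return scores.argsort(descending=True).tolist()
```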
## Dataset Use
TALI is designed for use in a wide range of multimodal research tasks, including but not limited to:
- Multimodal understanding and reasoning
- Self-supervised learning
- Multimodal alignment and translation
- Multimodal summarization
- Multimodal question answering
## Dataset Curators
Antreas Antoniou
## Citation Information
TBA
## Contributions
Thanks to all contributors, including data curators, annotators, and software developers.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
argilla/ultrafeedback-binarized-preferences-cleaned | argilla | 2023-12-11T14:22:19Z | 1,102 | 141 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dpo",
"preference",
"ultrafeedback"
] | [
"text-generation"
] | 2023-12-05T11:07:34Z | null | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-generation
pretty_name: UltraFeedback Binarized Preferences Cleaned
dataset_info:
features:
- name: source
dtype: string
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen-rating
dtype: float64
- name: chosen-model
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected-rating
dtype: float64
- name: rejected-model
dtype: string
splits:
- name: train
num_bytes: 284937773
num_examples: 60917
download_size: 143257393
dataset_size: 284937773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- dpo
- preference
- ultrafeedback
---
# UltraFeedback - Binarized using the Average of Preference Ratings (Cleaned)
This dataset is a new iteration on top of [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/argilla/ultrafeedback-binarized-preferences), and is **the dataset Argilla recommends using from now on when fine-tuning on UltraFeedback**.
Read more about Argilla's approach towards UltraFeedback binarization at [`argilla/ultrafeedback-binarized-preferences/README.md`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences/blob/main/README.md).
## Differences with `argilla/ultrafeedback-binarized-preferences`
[AllenAI](https://huggingface.co/allenai) recently identified TruthfulQA contamination within the original UltraFeedback dataset, caused by some prompts being reused from the TruthfulQA dataset (used for benchmarking
in the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) from HuggingFace H4). Following AllenAI's advice, we removed those prompts from the UltraFeedback dataset that we binarized.
Note that our binarization uses a completely different approach from [`HuggingFaceH4/ultrafeedback_binarized`](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized): we use the average of the preference ratings rather than the critique's overall score.
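As a rough sketch of this average-rating binarization (not the actual notebook code; the field names follow our reading of the `openbmb/UltraFeedback` schema and should be treated as assumptions):
```python
from statistics import mean

# Aspects rated per completion in UltraFeedback; names are assumptions.
ASPECTS = ["instruction_following", "honesty", "truthfulness", "helpfulness"]

def avg_rating(completion: dict) -> float:
    return mean(float(completion["annotations"][a]["Rating"]) for a in ASPECTS)

def binarize(example: dict) -> dict:
    # Chosen = highest average rating, rejected = lowest.
    ranked = sorted(example["completions"], key=avg_rating, reverse=True)
    chosen, rejected = ranked[0], ranked[-1]
    prompt = example["instruction"]
    return {
        "prompt": prompt,
        "chosen": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": chosen["response"]},
        ],
        "rejected": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": rejected["response"]},
        ],
        "chosen-rating": avg_rating(chosen),
        "rejected-rating": avg_rating(rejected),
    }
```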
Besides that, we saw that not only the rows with `source=truthful_qa` were contaminated (for obvious reasons), but also some rows
coming from ShareGPT, so we removed those as well via a left join against both subsets of the [`truthful_qa`](https://huggingface.co/datasets/truthful_qa) dataset.
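The decontamination step can be sketched as an anti-join against both `truthful_qa` subsets; again, this is an illustrative reconstruction rather than the exact notebook code:
```python
from datasets import load_dataset

# Collect every TruthfulQA question, then drop rows whose prompt matches one.
tqa_gen = load_dataset("truthful_qa", "generation", split="validation")
tqa_mc = load_dataset("truthful_qa", "multiple_choice", split="validation")
contaminated = set(tqa_gen["question"]) | set(tqa_mc["question"])

binarized = load_dataset("argilla/ultrafeedback-binarized-preferences", split="train")
# "prompt" is an assumed column name; adjust to the actual schema if needed.
decontaminated = binarized.filter(lambda ex: ex["prompt"] not in contaminated)
```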
Additionally, we modified the formatting to align with both [`HuggingFaceH4/ultrafeedback_binarized`](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
and [`allenai/ultrafeedback_binarized_cleaned`](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned),
standardizing the format and easing integration with the [`huggingface/alignment-handbook`](https://github.com/huggingface/alignment-handbook).
## Reproduce
<a target="_blank" href="https://colab.research.google.com/drive/1XR9P1St4yTNY0tjti_tIjm-yzP5Bfqc0?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
To reproduce the data processing, which combines our binarization approach with HuggingFace H4's suggestions on formatting and AllenAI's on
removing the TruthfulQA contamination, feel free to run the attached Colab Notebook or just view it at [`notebook.ipynb`](./notebook.ipynb) within this repository.
From Argilla we encourage everyone to play around, investigate, and experiment with the data. We firmly believe in open-sourcing what we do:
both we and the whole community benefit a lot from open source, and we want to give back.
## Citation
If you find this dataset useful in your work, please cite the original UltraFeedback dataset: https://huggingface.co/datasets/openbmb/UltraFeedback
Additionally, you may also want to cite our work with Notus 7B, which led to the curation of this UltraFeedback dataset:
```bibtex
@misc{notus2023,
author = {Alvaro Bartolome and Gabriel Martin and Daniel Vila},
title = {Notus},
year = {2023},
publisher = {GitHub},
journal = {GitHub Repository},
howpublished = {\url{https://github.com/argilla-io/notus}}
}
```
> Alphabetically ordered by last name due to equal contribution. |
Intel/orca_dpo_pairs | Intel | 2023-11-29T14:11:17Z | 1,964 | 301 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2306.02707",
"region:us"
] | [] | 2023-09-21T10:35:16Z | null | ---
license: apache-2.0
---
The dataset contains 12k examples from the [Orca](https://arxiv.org/abs/2306.02707)-style dataset [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
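For instance, the pairs can be loaded with the `datasets` library (the `train` split is an assumption based on the size metadata):
```python
from datasets import load_dataset

# Load the ~12k Orca-style preference pairs.
pairs = load_dataset("Intel/orca_dpo_pairs", split="train")
print(pairs[0])
```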
|
defunct-datasets/bookcorpusopen | defunct-datasets | 2023-11-24T14:42:08Z | 367 | 37 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"arxiv:2105.05241",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2022-03-02T23:29:22Z | 1 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: BookCorpusOpen
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: bookcorpus
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 6643435392
num_examples: 17868
download_size: 2404269430
dataset_size: 6643435392
viewer: false
---
# Dataset Card for BookCorpusOpen
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/soskek/bookcorpus/issues/27](https://github.com/soskek/bookcorpus/issues/27)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.40 GB
- **Size of the generated dataset:** 6.64 GB
- **Total amount of disk used:** 9.05 GB
### Dataset Summary
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Defunct:</b> Dataset "bookcorpusopen" is defunct and no longer accessible due to unavailability of the source data.</p>
</div>
Books are a rich source of both fine-grained information (what a character, an object or a scene looks like) and high-level semantics (what someone is thinking or feeling, and how these states evolve through a story).
This version of bookcorpus has 17,868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community-driven platform dedicated to the archiving and long-term preservation of any and all data including, but by no means limited to, websites, books, games, software, video, audio, other digital obscura and ideas.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 2.40 GB
- **Size of the generated dataset:** 6.64 GB
- **Total amount of disk used:** 9.05 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\"\\n\\nzONE\\n\\n## The end and the beginning\\n\\nby\\n\\nPhilip F. Blood\\n\\nSMASHWORDS EDITION\\n\\nVersion 3.55\\n\\nPUBLISHED BY:\\n\\nPhi...",
"title": "zone-the-end-and-the-beginning.epub.txt"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `title`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
| name |train|
|----------|----:|
|plain_text|17868|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information.
A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241)
### Citation Information
```
@InProceedings{Zhu_2015_ICCV,
title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}
```
### Contributions
Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset. |
Malikeh1375/medical-question-answering-datasets | Malikeh1375 | 2023-11-02T03:13:38Z | 1,129 | 47 | [
"task_categories:question-answering",
"language:en",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"medical",
"clinical",
"healthcare"
] | [
"question-answering"
] | 2023-10-27T16:21:07Z | 2 | ---
language:
- en
task_categories:
- question-answering
tags:
- medical
- clinical
- healthcare
dataset_info:
- config_name: all-processed
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 276980695
num_examples: 246678
download_size: 0
dataset_size: 276980695
- config_name: chatdoctor_healthcaremagic
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 126454896
num_examples: 112165
download_size: 70518147
dataset_size: 126454896
- config_name: chatdoctor_icliniq
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 7347194
num_examples: 7321
download_size: 4153680
dataset_size: 7347194
- config_name: medical_meadow_cord19
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1336834621
num_examples: 821007
download_size: 752855706
dataset_size: 1336834621
- config_name: medical_meadow_health_advice
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 2196957
num_examples: 8676
download_size: 890725
dataset_size: 2196957
- config_name: medical_meadow_medical_flashcards
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 16453987
num_examples: 33955
download_size: 6999958
dataset_size: 16453987
- config_name: medical_meadow_mediqa
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 15690088
num_examples: 2208
download_size: 3719929
dataset_size: 15690088
- config_name: medical_meadow_medqa
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10225018
num_examples: 10178
download_size: 5505473
dataset_size: 10225018
- config_name: medical_meadow_mmmlu
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1442124
num_examples: 3787
download_size: 685604
dataset_size: 1442124
- config_name: medical_meadow_pubmed_causal
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 846695
num_examples: 2446
download_size: 210947
dataset_size: 846695
- config_name: medical_meadow_wikidoc
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 10224074
num_examples: 10000
download_size: 5593178
dataset_size: 10224074
- config_name: medical_meadow_wikidoc_patient_information
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 3262558
num_examples: 5942
download_size: 1544286
dataset_size: 3262558
configs:
- config_name: all-processed
data_files:
- split: train
path: all-processed/train-*
- config_name: chatdoctor_healthcaremagic
data_files:
- split: train
path: chatdoctor_healthcaremagic/train-*
- config_name: chatdoctor_icliniq
data_files:
- split: train
path: chatdoctor_icliniq/train-*
- config_name: medical_meadow_cord19
data_files:
- split: train
path: medical_meadow_cord19/train-*
- config_name: medical_meadow_health_advice
data_files:
- split: train
path: medical_meadow_health_advice/train-*
- config_name: medical_meadow_medical_flashcards
data_files:
- split: train
path: medical_meadow_medical_flashcards/train-*
- config_name: medical_meadow_mediqa
data_files:
- split: train
path: medical_meadow_mediqa/train-*
- config_name: medical_meadow_medqa
data_files:
- split: train
path: medical_meadow_medqa/train-*
- config_name: medical_meadow_mmmlu
data_files:
- split: train
path: medical_meadow_mmmlu/train-*
- config_name: medical_meadow_pubmed_causal
data_files:
- split: train
path: medical_meadow_pubmed_causal/train-*
- config_name: medical_meadow_wikidoc
data_files:
- split: train
path: medical_meadow_wikidoc/train-*
- config_name: medical_meadow_wikidoc_patient_information
data_files:
- split: train
path: medical_meadow_wikidoc_patient_information/train-*
---
|
allenai/objaverse-xl | allenai | 2023-10-31T16:46:54Z | 5,182 | 151 | [
"language:en",
"license:odc-by",
"arxiv:2307.05663",
"region:us"
] | [] | 2023-08-17T17:50:21Z | null | ---
license: odc-by
language:
- en
viewer: false
---
# Objaverse-XL
<a href="https://arxiv.org/abs/2307.05663" target="_blank">
    <img src="https://img.shields.io/badge/arXiv-2307.05663-b31b1b">
</a>
Objaverse-XL is an open dataset of over 10 million 3D objects!
With it, we train Zero123-XL, a foundation model for 3D, observing incredible 3D generalization abilities.
<img src="https://mattdeitke.com/static/1cdcdb2ef7033e177ca9ae2975a9b451/9c1ca/objaverse-xl.webp">
## Scale Comparison
Objaverse 1.0 was released back in December. It was a step in the right direction, but still relatively small with 800K objects.
Objaverse-XL is over an order of magnitude larger and much more diverse!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/43833dd3-ec97-4a3d-8782-00a6aea584b4">
## Unlocking Generalization
Compared to the original Zero123 model, Zero123-XL improves remarkably in 0-shot generalization abilities, even being able to perform novel view synthesis on sketches, cartoons, and people!
A ton more examples in the [📝 paper](https://arxiv.org/abs/2307.05663) :)
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/8470e4df-e39d-444b-9871-58fbee4b87fd">
## Image → 3D
With the base Zero123-XL foundation model, we can perform image → 3D using [DreamFusion](https://dreamfusion3d.github.io/), having the model guide a NeRF to generate novel views!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/571852cd-dc02-46ce-b2bb-88f64a67d0ac" type="video/mp4">
</video>
## Text → 3D
Text-to-3D comes for free with text → image models, such as with SDXL here, providing the initial image!
<video autoplay muted loop controls>
<source src="https://github.com/allenai/objaverse-rendering/assets/28768645/96255b42-8158-4c7a-8308-7b0f1257ada8" type="video/mp4">
</video>
## Scaling Trends
Beyond that, we show strong scaling trends for both Zero123-XL and [PixelNeRF](https://alexyu.net/pixelnerf/)!
<img src="https://github.com/allenai/objaverse-rendering/assets/28768645/0c8bb433-27df-43a1-8cb8-1772007c0899">
## Tutorial
Check out the [Google Colab tutorial](https://colab.research.google.com/drive/15XpZMjrHXuky0IgBbXcsUtb_0g-XWYmN?usp=sharing) to download Objaverse-XL.
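If you prefer not to use the Colab, the companion `objaverse` Python package exposes roughly the following workflow; treat this as a sketch based on our reading of the package docs, and defer to the tutorial for the authoritative API:
```python
import objaverse.xl as oxl

# Fetch the annotation dataframe for the full corpus, then download a small
# sample of objects. Function names follow the objaverse package docs and
# should be double-checked against the tutorial.
annotations = oxl.get_annotations(download_dir="~/.objaverse")
sample = annotations.sample(5)
oxl.download_objects(objects=sample, download_dir="~/.objaverse")
```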
Polycam data is made available by Polycam to academic researchers for non-commercial use upon request and approval from Polycam. For access, please fill out [this form](https://forms.gle/HUjYVtS9GKVS5QBXA).
## License
The use of the dataset as a whole is licensed under the ODC-By v1.0 license. Individual objects in Objaverse-XL are licensed under different licenses.
## Citation
To cite Objaverse-XL, please cite our [📝 arXiv](https://arxiv.org/abs/2307.05663) paper with the following BibTeX entry:
```bibtex
@article{objaverseXL,
title={Objaverse-XL: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and
Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and
Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and
Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and
Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
journal={arXiv preprint arXiv:2307.05663},
year={2023}
}
```
Objaverse 1.0 is available on 🤗Hugging Face at [@allenai/objaverse](https://huggingface.co/datasets/allenai/objaverse). To cite it, use:
```bibtex
@article{objaverse,
title={Objaverse: A Universe of Annotated 3D Objects},
author={Matt Deitke and Dustin Schwenk and Jordi Salvador and Luca Weihs and
Oscar Michel and Eli VanderBilt and Ludwig Schmidt and
Kiana Ehsani and Aniruddha Kembhavi and Ali Farhadi},
journal={arXiv preprint arXiv:2212.08051},
year={2022}
}
```
|
ClementRomac/cleaned_deduplicated_oscar | ClementRomac | 2023-10-25T14:05:19Z | 34,845 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-03-27T12:42:39Z | null | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 978937483730
num_examples: 232133013
- name: test
num_bytes: 59798696914
num_examples: 12329126
download_size: 37220219718
dataset_size: 1038736180644
---
# Dataset Card for "cleaned_deduplicated_oscar"
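Given the roughly 1 TB generated size, streaming is likely the most practical way to iterate over this corpus; a minimal sketch:
```python
from datasets import load_dataset

# Stream the train split instead of downloading ~1 TB up front.
oscar = load_dataset(
    "ClementRomac/cleaned_deduplicated_oscar", split="train", streaming=True
)
for example in oscar.take(3):
    print(example["text"][:100])
```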
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LinkSoul/instruction_merge_set | LinkSoul | 2023-10-25T10:39:46Z | 89 | 121 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-05-31T12:16:24Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 13444870155
num_examples: 10077297
download_size: 3542585235
dataset_size: 13444870155
---
# Dataset Card for "instruction_merge_set"
## This dataset is composed of the following datasets:
| Data (id in the merged set) | Hugging Face URL | Notes |
| --- | --- | --- |
| OIG (unified-任务名称) 15k | https://huggingface.co/datasets/laion/OIG | Open Instruction Generalist Dataset |
| Dolly databricks-dolly-15k | https://huggingface.co/datasets/databricks/databricks-dolly-15k | an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories |
| UltraChat | https://huggingface.co/datasets/stingning/ultrachat | multi-round dialogue data |
| Camel | https://huggingface.co/datasets/camel-ai/ai_society | 25K conversations between two gpt-3.5-turbo agents. |
| camel (同上) | https://github.com/camel-ai/camel | |
| ChatDoctor icliniq-15k HealthCareMagic-200k | https://github.com/Kent0n-Li/ChatDoctor | 200k real conversations between patients and doctors from HealthCareMagic.com 15k real conversations between patients and doctors from iciniq-10k |
| Dolly | https://github.com/databrickslabs/dolly | |
| GPT4ALL | https://github.com/nomic-ai/gpt4all | |
| GPT-4-LLM comparision_data_b alpaca_gpt4_data_zh comparision_data_a alpaca_gpt4_data 5k | https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM | English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. Chinese Instruction-Following Data generated by GPT-4 using Chinese prompts translated from Alpaca by ChatGPT. Comparison Data ranked by GPT-4 to train reward models. Answers on Unnatural Instructions Data from GPT-4 to quantify the gap between GPT-4 and instruction-tuned models at scale. |
| GuanacoDataset guanaco_chat_all-utf8 guanaco_non_chat-utf8 paper_answers-utf8 general_ans-utf8 general_questions-utf8 paper_questions-utf8 30k | https://huggingface.co/datasets/JosephusCheung/GuanacoDataset | The dataset for the Guanaco model is designed to enhance the multilingual capabilities and address various linguistic tasks. It builds upon the 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. The Paper/General-QA dataset is a collection of questions and answers constructed for AI-generated papers or general texts in English, Chinese, Japanese, and German. |
| HC3 ALL | https://huggingface.co/datasets/Hello-SimpleAI/HC3 | human-ChatGPT comparison datasets |
| instinwild instinwild_en instinwild_ch 5k | https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/instinwild | Instruction-Finetuning Dataset Collection (Alpaca-CoT) |
| Instruct-to-Code | https://huggingface.co/datasets/Graverman/Instruct-to-Code | |
| ShareGPT90K sg_90k_part2 sg_90k_part1 | https://huggingface.co/datasets/RyokoAI/ShareGPT52K | 90,000 conversations scraped via the ShareGPT API before it was shut down. These conversations include both user prompts and responses from OpenAI's ChatGPT. |
| UltraChat ultrachat_material_release_230412 ultrachat_release_230407 | https://github.com/thunlp/UltraChat | |
| wealth-alpaca-lora final_dataset_clean 4.3k | https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora | combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5, includes instructions |
| Alpaca alpaca_data 5k | https://github.com/tatsu-lab/stanford_alpaca | instruct-tuning |
| Baize alpaca_chat_data medical_chat_data quora_chat_data stack_overflow_chat_data | https://github.com/project-baize/baize-chatbot | instruction-following data we used for fine-tuning the Alpaca model. |
| botbots Reasoning flight_bookings medical_appointments travel_agency restaurants_mixed real_estate car_dealership home_maintenance, job_interview 'insurance_consultation': 16, 'hotels': 400, 'tech_support': 32, 'car_rentals': 32, 'pet_care': 48, 'restaurants': 200, 'legal_consultation': 16, 'event_tickets': 240, 'fitness_personal_training': 16, 'scientific_problems': 100 | https://github.com/radi-cho/botbots | A dataset consisting of dialogues between two instances of ChatGPT (gpt-3.5-turbo). The CLI commands and dialogue prompts themselves have been written by GPT-4. The dataset covers a wide range of contexts (questions and answers, arguing and reasoning, task-oriented dialogues) and downstream tasks (e.g., hotel reservations, medical advice). |
| ChatAlpaca chatalpaca_data_10k | https://github.com/cascip/ChatAlpaca | a chat dataset, multi-turn instruction-following conversations. |
| DERA train | https://github.com/curai/curai-research/tree/main/DERA | The following repository contains the open-ended question-answering version of MedQA. |
| GPTeacher Toolformer-dedupe-only-dataset roleplay-simple-deduped-roleplay-dataset gpt4-instruct-dedupe-only-dataset | https://github.com/teknium1/GPTeacher | A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer |
| OpenAGI | https://github.com/agiresearch/OpenAGI | |
| presto | https://github.com/google-research-datasets/presto | A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs |
|
orion-ai-lab/S4A | orion-ai-lab | 2023-10-24T08:15:34Z | 10,564 | 9 | [
"arxiv:2204.00951",
"region:us"
] | [] | 2022-07-01T16:26:54Z | null | ---
YAML tags:
---
## Dataset Description
- **Homepage:** [www.sen4agrinet.space.noa.gr](https://www.sen4agrinet.space.noa.gr/)
- **Repository:** [github.com/Orion-AI-Lab/S4A](https://github.com/Orion-AI-Lab/S4A)
- **Paper:** ["A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning" (D. Sykas, M. Sdraka, D. Zografakis, I. Papoutsis](https://arxiv.org/abs/2204.00951)
### Dataset Summary
Sen4AgriNet is a Sentinel-2 based, time-series, multi-country benchmark dataset tailored for agricultural monitoring applications with Machine and Deep Learning. It is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS) for harmonizing country-wide labels. These declarations have only recently been made available as open data, allowing for the first time the labelling of satellite imagery from ground truth data. We proceed to propose and standardise a new crop type taxonomy across Europe that addresses Common Agriculture Policy (CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year dataset that includes all spectral information. The current version covers the period 2019-2020 for Catalonia and France, while it can be extended to include additional countries.
### Languages
All information in the dataset is in English (`en_GB`).
## Dataset Structure
### Data Instances
A typical sample in Sen4AgriNet consists of the following fields:
```
{
'patch_full_name': '2019_31TCF_patch_10_14',
'patch_year': '2019',
'patch_name': 'patch_10_14',
'patch_country_code': 'ES',
'patch_tile': '31TCF',
'B01': array([...]),
'B02': array([...]),
'B03': array([...]),
'B04': array([...]),
'B05': array([...]),
'B06': array([...]),
'B07': array([...]),
'B08': array([...]),
'B09': array([...]),
'B10': array([...]),
'B11': array([...]),
'B12': array([...]),
'B8A': array([...]),
'parcels': array([...]),
'labels': array([...]),
'timestamp': [...]
}
```
### Data Fields
Below we provide a brief explanation of each field:
- `patch_full_name`: The full name of the patch.
- `patch_year`: The year of the observations included in the patch.
- `patch_name`: The name of the patch. It is of the form: `patch_xx_yy` where `xx` and `yy` are the indices of the patch inside the tile.
- `patch_country_code`: The country code of the observations included in the patch. Currently it is either `ES` for Catalonia or `FR` for France.
- `B01`, ..., `B8A`: Each one is an array containing the observations of the corresponding Sentinel-2 band. The shape of each array is (T, H, W) where T is the number of observations, H the height of the image and W the width of the image.
- `parcels`: A mask containing the parcels code number.
- `labels`: A mask containing the class codes for each crop in the taxonomy.
- `timestamp`: The timestamps of the observations.
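As an illustration of working with these fields, a per-observation NDVI series can be computed from the 10-meter red (B04) and near-infrared (B08) bands; this is a generic sketch over a `sample` instance like the one shown above, not part of the dataset tooling:
```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), computed for every observation in a patch.
red = np.asarray(sample["B04"], dtype=np.float32)  # shape (T, H, W)
nir = np.asarray(sample["B08"], dtype=np.float32)  # shape (T, H, W)
ndvi = (nir - red) / (nir + red + 1e-6)  # epsilon guards against division by zero
print(ndvi.shape, float(ndvi.mean()))
```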
### Data Splits
In this version of the dataset there are no predefined train/val/test splits so that the users can define their own.
### Data configurations
There are the following configurations in the current version of Sen4AgriNet:
- `complete`: The complete Sen4AgriNet dataset.
- `cat_2019`: Only Catalonia data for 2019.
- `cat_2020`: Only Catalonia data for 2020.
- `fr_2019`: Only France data for 2019.
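Each configuration can be selected at load time with the `datasets` library; a minimal sketch (depending on your `datasets` version, you may also need to pass `trust_remote_code=True`):
```python
from datasets import load_dataset

# Load only the Catalonia 2019 configuration; use "complete" for everything.
s4a = load_dataset("orion-ai-lab/S4A", "cat_2019")
```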
## Dataset Creation
### Curation Rationale
One of the major problems faced by researchers in the fields of Remote Sensing and AI is the absence of country-wide labelled data that are harmonized along space and time. Specifically in the EU, the Common Agriculture Policy (CAP) has laid a stepping stone towards overcoming this issue by legally establishing Paying Agencies in each EU country, which are responsible for distributing subsidies to farmers. In order to fulfill their objectives, Paying Agencies systematically collect the cultivated crop type and parcel geometries for every farmer and record them via the Land Parcel Identification System (LPIS) in a standardized way for each country. Unfortunately, public access to these farmer declarations has been restricted for several years, thus making it almost impossible to get country-wide ground truth data. However, since 2019 and for the first time, these datasets are gradually becoming open (e.g. France, Catalonia, Estonia, Croatia, Slovenia, Slovakia and Luxembourg). This change offers a significant opportunity for the Earth Observation (EO) community to explore novel and innovative data-driven agricultural applications by exploiting this abundance of new LPIS information.
In principle, this fusion of the LPIS data sources has tremendous potential, but there are still some barriers to overcome. First of all, the LPIS system of each country is custom-configured to use the local language for crop types and a crop taxonomy structure that matches the local subsidies policy implementation. This non-standardization of the labels prohibits the spatial generalization of Deep Learning (DL) models and thus needs to be carefully handled to achieve a common representation consistent among countries. On top of these contextual/semantic barriers, parcels are mapped in the corresponding national cartographic projection, which in all cases is different from the cartographic projection of the satellite images; this poses an additional challenge in preparing a consistent, proper, at-scale DL-ready dataset.
Aiming to overcome the above limitations, in this repository we offer Sen4AgriNet, a unique benchmark EO dataset for agricultural monitoring with the following key characteristics:
- it is **pixel based** to capture spatial parcel variability
- it is **multi-temporal** to capture the crop phenology phases
- it is **multi-annual** to model the seasonal variability
- it is **multi-country** to model the geographic spatial variability
- it is **object-aggregated** to further incorporate ground truth data (parcel geometries) in the process
- it is **modular** since it can be enlarged with parcels from more EU countries or expanded in a straightforward way to include additional sensor and non-EO data (e.g. meteorological data)
### Source Data
1) The LPIS data for the region of Catalonia for 2019–2020 provided by the "Agricultura, Ramaderia, Pesca i Alimentacio" with an Open Data Commons Attribution License.
2) France LPIS data for 2019 provided by the French Paying Agency with an Open Data Commons Attribution License.
3) All Sentinel-2 L1C images with less than 10% cloud coverage for the above tiles.
#### Initial Data Collection and Normalization
The Sentinel-2 L1C images were downloaded from Copernicus and each image was split into 900 non-overlapping patches. A single patch contains 366x366-pixel images for the 10-meter bands, 183x183 for the 20-meter bands and 61x61 for the 60-meter bands. The patch size was chosen so that the tile size divides evenly at all 3 spatial resolutions of Sentinel-2.
#### Annotation process
The Indicative Crop Classification (ICC) scheme was developed by the United Nations FAO organization. It is an approach to produce a harmonized vocabulary and taxonomy for crops and plants that are used in food production. Sen4AgriNet adopts and customises an extended version of FAO ICC in order to create a universally applicable crop label nomenclature for the collected LPIS data with the following benefits:
- A single language (English) is used for naming all classes across all participating countries.
- Classes are normalized among different datasets.
- Hierarchical class structure is adopted. Depending on the application different levels of classes can be used.
- Additional non-agricultural classes are used (e.g. "fallow land", "barren land", etc.) to model Remote Sensing spectral signatures since agricultural parcels co-exist with other unrelated classes in satellite images.
The presented custom FAO/CLC classification scheme has a total of 9 groups and 168 classes and sub-classes. Of these, 161 classes/sub-classes are crop-related, 4 are major CLC classes (as sub-classes in this hierarchy), 2 are the fallow and barren lands, and 1 is the no-data sub-class.
This crop taxonomy was used to create the `labels` mask. In addition, a second annotation mask is provided (`parcels`) where each parcel obtains a unique identifier, regardless of the crops cultivated in it.
### Personal and Sensitive Information
None.
## Considerations for Using the Data
### Social Impact of Dataset
We believe that Sen4AgriNet can be regarded as a labelled benchmark dataset, tailored for CAP and the use of Sentinel-2 imagery that comes at no cost, and can spur numerous DL-based applications for crop type classification, parcel extraction, parcel counting and semantic segmentation. More importantly, the dataset can be extended to include other input data sources, including Sentinel-1 Synthetic Aperture Radar data, and meteorological data, allowing a new family of applications on early warning risk assessment and agricultural insurance.
## Additional Information
### Licensing Information
MIT License.
### Citation Information
```
@ARTICLE{
9749916,
author={Sykas, Dimitrios and Sdraka, Maria and Zografakis, Dimitrios and Papoutsis, Ioannis},
journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
title={A Sentinel-2 multi-year, multi-country benchmark dataset for crop classification and segmentation with deep learning},
year={2022},
doi={10.1109/JSTARS.2022.3164771}
}
```
|
davanstrien/AmericanStories-parquet | davanstrien | 2023-10-20T15:09:54Z | 16,982 | 2 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-10-20T14:07:00Z | null | ---
configs:
- config_name: default
data_files:
- split: '1774'
path: data/1774-*
- split: '1798'
path: data/1798-*
- split: '1799'
path: data/1799-*
- split: '1800'
path: data/1800-*
- split: '1801'
path: data/1801-*
- split: '1802'
path: data/1802-*
- split: '1803'
path: data/1803-*
- split: '1804'
path: data/1804-*
- split: '1805'
path: data/1805-*
- split: '1806'
path: data/1806-*
- split: '1807'
path: data/1807-*
- split: '1808'
path: data/1808-*
- split: '1809'
path: data/1809-*
- split: '1810'
path: data/1810-*
- split: '1811'
path: data/1811-*
- split: '1812'
path: data/1812-*
- split: '1813'
path: data/1813-*
- split: '1814'
path: data/1814-*
- split: '1815'
path: data/1815-*
- split: '1816'
path: data/1816-*
- split: '1817'
path: data/1817-*
- split: '1818'
path: data/1818-*
- split: '1819'
path: data/1819-*
- split: '1820'
path: data/1820-*
- split: '1821'
path: data/1821-*
- split: '1822'
path: data/1822-*
- split: '1823'
path: data/1823-*
- split: '1824'
path: data/1824-*
- split: '1825'
path: data/1825-*
- split: '1826'
path: data/1826-*
- split: '1827'
path: data/1827-*
- split: '1828'
path: data/1828-*
- split: '1829'
path: data/1829-*
- split: '1830'
path: data/1830-*
- split: '1831'
path: data/1831-*
- split: '1832'
path: data/1832-*
- split: '1833'
path: data/1833-*
- split: '1834'
path: data/1834-*
- split: '1835'
path: data/1835-*
- split: '1836'
path: data/1836-*
- split: '1837'
path: data/1837-*
- split: '1838'
path: data/1838-*
- split: '1839'
path: data/1839-*
- split: '1840'
path: data/1840-*
- split: '1841'
path: data/1841-*
- split: '1842'
path: data/1842-*
- split: '1843'
path: data/1843-*
- split: '1844'
path: data/1844-*
- split: '1845'
path: data/1845-*
- split: '1846'
path: data/1846-*
- split: '1847'
path: data/1847-*
- split: '1848'
path: data/1848-*
- split: '1849'
path: data/1849-*
- split: '1850'
path: data/1850-*
- split: '1851'
path: data/1851-*
- split: '1852'
path: data/1852-*
- split: '1853'
path: data/1853-*
- split: '1854'
path: data/1854-*
- split: '1855'
path: data/1855-*
- split: '1856'
path: data/1856-*
- split: '1857'
path: data/1857-*
- split: '1858'
path: data/1858-*
- split: '1859'
path: data/1859-*
- split: '1860'
path: data/1860-*
- split: '1861'
path: data/1861-*
- split: '1862'
path: data/1862-*
- split: '1863'
path: data/1863-*
- split: '1864'
path: data/1864-*
- split: '1865'
path: data/1865-*
- split: '1866'
path: data/1866-*
- split: '1867'
path: data/1867-*
- split: '1868'
path: data/1868-*
- split: '1869'
path: data/1869-*
- split: '1870'
path: data/1870-*
- split: '1871'
path: data/1871-*
- split: '1872'
path: data/1872-*
- split: '1873'
path: data/1873-*
- split: '1874'
path: data/1874-*
- split: '1875'
path: data/1875-*
- split: '1876'
path: data/1876-*
- split: '1877'
path: data/1877-*
- split: '1878'
path: data/1878-*
- split: '1879'
path: data/1879-*
- split: '1880'
path: data/1880-*
- split: '1881'
path: data/1881-*
- split: '1882'
path: data/1882-*
- split: '1883'
path: data/1883-*
- split: '1884'
path: data/1884-*
- split: '1885'
path: data/1885-*
- split: '1886'
path: data/1886-*
- split: '1887'
path: data/1887-*
- split: '1888'
path: data/1888-*
- split: '1889'
path: data/1889-*
- split: '1890'
path: data/1890-*
- split: '1891'
path: data/1891-*
- split: '1892'
path: data/1892-*
- split: '1893'
path: data/1893-*
- split: '1894'
path: data/1894-*
- split: '1895'
path: data/1895-*
- split: '1896'
path: data/1896-*
- split: '1897'
path: data/1897-*
- split: '1898'
path: data/1898-*
- split: '1899'
path: data/1899-*
- split: '1900'
path: data/1900-*
- split: '1901'
path: data/1901-*
- split: '1902'
path: data/1902-*
- split: '1903'
path: data/1903-*
- split: '1904'
path: data/1904-*
- split: '1905'
path: data/1905-*
- split: '1906'
path: data/1906-*
- split: '1907'
path: data/1907-*
- split: '1908'
path: data/1908-*
- split: '1909'
path: data/1909-*
- split: '1910'
path: data/1910-*
- split: '1911'
path: data/1911-*
- split: '1912'
path: data/1912-*
- split: '1913'
path: data/1913-*
- split: '1914'
path: data/1914-*
- split: '1915'
path: data/1915-*
- split: '1916'
path: data/1916-*
- split: '1917'
path: data/1917-*
- split: '1918'
path: data/1918-*
- split: '1919'
path: data/1919-*
- split: '1920'
path: data/1920-*
- split: '1921'
path: data/1921-*
- split: '1922'
path: data/1922-*
- split: '1923'
path: data/1923-*
- split: '1924'
path: data/1924-*
- split: '1925'
path: data/1925-*
- split: '1926'
path: data/1926-*
- split: '1927'
path: data/1927-*
- split: '1928'
path: data/1928-*
- split: '1929'
path: data/1929-*
- split: '1930'
path: data/1930-*
- split: '1931'
path: data/1931-*
- split: '1932'
path: data/1932-*
- split: '1933'
path: data/1933-*
- split: '1934'
path: data/1934-*
- split: '1935'
path: data/1935-*
- split: '1936'
path: data/1936-*
- split: '1937'
path: data/1937-*
- split: '1938'
path: data/1938-*
- split: '1939'
path: data/1939-*
- split: '1940'
path: data/1940-*
- split: '1941'
path: data/1941-*
- split: '1942'
path: data/1942-*
- split: '1943'
path: data/1943-*
- split: '1944'
path: data/1944-*
- split: '1945'
path: data/1945-*
- split: '1946'
path: data/1946-*
- split: '1947'
path: data/1947-*
- split: '1948'
path: data/1948-*
- split: '1949'
path: data/1949-*
- split: '1950'
path: data/1950-*
- split: '1951'
path: data/1951-*
- split: '1952'
path: data/1952-*
- split: '1953'
path: data/1953-*
- split: '1954'
path: data/1954-*
- split: '1955'
path: data/1955-*
- split: '1956'
path: data/1956-*
- split: '1957'
path: data/1957-*
- split: '1958'
path: data/1958-*
- split: '1959'
path: data/1959-*
- split: '1960'
path: data/1960-*
- split: '1961'
path: data/1961-*
- split: '1962'
path: data/1962-*
- split: '1963'
path: data/1963-*
dataset_info:
features:
- name: article_id
dtype: string
- name: newspaper_name
dtype: string
- name: edition
dtype: string
- name: date
dtype: string
- name: page
dtype: string
- name: headline
dtype: string
- name: byline
dtype: string
- name: article
dtype: string
splits:
- name: '1774'
num_bytes: 22245
num_examples: 12
- name: '1798'
num_bytes: 72288
num_examples: 73
- name: '1799'
num_bytes: 946373
num_examples: 623
- name: '1800'
num_bytes: 38139
num_examples: 45
- name: '1801'
num_bytes: 94991
num_examples: 93
- name: '1802'
num_bytes: 1463322
num_examples: 1158
- name: '1803'
num_bytes: 799797
num_examples: 654
- name: '1804'
num_bytes: 120141
num_examples: 103
- name: '1805'
num_bytes: 2475205
num_examples: 2303
- name: '1806'
num_bytes: 2043729
num_examples: 1860
- name: '1807'
num_bytes: 310568
num_examples: 315
- name: '1808'
num_bytes: 75639
num_examples: 45
- name: '1809'
num_bytes: 430706
num_examples: 422
- name: '1810'
num_bytes: 1319755
num_examples: 982
- name: '1811'
num_bytes: 117701
num_examples: 110
- name: '1812'
num_bytes: 75299
num_examples: 67
- name: '1813'
num_bytes: 290966
num_examples: 242
- name: '1814'
num_bytes: 378212
num_examples: 379
- name: '1815'
num_bytes: 185179
num_examples: 160
- name: '1816'
num_bytes: 495706
num_examples: 409
- name: '1817'
num_bytes: 446354
num_examples: 394
- name: '1818'
num_bytes: 1257916
num_examples: 1108
- name: '1819'
num_bytes: 2476297
num_examples: 1997
- name: '1820'
num_bytes: 611884
num_examples: 433
- name: '1821'
num_bytes: 347361
num_examples: 270
- name: '1822'
num_bytes: 286227
num_examples: 264
- name: '1823'
num_bytes: 2030816
num_examples: 1113
- name: '1824'
num_bytes: 5171191
num_examples: 3110
- name: '1825'
num_bytes: 6341915
num_examples: 4005
- name: '1826'
num_bytes: 10462258
num_examples: 7079
- name: '1827'
num_bytes: 11634621
num_examples: 7213
- name: '1828'
num_bytes: 10253681
num_examples: 6350
- name: '1829'
num_bytes: 4021832
num_examples: 2296
- name: '1830'
num_bytes: 8321949
num_examples: 4232
- name: '1831'
num_bytes: 16796125
num_examples: 9699
- name: '1832'
num_bytes: 9982722
num_examples: 6565
- name: '1833'
num_bytes: 6653515
num_examples: 4108
- name: '1834'
num_bytes: 7099875
num_examples: 4632
- name: '1835'
num_bytes: 9066392
num_examples: 6168
- name: '1836'
num_bytes: 10473366
num_examples: 7375
- name: '1837'
num_bytes: 21002773
num_examples: 13609
- name: '1838'
num_bytes: 13735809
num_examples: 8492
- name: '1839'
num_bytes: 12512339
num_examples: 8938
- name: '1840'
num_bytes: 12647911
num_examples: 8052
- name: '1841'
num_bytes: 39146669
num_examples: 30019
- name: '1842'
num_bytes: 26218700
num_examples: 21290
- name: '1843'
num_bytes: 50447372
num_examples: 41657
- name: '1844'
num_bytes: 79351064
num_examples: 61373
- name: '1845'
num_bytes: 131632573
num_examples: 95921
- name: '1846'
num_bytes: 81086068
num_examples: 70331
- name: '1847'
num_bytes: 32733527
num_examples: 24354
- name: '1848'
num_bytes: 44577825
num_examples: 32531
- name: '1849'
num_bytes: 53877014
num_examples: 42711
- name: '1850'
num_bytes: 76697622
num_examples: 49992
- name: '1851'
num_bytes: 128372084
num_examples: 90184
- name: '1852'
num_bytes: 67005975
num_examples: 51172
- name: '1853'
num_bytes: 54210932
num_examples: 48130
- name: '1854'
num_bytes: 150406197
num_examples: 118825
- name: '1855'
num_bytes: 115893679
num_examples: 99390
- name: '1856'
num_bytes: 188859881
num_examples: 157592
- name: '1857'
num_bytes: 152841585
num_examples: 129179
- name: '1858'
num_bytes: 214657030
num_examples: 171877
- name: '1859'
num_bytes: 178711188
num_examples: 160924
- name: '1860'
num_bytes: 163889573
num_examples: 150590
- name: '1861'
num_bytes: 215595661
num_examples: 173990
- name: '1862'
num_bytes: 228323685
num_examples: 171021
- name: '1863'
num_bytes: 197294365
num_examples: 151485
- name: '1864'
num_bytes: 125113713
num_examples: 94415
- name: '1865'
num_bytes: 133515217
num_examples: 99728
- name: '1866'
num_bytes: 180768118
num_examples: 135316
- name: '1867'
num_bytes: 213571876
num_examples: 161180
- name: '1868'
num_bytes: 202156635
num_examples: 140521
- name: '1869'
num_bytes: 236506656
num_examples: 171455
- name: '1870'
num_bytes: 242779857
num_examples: 174061
- name: '1871'
num_bytes: 203189927
num_examples: 151652
- name: '1872'
num_bytes: 242624062
num_examples: 194784
- name: '1873'
num_bytes: 302626176
num_examples: 241902
- name: '1874'
num_bytes: 280814742
num_examples: 213813
- name: '1875'
num_bytes: 319815222
num_examples: 274269
- name: '1876'
num_bytes: 381483980
num_examples: 288199
- name: '1877'
num_bytes: 317703263
num_examples: 254946
- name: '1878'
num_bytes: 381274032
num_examples: 307865
- name: '1879'
num_bytes: 371703798
num_examples: 287784
- name: '1880'
num_bytes: 296465631
num_examples: 272352
- name: '1881'
num_bytes: 294568051
num_examples: 270228
- name: '1882'
num_bytes: 340511400
num_examples: 311920
- name: '1883'
num_bytes: 419078041
num_examples: 387589
- name: '1884'
num_bytes: 329666364
num_examples: 304242
- name: '1885'
num_bytes: 348144660
num_examples: 318732
- name: '1886'
num_bytes: 431746663
num_examples: 423718
- name: '1887'
num_bytes: 493647568
num_examples: 494559
- name: '1888'
num_bytes: 564523528
num_examples: 547165
- name: '1889'
num_bytes: 558168324
num_examples: 536750
- name: '1890'
num_bytes: 566964770
num_examples: 540615
- name: '1891'
num_bytes: 641124243
num_examples: 620461
- name: '1892'
num_bytes: 524812242
num_examples: 527044
- name: '1893'
num_bytes: 645853680
num_examples: 656805
- name: '1894'
num_bytes: 790577208
num_examples: 795408
- name: '1895'
num_bytes: 890097151
num_examples: 897766
- name: '1896'
num_bytes: 1235234882
num_examples: 1175701
- name: '1897'
num_bytes: 1252347746
num_examples: 1275895
- name: '1898'
num_bytes: 1286411001
num_examples: 1323842
- name: '1899'
num_bytes: 1176418162
num_examples: 1218682
- name: '1900'
num_bytes: 1069983237
num_examples: 1118970
- name: '1901'
num_bytes: 1478945214
num_examples: 1468648
- name: '1902'
num_bytes: 1376703767
num_examples: 1417935
- name: '1903'
num_bytes: 1255538379
num_examples: 1319686
- name: '1904'
num_bytes: 1232185827
num_examples: 1340868
- name: '1905'
num_bytes: 1563178627
num_examples: 1635134
- name: '1906'
num_bytes: 1632815247
num_examples: 1683643
- name: '1907'
num_bytes: 1647491794
num_examples: 1714613
- name: '1908'
num_bytes: 1771267430
num_examples: 1842874
- name: '1909'
num_bytes: 1844179657
num_examples: 1926228
- name: '1910'
num_bytes: 1589052587
num_examples: 1684263
- name: '1911'
num_bytes: 1402309564
num_examples: 1510259
- name: '1912'
num_bytes: 1621648367
num_examples: 1774149
- name: '1913'
num_bytes: 1613599136
num_examples: 1822206
- name: '1914'
num_bytes: 1736284455
num_examples: 1931901
- name: '1915'
num_bytes: 1690248452
num_examples: 1878654
- name: '1916'
num_bytes: 1633563499
num_examples: 1838797
- name: '1917'
num_bytes: 1605677226
num_examples: 1810757
- name: '1918'
num_bytes: 1803695589
num_examples: 1920102
- name: '1919'
num_bytes: 1831703767
num_examples: 1981192
- name: '1920'
num_bytes: 1901882705
num_examples: 2041192
- name: '1921'
num_bytes: 2264618667
num_examples: 2334112
- name: '1922'
num_bytes: 2372137567
num_examples: 2405974
- name: '1923'
num_bytes: 812177597
num_examples: 880372
- name: '1924'
num_bytes: 800835690
num_examples: 845520
- name: '1925'
num_bytes: 601426023
num_examples: 662322
- name: '1926'
num_bytes: 565307890
num_examples: 623765
- name: '1927'
num_bytes: 460501197
num_examples: 504835
- name: '1928'
num_bytes: 452526140
num_examples: 487302
- name: '1929'
num_bytes: 366246066
num_examples: 421909
- name: '1930'
num_bytes: 437657836
num_examples: 492695
- name: '1931'
num_bytes: 441972257
num_examples: 493816
- name: '1932'
num_bytes: 640501746
num_examples: 664615
- name: '1933'
num_bytes: 634373318
num_examples: 642380
- name: '1934'
num_bytes: 641841040
num_examples: 654342
- name: '1935'
num_bytes: 612406176
num_examples: 635554
- name: '1936'
num_bytes: 621035178
num_examples: 662015
- name: '1937'
num_bytes: 625107933
num_examples: 676549
- name: '1938'
num_bytes: 616370880
num_examples: 665274
- name: '1939'
num_bytes: 525913265
num_examples: 556283
- name: '1940'
num_bytes: 471830118
num_examples: 496662
- name: '1941'
num_bytes: 599694786
num_examples: 637200
- name: '1942'
num_bytes: 508785410
num_examples: 523923
- name: '1943'
num_bytes: 452079475
num_examples: 467200
- name: '1944'
num_bytes: 442871777
num_examples: 433769
- name: '1945'
num_bytes: 588623743
num_examples: 588477
- name: '1946'
num_bytes: 526027876
num_examples: 470895
- name: '1947'
num_bytes: 461281363
num_examples: 393086
- name: '1948'
num_bytes: 442999943
num_examples: 396660
- name: '1949'
num_bytes: 421752000
num_examples: 419854
- name: '1950'
num_bytes: 403717616
num_examples: 415416
- name: '1951'
num_bytes: 409600217
num_examples: 419622
- name: '1952'
num_bytes: 397051717
num_examples: 396420
- name: '1953'
num_bytes: 366253682
num_examples: 358332
- name: '1954'
num_bytes: 263197428
num_examples: 266338
- name: '1955'
num_bytes: 268993926
num_examples: 273576
- name: '1956'
num_bytes: 85126796
num_examples: 98035
- name: '1957'
num_bytes: 83757036
num_examples: 93543
- name: '1958'
num_bytes: 85807593
num_examples: 98688
- name: '1959'
num_bytes: 112707174
num_examples: 129452
- name: '1960'
num_bytes: 300484826
num_examples: 344550
- name: '1961'
num_bytes: 297225753
num_examples: 339076
- name: '1962'
num_bytes: 231525869
num_examples: 264724
- name: '1963'
num_bytes: 197520960
num_examples: 226859
download_size: 48388744959
dataset_size: 76303058024
---
# Dataset Card for "AmericanStories-parquet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stanfordnlp/sentiment140 | stanfordnlp | 2023-10-20T12:55:00Z | 2,081 | 23 | [
"language:en",
"region:us"
] | [] | 2022-03-02T23:29:22Z | 1 | ---
language:
- en
paperswithcode_id: sentiment140
pretty_name: Sentiment140
dataset_info:
config_name: sentiment140
features:
- name: text
dtype: string
- name: date
dtype: string
- name: user
dtype: string
- name: sentiment
dtype: int32
- name: query
dtype: string
splits:
- name: train
num_bytes: 224542690
num_examples: 1600000
- name: test
num_bytes: 72971
num_examples: 498
download_size: 81363704
dataset_size: 224615661
train-eval-index:
- config: sentiment140
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
sentiment: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "sentiment140"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://help.sentiment140.com/home](http://help.sentiment140.com/home)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 81.36 MB
- **Size of the generated dataset:** 225.82 MB
- **Total amount of disk used:** 307.18 MB
### Dataset Summary
Sentiment140 consists of Twitter messages with emoticons, which are used as noisy labels for
sentiment classification. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### sentiment140
- **Size of downloaded dataset files:** 81.36 MB
- **Size of the generated dataset:** 225.82 MB
- **Total amount of disk used:** 307.18 MB
An example of 'train' looks as follows.
```
{
"date": "23-04-2010",
"query": "NO_QUERY",
"sentiment": 3,
"text": "train message",
"user": "train user"
}
```
### Data Fields
The data fields are the same among all splits.
#### sentiment140
- `text`: a `string` feature.
- `date`: a `string` feature.
- `user`: a `string` feature.
- `sentiment`: a `int32` feature.
- `query`: a `string` feature.
### Data Splits
| name | train |test|
|------------|------:|---:|
|sentiment140|1600000| 498|
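For a quick sanity check, the splits above can be loaded with the `datasets` library; a minimal sketch, assuming the Hub id `stanfordnlp/sentiment140`:
```python
from datasets import load_dataset

# Load the small test split (498 rows); "train" has 1,600,000 rows.
ds = load_dataset("stanfordnlp/sentiment140", split="test")

# Each row carries the five fields described above.
row = ds[0]
print(row["text"], row["sentiment"], row["user"])
```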
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{go2009twitter,
title={Twitter sentiment classification using distant supervision},
author={Go, Alec and Bhayani, Richa and Huang, Lei},
journal={CS224N project report, Stanford},
volume={1},
number={12},
pages={2009},
year={2009}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
clouditera/security-paper-datasets | clouditera | 2023-10-16T10:34:13Z | 766 | 99 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-08-25T02:11:45Z | null | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 1690579945
num_examples: 428155
download_size: 751689097
dataset_size: 1690579945
---
# Dataset Card for "security-paper-datasets"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Open-Orca/SlimOrca | Open-Orca | 2023-10-12T06:43:59Z | 1,359 | 278 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2306.02707",
"arxiv:2301.13688",
"region:us"
] | [
"conversational",
"text-classification",
"token-classification",
"table-question-answering",
"question-answering",
"zero-shot-classification",
"summarization",
"feature-extraction",
"text-generation",
"text2text-generation"
] | 2023-10-06T09:40:55Z | null | ---
language:
- en
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: SlimOrca
size_categories:
- 100K<n<1M
---
# Overview
This is a new curated subset of our OpenOrca data. This release provides an efficient means of reaching performance on par with using larger slices of our data, while only including ~500k GPT-4 completions.
The key change in this dataset is that we've done an additional pass, using GPT-4 to remove answers which appear wrong based on the human annotations from the FLAN dataset.
This reduces the dataset size to only ~500k entries, allowing training to a similar quality level to our previous releases with 2/3 the compute requirement.
# Demo Models
* https://huggingface.co/openaccess-ai-collective/jackalope-7b
* https://huggingface.co/Open-Orca/Mistral-7B-SlimOrca
# Citation
```bibtex
@misc{SlimOrca,
title = {SlimOrca: An Open Dataset of GPT-4 Augmented FLAN Reasoning Traces, with Verification},
author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
    url = {https://huggingface.co/Open-Orca/SlimOrca}
}
```
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
``` |
keivalya/MedQuad-MedicalQnADataset | keivalya | 2023-10-11T10:50:41Z | 2,914 | 102 | [
"task_categories:question-answering",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering",
"text2text-generation"
] | 2023-10-11T10:38:26Z | null | ---
task_categories:
- question-answering
- text2text-generation
pretty_name: MedQuad-KV
---
### Reference:
- "A Question-Entailment Approach to Question Answering". Asma Ben Abacha and Dina Demner-Fushman. BMC Bioinformatics, 2019. |
a686d380/h-corpus-raw | a686d380 | 2023-10-06T08:25:50Z | 297 | 38 | [
"language:zh",
"region:us"
] | [] | 2023-10-06T08:05:34Z | 2 | ---
viewer: false
language:
- zh
---
Uncleaned Chinese H (adult) novels.
| Data | Articles | Uncompressed size | Source | Quality | Notes |
|- | - |- | - | - | - |
|jjsw | 73,432 | 4.0 GB | 禁忌书屋 (Jinji Shuwu) | High | - |
|pixiv-selected | 2,935 | 174.3 MB | pixiv rankings | High | - |
|shubao | 6,776 |1.6 GB | Web | Low | - |
|sis-long | 4,555 | 3.5 GB | sis | Medium | - |
|sis-short | 111,237 | 4.1 GB | sis | Medium | - |
|xbookcn | 39,798 | 1.0 GB | xbookcn | High | - |
|xhs | 38,406 | 8.6 GB | Web | Medium | - |
|zyd2023 | 3,935 | 3.8 GB | Web | Medium | - |
For scientific research use only! |
sahil2801/CodeAlpaca-20k | sahil2801 | 2023-10-03T11:46:04Z | 4,037 | 196 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code"
] | [
"text-generation"
] | 2023-03-26T22:09:47Z | null | ---
license: cc-by-4.0
task_categories:
- text-generation
tags:
- code
pretty_name: CodeAlpaca 20K
size_categories:
- 10K<n<100K
language:
- en
--- |
knkarthick/dialogsum | knkarthick | 2023-10-03T10:56:21Z | 35,796 | 197 | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dialogue-summary",
"one-liner-summary",
"meeting-title",
"email-subject"
] | [
"summarization",
"text2text-generation",
"text-generation"
] | 2022-06-28T10:17:20Z | null | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus
tags:
- dialogue-summary
- one-liner-summary
- meeting-title
- email-subject
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 1,000 additional test dialogues) split into train, validation, and test.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': 'get a check-up'}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
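As a quick usage sketch (assuming the Hub id `knkarthick/dialogsum` and the `datasets` library):
```python
from datasets import load_dataset

# Load the 12,460-dialogue training split.
ds = load_dataset("knkarthick/dialogsum", split="train")

example = ds[0]
# The four fields described above: id, dialogue, summary, topic.
print(example["id"], "|", example["topic"])
print(example["summary"])
```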
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
Under rich real-life scenarios, including more diverse task-oriented scenarios;
Have clear communication patterns and intents, which is valuable to serve as summarization sources;
Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
Convey the most salient information;
Be brief;
Preserve important named entities within the conversation;
Be written from an observer perspective;
Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
CC BY-NC-SA 4.0
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
pages = "5062--5074",
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
alexandrainst/nota | alexandrainst | 2023-10-03T07:51:33Z | 12,908 | 1 | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"language:da",
"license:cc0-1.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"automatic-speech-recognition",
"text-to-speech"
] | 2023-10-02T16:01:30Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 361607522962
num_examples: 98600
download_size: 256213612292
dataset_size: 361607522962
license: cc0-1.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- da
pretty_name: Nota
size_categories:
- 10K<n<100K
---
# Dataset Card for Nota
## Dataset Description
- **Repository:** <https://sprogtek-ressources.digst.govcloud.dk/nota>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 256.21 GB
- **Size of the generated dataset:** 361.62 GB
- **Total amount of disk used:** 617.83 GB
### Dataset Summary
This data was created by the public institution [Nota](https://nota.dk/), which is part of the Danish Ministry of Culture. Nota has a library of audiobooks and audiomagazines for people with reading or sight disabilities, and it also produces a number of audiobooks and audiomagazines itself.
The dataset consists of audio and associated transcriptions from Nota's audiomagazines "Inspiration" and "Radio/TV". All files related to one reading of one edition of the magazine "Inspiration" or "Radio/TV" have been segmented into clips of 2-50 seconds.
The dataset has been published as part of the initiative sprogteknologi.dk, within the [Danish Agency for Digital Government (DIGST)](https://www.digst.dk).
### Supported Tasks and Leaderboards
Automatic speech recognition and speech synthesis are the intended tasks for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 256.21 GB
- **Size of the generated dataset:** 361.62 GB
- **Total amount of disk used:** 617.83 GB
An example from the dataset looks as follows.
```
{
'audio': {'path': 'RMHL20190028_000140.wav',
'array': array([-0.04023849, -0.06235407, -0.04545404, ..., 0.00014322,
0.00017925, 0.00018811]),
'sampling_rate': 16000},
'text': '13:05: 24syv Dokumentar 14:05: Spørge Jørgen Vært: Jørgen Leth'
}
```
### Data Fields
The data fields are the same among all splits.
- `audio`: an `Audio` feature.
- `text`: a `string` feature.
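Given the 256 GB download, streaming is likely the most practical way to sample the corpus; a minimal sketch, assuming the Hub id `alexandrainst/nota`:
```python
from datasets import load_dataset

# Stream instead of downloading all 256 GB up front.
ds = load_dataset("alexandrainst/nota", split="train", streaming=True)

sample = next(iter(ds))
audio = sample["audio"]
# 16 kHz waveform as a NumPy array, plus its transcription.
print(audio["sampling_rate"], len(audio["array"]))
print(sample["text"])
```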
### Dataset Statistics
There are 98,600 samples in the dataset.
#### Transcription Length Distribution

## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) reorganised the dataset and uploaded it to the Hugging Face Hub.
### Licensing Information
The dataset is licensed under the [CC0
license](https://creativecommons.org/share-your-work/public-domain/cc0/). |
mosaicml/dolly_hhrlhf | mosaicml | 2023-10-02T15:48:48Z | 1,355 | 112 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2023-05-02T22:27:06Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 43781455.002688624
num_examples: 59310
- name: test
num_bytes: 4479286.805304853
num_examples: 5129
download_size: 24882010
dataset_size: 48260741.80799348
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Dolly HH-RLHF
---
# Dataset Card for "dolly_hhrlhf"
This dataset is a combination of [Databrick's dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and a filtered subset of [Anthropic's HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf). It also includes a test split, which was missing in the original `dolly` set. That test set is composed of 200 randomly selected samples from `dolly` + 4,929 of the test set samples from HH-RLHF which made it through the filtering process. The train set contains 59,310 samples; `15,014 - 200 = 14,814` from Dolly, and the remaining 44,496 from HH-RLHF.
It is slightly larger than Alpaca and, in our experience, of slightly higher quality, but it is usable for commercial purposes so long as you follow the terms of the license.
## Filtering process
As mentioned, the HH-RLHF data in this dataset is filtered. Specifically, we take the first turn of the conversation, then remove any samples where the assistant:
- uses the word "human", "thank", or "sorry"
- asks a question
- uses a first person pronoun
This leaves samples which look like instruction-following, as opposed to conversation.
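A minimal sketch of a comparable first-turn filter is below. The exact tokenization and pronoun list used for this dataset are not spelled out above, so the word lists here are illustrative assumptions:
```python
import re

BANNED_STEMS = {"human", "thank", "sorry"}  # per the word list above
# Assumed first-person pronoun set; the exact list is not given above.
FIRST_PERSON = {"i", "me", "my", "mine", "myself",
                "we", "us", "our", "ours", "ourselves"}

def keep_response(response: str) -> bool:
    """Return True if an assistant first turn looks instruction-following."""
    if "?" in response:  # the assistant asks a question
        return False
    tokens = re.findall(r"[a-z']+", response.lower())
    if any(tok.startswith(stem) for tok in tokens for stem in BANNED_STEMS):
        return False
    if any(tok in FIRST_PERSON for tok in tokens):  # first-person pronoun
        return False
    return True

print(keep_response("Paris is the capital of France."))  # True
print(keep_response("I think you should see a doctor."))  # False
```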
## License/Attribution
<!--
**Copyright (2023) MosaicML, Inc.**
-->
This dataset was developed at MosaicML (https://www.mosaicml.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors.
Databricks (https://www.databricks.com)
Copyright © Databricks
When citing this dataset, please use the following:
```
@misc{mosaicml2023dolly_hhrlhf,
author = {MosaicML},
title = {Dolly-HHRLHF Dataset},
year = {2023},
publisher = {HuggingFace Datasets},
howpublished = {https://huggingface.co/datasets/mosaicml/dolly_hhrlhf},
}
``` |
McGill-NLP/TopiOCQA | McGill-NLP | 2023-09-29T19:37:48Z | 173 | 7 | [
"task_categories:text-retrieval",
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2110.00768",
"region:us",
"conversational-question-answering"
] | [
"text-retrieval",
"text-generation"
] | 2022-04-08T18:29:53Z | 1 | ---
annotations_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-retrieval
- text-generation
task_ids:
- language-modeling
- open-domain-qa
pretty_name: Open-domain Conversational Question Answering with Topic Switching
tags:
- conversational-question-answering
---
# Dataset Card for TopiOCQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [TopiOCQA homepage](https://mcgill-nlp.github.io/topiocqa/)
- **Repository:** [TopiOCQA Github](https://github.com/McGill-NLP/topiocqa)
- **Paper:** [Open-domain Conversational Question Answering with Topic Switching](https://arxiv.org/abs/2110.00768)
- **Point of Contact:** [Vaibhav Adlakha](mailto:[email protected])
### Dataset Summary
TopiOCQA is an information-seeking conversational dataset with challenging topic switching phenomena.
### Languages
The language in the dataset is English as spoken by the crowdworkers. The BCP-47 code for English is en.
## Additional Information
### Licensing Information
TopiOCQA is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@article{adlakha2022topiocqa,
    title={Topi{OCQA}: Open-domain Conversational Question Answering with Topic Switching},
    author={Adlakha, Vaibhav and Dhuliawala, Shehzaad and Suleman, Kaheer and de Vries, Harm and Reddy, Siva},
    journal={Transactions of the Association for Computational Linguistics},
    volume = {10},
    pages = {468-483},
    month = {04},
    year = {2022},
    issn = {2307-387X},
    doi = {10.1162/tacl_a_00471},
    url = {https://doi.org/10.1162/tacl_a_00471},
    eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00471/2008126/tacl_a_00471.pdf},
}
``` |
glaiveai/glaive-function-calling | glaiveai | 2023-09-27T18:04:36Z | 212 | 93 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2023-08-07T17:51:48Z | null | ---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
This dataset consists of 52k samples generated through [Glaive](https://glaive.ai) for the task of function calling, in the following format-
```
SYSTEM: You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed-
{
JSON function definition
}
USER: user message
ASSISTANT: assistant message
Function call invocations are formatted as-
ASSISTANT: <functioncall> {json function call}
Response to the function call is formatted as-
FUNCTION RESPONSE: {json function response}
```
The dataset also includes samples with no function invocations, samples with multiple invocations, and samples where no functions are presented or invoked, to keep the data balanced.
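A minimal sketch for pulling a function call out of an assistant turn in this layout; the regular expression is an assumption based on the markers shown above, not an official parser:
```python
import json
import re

turn = 'ASSISTANT: <functioncall> {"name": "get_weather", "arguments": {"city": "Paris"}}'

# Extract the JSON payload that follows the <functioncall> marker.
match = re.search(r"<functioncall>\s*(\{.*\})", turn, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    print(call["name"], call["arguments"])
```
In the released data the `arguments` value may itself be a JSON-encoded string rather than a nested object, so a robust parser should handle both cases. |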
knowrohit07/know_sql | knowrohit07 | 2023-09-20T20:13:06Z | 426 | 112 | [
"license:openrail",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2023-09-16T12:18:52Z | null | ---
license: openrail
---
Please use the val ign file for training; it's much cleaner. Thanks :) |
pkufool/libriheavy | pkufool | 2023-09-19T11:35:45Z | 129 | 15 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2309.08105",
"region:us"
] | [] | 2023-08-21T11:20:42Z | 2 | ---
license: apache-2.0
---
# Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context
Libriheavy is a labeled version of [Librilight](https://github.com/facebookresearch/libri-light); read our [paper](https://arxiv.org/abs/2309.08105) for more details.
See https://github.com/k2-fsa/libriheavy for more details.
## Citation
```
@misc{kang2023libriheavy,
title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
year={2023},
eprint={2309.08105},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
dominguesm/CC-MAIN-2023-23 | dominguesm | 2023-09-17T00:02:06Z | 13,871 | 3 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"language:pt",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2023-09-16T20:32:49Z | null | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: url
dtype: string
- name: crawl_timestamp
dtype: timestamp[ns, tz=UTC]
splits:
- name: train
num_bytes: 97584560119
num_examples: 16899389
download_size: 18490153155
dataset_size: 97584560119
license: cc-by-4.0
task_categories:
- text-generation
- fill-mask
language:
- pt
pretty_name: CC-MAIN-2023-23-PT
size_categories:
- 10B<n<100B
---
# Dataset Card for "CC-MAIN-2023-23"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
QingyiSi/Alpaca-CoT | QingyiSi | 2023-09-14T08:52:10Z | 4,630 | 729 | [
"language:en",
"language:zh",
"language:ml",
"license:apache-2.0",
"region:us",
"Instruction",
"Cot"
] | [] | 2023-03-25T14:58:30Z | null | ---
language:
- en
- zh
- ml
tags:
- Instruction
- Cot
license: apache-2.0
datasets:
- dataset1
- dataset2
---
# Instruction-Finetuning Dataset Collection (Alpaca-CoT)
This repository will continuously collect various instruction-tuning datasets. We standardize the different datasets into the same format, which can be directly loaded by the [code](https://github.com/PhoebusSi/alpaca-CoT) of the Alpaca model.
We have also conducted an empirical study on various instruction-tuning datasets based on the Alpaca model, as shown in [https://github.com/PhoebusSi/alpaca-CoT](https://github.com/PhoebusSi/alpaca-CoT).
If you think this dataset collection is helpful to you, please `like` this dataset and `star` our [github project](https://github.com/PhoebusSi/alpaca-CoT)!
You are warmly welcome to provide us with any non-collected instruction-tuning datasets (or their sources). We will uniformly format them, train the Alpaca model with these datasets, and open-source the model checkpoints.
# Contribute
Welcome to join us and become a contributor to this project!
If you want to share some datasets, adjust the data to the following format:
```
example.json
[
{"instruction": instruction string,
"input": input string, # (may be empty)
"output": output string}
]
```
Folder should be like this:
```
Alpaca-CoT
|
|----example
| |
| |----example.json
| |
| ----example_context.json
...
```
Create a new pull request in [Community](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/discussions) and publish your branch when you are ready. We will merge it as soon as we can.
# Data Usage and Resources
## Data Format
All data in this folder is formatted into the same templates, where each sample is as follows:
```
[
{"instruction": instruction string,
"input": input string, # (may be empty)
"output": output string}
]
```
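A minimal sketch for flattening one such record into a single training prompt; the template below is an illustrative assumption, not the exact one used by the Alpaca-CoT code:
```python
import json

def build_prompt(sample: dict) -> str:
    """Flatten one {"instruction", "input", "output"} record into text."""
    if sample.get("input"):
        return (f"Instruction: {sample['instruction']}\n"
                f"Input: {sample['input']}\n"
                f"Response: {sample['output']}")
    return (f"Instruction: {sample['instruction']}\n"
            f"Response: {sample['output']}")

with open("example.json") as f:  # a file in the format shown above
    data = json.load(f)

print(build_prompt(data[0]))
```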
## alpaca
#### alpaca_data.json
> This dataset is published by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca). It contains 52K English instruction-following samples obtained by [Self-Instruction](https://github.com/yizhongw/self-instruct) techniques.
#### alpaca_data_cleaned.json
> This dataset is obtained [here](https://github.com/tloen/alpaca-lora). It is a revised version of `alpaca_data.json` that strips out various tokenization artifacts.
## alpacaGPT4
#### alpaca_gpt4_data.json
> This dataset is published by [Instruction-Tuning-with-GPT-4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM).
It contains 52K English instruction-following samples generated by GPT-4 using Alpaca prompts for fine-tuning LLMs.
#### alpaca_gpt4_data_zh.json
> This dataset is generated by GPT-4 using Chinese prompts translated from Alpaca by ChatGPT.
<!-- ## belle_cn
#### belle_data_cn.json
This dataset is published by [BELLE](https://github.com/LianjiaTech/BELLE). It contains 0.5M Chinese instruction-following samples, which is also generated by [Self-Instruction](https://github.com/yizhongw/self-instruct) techniques.
#### belle_data1M_cn.json
This dataset is published by [BELLE](https://github.com/LianjiaTech/BELLE). It contains 1M Chinese instruction-following samples. The data of `belle_data_cn.json` and `belle_data1M_cn.json` are not duplicated. -->
## Chain-of-Thought
#### CoT_data.json
> This dataset is obtained by formatting the combination of 9 CoT datasets published by [FLAN](https://github.com/google-research/FLAN). It contains 9 CoT tasks involving 74771 samples.
#### CoT_CN_data.json
> This dataset is obtained by translating `CoT_data.json` into Chinese, using Google Translate (en2cn).
#### formatted_cot_data folder
> This folder contains the formatted English data for each CoT dataset.
#### formatted_cot_data folder
> This folder contains the formatted Chinese data for each CoT dataset.
## CodeAlpaca
#### code_alpaca.json
> This dataset is published by [codealpaca](https://github.com/sahil280114/codealpaca). It contains code generation task involving 20022 samples.
## finance
#### finance_en.json
> This dataset is collected from [here](https://huggingface.co/datasets/gbharti/finance-alpaca). It contains 68912 finance-related instructions in English.
## firefly
#### firefly.json
> This dataset is collected from [here](https://github.com/yangjianxin1/Firefly). It contains 1649398 Chinese instructions across 23 NLP tasks.
## GPT4all
#### gpt4all.json
> This dataset is collected from [here](https://github.com/nomic-ai/gpt4all). It contains 806199 English instructions covering code, story, and dialogue tasks.
#### gpt4all_without_p3.json
> gpt4all without Bigscience/P3; it contains 437605 samples.
## GPTeacher
#### GPTeacher.json
> This dataset is collected from [here](https://github.com/teknium1/GPTeacher). It contains 29013 English instructions generated by GPT-4 across the General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer subsets.
## Guanaco
#### GuanacoDataset.json
> This dataset is collected from [here](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset). It contains 534610 English instructions generated by text-davinci-003 upon 175 tasks from the Alpaca model, by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition.
#### Guanaco_additional_Dataset.json
> A new additional larger dataset for different languages.
## HC3
#### HC3_ChatGPT.json/HC3_Human.json
> This dataset is collected from [here](https://huggingface.co/datasets/Hello-SimpleAI/HC3). It contains 37175 en/zh instructions generated by ChatGPT and humans.
#### HC3_ChatGPT_deduplication.json/HC3_Human_deduplication.json
> The HC3 dataset with duplicate instructions removed.
## instinwild
#### instinwild_en.json & instinwild_cn.json
> The two datasets are obtained [here](https://github.com/XueFuzhao/InstructionWild). They contain 52191 English and 51504 Chinese instructions, which are collected from Twitter, where users tend to share their interesting prompts, mostly of the generation, open QA, and mind-storm types. (Colossal AI used these datasets to train the ColossalChat model.)
## instruct
#### instruct.json
> This dataset is obtained [here](https://huggingface.co/datasets/swype/instruct). It contains 888969 English instructions, which are created by augmentation performed using the advanced NLP tools provided by AllenAI.
## Natural Instructions
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://github.com/allenai/natural-instructions). It contains 5040134 instructions, which are collected from diverse NLP tasks.
## prosocial dialog
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://huggingface.co/datasets/allenai/prosocial-dialog). It contains 165681 English instructions, which are produced by GPT-3 rewriting questions together with human feedback.
## xP3
#### natural-instructions-1700tasks.zip
> This dataset is obtained [here](https://huggingface.co/datasets/bigscience/xP3). It contains 78883588 instructions, which are collected via prompts & datasets across 46 languages & 16 NLP tasks.
## Chinese-instruction-collection
> All datasets of the Chinese instruction collection.
## combination
#### alcapa_plus_belle_data.json
> This dataset is the combination of English `alpaca_data.json` and Chinese `belle_data_cn.json`.
#### alcapa_plus_cot_data.json
> This dataset is the combination of English `alpaca_data.json` and CoT `CoT_data.json`.
#### alcapa_plus_belle_cot_data.json
> This dataset is the combination of English `alpaca_data.json`, Chinese `belle_data_cn.json` and CoT `CoT_data.json`.
## Citation
Please cite the repo if you use the data collection, code, and experimental findings in this repo.
```
@misc{alpaca-cot,
author = {Qingyi Si, Zheng Lin },
school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
title = {Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
}
```
Please also cite the original Stanford Alpaca, BELLE, and FLAN papers.
|
graelo/wikipedia | graelo | 2023-09-10T06:10:08Z | 14,823 | 66 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:original",
"language:ab",
"language:ace",
"language:ady",
"language:af",
"language:ak",
"language:als",
"language:alt",
"language:am",
"language:ami",
"language:an",
"language:ang",
"language:anp",
"language:ar",
"language:arc",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:avk",
"language:awa",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:ban",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:blk",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:cho",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:dag",
"language:de",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:fat",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gcr",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gu",
"language:guc",
"language:gur",
"language:guw",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:he",
"language:hi",
"language:hif",
"language:ho",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:hyw",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ii",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kcg",
"language:kg",
"language:ki",
"language:kj",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lld",
"language:lmo",
"language:ln",
"language:lo",
"language:lrc",
"language:lt",
"language:ltg",
"language:lv",
"language:mad",
"language:mai",
"language:mdf",
"language:mg",
"language:mh",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mni",
"language:mnw",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mus",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:nah",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:ng",
"language:nia",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nqo",
"language:nrm",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pcm",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:pwn",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sh",
"language:shi",
"language:shn",
"language:si",
"language:sk",
"language:skr",
"language:sl",
"language:sm",
"language:smn",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:szy",
"language:ta",
"language:tay",
"language:tcy",
"language:te",
"language:tet",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:trv",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:za",
"language:zea",
"language:zh",
"language:zu",
"license:cc-by-sa-3.0",
"license:gfdl",
"size_categories:100M<n<1B",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation",
"fill-mask"
] | 2023-06-10T22:40:06Z | null | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
# - aa - closed and no dump
- ab
- ace
- ady
- af
- ak
- als
- alt
- am
- ami
- an
- ang
- anp
- ar
- arc
- ary
- arz
- as
- ast
- atj
- av
- avk
- awa
- ay
- az
- azb
- ba
- ban
- bar
# - bat-smg - see bcp47 below
- bcl
# - be-x-old - see bcp47 below
- be
- bg
- bh
- bi
- bjn
- blk
- bm
- bn
- bo
- bpy
- br
- bs
- bug
- bxr
- ca
# - cbk-zam - see bcp47 below
- cdo
- ce
- ceb
- ch
- cho # closed
- chr
- chy
- ckb
- co
- cr
- crh
- cs
- csb
- cu
- cv
- cy
- da
- dag
- de
- din
- diq
- dsb
- dty
- dv
- dz
- ee
- el
- eml
- eo
- es
- et
- eu
- ext
- fa
- fat
- ff
- fi
# - fiu-vro - see bcp47 below
- fj
- fo
- fr
- frp
- frr
- fur
- fy
- ga
- gag
- gan
- gcr
- gd
- gl
- glk
- gn
- gom
- gor
- got
- gu
- guc
- gur
- guw
- gv
- ha
- hak
- haw
- he
- hi
- hif
- ho # closed
- hr
- hsb
- ht
- hu
- hy
- hyw
# - hz - closed and no dump
- ia
- id
- ie
- ig
- ii # closed
- ik
- ilo
- inh
- io
- is
- it
- iu
- ja
- jam
- jbo
- jv
- ka
- kaa
- kab
- kbd
- kbp
- kcg
- kg
- ki
- kj # closed
- kk
- kl
- km
- kn
- ko
- koi
# - kr - closed and no dump
- krc
- ks
- ksh
- ku
- kv
- kw
- ky
- la
- lad
- lb
- lbe
- lez
- lfn
- lg
- li
- lij
- lld
- lmo
- ln
- lo
- lrc # closed
- lt
- ltg
- lv
- mad
- mai
# - map-bms - see bcp47 below
- mdf
- mg
- mh
- mhr
- mi
- min
- mk
- ml
- mn
- mni
- mnw
- mr
- mrj
- ms
- mt
- mus # closed
- mwl
- my
- myv
- mzn
# - na - closed and no dump
- nah
- nap
# - nds-nl - see bcp47 below
- nds
- ne
- new
- ng # closed
- nia
- nl
- nn
- no
- nov
- nqo
- nrm
- nso
- nv
- ny
- oc
- olo
- om
- or
- os
- pa
- pag
- pam
- pap
- pcd
- pcm
- pdc
- pfl
- pi
- pih
- pl
- pms
- pnb
- pnt
- ps
- pt
- pwn
- qu
- rm
- rmy
- rn
- ro
# - roa-rup - see bcp47 below
# - roa-tara - see bcp47 below
- ru
- rue
- rw
- sa
- sah
- sat
- sc
- scn
- sco
- sd
- se
- sg
- sh
- shi
- shn
- si
# - simple - see bcp47 below
- sk
- skr
- sl
- sm
- smn
- sn
- so
- sq
- sr
- srn
- ss
- st
- stq
- su
- sv
- sw
- szl
- szy
- ta
- tay
- tcy
- te
- tet
- tg
- th
- ti
- tk
- tl
- tn
- to
- tpi
- tr
- trv
- ts
- tt
- tum
- tw
- ty
- tyv
- udm
- ug
- uk
- ur
- uz
- ve
- vec
- vep
- vi
- vls
- vo
- wa
- war
- wo
- wuu
- xal
- xh
- xmf
- yi
- yo
- za
- zea
- zh
# - zh-classical - see bcp47 below
# - zh-min-nan - see bcp47 below
# - zh-yue - see bcp47 below
- zu
language_bcp47:
- bat-smg
- be-x-old
- cbk-zam
- fiu-vro
- map-bms
- nds-nl
- roa-rup
- roa-tara
- simple
- zh-classical
- zh-min-nan
- zh-yue
dataset_info:
- config_name: 20230601.ab
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4183525
num_examples: 6114
download_size: 1172328
dataset_size: 4183525
- config_name: 20230601.ace
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4887561
num_examples: 12839
download_size: 1473823
dataset_size: 4887561
- config_name: 20230601.ady
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 613082
num_examples: 609
download_size: 280249
dataset_size: 613082
- config_name: 20230601.af
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 220678901
num_examples: 108170
download_size: 121238071
dataset_size: 220678901
- config_name: 20230601.ak
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 189
num_examples: 1
download_size: 3045
dataset_size: 189
- config_name: 20230601.als
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 80615079
num_examples: 29804
download_size: 48883379
dataset_size: 80615079
- config_name: 20230601.alt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5786027
num_examples: 1082
download_size: 2401701
dataset_size: 5786027
- config_name: 20230601.am
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24009050
num_examples: 13839
download_size: 10615909
dataset_size: 24009050
- config_name: 20230601.ami
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3865236
num_examples: 1570
download_size: 2006639
dataset_size: 3865236
- config_name: 20230601.an
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 56295233
num_examples: 43744
download_size: 29055888
dataset_size: 56295233
- config_name: 20230601.ang
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2854073
num_examples: 4019
download_size: 1756372
dataset_size: 2854073
- config_name: 20230601.anp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9055032
num_examples: 2736
download_size: 3270423
dataset_size: 9055032
- config_name: 20230601.ar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3052201469
num_examples: 1205403
download_size: 1319905253
dataset_size: 3052201469
- config_name: 20230601.arc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 830073
num_examples: 1925
download_size: 360590
dataset_size: 830073
- config_name: 20230601.ary
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10007364
num_examples: 6703
download_size: 4094420
dataset_size: 10007364
- config_name: 20230601.arz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1364641408
num_examples: 1617770
download_size: 306336320
dataset_size: 1364641408
- config_name: 20230601.as
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 86645223
num_examples: 11988
download_size: 33149841
dataset_size: 86645223
- config_name: 20230601.ast
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 470349731
num_examples: 132550
download_size: 271011784
dataset_size: 470349731
- config_name: 20230601.atj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 993287
num_examples: 1965
download_size: 502890
dataset_size: 993287
- config_name: 20230601.av
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5996158
num_examples: 3392
download_size: 2514243
dataset_size: 5996158
- config_name: 20230601.avk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 31189461
num_examples: 27493
download_size: 7729144
dataset_size: 31189461
- config_name: 20230601.awa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3588050
num_examples: 3701
download_size: 1230725
dataset_size: 3588050
- config_name: 20230601.ay
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4357283
num_examples: 5287
download_size: 1736571
dataset_size: 4357283
- config_name: 20230601.az
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 425710145
num_examples: 194486
download_size: 225589717
dataset_size: 425710145
- config_name: 20230601.azb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 186034971
num_examples: 243041
download_size: 46251265
dataset_size: 186034971
- config_name: 20230601.ba
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 293142247
num_examples: 62907
download_size: 120320323
dataset_size: 293142247
- config_name: 20230601.ban
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16509353
num_examples: 19293
download_size: 6302437
dataset_size: 16509353
- config_name: 20230601.bar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36001708
num_examples: 26978
download_size: 21611902
dataset_size: 36001708
- config_name: 20230601.bat-smg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7536614
num_examples: 17181
download_size: 3411835
dataset_size: 7536614
- config_name: 20230601.be-x-old
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 244894736
num_examples: 82917
download_size: 110733701
dataset_size: 244894736
- config_name: 20230601.bcl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18259970
num_examples: 13934
download_size: 10086356
dataset_size: 18259970
- config_name: 20230601.be
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 606416485
num_examples: 231617
download_size: 280474552
dataset_size: 606416485
- config_name: 20230601.bg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1080390968
num_examples: 291361
download_size: 506945262
dataset_size: 1080390968
- config_name: 20230601.bh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16078510
num_examples: 8446
download_size: 5648960
dataset_size: 16078510
- config_name: 20230601.bi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 398357
num_examples: 1539
download_size: 200277
dataset_size: 398357
- config_name: 20230601.bjn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6755874
num_examples: 10379
download_size: 3265979
dataset_size: 6755874
- config_name: 20230601.blk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24413622
num_examples: 2725
download_size: 7356285
dataset_size: 24413622
- config_name: 20230601.bm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 473185
num_examples: 1221
download_size: 261438
dataset_size: 473185
- config_name: 20230601.bn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 913676298
num_examples: 138515
download_size: 330147337
dataset_size: 913676298
- config_name: 20230601.bo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 132034426
num_examples: 12434
download_size: 38687191
dataset_size: 132034426
- config_name: 20230601.bpy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42862119
num_examples: 25167
download_size: 6532133
dataset_size: 42862119
- config_name: 20230601.br
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 84044684
num_examples: 79959
download_size: 48952223
dataset_size: 84044684
- config_name: 20230601.bs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 190816695
num_examples: 92065
download_size: 106053913
dataset_size: 190816695
- config_name: 20230601.bug
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3433134
num_examples: 15873
download_size: 815878
dataset_size: 3433134
- config_name: 20230601.bxr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6695205
num_examples: 2791
download_size: 3078381
dataset_size: 6695205
- config_name: 20230601.ca
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1918941844
num_examples: 728483
download_size: 1113762234
dataset_size: 1918941844
- config_name: 20230601.cbk-zam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2808337
num_examples: 3307
download_size: 1261855
dataset_size: 2808337
- config_name: 20230601.cdo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5010639
num_examples: 16234
download_size: 1949302
dataset_size: 5010639
- config_name: 20230601.ce
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 726468413
num_examples: 599863
download_size: 86627608
dataset_size: 726468413
- config_name: 20230601.ceb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4569352784
num_examples: 6124009
download_size: 926156250
dataset_size: 4569352784
- config_name: 20230601.ch
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 187255
num_examples: 573
download_size: 96403
dataset_size: 187255
- config_name: 20230601.cho
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7974
num_examples: 14
download_size: 9782
dataset_size: 7974
- config_name: 20230601.chr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 764388
num_examples: 1113
download_size: 341232
dataset_size: 764388
- config_name: 20230601.chy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 149009
num_examples: 801
download_size: 76580
dataset_size: 149009
- config_name: 20230601.ckb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 101248717
num_examples: 49928
download_size: 40379289
dataset_size: 101248717
- config_name: 20230601.co
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8069524
num_examples: 6565
download_size: 4650142
dataset_size: 8069524
- config_name: 20230601.cr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 50625
num_examples: 182
download_size: 26509
dataset_size: 50625
- config_name: 20230601.crh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9056373
num_examples: 25642
download_size: 3453399
dataset_size: 9056373
- config_name: 20230601.cs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1529727976
num_examples: 525205
download_size: 966856046
dataset_size: 1529727976
- config_name: 20230601.csb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3739371
num_examples: 5478
download_size: 2049003
dataset_size: 3739371
- config_name: 20230601.cu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 975765
num_examples: 1221
download_size: 395563
dataset_size: 975765
- config_name: 20230601.cv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 81019358
num_examples: 51407
download_size: 29189010
dataset_size: 81019358
- config_name: 20230601.cy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 304314230
num_examples: 278927
download_size: 111093453
dataset_size: 304314230
- config_name: 20230601.da
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 540186121
num_examples: 291721
download_size: 326825586
dataset_size: 540186121
- config_name: 20230601.dag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8116697
num_examples: 8850
download_size: 3469680
dataset_size: 8116697
- config_name: 20230601.de
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9446726072
num_examples: 2801769
download_size: 5752429951
dataset_size: 9446726072
- config_name: 20230601.din
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 554422
num_examples: 506
download_size: 334229
dataset_size: 554422
- config_name: 20230601.diq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19300910
num_examples: 40589
download_size: 7469118
dataset_size: 19300910
- config_name: 20230601.dsb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3303132
num_examples: 3357
download_size: 1923763
dataset_size: 3303132
- config_name: 20230601.dty
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6972841
num_examples: 3625
download_size: 2497168
dataset_size: 6972841
- config_name: 20230601.dv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13916007
num_examples: 4344
download_size: 5255070
dataset_size: 13916007
- config_name: 20230601.dz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8517069
num_examples: 777
download_size: 2474869
dataset_size: 8517069
- config_name: 20230601.ee
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 844062
num_examples: 1164
download_size: 464418
dataset_size: 844062
- config_name: 20230601.el
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1314451459
num_examples: 222598
download_size: 627997252
dataset_size: 1314451459
- config_name: 20230601.eml
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3605037
num_examples: 12945
download_size: 1681847
dataset_size: 3605037
- config_name: 20230601.en
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21325670826
num_examples: 6660918
download_size: 12512970849
dataset_size: 21325670826
- config_name: 20230601.eo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 508055613
num_examples: 337291
download_size: 294377264
dataset_size: 508055613
- config_name: 20230601.es
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5889963046
num_examples: 1805012
download_size: 3477902737
dataset_size: 5889963046
- config_name: 20230601.eu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 547125100
num_examples: 405840
download_size: 264099434
dataset_size: 547125100
- config_name: 20230601.ext
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4182030
num_examples: 3636
download_size: 2631658
dataset_size: 4182030
- config_name: 20230601.fa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1851617207
num_examples: 964236
download_size: 759372155
dataset_size: 1851617207
- config_name: 20230601.fat
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1933259
num_examples: 1046
download_size: 1067434
dataset_size: 1933259
- config_name: 20230601.ff
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1401981
num_examples: 1484
download_size: 824781
dataset_size: 1401981
- config_name: 20230601.fi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1125659121
num_examples: 553519
download_size: 678674705
dataset_size: 1125659121
- config_name: 20230601.fiu-vro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4773469
num_examples: 6559
download_size: 2464729
dataset_size: 4773469
- config_name: 20230601.fj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 593373
num_examples: 1283
download_size: 323108
dataset_size: 593373
- config_name: 20230601.fo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15058635
num_examples: 13954
download_size: 8633381
dataset_size: 15058635
- config_name: 20230601.fr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7910192478
num_examples: 2525926
download_size: 4618774275
dataset_size: 7910192478
- config_name: 20230601.frp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3517265
num_examples: 5689
download_size: 1847765
dataset_size: 3517265
- config_name: 20230601.frr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10292357
num_examples: 17260
download_size: 5084999
dataset_size: 10292357
- config_name: 20230601.fur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4062291
num_examples: 3967
download_size: 2401534
dataset_size: 4062291
- config_name: 20230601.fy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 130189677
num_examples: 51506
download_size: 73624821
dataset_size: 130189677
- config_name: 20230601.ga
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 59266973
num_examples: 58579
download_size: 33377343
dataset_size: 59266973
- config_name: 20230601.gag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2405210
num_examples: 2966
download_size: 1319553
dataset_size: 2405210
- config_name: 20230601.gan
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2878337
num_examples: 6691
download_size: 1485195
dataset_size: 2878337
- config_name: 20230601.gcr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2335924
num_examples: 2397
download_size: 1344338
dataset_size: 2335924
- config_name: 20230601.gd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14026914
num_examples: 16018
download_size: 7175920
dataset_size: 14026914
- config_name: 20230601.gl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 483432936
num_examples: 196473
download_size: 287329100
dataset_size: 483432936
- config_name: 20230601.glk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6067898
num_examples: 7035
download_size: 2372761
dataset_size: 6067898
- config_name: 20230601.gn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6754303
num_examples: 5298
download_size: 3702975
dataset_size: 6754303
- config_name: 20230601.gom
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30830020
num_examples: 4250
download_size: 11258918
dataset_size: 30830020
- config_name: 20230601.gor
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6111487
num_examples: 14556
download_size: 2036928
dataset_size: 6111487
- config_name: 20230601.got
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1518930
num_examples: 1005
download_size: 626840
dataset_size: 1518930
- config_name: 20230601.gu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 120869564
num_examples: 30357
download_size: 39339802
dataset_size: 120869564
- config_name: 20230601.guc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 916033
num_examples: 578
download_size: 547551
dataset_size: 916033
- config_name: 20230601.gur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1414225
num_examples: 954
download_size: 753483
dataset_size: 1414225
- config_name: 20230601.guw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1894278
num_examples: 1301
download_size: 1027313
dataset_size: 1894278
- config_name: 20230601.gv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5969707
num_examples: 5954
download_size: 3155779
dataset_size: 5969707
- config_name: 20230601.ha
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 62945985
num_examples: 27905
download_size: 35159511
dataset_size: 62945985
- config_name: 20230601.hak
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4493017
num_examples: 10183
download_size: 1875697
dataset_size: 4493017
- config_name: 20230601.haw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1648045
num_examples: 2580
download_size: 681202
dataset_size: 1648045
- config_name: 20230601.he
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1890961532
num_examples: 325534
download_size: 955373507
dataset_size: 1890961532
- config_name: 20230601.hi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 652930384
num_examples: 160068
download_size: 230339569
dataset_size: 652930384
- config_name: 20230601.hif
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5670768
num_examples: 10975
download_size: 2708959
dataset_size: 5670768
- config_name: 20230601.ho
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3450
num_examples: 3
download_size: 7714
dataset_size: 3450
- config_name: 20230601.hsb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15650862
num_examples: 13929
download_size: 7422054
dataset_size: 15650862
- config_name: 20230601.ht
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 54468681
num_examples: 69778
download_size: 21591458
dataset_size: 54468681
- config_name: 20230601.hu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1490296647
num_examples: 526030
download_size: 904279478
dataset_size: 1490296647
- config_name: 20230601.hy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1142467643
num_examples: 297933
download_size: 477398053
dataset_size: 1142467643
- config_name: 20230601.hyw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 57478946
num_examples: 10933
download_size: 26499417
dataset_size: 57478946
- config_name: 20230601.ia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16183963
num_examples: 27939
download_size: 8108662
dataset_size: 16183963
- config_name: 20230601.id
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1086885042
num_examples: 648383
download_size: 575124507
dataset_size: 1086885042
- config_name: 20230601.ie
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6482834
num_examples: 11705
download_size: 2881031
dataset_size: 6482834
- config_name: 20230601.ig
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 45043729
num_examples: 16970
download_size: 23565907
dataset_size: 45043729
- config_name: 20230601.ii
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8921
num_examples: 14
download_size: 14936
dataset_size: 8921
- config_name: 20230601.ik
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 190236
num_examples: 823
download_size: 109460
dataset_size: 190236
- config_name: 20230601.ilo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16860855
num_examples: 15379
download_size: 7350161
dataset_size: 16860855
- config_name: 20230601.inh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2697943
num_examples: 2108
download_size: 1257824
dataset_size: 2697943
- config_name: 20230601.io
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37291268
num_examples: 38155
download_size: 16629067
dataset_size: 37291268
- config_name: 20230601.is
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 86487184
num_examples: 56795
download_size: 51372350
dataset_size: 86487184
- config_name: 20230601.it
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4826403309
num_examples: 1812514
download_size: 2926177870
dataset_size: 4826403309
- config_name: 20230601.iu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 284349
num_examples: 564
download_size: 132368
dataset_size: 284349
- config_name: 20230601.ja
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6913216645
num_examples: 1373311
download_size: 3923535785
dataset_size: 6913216645
- config_name: 20230601.jam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1140551
num_examples: 1771
download_size: 700995
dataset_size: 1140551
- config_name: 20230601.jbo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2521508
num_examples: 1390
download_size: 888087
dataset_size: 2521508
- config_name: 20230601.jv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 70703094
num_examples: 73024
download_size: 36199167
dataset_size: 70703094
- config_name: 20230601.ka
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 693108151
num_examples: 168185
download_size: 237719175
dataset_size: 693108151
- config_name: 20230601.kaa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4584133
num_examples: 3560
download_size: 2620141
dataset_size: 4584133
- config_name: 20230601.kab
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4374017
num_examples: 5800
download_size: 2570505
dataset_size: 4374017
- config_name: 20230601.kbd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3034249
num_examples: 1637
download_size: 1317388
dataset_size: 3034249
- config_name: 20230601.kbp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3571606
num_examples: 1918
download_size: 1794790
dataset_size: 3571606
- config_name: 20230601.kcg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 663326
num_examples: 825
download_size: 350587
dataset_size: 663326
- config_name: 20230601.kg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 463083
num_examples: 1333
download_size: 240321
dataset_size: 463083
- config_name: 20230601.ki
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 453178
num_examples: 1635
download_size: 243544
dataset_size: 453178
- config_name: 20230601.kj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5190
num_examples: 5
download_size: 10453
dataset_size: 5190
- config_name: 20230601.kk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 488955469
num_examples: 237304
download_size: 176872369
dataset_size: 488955469
- config_name: 20230601.kl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 312839
num_examples: 298
download_size: 193192
dataset_size: 312839
- config_name: 20230601.km
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 102051337
num_examples: 11784
download_size: 35067125
dataset_size: 102051337
- config_name: 20230601.kn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 394061570
num_examples: 30793
download_size: 143867617
dataset_size: 394061570
- config_name: 20230601.ko
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1374136790
num_examples: 635278
download_size: 777760206
dataset_size: 1374136790
- config_name: 20230601.koi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5077608
num_examples: 3487
download_size: 1880469
dataset_size: 5077608
- config_name: 20230601.krc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4592333
num_examples: 2098
download_size: 2019043
dataset_size: 4592333
- config_name: 20230601.ks
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2380920
num_examples: 4060
download_size: 849849
dataset_size: 2380920
- config_name: 20230601.ksh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3110398
num_examples: 2945
download_size: 2004743
dataset_size: 3110398
- config_name: 20230601.ku
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42327613
num_examples: 59529
download_size: 21970440
dataset_size: 42327613
- config_name: 20230601.kv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9221030
num_examples: 5589
download_size: 3676356
dataset_size: 9221030
- config_name: 20230601.kw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4653320
num_examples: 7070
download_size: 2695687
dataset_size: 4653320
- config_name: 20230601.ky
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 168214006
num_examples: 80594
download_size: 64353836
dataset_size: 168214006
- config_name: 20230601.la
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 139977277
num_examples: 137851
download_size: 75850224
dataset_size: 139977277
- config_name: 20230601.lad
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4820385
num_examples: 3638
download_size: 2703040
dataset_size: 4820385
- config_name: 20230601.lb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 87567860
num_examples: 61757
download_size: 49791518
dataset_size: 87567860
- config_name: 20230601.lbe
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 698292
num_examples: 1276
download_size: 282486
dataset_size: 698292
- config_name: 20230601.lez
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9785097
num_examples: 4256
download_size: 3849506
dataset_size: 9785097
- config_name: 20230601.lfn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8850905
num_examples: 4805
download_size: 5189938
dataset_size: 8850905
- config_name: 20230601.lg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6771716
num_examples: 4016
download_size: 3634293
dataset_size: 6771716
- config_name: 20230601.li
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 29183994
num_examples: 14308
download_size: 17566220
dataset_size: 29183994
- config_name: 20230601.lij
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11088927
num_examples: 11132
download_size: 6042920
dataset_size: 11088927
- config_name: 20230601.lld
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 45325217
num_examples: 158242
download_size: 12436563
dataset_size: 45325217
- config_name: 20230601.lmo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42267433
num_examples: 71061
download_size: 18724770
dataset_size: 42267433
- config_name: 20230601.ln
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2024697
num_examples: 3515
download_size: 1115171
dataset_size: 2024697
- config_name: 20230601.lo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14729412
num_examples: 4928
download_size: 5382036
dataset_size: 14729412
- config_name: 20230601.lrc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 144
num_examples: 1
download_size: 2723
dataset_size: 144
- config_name: 20230601.lt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 331252602
num_examples: 208114
download_size: 191925990
dataset_size: 331252602
- config_name: 20230601.ltg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 901980
num_examples: 1044
download_size: 522213
dataset_size: 901980
- config_name: 20230601.lv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 220969643
num_examples: 120295
download_size: 126161867
dataset_size: 220969643
- config_name: 20230601.mad
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1325061
num_examples: 1103
download_size: 764579
dataset_size: 1325061
- config_name: 20230601.mai
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21215977
num_examples: 14622
download_size: 6041134
dataset_size: 21215977
- config_name: 20230601.map-bms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5400186
num_examples: 13554
download_size: 2420169
dataset_size: 5400186
- config_name: 20230601.mdf
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4033455
num_examples: 3473
download_size: 1513534
dataset_size: 4033455
- config_name: 20230601.mg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 71936817
num_examples: 95675
download_size: 21206762
dataset_size: 71936817
- config_name: 20230601.mh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11524
num_examples: 8
download_size: 16877
dataset_size: 11524
- config_name: 20230601.mhr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19030836
num_examples: 11016
download_size: 6821706
dataset_size: 19030836
- config_name: 20230601.mi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4120867
num_examples: 7855
download_size: 1016905
dataset_size: 4120867
- config_name: 20230601.min
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 118484114
num_examples: 226953
download_size: 25401691
dataset_size: 118484114
- config_name: 20230601.mk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 633734922
num_examples: 136723
download_size: 263383509
dataset_size: 633734922
- config_name: 20230601.ml
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 485143578
num_examples: 84794
download_size: 179727029
dataset_size: 485143578
- config_name: 20230601.mn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88813927
num_examples: 23385
download_size: 40026827
dataset_size: 88813927
- config_name: 20230601.mni
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9790220
num_examples: 10877
download_size: 2193774
dataset_size: 9790220
- config_name: 20230601.mnw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 45579901
num_examples: 3184
download_size: 13207357
dataset_size: 45579901
- config_name: 20230601.mr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 254646708
num_examples: 92898
download_size: 79982313
dataset_size: 254646708
- config_name: 20230601.mrj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8729899
num_examples: 10542
download_size: 3278742
dataset_size: 8729899
- config_name: 20230601.ms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 410354637
num_examples: 365491
download_size: 206610861
dataset_size: 410354637
- config_name: 20230601.mt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 26613613
num_examples: 5369
download_size: 15563924
dataset_size: 26613613
- config_name: 20230601.mus
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 922
num_examples: 2
download_size: 5286
dataset_size: 922
- config_name: 20230601.mwl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19284605
num_examples: 4474
download_size: 11469001
dataset_size: 19284605
- config_name: 20230601.my
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 310836677
num_examples: 108750
download_size: 84350660
dataset_size: 310836677
- config_name: 20230601.myv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11073788
num_examples: 7910
download_size: 4560227
dataset_size: 11073788
- config_name: 20230601.mzn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14682517
num_examples: 15995
download_size: 4856126
dataset_size: 14682517
- config_name: 20230601.nah
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2843124
num_examples: 6654
download_size: 1347633
dataset_size: 2843124
- config_name: 20230601.nap
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6365024
num_examples: 14849
download_size: 3169570
dataset_size: 6365024
- config_name: 20230601.nds
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 92743798
num_examples: 84225
download_size: 47925882
dataset_size: 92743798
- config_name: 20230601.nds-nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13432115
num_examples: 7669
download_size: 8207550
dataset_size: 13432115
- config_name: 20230601.ne
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 105562688
num_examples: 32084
download_size: 36335987
dataset_size: 105562688
- config_name: 20230601.new
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 159067466
num_examples: 73004
download_size: 20472096
dataset_size: 159067466
- config_name: 20230601.ng
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 68090
num_examples: 21
download_size: 52355
dataset_size: 68090
- config_name: 20230601.nia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1793045
num_examples: 1638
download_size: 908004
dataset_size: 1793045
- config_name: 20230601.nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2607286503
num_examples: 2123556
download_size: 1451716829
dataset_size: 2607286503
- config_name: 20230601.nn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 233905017
num_examples: 165610
download_size: 132674509
dataset_size: 233905017
- config_name: 20230601.no
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1018553680
num_examples: 611542
download_size: 594771430
dataset_size: 1018553680
- config_name: 20230601.nov
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 912652
num_examples: 1626
download_size: 466451
dataset_size: 912652
- config_name: 20230601.nqo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8295905
num_examples: 1577
download_size: 3503359
dataset_size: 8295905
- config_name: 20230601.nrm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3212495
num_examples: 4887
download_size: 1504411
dataset_size: 3212495
- config_name: 20230601.nso
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2753446
num_examples: 8617
download_size: 912548
dataset_size: 2753446
- config_name: 20230601.nv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16785014
num_examples: 22189
download_size: 3271175
dataset_size: 16785014
- config_name: 20230601.ny
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1693443
num_examples: 1133
download_size: 937213
dataset_size: 1693443
- config_name: 20230601.oc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 117818984
num_examples: 88886
download_size: 62764519
dataset_size: 117818984
- config_name: 20230601.olo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3122448
num_examples: 4514
download_size: 1707016
dataset_size: 3122448
- config_name: 20230601.om
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3057811
num_examples: 1574
download_size: 1720686
dataset_size: 3057811
- config_name: 20230601.or
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 71342568
num_examples: 16793
download_size: 25347488
dataset_size: 71342568
- config_name: 20230601.os
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12975022
num_examples: 17066
download_size: 5519425
dataset_size: 12975022
- config_name: 20230601.pa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 205173613
num_examples: 49955
download_size: 78370120
dataset_size: 205173613
- config_name: 20230601.pag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1336264
num_examples: 2638
download_size: 417192
dataset_size: 1336264
- config_name: 20230601.pam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8241795
num_examples: 8935
download_size: 4231831
dataset_size: 8241795
- config_name: 20230601.pap
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3662048
num_examples: 3237
download_size: 2098802
dataset_size: 3662048
- config_name: 20230601.pcd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5622299
num_examples: 5639
download_size: 3094652
dataset_size: 5622299
- config_name: 20230601.pcm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1531576
num_examples: 954
download_size: 937573
dataset_size: 1531576
- config_name: 20230601.pdc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1196915
num_examples: 2162
download_size: 688667
dataset_size: 1196915
- config_name: 20230601.pfl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3682829
num_examples: 2756
download_size: 1962515
dataset_size: 3682829
- config_name: 20230601.pi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1134003
num_examples: 3056
download_size: 196632
dataset_size: 1134003
- config_name: 20230601.pih
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 378374
num_examples: 930
download_size: 236668
dataset_size: 378374
- config_name: 20230601.pl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2904184909
num_examples: 1569515
download_size: 1787531053
dataset_size: 2904184909
- config_name: 20230601.pms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 34301415
num_examples: 67899
download_size: 11986805
dataset_size: 34301415
- config_name: 20230601.pnb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 298316454
num_examples: 70562
download_size: 130650981
dataset_size: 298316454
- config_name: 20230601.pnt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 675000
num_examples: 535
download_size: 298222
dataset_size: 675000
- config_name: 20230601.ps
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 104012780
num_examples: 19565
download_size: 48710783
dataset_size: 104012780
- config_name: 20230601.pt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2693736720
num_examples: 1103446
download_size: 1571347957
dataset_size: 2693736720
- config_name: 20230601.pwn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 800565
num_examples: 380
download_size: 446595
dataset_size: 800565
- config_name: 20230601.qu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16631588
num_examples: 23909
download_size: 7575996
dataset_size: 16631588
- config_name: 20230601.rm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17822525
num_examples: 3815
download_size: 10339459
dataset_size: 17822525
- config_name: 20230601.rmy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 491195
num_examples: 930
download_size: 285442
dataset_size: 491195
- config_name: 20230601.rn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 522745
num_examples: 805
download_size: 295575
dataset_size: 522745
- config_name: 20230601.ro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 834681972
num_examples: 440015
download_size: 466488330
dataset_size: 834681972
- config_name: 20230601.roa-rup
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1713384
num_examples: 1409
download_size: 955926
dataset_size: 1713384
- config_name: 20230601.roa-tara
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7418561
num_examples: 9337
download_size: 3970663
dataset_size: 7418561
- config_name: 20230601.ru
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10097718899
num_examples: 1918942
download_size: 4880008552
dataset_size: 10097718899
- config_name: 20230601.rue
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12975836
num_examples: 8703
download_size: 6269020
dataset_size: 12975836
- config_name: 20230601.rw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10794817
num_examples: 7425
download_size: 6009979
dataset_size: 10794817
- config_name: 20230601.sa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 69233233
num_examples: 12101
download_size: 23590461
dataset_size: 69233233
- config_name: 20230601.sah
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 47530889
num_examples: 16598
download_size: 21213858
dataset_size: 47530889
- config_name: 20230601.sat
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 35005528
num_examples: 8264
download_size: 12124520
dataset_size: 35005528
- config_name: 20230601.sc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12683528
num_examples: 7540
download_size: 7650423
dataset_size: 12683528
- config_name: 20230601.scn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17672274
num_examples: 26507
download_size: 10210177
dataset_size: 17672274
- config_name: 20230601.sco
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43796852
num_examples: 36206
download_size: 24764727
dataset_size: 43796852
- config_name: 20230601.sd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36672141
num_examples: 16882
download_size: 17409382
dataset_size: 36672141
- config_name: 20230601.se
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3600247
num_examples: 8040
download_size: 1814982
dataset_size: 3600247
- config_name: 20230601.sg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 127791
num_examples: 548
download_size: 63800
dataset_size: 127791
- config_name: 20230601.sh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 569915575
num_examples: 458272
download_size: 270502498
dataset_size: 569915575
- config_name: 20230601.shi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2195129
num_examples: 1544
download_size: 1311300
dataset_size: 2195129
- config_name: 20230601.shn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33233508
num_examples: 13706
download_size: 8107005
dataset_size: 33233508
- config_name: 20230601.si
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 135560965
num_examples: 22574
download_size: 52870973
dataset_size: 135560965
- config_name: 20230601.sk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 410287543
num_examples: 240597
download_size: 237984111
dataset_size: 410287543
- config_name: 20230601.skr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22294235
num_examples: 5739
download_size: 9744982
dataset_size: 22294235
- config_name: 20230601.sl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 444732062
num_examples: 181212
download_size: 263697513
dataset_size: 444732062
- config_name: 20230601.sm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 891597
num_examples: 1143
download_size: 485815
dataset_size: 891597
- config_name: 20230601.smn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5526668
num_examples: 5094
download_size: 2710998
dataset_size: 5526668
- config_name: 20230601.sn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9252554
num_examples: 10917
download_size: 4738498
dataset_size: 9252554
- config_name: 20230601.so
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14893759
num_examples: 10812
download_size: 8617659
dataset_size: 14893759
- config_name: 20230601.sq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 197206847
num_examples: 100423
download_size: 110414776
dataset_size: 197206847
- config_name: 20230601.sr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1690745100
num_examples: 671352
download_size: 695586988
dataset_size: 1690745100
- config_name: 20230601.srn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 649044
num_examples: 1218
download_size: 214987
dataset_size: 649044
- config_name: 20230601.ss
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 861417
num_examples: 720
download_size: 489383
dataset_size: 861417
- config_name: 20230601.st
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 934954
num_examples: 1073
download_size: 517491
dataset_size: 934954
- config_name: 20230601.stq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4929355
num_examples: 4129
download_size: 2878034
dataset_size: 4929355
- config_name: 20230601.su
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 47909002
num_examples: 61490
download_size: 19683635
dataset_size: 47909002
- config_name: 20230601.sv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2133848723
num_examples: 2564263
download_size: 1002020509
dataset_size: 2133848723
- config_name: 20230601.sw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 71857907
num_examples: 77334
download_size: 35252918
dataset_size: 71857907
- config_name: 20230601.szl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21335080
num_examples: 56652
download_size: 7284436
dataset_size: 21335080
- config_name: 20230601.szy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10412319
num_examples: 4709
download_size: 5572825
dataset_size: 10412319
- config_name: 20230601.tay
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2779734
num_examples: 2595
download_size: 1147869
dataset_size: 2779734
- config_name: 20230601.tcy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11968976
num_examples: 2173
download_size: 4524692
dataset_size: 11968976
- config_name: 20230601.te
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 705766405
num_examples: 83107
download_size: 206360536
dataset_size: 705766405
- config_name: 20230601.tet
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1457614
num_examples: 1460
download_size: 739227
dataset_size: 1457614
- config_name: 20230601.tg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 145506377
num_examples: 109839
download_size: 48637192
dataset_size: 145506377
- config_name: 20230601.th
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 987873133
num_examples: 156445
download_size: 365894157
dataset_size: 987873133
- config_name: 20230601.ti
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 665363
num_examples: 433
download_size: 328037
dataset_size: 665363
- config_name: 20230601.tk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12580480
num_examples: 7836
download_size: 6951103
dataset_size: 12580480
- config_name: 20230601.tl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 82731267
num_examples: 44797
download_size: 44058126
dataset_size: 82731267
- config_name: 20230601.tn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3562981
num_examples: 1162
download_size: 1244173
dataset_size: 3562981
- config_name: 20230601.to
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1074947
num_examples: 1848
download_size: 510687
dataset_size: 1074947
- config_name: 20230601.tpi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 450891
num_examples: 1390
download_size: 236441
dataset_size: 450891
- config_name: 20230601.tr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 965186144
num_examples: 524184
download_size: 543958666
dataset_size: 965186144
- config_name: 20230601.trv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4873244
num_examples: 1809
download_size: 2635461
dataset_size: 4873244
- config_name: 20230601.ts
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 841497
num_examples: 769
download_size: 451958
dataset_size: 841497
- config_name: 20230601.tt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 679276199
num_examples: 500608
download_size: 128386602
dataset_size: 679276199
- config_name: 20230601.tum
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8395079
num_examples: 14169
download_size: 3225881
dataset_size: 8395079
- config_name: 20230601.tw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6562128
num_examples: 3608
download_size: 3389042
dataset_size: 6562128
- config_name: 20230601.ty
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 324678
num_examples: 1348
download_size: 145184
dataset_size: 324678
- config_name: 20230601.tyv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14032235
num_examples: 3459
download_size: 6378954
dataset_size: 14032235
- config_name: 20230601.udm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6918258
num_examples: 5586
download_size: 2937644
dataset_size: 6918258
- config_name: 20230601.ug
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 41939834
num_examples: 8557
download_size: 17588763
dataset_size: 41939834
- config_name: 20230601.uk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4815765166
num_examples: 1266287
download_size: 2257591520
dataset_size: 4815765166
- config_name: 20230601.ur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 394375073
num_examples: 194435
download_size: 160552761
dataset_size: 394375073
- config_name: 20230601.uz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 372775375
num_examples: 241353
download_size: 196367714
dataset_size: 372775375
- config_name: 20230601.ve
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 347015
num_examples: 836
download_size: 159547
dataset_size: 347015
- config_name: 20230601.vec
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37671800
num_examples: 69181
download_size: 16029908
dataset_size: 37671800
- config_name: 20230601.vep
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11259222
num_examples: 6851
download_size: 6196150
dataset_size: 11259222
- config_name: 20230601.vi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1584847634
num_examples: 1283785
download_size: 731354374
dataset_size: 1584847634
- config_name: 20230601.vls
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11296047
num_examples: 7824
download_size: 6952370
dataset_size: 11296047
- config_name: 20230601.vo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18943004
num_examples: 33641
download_size: 6379410
dataset_size: 18943004
- config_name: 20230601.wa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11990482
num_examples: 11858
download_size: 7144929
dataset_size: 11990482
- config_name: 20230601.war
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 468715357
num_examples: 1266238
download_size: 109807953
dataset_size: 468715357
- config_name: 20230601.wo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3498671
num_examples: 1719
download_size: 2076485
dataset_size: 3498671
- config_name: 20230601.wuu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24986530
num_examples: 42950
download_size: 15960262
dataset_size: 24986530
- config_name: 20230601.xal
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1386014
num_examples: 2307
download_size: 508481
dataset_size: 1386014
- config_name: 20230601.xh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2320277
num_examples: 1601
download_size: 1444732
dataset_size: 2320277
- config_name: 20230601.xmf
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36557690
num_examples: 17705
download_size: 12535173
dataset_size: 36557690
- config_name: 20230601.yi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36031133
num_examples: 15297
download_size: 16153644
dataset_size: 36031133
- config_name: 20230601.yo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18018480
num_examples: 33179
download_size: 8274108
dataset_size: 18018480
- config_name: 20230601.za
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1276590
num_examples: 2722
download_size: 642448
dataset_size: 1276590
- config_name: 20230601.zea
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5059421
num_examples: 5756
download_size: 2547904
dataset_size: 5059421
- config_name: 20230601.zh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2720688196
num_examples: 1357881
download_size: 1718953037
dataset_size: 2720688196
- config_name: 20230601.zh-classical
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14617535
num_examples: 12513
download_size: 9882532
dataset_size: 14617535
- config_name: 20230601.zh-min-nan
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 159218053
num_examples: 432531
download_size: 37371610
dataset_size: 159218053
- config_name: 20230601.zh-yue
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 107325669
num_examples: 131542
download_size: 63294114
dataset_size: 107325669
- config_name: 20230601.zu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6915666
num_examples: 11381
download_size: 3683813
dataset_size: 6915666
- config_name: 20230601.hr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 438311404
num_examples: 200747
download_size: 275098294
dataset_size: 438311404
- config_name: 20230601.simple
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 282844880
num_examples: 231233
download_size: 154520600
dataset_size: 282844880
- config_name: 20230601.ta
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 789472198
num_examples: 156273
download_size: 258263767
dataset_size: 789472198
- config_name: 20230901.ab
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4257828
num_examples: 6135
download_size: 1204070
dataset_size: 4257828
- config_name: 20230901.ace
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4988748
num_examples: 12932
download_size: 1532859
dataset_size: 4988748
- config_name: 20230901.ady
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 732900
num_examples: 656
download_size: 334202
dataset_size: 732900
- config_name: 20230901.af
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 223836122
num_examples: 110683
download_size: 122868601
dataset_size: 223836122
- config_name: 20230901.ak
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 189
num_examples: 1
download_size: 3045
dataset_size: 189
- config_name: 20230901.als
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 81066470
num_examples: 29914
download_size: 49151942
dataset_size: 81066470
- config_name: 20230901.alt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6370197
num_examples: 1076
download_size: 2683190
dataset_size: 6370197
- config_name: 20230901.am
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 24108874
num_examples: 13863
download_size: 10659605
dataset_size: 24108874
- config_name: 20230901.ami
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4376488
num_examples: 1613
download_size: 2207864
dataset_size: 4376488
- config_name: 20230901.an
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 57157273
num_examples: 44090
download_size: 29392661
dataset_size: 57157273
- config_name: 20230901.ang
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2899899
num_examples: 4106
download_size: 1782699
dataset_size: 2899899
- config_name: 20230901.anp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9238243
num_examples: 2753
download_size: 3338080
dataset_size: 9238243
- config_name: 20230901.ar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3090850739
num_examples: 1214692
download_size: 1336764394
dataset_size: 3090850739
- config_name: 20230901.arc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 837851
num_examples: 1935
download_size: 364313
dataset_size: 837851
- config_name: 20230901.ary
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10716445
num_examples: 7181
download_size: 4413789
dataset_size: 10716445
- config_name: 20230901.arz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1371439747
num_examples: 1619204
download_size: 309552126
dataset_size: 1371439747
- config_name: 20230901.as
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88616101
num_examples: 12209
download_size: 33925273
dataset_size: 88616101
- config_name: 20230901.ast
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 470680707
num_examples: 133219
download_size: 271143532
dataset_size: 470680707
- config_name: 20230901.atj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1009452
num_examples: 1967
download_size: 512377
dataset_size: 1009452
- config_name: 20230901.av
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6136668
num_examples: 3420
download_size: 2568423
dataset_size: 6136668
- config_name: 20230901.avk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 31833142
num_examples: 28141
download_size: 7911635
dataset_size: 31833142
- config_name: 20230901.awa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3591539
num_examples: 3696
download_size: 1233124
dataset_size: 3591539
- config_name: 20230901.ay
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4378141
num_examples: 5348
download_size: 1748641
dataset_size: 4378141
- config_name: 20230901.az
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 430470815
num_examples: 195659
download_size: 228140471
dataset_size: 430470815
- config_name: 20230901.azb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 186776266
num_examples: 243263
download_size: 46619566
dataset_size: 186776266
- config_name: 20230901.ba
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 296321332
num_examples: 63134
download_size: 121809783
dataset_size: 296321332
- config_name: 20230901.ban
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17383384
num_examples: 20242
download_size: 6524686
dataset_size: 17383384
- config_name: 20230901.bar
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36251706
num_examples: 27040
download_size: 21762636
dataset_size: 36251706
- config_name: 20230901.bat-smg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7584027
num_examples: 17214
download_size: 3437198
dataset_size: 7584027
- config_name: 20230901.bcl
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 19285430
    num_examples: 14723
  download_size: 10682007
  dataset_size: 19285430
- config_name: 20230901.be
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 618711883
    num_examples: 234760
  download_size: 286395236
  dataset_size: 618711883
- config_name: 20230901.be-x-old
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 249911330
    num_examples: 83778
  download_size: 113105161
  dataset_size: 249911330
- config_name: 20230901.bg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1095408838
num_examples: 293306
download_size: 514238024
dataset_size: 1095408838
- config_name: 20230901.bh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16433197
num_examples: 8552
download_size: 5775459
dataset_size: 16433197
- config_name: 20230901.bi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 405238
num_examples: 1544
download_size: 204286
dataset_size: 405238
- config_name: 20230901.bjn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6761698
num_examples: 10460
download_size: 3255595
dataset_size: 6761698
- config_name: 20230901.blk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25837114
num_examples: 2923
download_size: 7802724
dataset_size: 25837114
- config_name: 20230901.bm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 591154
num_examples: 1254
download_size: 324954
dataset_size: 591154
- config_name: 20230901.bn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 945095157
num_examples: 141288
download_size: 340510394
dataset_size: 945095157
- config_name: 20230901.bo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 132468794
num_examples: 12826
download_size: 38750901
dataset_size: 132468794
- config_name: 20230901.bpy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42975074
num_examples: 25165
download_size: 6557544
dataset_size: 42975074
- config_name: 20230901.br
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 84959382
num_examples: 83342
download_size: 49373423
dataset_size: 84959382
- config_name: 20230901.bs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 192322421
num_examples: 92325
download_size: 106973603
dataset_size: 192322421
- config_name: 20230901.bug
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3433942
num_examples: 15877
download_size: 816476
dataset_size: 3433942
- config_name: 20230901.bxr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6686504
num_examples: 2791
download_size: 3073419
dataset_size: 6686504
- config_name: 20230901.ca
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1942397691
num_examples: 733807
download_size: 1127952357
dataset_size: 1942397691
- config_name: 20230901.cbk-zam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1997943
num_examples: 3276
download_size: 776590
dataset_size: 1997943
- config_name: 20230901.cdo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5085776
num_examples: 16406
download_size: 1972779
dataset_size: 5085776
- config_name: 20230901.ce
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 729121943
num_examples: 600961
download_size: 87442481
dataset_size: 729121943
- config_name: 20230901.ceb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4568428530
num_examples: 6122999
download_size: 925715583
dataset_size: 4568428530
- config_name: 20230901.ch
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 187141
num_examples: 591
download_size: 93248
dataset_size: 187141
- config_name: 20230901.cho
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7974
num_examples: 14
download_size: 9782
dataset_size: 7974
- config_name: 20230901.chr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 768617
num_examples: 1121
download_size: 343463
dataset_size: 768617
- config_name: 20230901.chy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 145752
num_examples: 800
download_size: 74383
dataset_size: 145752
- config_name: 20230901.ckb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 105393226
num_examples: 51534
download_size: 42196297
dataset_size: 105393226
- config_name: 20230901.co
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9828777
num_examples: 7286
download_size: 5312668
dataset_size: 9828777
- config_name: 20230901.cr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 54526
num_examples: 176
download_size: 34910
dataset_size: 54526
- config_name: 20230901.crh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9450530
num_examples: 26893
download_size: 3578677
dataset_size: 9450530
- config_name: 20230901.cs
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1552256812
num_examples: 531017
download_size: 981191812
dataset_size: 1552256812
- config_name: 20230901.csb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3748403
num_examples: 5480
download_size: 2055688
dataset_size: 3748403
- config_name: 20230901.cu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 981478
num_examples: 1237
download_size: 397764
dataset_size: 981478
- config_name: 20230901.cv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 81463626
num_examples: 51647
download_size: 29416321
dataset_size: 81463626
- config_name: 20230901.cy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 305551170
num_examples: 279341
download_size: 111947867
dataset_size: 305551170
- config_name: 20230901.da
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 544417184
num_examples: 294196
download_size: 329369262
dataset_size: 544417184
- config_name: 20230901.dag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11405576
num_examples: 9584
download_size: 4905465
dataset_size: 11405576
- config_name: 20230901.de
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9552907552
num_examples: 2828561
download_size: 5816126238
dataset_size: 9552907552
- config_name: 20230901.din
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 562639
num_examples: 511
download_size: 339141
dataset_size: 562639
- config_name: 20230901.diq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19574906
num_examples: 41541
download_size: 7581584
dataset_size: 19574906
- config_name: 20230901.dsb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3314217
num_examples: 3376
download_size: 1930644
dataset_size: 3314217
- config_name: 20230901.dty
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6999985
num_examples: 3629
download_size: 2505457
dataset_size: 6999985
- config_name: 20230901.dv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13919491
num_examples: 4345
download_size: 5255676
dataset_size: 13919491
- config_name: 20230901.dz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8837256
num_examples: 787
download_size: 2571127
dataset_size: 8837256
- config_name: 20230901.ee
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 881798
num_examples: 1172
download_size: 482924
dataset_size: 881798
- config_name: 20230901.el
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1335513979
num_examples: 225623
download_size: 637838917
dataset_size: 1335513979
- config_name: 20230901.eml
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3620183
num_examples: 12954
download_size: 1687294
dataset_size: 3620183
- config_name: 20230901.en
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21550145456
num_examples: 6705754
download_size: 12639246876
dataset_size: 21550145456
- config_name: 20230901.eo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 517650573
num_examples: 342419
download_size: 299082818
dataset_size: 517650573
- config_name: 20230901.es
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5977729133
num_examples: 1826609
download_size: 3528834297
dataset_size: 5977729133
- config_name: 20230901.et
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 436983600
num_examples: 239195
download_size: 266302500
dataset_size: 436983600
- config_name: 20230901.eu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 555867111
num_examples: 408841
download_size: 269449522
dataset_size: 555867111
- config_name: 20230901.ext
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4334809
num_examples: 3737
download_size: 2724237
dataset_size: 4334809
- config_name: 20230901.fa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1879857088
num_examples: 972647
download_size: 771735257
dataset_size: 1879857088
- config_name: 20230901.fat
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2016722
num_examples: 1113
download_size: 1115327
dataset_size: 2016722
- config_name: 20230901.ff
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1619659
num_examples: 1929
download_size: 951246
dataset_size: 1619659
- config_name: 20230901.fi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1138299674
num_examples: 558359
download_size: 686112933
dataset_size: 1138299674
- config_name: 20230901.fiu-vro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4789834
num_examples: 6572
download_size: 2475758
dataset_size: 4789834
- config_name: 20230901.fj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 600984
num_examples: 1291
download_size: 325888
dataset_size: 600984
- config_name: 20230901.fo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15387671
num_examples: 14054
download_size: 8835604
dataset_size: 15387671
- config_name: 20230901.fr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8004882292
num_examples: 2549364
download_size: 4674130728
dataset_size: 8004882292
- config_name: 20230901.frp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3646051
num_examples: 5744
download_size: 1899883
dataset_size: 3646051
- config_name: 20230901.frr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10513932
num_examples: 17708
download_size: 5190719
dataset_size: 10513932
- config_name: 20230901.fur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4073954
num_examples: 3977
download_size: 2408634
dataset_size: 4073954
- config_name: 20230901.fy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 133127089
num_examples: 52120
download_size: 75305215
dataset_size: 133127089
- config_name: 20230901.ga
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 60113068
num_examples: 58940
download_size: 33805587
dataset_size: 60113068
- config_name: 20230901.gag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2405444
num_examples: 2967
download_size: 1319216
dataset_size: 2405444
- config_name: 20230901.gan
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2905828
num_examples: 6739
download_size: 1504592
dataset_size: 2905828
- config_name: 20230901.gcr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2338042
num_examples: 2398
download_size: 1345374
dataset_size: 2338042
- config_name: 20230901.gd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14057133
num_examples: 16034
download_size: 7199577
dataset_size: 14057133
- config_name: 20230901.gl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 489325069
num_examples: 198354
download_size: 291176228
dataset_size: 489325069
- config_name: 20230901.glk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6078167
num_examples: 7046
download_size: 2379845
dataset_size: 6078167
- config_name: 20230901.gn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6869059
num_examples: 5475
download_size: 3777263
dataset_size: 6869059
- config_name: 20230901.gom
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30886509
num_examples: 4257
download_size: 11274837
dataset_size: 30886509
- config_name: 20230901.gor
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6131050
num_examples: 14572
download_size: 2047896
dataset_size: 6131050
- config_name: 20230901.got
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1533270
num_examples: 1012
download_size: 633392
dataset_size: 1533270
- config_name: 20230901.gu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 121284600
num_examples: 30413
download_size: 39504567
dataset_size: 121284600
- config_name: 20230901.guc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 939870
num_examples: 618
download_size: 556772
dataset_size: 939870
- config_name: 20230901.gur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1620565
num_examples: 1119
download_size: 820347
dataset_size: 1620565
- config_name: 20230901.guw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1900240
num_examples: 1303
download_size: 1030888
dataset_size: 1900240
- config_name: 20230901.gv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6030196
num_examples: 6009
download_size: 3195985
dataset_size: 6030196
- config_name: 20230901.ha
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 73654886
num_examples: 33752
download_size: 40714314
dataset_size: 73654886
- config_name: 20230901.hak
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4509695
num_examples: 10238
download_size: 1879146
dataset_size: 4509695
- config_name: 20230901.haw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1672431
num_examples: 2615
download_size: 694045
dataset_size: 1672431
- config_name: 20230901.he
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1927823110
num_examples: 330733
download_size: 974031783
dataset_size: 1927823110
- config_name: 20230901.hi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 667221249
num_examples: 162285
download_size: 235641052
dataset_size: 667221249
- config_name: 20230901.hif
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5676100
num_examples: 10981
download_size: 2709810
dataset_size: 5676100
- config_name: 20230901.ho
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3450
num_examples: 3
download_size: 7714
dataset_size: 3450
- config_name: 20230901.hr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 441122356
num_examples: 201819
download_size: 276842760
dataset_size: 441122356
- config_name: 20230901.hsb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15657332
num_examples: 13949
download_size: 7427955
dataset_size: 15657332
- config_name: 20230901.ht
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 54641623
num_examples: 70002
download_size: 21699003
dataset_size: 54641623
- config_name: 20230901.hu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1505652559
num_examples: 529609
download_size: 913575039
dataset_size: 1505652559
- config_name: 20230901.hy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1167174995
num_examples: 301853
download_size: 488665605
dataset_size: 1167174995
- config_name: 20230901.hyw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 59286603
num_examples: 11644
download_size: 27305593
dataset_size: 59286603
- config_name: 20230901.ia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16319168
num_examples: 28081
download_size: 8200366
dataset_size: 16319168
- config_name: 20230901.id
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1110116852
num_examples: 657990
download_size: 587862344
dataset_size: 1110116852
- config_name: 20230901.ie
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6658278
num_examples: 11811
download_size: 2978290
dataset_size: 6658278
- config_name: 20230901.ig
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 55435770
num_examples: 19892
download_size: 28977840
dataset_size: 55435770
- config_name: 20230901.ii
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8921
num_examples: 14
download_size: 14936
dataset_size: 8921
- config_name: 20230901.ik
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 192007
num_examples: 831
download_size: 110667
dataset_size: 192007
- config_name: 20230901.ilo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16853115
num_examples: 15369
download_size: 7345494
dataset_size: 16853115
- config_name: 20230901.inh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2722201
num_examples: 2121
download_size: 1273603
dataset_size: 2722201
- config_name: 20230901.io
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37616691
num_examples: 38645
download_size: 16826496
dataset_size: 37616691
- config_name: 20230901.is
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 87138239
num_examples: 57147
download_size: 51826151
dataset_size: 87138239
- config_name: 20230901.it
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4879369360
num_examples: 1824508
download_size: 2957576589
dataset_size: 4879369360
- config_name: 20230901.iu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 289114
num_examples: 561
download_size: 136067
dataset_size: 289114
- config_name: 20230901.ja
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6988535462
num_examples: 1383531
download_size: 3966219907
dataset_size: 6988535462
- config_name: 20230901.jam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1142809
num_examples: 1775
download_size: 702478
dataset_size: 1142809
- config_name: 20230901.jbo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2522674
num_examples: 1391
download_size: 888919
dataset_size: 2522674
- config_name: 20230901.jv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 71017946
num_examples: 73150
download_size: 36394809
dataset_size: 71017946
- config_name: 20230901.ka
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 696934958
num_examples: 169131
download_size: 238964498
dataset_size: 696934958
- config_name: 20230901.kaa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4754449
num_examples: 3856
download_size: 2682618
dataset_size: 4754449
- config_name: 20230901.kab
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4388232
num_examples: 5825
download_size: 2578056
dataset_size: 4388232
- config_name: 20230901.kbd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3040422
num_examples: 1656
download_size: 1319464
dataset_size: 3040422
- config_name: 20230901.kbp
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3579071
num_examples: 1922
download_size: 1795549
dataset_size: 3579071
- config_name: 20230901.kcg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 728303
num_examples: 913
download_size: 382843
dataset_size: 728303
- config_name: 20230901.kg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 386320
num_examples: 1325
download_size: 206106
dataset_size: 386320
- config_name: 20230901.ki
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 731003
num_examples: 1647
download_size: 408805
dataset_size: 731003
- config_name: 20230901.kj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5190
num_examples: 5
download_size: 10453
dataset_size: 5190
- config_name: 20230901.kk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 494357868
num_examples: 237902
download_size: 179217175
dataset_size: 494357868
- config_name: 20230901.kl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 313121
num_examples: 298
download_size: 193507
dataset_size: 313121
- config_name: 20230901.km
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 102576754
num_examples: 11874
download_size: 35281246
dataset_size: 102576754
- config_name: 20230901.kn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 399521127
num_examples: 31136
download_size: 145847507
dataset_size: 399521127
- config_name: 20230901.ko
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1401002436
num_examples: 643723
download_size: 792232087
dataset_size: 1401002436
- config_name: 20230901.koi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5102564
num_examples: 3504
download_size: 1887860
dataset_size: 5102564
- config_name: 20230901.krc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4586443
num_examples: 2098
download_size: 2015581
dataset_size: 4586443
- config_name: 20230901.ks
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2828813
num_examples: 4278
download_size: 1074931
dataset_size: 2828813
- config_name: 20230901.ksh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3115805
num_examples: 2944
download_size: 2007139
dataset_size: 3115805
- config_name: 20230901.ku
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43200623
num_examples: 59822
download_size: 22481749
dataset_size: 43200623
- config_name: 20230901.kv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9244682
num_examples: 5603
download_size: 3687481
dataset_size: 9244682
- config_name: 20230901.kw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4675299
num_examples: 7088
download_size: 2703089
dataset_size: 4675299
- config_name: 20230901.ky
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 168378862
num_examples: 80665
download_size: 64423485
dataset_size: 168378862
- config_name: 20230901.la
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 140689294
num_examples: 138140
download_size: 76340691
dataset_size: 140689294
- config_name: 20230901.lad
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4878588
num_examples: 3648
download_size: 2737222
dataset_size: 4878588
- config_name: 20230901.lb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 88394374
num_examples: 62131
download_size: 50250905
dataset_size: 88394374
- config_name: 20230901.lbe
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 744689
num_examples: 1277
download_size: 304111
dataset_size: 744689
- config_name: 20230901.lez
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9793873
num_examples: 4264
download_size: 3852020
dataset_size: 9793873
- config_name: 20230901.lfn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8912633
num_examples: 4819
download_size: 5206921
dataset_size: 8912633
- config_name: 20230901.lg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6887606
num_examples: 4041
download_size: 3703329
dataset_size: 6887606
- config_name: 20230901.li
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 29373978
num_examples: 14526
download_size: 17641752
dataset_size: 29373978
- config_name: 20230901.lij
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11336209
num_examples: 11184
download_size: 6176932
dataset_size: 11336209
- config_name: 20230901.lld
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 50110703
num_examples: 180580
download_size: 13839995
dataset_size: 50110703
- config_name: 20230901.lmo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43217251
num_examples: 72899
download_size: 19041052
dataset_size: 43217251
- config_name: 20230901.ln
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2024359
num_examples: 3531
download_size: 1116032
dataset_size: 2024359
- config_name: 20230901.lo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15117598
num_examples: 4995
download_size: 5527479
dataset_size: 15117598
- config_name: 20230901.lrc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 144
num_examples: 1
download_size: 2723
dataset_size: 144
- config_name: 20230901.lt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 334697442
num_examples: 210202
download_size: 193837594
dataset_size: 334697442
- config_name: 20230901.ltg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 915321
num_examples: 1070
download_size: 530333
dataset_size: 915321
- config_name: 20230901.lv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 224476781
num_examples: 122266
download_size: 128157342
dataset_size: 224476781
- config_name: 20230901.mad
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1504064
num_examples: 1160
download_size: 856724
dataset_size: 1504064
- config_name: 20230901.mai
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21426268
num_examples: 14673
download_size: 6117668
dataset_size: 21426268
- config_name: 20230901.map-bms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5413521
num_examples: 13574
download_size: 2427039
dataset_size: 5413521
- config_name: 20230901.mdf
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4558408
num_examples: 4073
download_size: 1688901
dataset_size: 4558408
- config_name: 20230901.mg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 72920973
num_examples: 96060
download_size: 21675187
dataset_size: 72920973
- config_name: 20230901.mh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11524
num_examples: 8
download_size: 16877
dataset_size: 11524
- config_name: 20230901.mhr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19188080
num_examples: 11246
download_size: 6867184
dataset_size: 19188080
- config_name: 20230901.mi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4159228
num_examples: 7898
download_size: 1039215
dataset_size: 4159228
- config_name: 20230901.min
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 118651753
num_examples: 227024
download_size: 25511300
dataset_size: 118651753
- config_name: 20230901.mk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 640596981
num_examples: 138453
download_size: 266334099
dataset_size: 640596981
- config_name: 20230901.ml
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 490833742
num_examples: 85451
download_size: 181789443
dataset_size: 490833742
- config_name: 20230901.mn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 90537032
num_examples: 23797
download_size: 40809884
dataset_size: 90537032
- config_name: 20230901.mni
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9818372
num_examples: 10892
download_size: 2207828
dataset_size: 9818372
- config_name: 20230901.mnw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 46788079
num_examples: 3249
download_size: 13588244
dataset_size: 46788079
- config_name: 20230901.mr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 260342611
num_examples: 93653
download_size: 81397471
dataset_size: 260342611
- config_name: 20230901.mrj
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8731508
num_examples: 10542
download_size: 3279598
dataset_size: 8731508
- config_name: 20230901.ms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 419678289
num_examples: 367463
download_size: 211505058
dataset_size: 419678289
- config_name: 20230901.mt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 30536771
num_examples: 5598
download_size: 17850471
dataset_size: 30536771
- config_name: 20230901.mus
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 922
num_examples: 2
download_size: 5286
dataset_size: 922
- config_name: 20230901.mwl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19321295
num_examples: 4485
download_size: 11488668
dataset_size: 19321295
- config_name: 20230901.my
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 312482214
num_examples: 109166
download_size: 84914025
dataset_size: 312482214
- config_name: 20230901.myv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11131103
num_examples: 7947
download_size: 4586300
dataset_size: 11131103
- config_name: 20230901.mzn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 15830260
num_examples: 17696
download_size: 5258917
dataset_size: 15830260
- config_name: 20230901.nah
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2494573
num_examples: 6180
download_size: 1188515
dataset_size: 2494573
- config_name: 20230901.nap
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6377175
num_examples: 14868
download_size: 3176787
dataset_size: 6377175
- config_name: 20230901.nds
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 92854034
num_examples: 84258
download_size: 48004103
dataset_size: 92854034
- config_name: 20230901.nds-nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13560241
num_examples: 7707
download_size: 8287716
dataset_size: 13560241
- config_name: 20230901.ne
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 106930147
num_examples: 32423
download_size: 36867790
dataset_size: 106930147
- config_name: 20230901.new
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 159078463
num_examples: 73003
download_size: 20468180
dataset_size: 159078463
- config_name: 20230901.ng
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 68090
num_examples: 21
download_size: 52355
dataset_size: 68090
- config_name: 20230901.nia
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1909528
num_examples: 1651
download_size: 970289
dataset_size: 1909528
- config_name: 20230901.nl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2631597985
num_examples: 2130944
download_size: 1467451759
dataset_size: 2631597985
- config_name: 20230901.nn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 236262183
num_examples: 166642
download_size: 134021748
dataset_size: 236262183
- config_name: 20230901.no
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1027035487
num_examples: 615107
download_size: 599774543
dataset_size: 1027035487
- config_name: 20230901.nov
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 917413
num_examples: 1636
download_size: 469305
dataset_size: 917413
- config_name: 20230901.nqo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8219209
num_examples: 1571
download_size: 3478458
dataset_size: 8219209
- config_name: 20230901.nrm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3215096
num_examples: 4899
download_size: 1505717
dataset_size: 3215096
- config_name: 20230901.nso
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2789807
num_examples: 8643
download_size: 932635
dataset_size: 2789807
- config_name: 20230901.nv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16886983
num_examples: 22324
download_size: 3288156
dataset_size: 16886983
- config_name: 20230901.ny
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1695102
num_examples: 1133
download_size: 938716
dataset_size: 1695102
- config_name: 20230901.oc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 119055715
num_examples: 89270
download_size: 63403412
dataset_size: 119055715
- config_name: 20230901.olo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3152274
num_examples: 4595
download_size: 1716616
dataset_size: 3152274
- config_name: 20230901.om
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3430032
num_examples: 1911
download_size: 1900253
dataset_size: 3430032
- config_name: 20230901.or
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 72723705
num_examples: 17166
download_size: 25879025
dataset_size: 72723705
- config_name: 20230901.os
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13112794
num_examples: 17446
download_size: 5554157
dataset_size: 13112794
- config_name: 20230901.pa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 211148791
num_examples: 51013
download_size: 80668229
dataset_size: 211148791
- config_name: 20230901.pag
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1384685
num_examples: 2662
download_size: 451639
dataset_size: 1384685
- config_name: 20230901.pam
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8237319
num_examples: 8951
download_size: 4235968
dataset_size: 8237319
- config_name: 20230901.pap
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4105109
num_examples: 3427
download_size: 2353692
dataset_size: 4105109
- config_name: 20230901.pcd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5680386
num_examples: 5692
download_size: 3127716
dataset_size: 5680386
- config_name: 20230901.pcm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1807444
num_examples: 1069
download_size: 1111719
dataset_size: 1807444
- config_name: 20230901.pdc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1223268
num_examples: 2182
download_size: 696649
dataset_size: 1223268
- config_name: 20230901.pfl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3688761
num_examples: 2759
download_size: 1963616
dataset_size: 3688761
- config_name: 20230901.pi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1133972
num_examples: 3056
download_size: 196617
dataset_size: 1133972
- config_name: 20230901.pih
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 381602
num_examples: 933
download_size: 238696
dataset_size: 381602
- config_name: 20230901.pl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2929578273
num_examples: 1579326
download_size: 1803033674
dataset_size: 2929578273
- config_name: 20230901.pms
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 34318527
num_examples: 67935
download_size: 11997737
dataset_size: 34318527
- config_name: 20230901.pnb
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 303876889
num_examples: 72240
download_size: 133093182
dataset_size: 303876889
- config_name: 20230901.pnt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 630714
num_examples: 533
download_size: 275657
dataset_size: 630714
- config_name: 20230901.ps
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 109664877
num_examples: 20166
download_size: 51380951
dataset_size: 109664877
- config_name: 20230901.pt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2731435653
num_examples: 1107946
download_size: 1593477871
dataset_size: 2731435653
- config_name: 20230901.pwn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 792234
num_examples: 394
download_size: 433617
dataset_size: 792234
- config_name: 20230901.qu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 16754330
num_examples: 24096
download_size: 7651901
dataset_size: 16754330
- config_name: 20230901.rm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18052223
num_examples: 3821
download_size: 10475947
dataset_size: 18052223
- config_name: 20230901.rmy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 555208
num_examples: 969
download_size: 324565
dataset_size: 555208
- config_name: 20230901.rn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 522604
num_examples: 808
download_size: 295315
dataset_size: 522604
- config_name: 20230901.ro
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 842490285
num_examples: 441538
download_size: 471249050
dataset_size: 842490285
- config_name: 20230901.roa-rup
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1691177
num_examples: 1409
download_size: 953023
dataset_size: 1691177
- config_name: 20230901.roa-tara
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7435543
num_examples: 9341
download_size: 3982748
dataset_size: 7435543
- config_name: 20230901.ru
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10213314874
num_examples: 1935562
download_size: 4935575161
dataset_size: 10213314874
- config_name: 20230901.rue
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13110982
num_examples: 8749
download_size: 6335689
dataset_size: 13110982
- config_name: 20230901.rw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11946518
num_examples: 8044
download_size: 6640582
dataset_size: 11946518
- config_name: 20230901.sa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 69665685
num_examples: 12143
download_size: 23750145
dataset_size: 69665685
- config_name: 20230901.sah
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 47816835
num_examples: 16867
download_size: 21350955
dataset_size: 47816835
- config_name: 20230901.sat
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 40858282
num_examples: 9029
download_size: 13950418
dataset_size: 40858282
- config_name: 20230901.sc
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12732368
num_examples: 7559
download_size: 7682010
dataset_size: 12732368
- config_name: 20230901.scn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 17667128
num_examples: 26519
download_size: 10212874
dataset_size: 17667128
- config_name: 20230901.sco
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 43780491
num_examples: 36169
download_size: 24761453
dataset_size: 43780491
- config_name: 20230901.sd
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36726435
num_examples: 16894
download_size: 17439666
dataset_size: 36726435
- config_name: 20230901.se
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3600162
num_examples: 8042
download_size: 1814812
dataset_size: 3600162
- config_name: 20230901.sg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 130365
num_examples: 553
download_size: 65750
dataset_size: 130365
- config_name: 20230901.sh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 569747500
num_examples: 458212
download_size: 270404350
dataset_size: 569747500
- config_name: 20230901.shi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2348743
num_examples: 1771
download_size: 1347026
dataset_size: 2348743
- config_name: 20230901.shn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33479127
num_examples: 13878
download_size: 8148046
dataset_size: 33479127
- config_name: 20230901.si
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 136810596
num_examples: 22893
download_size: 53392258
dataset_size: 136810596
- config_name: 20230901.simple
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 287855540
num_examples: 238150
download_size: 157248327
dataset_size: 287855540
- config_name: 20230901.sk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 414483614
num_examples: 241614
download_size: 240700453
dataset_size: 414483614
- config_name: 20230901.skr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22524450
num_examples: 5768
download_size: 9854778
dataset_size: 22524450
- config_name: 20230901.sl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 451888560
num_examples: 182364
download_size: 268258798
dataset_size: 451888560
- config_name: 20230901.sm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 904339
num_examples: 1149
download_size: 493408
dataset_size: 904339
- config_name: 20230901.smn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5673858
num_examples: 5333
download_size: 2767537
dataset_size: 5673858
- config_name: 20230901.sn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 9587086
num_examples: 11354
download_size: 4889856
dataset_size: 9587086
- config_name: 20230901.so
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13594918
num_examples: 9003
download_size: 7886560
dataset_size: 13594918
- config_name: 20230901.sq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 204838795
num_examples: 103850
download_size: 114648801
dataset_size: 204838795
- config_name: 20230901.sr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1709332753
num_examples: 673516
download_size: 704099906
dataset_size: 1709332753
- config_name: 20230901.srn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 649208
num_examples: 1219
download_size: 215087
dataset_size: 649208
- config_name: 20230901.ss
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1024219
num_examples: 890
download_size: 574998
dataset_size: 1024219
- config_name: 20230901.st
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 956079
num_examples: 1094
download_size: 523485
dataset_size: 956079
- config_name: 20230901.stq
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4934155
num_examples: 4132
download_size: 2880185
dataset_size: 4934155
- config_name: 20230901.su
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 48039769
num_examples: 61557
download_size: 19764523
dataset_size: 48039769
- config_name: 20230901.sv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2146681766
num_examples: 2570535
download_size: 1009875904
dataset_size: 2146681766
- config_name: 20230901.sw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 72884231
num_examples: 78444
download_size: 35798700
dataset_size: 72884231
- config_name: 20230901.szl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21412618
num_examples: 56961
download_size: 7330797
dataset_size: 21412618
- config_name: 20230901.szy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 10793237
num_examples: 4794
download_size: 5811192
dataset_size: 10793237
- config_name: 20230901.ta
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 801530157
num_examples: 158664
download_size: 262319221
dataset_size: 801530157
- config_name: 20230901.tay
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2909279
num_examples: 2715
download_size: 1203598
dataset_size: 2909279
- config_name: 20230901.tcy
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12142146
num_examples: 2195
download_size: 4589253
dataset_size: 12142146
- config_name: 20230901.te
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 719651788
num_examples: 85840
download_size: 211297920
dataset_size: 719651788
- config_name: 20230901.tet
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1464393
num_examples: 1465
download_size: 743636
dataset_size: 1464393
- config_name: 20230901.tg
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 147555847
num_examples: 110263
download_size: 49551755
dataset_size: 147555847
- config_name: 20230901.th
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1002621820
num_examples: 158289
download_size: 371401101
dataset_size: 1002621820
- config_name: 20230901.ti
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 639136
num_examples: 430
download_size: 317759
dataset_size: 639136
- config_name: 20230901.tk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13169481
num_examples: 7898
download_size: 7284367
dataset_size: 13169481
- config_name: 20230901.tl
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 84784414
num_examples: 45155
download_size: 45203377
dataset_size: 84784414
- config_name: 20230901.tn
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3561901
num_examples: 1160
download_size: 1245027
dataset_size: 3561901
- config_name: 20230901.to
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1082372
num_examples: 1866
download_size: 515293
dataset_size: 1082372
- config_name: 20230901.tpi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 457865
num_examples: 1396
download_size: 231303
dataset_size: 457865
- config_name: 20230901.tr
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 984939694
num_examples: 530830
download_size: 554907604
dataset_size: 984939694
- config_name: 20230901.trv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4906787
num_examples: 1835
download_size: 2654525
dataset_size: 4906787
- config_name: 20230901.ts
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 845256
num_examples: 778
download_size: 454559
dataset_size: 845256
- config_name: 20230901.tt
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 680656530
num_examples: 501002
download_size: 129123758
dataset_size: 680656530
- config_name: 20230901.tum
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 13199654
num_examples: 18591
download_size: 5352424
dataset_size: 13199654
- config_name: 20230901.tw
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7386605
num_examples: 3717
download_size: 3815538
dataset_size: 7386605
- config_name: 20230901.ty
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 333733
num_examples: 1355
download_size: 149306
dataset_size: 333733
- config_name: 20230901.tyv
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14319641
num_examples: 3481
download_size: 6513101
dataset_size: 14319641
- config_name: 20230901.udm
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6975919
num_examples: 5665
download_size: 2952228
dataset_size: 6975919
- config_name: 20230901.ug
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 42219904
num_examples: 8621
download_size: 17716007
dataset_size: 42219904
- config_name: 20230901.uk
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4910916097
num_examples: 1285004
download_size: 2303106335
dataset_size: 4910916097
- config_name: 20230901.ur
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 402322741
num_examples: 197343
download_size: 164074548
dataset_size: 402322741
- config_name: 20230901.uz
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 385386661
num_examples: 242726
download_size: 203362895
dataset_size: 385386661
- config_name: 20230901.ve
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 349857
num_examples: 840
download_size: 161562
dataset_size: 349857
- config_name: 20230901.vec
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37883286
num_examples: 69250
download_size: 16164035
dataset_size: 37883286
- config_name: 20230901.vep
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11487509
num_examples: 6918
download_size: 6327017
dataset_size: 11487509
- config_name: 20230901.vi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1606980713
num_examples: 1287263
download_size: 742700712
dataset_size: 1606980713
- config_name: 20230901.vls
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11310015
num_examples: 7839
download_size: 6960289
dataset_size: 11310015
- config_name: 20230901.vo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 19274897
num_examples: 34504
download_size: 6491359
dataset_size: 19274897
- config_name: 20230901.wa
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12140372
num_examples: 11955
download_size: 7231141
dataset_size: 12140372
- config_name: 20230901.war
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 467623925
num_examples: 1266345
download_size: 109503863
dataset_size: 467623925
- config_name: 20230901.wo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3498562
num_examples: 1718
download_size: 2077375
dataset_size: 3498562
- config_name: 20230901.wuu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 25005942
num_examples: 42969
download_size: 15994961
dataset_size: 25005942
- config_name: 20230901.xal
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1390063
num_examples: 2290
download_size: 507117
dataset_size: 1390063
- config_name: 20230901.xh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2415590
num_examples: 1667
download_size: 1503917
dataset_size: 2415590
- config_name: 20230901.xmf
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 37262425
num_examples: 17949
download_size: 12771047
dataset_size: 37262425
- config_name: 20230901.yi
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 36150608
num_examples: 15329
download_size: 16208341
dataset_size: 36150608
- config_name: 20230901.yo
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 18460117
num_examples: 33495
download_size: 8504564
dataset_size: 18460117
- config_name: 20230901.za
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1359106
num_examples: 2971
download_size: 662982
dataset_size: 1359106
- config_name: 20230901.zea
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 5106625
num_examples: 5834
download_size: 2567716
dataset_size: 5106625
- config_name: 20230901.zh
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2766648619
num_examples: 1375017
download_size: 1748154636
dataset_size: 2766648619
- config_name: 20230901.zh-classical
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 14819164
num_examples: 12615
download_size: 10031693
dataset_size: 14819164
- config_name: 20230901.zh-min-nan
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 159385896
num_examples: 432644
download_size: 37476665
dataset_size: 159385896
- config_name: 20230901.zh-yue
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 108979942
num_examples: 133155
download_size: 64318527
dataset_size: 108979942
- config_name: 20230901.zu
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 6925330
num_examples: 11486
download_size: 3690925
dataset_size: 6925330
- config_name: 20230601.et
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 431680309
num_examples: 236848
download_size: 262989758
dataset_size: 431680309
---
# Wikipedia
This Wikipedia dataset contains all available languages for recent dumps. It is
a refresh of the [20220301 wikipedia](https://hf.co/datasets/wikipedia) from
Huggingface, so it has the same license and dataset card details. The benefits
of this dataset are:
- more recent dumps (see table below)
- a few additional languages
- all available languages are preprocessed (including the largest: `en` and
  `ceb`)
| version | dump | # available languages | closed & dump | closed & no dump |
| ----- | ---- | ----- | ------ | --- |
| `1.0.0` | 20230601 | 328 | 9: ak (soon), cho, ho, ii, kj, lrc, mh, mus, ng | 4: aa, hz, kr, na |
| `1.1.0` | 20230601 | 329 (+et ~[az,ceb,ch,hr,ii,lrc,ta]) | 9: ak (soon), cho, ho, ii, kj, lrc, mh, mus, ng | 4: aa, hz, kr, na |
| `1.2.0` | 20230901 | idem | 9: ak, cho, ho, ii, kj, lrc, mh, mus, ng | 4: aa, hz, kr, na |
Source: [List of Wikimedia
Languages](https://en.wikipedia.org/wiki/List_of_Wikipedias). A few (9)
Wikimedias are closed, meaning they won't have new pages, but the dumps are
still available. In addition, very few (4) Wikimedias are closed and don't
have dumps anymore.
## Release Notes
`1.2.0`
- **chore**: Update to 20230901
`1.1.0`
- **feat**: Add missing Estonian (my bad), thanks Chris Ha
- **fix**: update category lists for az, ceb, ch, hr, ii, lrc, ta, which means
  these languages were all processed again.
`1.0.0`
- **chore**: File layout is now `data/{dump}/{lang}/{info.json,*.parquet}`.
Sorry for the radical update, probably won't happen again.
- **chore**: Parquet files are now sharded (size < 200 MB), allowing parallel
downloads and processing.
- **fix**: All languages were processed again because of a bug in the media
  and category names that caused some links not to be extracted.
- **feat**: Add `en` and `ceb`, which were too big for my Beam DirectRunner at
  the time.
## Usage
```python
from datasets import load_dataset
wikipedia_es = load_dataset("graelo/wikipedia", "20230601.es")
```
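Config names follow the `{dump_date}.{language_code}` pattern listed in the YAML header above. For the largest languages you may want to stream rather than download everything; a minimal sketch, assuming the standard 🤗 Datasets streaming API and the `20230901.en` config:
```python
from datasets import load_dataset

# Stream the English dump instead of materializing it on disk
# (assumption: standard 🤗 Datasets streaming; config "20230901.en")
wiki_en = load_dataset("graelo/wikipedia", "20230901.en", streaming=True)
first_article = next(iter(wiki_en["train"]))
print(first_article["title"])
```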
---
## Build instructions
Developer only. This dataset was preprocessed with a Beam DirectRunner as
follows.
### 1. Determine the date of the dump you are interested in
Choose one Wikipedia dump, for instance <https://dumps.wikimedia.org/cewiki/>,
and identify the date.
### 2. [Optional] Get a refreshed list of languages
This is optional because it is not very likely that a new language will have
suddenly appeared since the last version _and_ have a significant dataset.
Navigate to <https://en.wikipedia.org/wiki/List_of_Wikipedias> and copy the
languages column from the "Detailed list" table (near the end of the page).
Copy that content in the form of a Python list into `lang_def.py` (at the top
of the repo) under a new date.
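The exact layout of `lang_def.py` is repo-specific; a hypothetical sketch of what such an entry could look like (the variable name and structure are assumptions, not the actual file):
```python
# lang_def.py (hypothetical sketch): language codes keyed by dump date
LANGUAGES = {
    "20230901": ["ab", "ace", "af", "als", "am"],  # truncated for illustration
}
```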
### 3. [Optional] Create Media and Category aliases
In order to properly extract links to images and media in all languages, we
must refresh the two corresponding files. To do so, from the root of the repo,
run
```sh
python -m prep.create_aliases
```
This will create or update these two files at the root of the repo:
- `media_aliases.py`
- `category_aliases.py`
These files are used in the final step.
### 4. Build and prepare the datasets into sharded parquet files
The following script downloads the Wikipedia dumps for each language in
`lang_def.py` and shards each language dataset into the appropriate number of
parquet shards (max size ~250 MB).
```sh
python -m prep.build --date 20230601
```
There are other options:
```text
$ python -m prep.build --help
usage: Wikipedia Builder [-h] [--date DATE] [--language [LANG ...]] [--cache-dir DIR] [--mirror MIRROR]
Prepares the Wikipedia dataset for each language
optional arguments:
-h, --help show this help message and exit
--date DATE Wikipedia dump date (e.g. 20230601)
--language [LANG ...] Language code (e.g. en). If missing, all languages are processed
--cache-dir DIR Cache directory for 🤗 Datasets
--mirror MIRROR Mirror URL
```
For instance, for faster downloads of the dumps, use the mirror option:
```sh
python -m prep.build \
--date 20230601 \
--language bs \
--mirror https://mirror.accum.se/mirror/wikimedia.org/dumps/
```
This downloads the dumps at around 60 MB/s instead of the capped speed
(~4 MB/s) of <https://dumps.wikimedia.org>. The script skips existing
directories, allowing you to run it in several passes.
Notes:
- These instructions build upon the build process of the
[Wikipedia](https://huggingface.co/datasets/wikipedia) 🤗 Dataset. HF did a
fantastic job, I just pushed it a bit further.
- Be aware that not all mirrors contain all dumps. For instance, mirror.accum.se
  does not contain dumps for languages such as be-x-old or cbk-zam. My own
  solution is to run a first pass using the aforementioned mirror, and a second
  pass with the official `https://dumps.wikimedia.org` site (omitting the
  `--mirror` parameter), as sketched below.
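A minimal sketch of that two-pass approach, reusing the `prep.build` options shown above:
```sh
# Pass 1: fast downloads from the mirror (a few dumps may be missing there)
python -m prep.build --date 20230601 \
  --mirror https://mirror.accum.se/mirror/wikimedia.org/dumps/

# Pass 2: existing directories are skipped, so only the missing dumps
# are fetched from the (slower) official site
python -m prep.build --date 20230601
```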
|
yys/OpenOrca-Chinese | yys | 2023-09-08T08:05:47Z | 78 | 99 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:zh",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2301.13688",
"region:us"
] | [
"conversational",
"text-classification",
"token-classification",
"table-question-answering",
"question-answering",
"zero-shot-classification",
"summarization",
"feature-extraction",
"text-generation",
"text2text-generation"
] | 2023-09-07T06:01:51Z | null | ---
license: mit
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
language:
- zh
pretty_name: OpenOrca-Chinese
size_categories:
- 10M<n<100M
---
<p><h1>🐋 The OpenOrca-Chinese Dataset! 🐋</h1></p>
Thanks to the release of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset for giving NLP researchers and developers such a valuable resource!
This is a Chinese translation of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, translated with Google Translate, in the hope of making a small contribution to Chinese LLM research.
<br/>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
It currently contains ~1M GPT-4 completions and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint.
3) 'question', representing a question entry as provided by the FLAN Collection.
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
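To inspect these four fields, a minimal sketch (assuming the dataset loads with its default parquet configuration):
```python
from datasets import load_dataset

# Peek at one record's documented fields: id, system_prompt, question, response
ds = load_dataset("yys/OpenOrca-Chinese", split="train")
row = ds[0]
for field in ("id", "system_prompt", "question", "response"):
    print(f"{field}: {row[field][:80]}")
```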
|
ArmelR/the-pile-splitted | ArmelR | 2023-09-06T09:53:16Z | 18,476 | 22 | [
"size_categories:10M<n<100M",
"format:arrow",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | [] | 2023-07-30T14:21:26Z | null | ---
configs:
- config_name: all
data_files:
- split: train
path:
- "data/ArXiv/train/*.arrow"
- "data/BookCorpus2/train/*.arrow"
- "data/Books3/train/*.arrow"
- "data/DM Mathematics/train/*.arrow"
- "data/Enron Emails/train/*.arrow"
- "data/EuroParl/train/*.arrow"
- "data/FreeLaw/train/*.arrow"
- "data/Github/train/*.arrow"
- "data/Gutenberg (PG-19)/train/*.arrow"
- "data/HackerNews/train/*.arrow"
- "data/NIH ExPorter/train/*.arrow"
- "data/OpenSubtitles/train/*.arrow"
- "data/OpenWebText2/train/*.arrow"
- "data/PhilPapers/train/*.arrow"
- "data/Pile-CC/train/*.arrow"
- "data/PubMed Abstracts/train/*.arrow"
- "data/PubMed Central/train/*.arrow"
- "data/StackExchange/train/*.arrow"
- "data/UPSTO Backgrounds/train/*.arrow"
- "data/Ubuntu IRC/train/*.arrow"
- "data/Wikipedia (en)/train/*.arrow"
- "data/YoutubeSubtitles/train/*.arrow"
- split: test
path:
- "data/ArXiv/test/*.arrow"
- "data/BookCorpus2/test/*.arrow"
- "data/Books3/test/*.arrow"
- "data/DM Mathematics/test/*.arrow"
- "data/Enron Emails/test/*.arrow"
- "data/EuroParl/test/*.arrow"
- "data/FreeLaw/test/*.arrow"
- "data/Github/test/*.arrow"
- "data/Gutenberg (PG-19)/test/*.arrow"
- "data/HackerNews/test/*.arrow"
- "data/NIH ExPorter/test/*.arrow"
- "data/OpenSubtitles/test/*.arrow"
- "data/OpenWebText2/test/*.arrow"
- "data/PhilPapers/test/*.arrow"
- "data/Pile-CC/test/*.arrow"
- "data/PubMed Abstracts/test/*.arrow"
- "data/PubMed Central/test/*.arrow"
- "data/StackExchange/test/*.arrow"
- "data/UPSTO Backgrounds/test/*.arrow"
- "data/Ubuntu IRC/test/*.arrow"
- "data/Wikipedia (en)/test/*.arrow"
- "data/YoutubeSubtitles/test/*.arrow"
default: true
- config_name: ArXiv
data_files:
- split: train
path: "data/ArXiv/train/*.arrow"
- split: test
path: "data/ArXiv/test/*.arrow"
- config_name: BookCorpus2
data_files:
- split: train
path: "data/BookCorpus2/train/*.arrow"
- split: test
path: "data/BookCorpus2/test/*.arrow"
- config_name: Books3
data_files:
- split: train
path: "data/Books3/train/*.arrow"
- split: test
path: "data/Books3/test/*.arrow"
- config_name: DM Mathematics
data_files:
- split: train
path: "data/DM Mathematics/train/*.arrow"
- split: test
path: "data/DM Mathematics/test/*.arrow"
- config_name: Enron Emails
data_files:
- split: train
path: "data/Enron Emails/train/*.arrow"
- split: test
path: "data/Enron Emails/test/*.arrow"
- config_name: EuroParl
data_files:
- split: train
path: "data/EuroParl/train/*.arrow"
- split: test
path: "data/EuroParl/test/*.arrow"
- config_name: FreeLaw
data_files:
- split: train
path: "data/FreeLaw/train/*.arrow"
- split: test
path: "data/FreeLaw/test/*.arrow"
- config_name: Github
data_files:
- split: train
path: "data/Github/train/*.arrow"
- split: test
path: "data/Github/test/*.arrow"
- config_name: Gutenberg (PG-19)
data_files:
- split: train
path: "data/Gutenberg (PG-19)/train/*.arrow"
- split: test
path: "data/Gutenberg (PG-19)/test/*.arrow"
- config_name: HackerNews
data_files:
- split: train
path: "data/HackerNews/train/*.arrow"
- split: test
path: "data/HackerNews/test/*.arrow"
- config_name: NIH ExPorter
data_files:
- split: train
path: "data/NIH ExPorter/train/*.arrow"
- split: test
path: "data/NIH ExPorter/test/*.arrow"
- config_name: OpenSubtitles
data_files:
- split: train
path: "data/OpenSubtitles/train/*.arrow"
- split: test
path: "data/OpenSubtitles/test/*.arrow"
- config_name: OpenWebText2
data_files:
- split: train
path: "data/OpenWebText2/train/*.arrow"
- split: test
path: "data/OpenWebText2/test/*.arrow"
- config_name: PhilPapers
data_files:
- split: train
path: "data/PhilPapers/train/*.arrow"
- split: test
path: "data/PhilPapers/test/*.arrow"
- config_name: Pile-CC
data_files:
- split: train
path: "data/Pile-CC/train/*.arrow"
- split: test
path: "data/Pile-CC/test/*.arrow"
- config_name: PubMed Abstracts
data_files:
- split: train
path: "data/PubMed Abstracts/train/*.arrow"
- split: test
path: "data/PubMed Abstracts/test/*.arrow"
- config_name: PubMed Central
data_files:
- split: train
path: "data/PubMed Central/train/*.arrow"
- split: test
path: "data/PubMed Central/test/*.arrow"
- config_name: StackExchange
data_files:
- split: train
path: "data/StackExchange/train/*.arrow"
- split: test
path: "data/StackExchange/test/*.arrow"
- config_name: UPSTO Backgrounds
data_files:
- split: train
path: "data/UPSTO Backgrounds/train/*.arrow"
- split: test
path: "data/UPSTO Backgrounds/test/*.arrow"
- config_name: Ubuntu IRC
data_files:
- split: train
path: "data/Ubuntu IRC/train/*.arrow"
- split: test
path: "data/Ubuntu IRC/test/*.arrow"
- config_name: Wikipedia (en)
data_files:
- split: train
path: "data/Wikipedia (en)/train/*.arrow"
- split: test
path: "data/Wikipedia (en)/test/*.arrow"
- config_name: YoutubeSubtitles
data_files:
- split: train
path: "data/YoutubeSubtitles/train/*.arrow"
- split: test
path: "data/YoutubeSubtitles/test/*.arrow"
---
# Dataset description
[The Pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of English text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).
The dataset is divided into 22 smaller high-quality datasets. For more information
about each of them, please refer to [the datasheet for the Pile](https://arxiv.org/abs/2201.07311).
However, the current version of the dataset, available on the Hub, is not split accordingly.
We had to solve this problem in order to improve the user experience when dealing with
the Pile via the Hub.
Here is an instance of the Pile:
```
{
'meta': {'pile_set_name': 'Pile-CC'},
'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to properly divide the dataset into subsets. Each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the Pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, with each instance carrying a new `domain` column.
We further split each subset into train/test (97%/3%) to build the current dataset, which has the following structure:
```
data
ArXiv
train
test
BookCorpus2
train
test
Books3
train
test
```
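For reference, the domain-tagging step described above amounts to a single `map` call; a minimal sketch, not the exact preprocessing code (and it assumes the original Pile is still loadable from the Hub):
```python
from datasets import load_dataset

# Hypothetical reconstruction of the splitting logic: copy the subset name
# out of the `meta` column into a top-level `domain` column
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

def add_domain(example):
    example["domain"] = example["meta"]["pile_set_name"]
    return example

pile_with_domain = pile.map(add_domain)
```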
# Usage
```python
from datasets import load_dataset

subset_of_interest = "ArXiv"  # any config name listed above, or "default"
dataset = load_dataset(
    "ArmelR/the-pile-splitted",
    subset_of_interest,
    num_proc=8
)
```
Using `subset_of_interest = "default"` will load the whole dataset.
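For example, to load a single subset and inspect a record (the `text` and `meta` fields follow the instance shown earlier; `domain` is the column added during preprocessing):
```python
from datasets import load_dataset

dataset = load_dataset("ArmelR/the-pile-splitted", "ArXiv", num_proc=8)

sample = dataset["train"][0]
print(sample["domain"])      # "ArXiv"
print(sample["text"][:200])  # first 200 characters of the document
print(dataset["test"].num_rows)
```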
|
ai4bharat/Aksharantar | ai4bharat | 2023-08-31T07:05:34Z | 193 | 11 | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:found",
"language_creators:other",
"multilinguality:multilingual",
"source_datasets:original",
"language:asm",
"language:ben",
"language:brx",
"language:doi",
"language:guj",
"language:hin",
"language:kan",
"language:kas",
"language:kok",
"language:mai",
"language:mal",
"language:mar",
"language:mni",
"language:nep",
"language:ori",
"language:pan",
"language:san",
"language:sid",
"language:tam",
"language:tel",
"language:urd",
"license:cc",
"arxiv:2205.03018",
"region:us"
] | [
"text-generation"
] | 2022-05-06T12:35:15Z | 1 | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
- machine-generated
- found
- other
language:
- asm
- ben
- brx
- doi
- guj
- hin
- kan
- kas
- kok
- mai
- mal
- mar
- mni
- nep
- ori
- pan
- san
- sid
- tam
- tel
- urd
license: cc
multilinguality:
- multilingual
pretty_name: Aksharantar
source_datasets:
- original
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for Aksharantar
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://indicnlp.ai4bharat.org/indic-xlit/
- **Repository:** https://github.com/AI4Bharat/IndicXlit/
- **Paper:** [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Aksharantar is the largest publicly available transliteration dataset for 20 Indic languages. The corpus has 26M Indic language-English transliteration pairs.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
| <!-- -->       | <!-- -->       | <!-- -->        | <!-- -->       | <!-- -->       | <!-- -->      |
| -------------- | -------------- | --------------- | -------------- | -------------- | ------------- |
| Assamese (asm) | Hindi (hin)    | Maithili (mai)  | Marathi (mar)  | Punjabi (pan)  | Tamil (tam)   |
| Bengali (ben)  | Kannada (kan)  | Malayalam (mal) | Nepali (nep)   | Sanskrit (san) | Telugu (tel)  |
| Bodo (brx)     | Kashmiri (kas) | Manipuri (mni)  | Oriya (ori)    | Sindhi (snd)   | Urdu (urd)    |
| Gujarati (guj) | Konkani (kok)  | Dogri (doi)     |                |                |               |
## Dataset Structure
### Data Instances
A random sample from the Hindi (hin) train split:
```
{
'unique_identifier': 'hin1241393',
'native word': 'स्वाभिमानिक',
'english word': 'swabhimanik',
'source': 'IndicCorp',
'score': -0.1028788579
}
```
### Data Fields
- `unique_identifier` (string): 3-letter language code followed by a unique number in each set (Train, Test, Val).
- `native word` (string): A word in Indic language.
- `english word` (string): Transliteration of native word in English (Romanised word).
- `source` (string): Source of the data.
- `score` (num): Character-level log probability of the Indic word given the Roman word, assigned by the IndicXlit model. Pairs meeting an average score threshold of 0.35 are considered.
Depending on the origin/sampling method of a pair in a language, `source` will be one of the following (a loading sketch follows this list):
- Dakshina Dataset
- IndicCorp
- Samanantar
- Wikidata
- Existing sources
- Named Entities Indian (AK-NEI)
- Named Entities Foreign (AK-NEF)
- Data from Uniform Sampling method. (Ak-Uni)
- Data from Most Frequent words sampling method. (Ak-Freq)
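A minimal sketch of loading the dataset and reading these fields; the per-language config name `hin` is an assumption based on the identifiers above, so check the dataset repository for the exact config names:
```python
from datasets import load_dataset

# Assumed: one config per 3-letter language code, e.g. "hin" for Hindi.
dataset = load_dataset("ai4bharat/Aksharantar", "hin", split="train")

example = dataset[0]
# Field names contain spaces, as documented above.
print(example["native word"], "->", example["english word"])
print(example["unique_identifier"], example["source"], example["score"])
```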
### Data Splits
| Subset | asm-en | ben-en | brx-en | guj-en | hin-en | kan-en | kas-en | kok-en | mai-en | mal-en | mni-en | mar-en | nep-en | ori-en | pan-en | san-en | sid-en | tam-en | tel-en | urd-en |
|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Training | 179K | 1231K | 36K | 1143K | 1299K | 2907K | 47K | 613K | 283K | 4101K | 10K | 1453K | 2397K | 346K | 515K | 1813K | 60K | 3231K | 2430K | 699K |
| Validation | 4K | 11K | 3K | 12K | 6K | 7K | 4K | 4K | 4K | 8K | 3K | 8K | 3K | 3K | 9K | 3K | 8K | 9K | 8K | 12K |
| Test | 5531 | 5009 | 4136 | 7768 | 5693 | 6396 | 7707 | 5093 | 5512 | 6911 | 4925 | 6573 | 4133 | 4256 | 4316 | 5334 | - | 4682 | 4567 | 4463 |
## Dataset Creation
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
#### Who are the source language producers?
[More Information Needed]
### Annotations
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
#### Annotation process
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
#### Who are the annotators?
Information in the paper. [Aksharantar: Towards building open transliteration tools for the next billion users](https://arxiv.org/abs/2205.03018)
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This data is released under the following licensing scheme:
- Manually collected data: Released under CC-BY license.
- Mined dataset (from Samanantar and IndicCorp): Released under CC0 license.
- Existing sources: Released under CC0 license.
**CC-BY License**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://mirrors.creativecommons.org/presskit/buttons/88x31/png/by.png" style="border-style: none;" alt="CC-BY" width="100"/>
</a>
<br>
<br>
**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>
- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://indicnlp.ai4bharat.org/aksharantar/"> <span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Aksharantar</span> manually collected data and existing sources.
- This work is published from: India.
### Citation Information
```
@misc{madhani2022aksharantar,
title={Aksharantar: Towards Building Open Transliteration Tools for the Next Billion Users},
author={Yash Madhani and Sushane Parthan and Priyanka Bedekar and Ruchi Khapra and Anoop Kunchukuttan and Pratyush Kumar and Mitesh Shantadevi Khapra},
year={2022},
eprint={2205.03018},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions |
monology/pile-uncopyrighted | monology | 2023-08-31T03:45:38Z | 86,598 | 134 | [
"license:other",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2101.00027",
"region:us"
] | [] | 2023-08-30T18:47:58Z | null | ---
license: other
---
# Pile Uncopyrighted
In response to [authors demanding that LLMs stop using their works](https://tcrn.ch/3rtpIDn), here's a copy of [The Pile](https://huggingface.co/datasets/monology/pile) with all copyrighted content removed.
Please consider using this dataset to train your future LLMs, to respect authors and abide by copyright law.
Creating an uncopyrighted version of a larger dataset (i.e., RedPajama) is planned, with no ETA.
**Methodology**
Cleaning was performed by removing everything from the Books3, BookCorpus2, OpenSubtitles, YoutubeSubtitles, and OpenWebText2 subsets.
Based on section 7.1 of [the original paper](https://arxiv.org/abs/2101.00027), these subsets are the only ones that are not explicitly allowed to be used in AI training.
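A minimal sketch of this filtering, using the subset names above; this is an illustrative reconstruction, not the script actually used to build this dataset:
```python
from datasets import load_dataset

# Subsets removed because, per section 7.1 of the Pile paper, they are
# not explicitly allowed to be used in AI training.
COPYRIGHTED = {
    "Books3", "BookCorpus2", "OpenSubtitles",
    "YoutubeSubtitles", "OpenWebText2",
}

pile = load_dataset("monology/pile", split="train", streaming=True)
clean = pile.filter(lambda ex: ex["meta"]["pile_set_name"] not in COPYRIGHTED)

for example in clean.take(3):
    print(example["meta"]["pile_set_name"])
```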