# Novora/CodeClassifier-v1-Tiny

Task: Text Classification
Dataset preview:

| filename (string) | repository_url (string) | language (string) | text (string) |
|---|---|---|---|
| lw-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| shrunk-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| poly-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| chi-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| ball.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "3\t20 0.115717 19 0.101955 24 0.083132 18 0.081487 21 0.079234 12 0.077526 23 0.066153 16 0.059682 (...TRUNCATED) |
| lin-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| empirical-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| sig-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| rbf-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
| pear-filtered.graph.out.ads | https://github.com/aasish/userIntentDataset | Ada | "0\t24 0.073729 20 0.071803 18 0.061656 19 0.058444 23 0.058124 22 0.05725 21 0.054979 3 0.050712 4 (...TRUNCATED) |
The CodeClassifier dataset contains medium- to high-quality data harvested from GitHub, intended to support text-classification tasks that identify the programming language of a piece of code.
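A minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting one row; the `train` split name is an assumption, not something this card confirms:

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub.
# The "train" split is an assumption; check the repository for the actual splits.
ds = load_dataset("Novora/CodeClassifier-v1-Tiny", split="train")

# Each row pairs a source file's text with its language label.
example = ds[0]
print(example["filename"])    # e.g. "lw-filtered.graph.out.ads"
print(example["language"])    # e.g. "Ada"
print(example["text"][:200])  # first 200 characters of the code sample
```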
The dataset covers the top 31 languages (according to the TIOBE index) supported by GitHub; the companion file `github_31_top_supported_languages.json` lists each language's name, its identifier, and its associated file extensions. Each sample pairs the text of a source file with the language GitHub assigns to it based on the file's extension.
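A hedged sketch of that extension-based labeling, assuming each entry in the language list carries a `language_name` string and a `file_extensions` list (the exact field layout, and extensions being stored with a leading dot, are assumptions):

```python
import json
import os
from typing import Optional

# Read the language list shipped with the dataset; each entry is assumed
# to hold a language name and the file extensions associated with it.
with open("github_31_top_supported_languages.json") as f:
    languages = json.load(f)

# Build an extension -> language lookup table.
# Extensions are assumed to include the leading dot, e.g. ".ads".
ext_to_lang = {
    ext: entry["language_name"]
    for entry in languages
    for ext in entry["file_extensions"]
}

def classify_by_extension(filename: str) -> Optional[str]:
    """Label a file by its extension, mirroring the assumed harvesting logic."""
    _, ext = os.path.splitext(filename)
    return ext_to_lang.get(ext)

print(classify_by_extension("lw-filtered.graph.out.ads"))  # "Ada", if ".ads" is listed
```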
The tools used to generate this dataset can be found here.