Update README.md
README.md
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
language:
- en
- fr
- es
- pt
- de
- ru
- nl
- tr
- it
pretty_name: YouTube Commons Descriptions
---

# YouTube Commons Descriptions and Language Detection
This dataset is based on [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), a valuable open dataset:
> YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC BY 4.0 license.
>
> **Content**
>
> The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).

Unfortunately, I have found that the detection of the original language, at least for Dutch, has room for improvement.
Others have observed similar issues ([1](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/9), [2](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/12)).
Therefore this dataset adds the video **title** and **description** to YouTube Commons and performs **language detection** on those.

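As a quick illustration of what this adds, here is a minimal sketch of loading the dataset and inspecting the detected languages. The repository id below is a placeholder (replace it with this dataset's actual id on the Hub); the `language` column is the one described in the language detection section below.

```python
from collections import Counter

from datasets import load_dataset

# Placeholder repository id: replace with this dataset's actual id on the Hub.
ds = load_dataset('Rijgersberg/YouTube-Commons-Descriptions', split='train')

# Distribution of languages detected from the titles and descriptions.
print(Counter(ds['language']).most_common(10))
```
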
# YouTube Commons
There are [problems](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/10) with loading YouTube Commons with Hugging Face Datasets.
To alleviate those, I took the source Parquet files and reuploaded a fixed version to Hugging Face: [Rijgersberg/YouTube-Commons](https://huggingface.co/datasets/Rijgersberg/YouTube-Commons).
This reupload is the basis for YouTube Commons Descriptions.

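As a minimal sketch, the reupload can be loaded directly by its repository id. Streaming is used here only to avoid downloading every Parquet file up front; the `train` split and the `video_id` column are assumed from the acquisition code below.

```python
from datasets import load_dataset

# Stream the reupload so the Parquet files are not all downloaded up front.
ds = load_dataset('Rijgersberg/YouTube-Commons', split='train', streaming=True)

# Peek at the first record; 'video_id' is the column used by the acquisition code below.
print(next(iter(ds))['video_id'])
```
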
## Acquisition
The titles and descriptions are downloaded from YouTube with the help of [yt-dlp](https://github.com/yt-dlp/yt-dlp).
Some videos are missing compared to YouTube Commons, for one of the following reasons:

- Some videos are no longer available on YouTube, taken down either by the uploader or by YouTube.
- Some videos are only visible to logged-in users.
- (rarely) Anti-bot measures by YouTube prevented the download.

The download took about two weeks.

Code:
```python
import json
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

from datasets import load_dataset
from tqdm import tqdm
from yt_dlp import YoutubeDL

output_dir = Path('/path/to/output/dir/')


def get_info(video_id, output_dir):
    write_folder = output_dir / video_id[:2]
    write_filepath = write_folder / f'{video_id}.json'
    if write_filepath.exists():
        return video_id, True

    with YoutubeDL({'quiet': True, 'skip_download': True}) as ydl:
        try:
            info = ydl.extract_info(f'https://www.youtube.com/watch?v={video_id}', download=False)

            title = info.get('title', '')
            description = info.get('description', '')

            # Write the title and description to a JSON file
            write_folder.mkdir(exist_ok=True, parents=True)
            with open(write_filepath, 'w', encoding='utf-8') as f:
                json.dump({'id': video_id,
                           'title': title,
                           'description': description}, f)
        except Exception as e:
            print(video_id, e)
            return video_id, False
    return video_id, True


def main():
    video_ids = []
    for filepath in tqdm(sorted(Path('/path/to/YouTubeCommons/files').rglob('*.parquet'))):
        try:  # I was having trouble loading the original dataset, so this lets me get what I can
            dataset = load_dataset("parquet",
                                   data_files={'train': str(filepath)})
            video_ids.extend(dataset['train']['video_id'])
        except Exception as e:
            print(filepath, e)
            continue
    video_ids = set(video_ids)

    with ProcessPoolExecutor(max_workers=10) as executor:
        futures = {executor.submit(get_info, video_id, output_dir): video_id
                   for video_id in video_ids}

        for future in tqdm(as_completed(futures), total=len(futures), desc="Downloading video info"):
            video_id = futures[future]
            try:
                _, success = future.result()
                if not success:
                    print(f"Failed to process: {video_id}")
            except Exception as e:
                print(f"Error occurred for {video_id}: {e}")


if __name__ == "__main__":
    main()
```

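The script above leaves one small JSON file per video, sharded into folders by the first two characters of the video id. As a sketch of how those shards could be gathered back into a single table (illustrative only; not necessarily how the published Parquet files were produced):

```python
import json
from pathlib import Path

import pandas as pd

output_dir = Path('/path/to/output/dir/')  # the directory the downloader wrote to

# Collect every per-video JSON file into one DataFrame with id, title and description columns.
records = []
for filepath in output_dir.rglob('*.json'):
    with open(filepath, encoding='utf-8') as f:
        records.append(json.load(f))

df = pd.DataFrame(records)
print(len(df), df.columns.tolist())
```
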
## Language detection
The `language` and `confidence` columns were added by running [LangID](https://github.com/saffsd/langid.py) on the title and description.
Note that the detection was _not_ performed on the audio of the video.

The equivalent detection code:
```python
from langid.langid import LanguageIdentifier, model

lang_id = LanguageIdentifier.from_modelstring(model, norm_probs=True)

lang, conf = lang_id.classify(title + '\n\n' + description)
```
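
To re-run the same detection over the whole dataset rather than a single title and description, the snippet can be wrapped in `datasets.map`. This is a sketch under the assumptions noted in the comments; the repository id is again a placeholder.

```python
from datasets import load_dataset
from langid.langid import LanguageIdentifier, model

lang_id = LanguageIdentifier.from_modelstring(model, norm_probs=True)


def detect(row):
    # Classify the concatenated title and description, as in the snippet above.
    text = (row['title'] or '') + '\n\n' + (row['description'] or '')
    lang, conf = lang_id.classify(text)
    return {'language': lang, 'confidence': conf}


# Placeholder repository id: replace with this dataset's actual id on the Hub.
ds = load_dataset('Rijgersberg/YouTube-Commons-Descriptions', split='train')
ds = ds.map(detect)  # recomputes the 'language' and 'confidence' columns
```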