---
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: video_link
    dtype: string
  - name: channel
    dtype: string
  - name: channel_id
    dtype: string
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: original_language
    dtype: string
  - name: title
    dtype: string
  - name: description
    dtype: string
  - name: language
    dtype: string
  - name: confidence
    dtype: float64
  splits:
  - name: train
    num_bytes: 3684421635
    num_examples: 3030568
  download_size: 2229560856
  dataset_size: 3684421635
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
language:
- en
- fr
- es
- pt
- de
- ru
- nl
- tr
- it
pretty_name: YouTube Commons Descriptions
---

# YouTube Commons Descriptions and Language Detection
This dataset adds titles, descriptions and language detection to [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons), a valuable open dataset:
> YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC BY 4.0 license.
>
> **Content**
>
> The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).

Unfortunately, I have found that the detection of the original language, at least for Dutch, leaves room for improvement.
Others have observed similar issues ([1](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/9), [2](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/12)).
This dataset therefore adds the video **title** and **description** to YouTube Commons and performs **language detection** on them.
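For example, the new columns make it easy to select videos in a particular language. A minimal sketch, assuming this dataset is published under the (hypothetical) repository id `Rijgersberg/YouTube-Commons-descriptions`:

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual id of this dataset.
descriptions = load_dataset('Rijgersberg/YouTube-Commons-descriptions', split='train')

# Keep only videos whose title and description are confidently detected as Dutch.
dutch = descriptions.filter(lambda row: row['language'] == 'nl' and row['confidence'] > 0.95)
print(f'{len(dutch):,} videos detected as Dutch')
```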

# YouTube Commons
There are [problems](https://huggingface.co/datasets/PleIAs/YouTube-Commons/discussions/10) with loading YouTube Commons with Hugging Face Datasets.
To alleviate those, I took the source parquet files and reuploaded a fixed version to Hugging Face: [Rijgersberg/YouTube-Commons](https://huggingface.co/datasets/Rijgersberg/YouTube-Commons).
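The fixed version should load directly with Hugging Face Datasets; a minimal sketch, using streaming to avoid downloading all parquet files up front:

```python
from datasets import load_dataset

# Stream the fixed reupload of the original transcripts.
commons = load_dataset('Rijgersberg/YouTube-Commons', split='train', streaming=True)

for row in commons.take(3):
    print(row['video_id'])
```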

## Acquisition
The titles and descriptions are downloaded from YouTube with the help of [yt-dlp](https://github.com/yt-dlp/yt-dlp).
Some videos are missing compared to YouTube Commons for one of the following reasons:

- Some videos are no longer available on YouTube, taken down either by the uploader or by YouTube.
- Some videos are only visible to logged-in users.
- (Rarely) anti-bot measures by YouTube prevented the download.

The download took about two weeks.

<details>
<summary>Code:</summary>

```python
import json
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

from datasets import load_dataset
from tqdm import tqdm
from yt_dlp import YoutubeDL

output_dir = Path('/path/to/output/dir/')


def get_info(video_id, output_dir):
    write_folder = output_dir / video_id[:2]
    write_filepath = write_folder / f'{video_id}.json'
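    # Skip videos that were already fetched in an earlier run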
    if write_filepath.exists():
        return video_id, True

    with YoutubeDL({'quiet': True, 'skip_download': True}) as ydl:
        try:
            info = ydl.extract_info(f'https://www.youtube.com/watch?v={video_id}', download=False)

            title = info.get('title', '')
            description = info.get('description', '')

            # Write the title and description to a JSON file
            write_folder.mkdir(exist_ok=True, parents=True)
            with open(write_filepath, 'w', encoding='utf-8') as f:
                json.dump({'id': video_id,
                           'title': title,
                           'description': description}, f)
        except Exception as e:
            print(video_id, e)
            return video_id, False
    return video_id, True

def main():
    video_ids = []
    for filepath in tqdm(sorted(Path('/path/to/YouTubeCommons/files').rglob('*.parquet'))):
        try:  # I was having trouble loading the original dataset, so this lets me get what I can
            dataset = load_dataset("parquet",
                                   data_files={'train': str(filepath)})
            video_ids.extend(dataset['train']['video_id'])
        except Exception as e:
            print(filepath, e)
            continue
    video_ids = set(video_ids)

    with ProcessPoolExecutor(max_workers=10) as executor:
        futures = {executor.submit(get_info, video_id, output_dir): video_id
                   for video_id in video_ids}

        for future in tqdm(as_completed(futures), total=len(futures), desc="Downloading video info"):
            video_id = futures[future]
            try:
                _, success = future.result()
                if not success:
                    print(f"Failed to process: {video_id}")
            except Exception as e:
                print(f"Error occurred for {video_id}: {e}")

if __name__ == "__main__":
    main()
```
</details>


## Language detection
The `language` and `confidence` columns were added by running [LangID](https://github.com/saffsd/langid.py) on the title and description.
Note that the detection was performed on this text only, _not_ on the audio of the video.

The equivalent detection code:
```python
from langid.langid import LanguageIdentifier, model

lang_id = LanguageIdentifier.from_modelstring(model, norm_probs=True)

lang, conf = lang_id.classify(title + '\n\n' + description)
```
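With `norm_probs=True`, `classify` returns the language code together with a normalized probability, for example `('nl', 0.97)`. Applying it to the whole dataset can be done with a `map` call; a sketch reusing `lang_id` from above, not the exact processing script:

```python
def detect(row):
    lang, conf = lang_id.classify(row['title'] + '\n\n' + row['description'])
    return {'language': lang, 'confidence': conf}

dataset = dataset.map(detect)
```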

For Dutch, here is the agreement table between the `original_language` column from YouTube Commons and the newly detected `language` column (`!nl` denotes any language other than Dutch).

|                | `original_language` nl | `original_language` !nl |
|----------------|------------------------|-------------------------|
| `language` nl  | 7,010                  | 4,698                   |
| `language` !nl | 21,452                 | 2,997,408               |

Of the 28,462 videos that YouTube Commons labels as Dutch, only 7,010 are also detected as Dutch from their title and description.
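The counts can be reproduced with a pandas crosstab over the two columns (a sketch, again assuming the hypothetical repository id):

```python
import pandas as pd
from datasets import load_dataset

# Hypothetical repository id; substitute the actual id of this dataset.
df = load_dataset('Rijgersberg/YouTube-Commons-descriptions', split='train').to_pandas()

# Cross-tabulate the original YouTube Commons label against the newly detected language.
agreement = pd.crosstab(df['language'] == 'nl',
                        df['original_language'] == 'nl',
                        rownames=['language nl'],
                        colnames=['original_language nl'])
print(agreement)
```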