---
license: odc-by
---
# Dataset Card for mixturevitae-fineweb-permissive-multilingual-2m_urls

This dataset provides the URLs and top-level domains associated with training records in [ontocord/MixtureVitae-fineweb-permissive-multilingual-2m](https://huggingface.co/datasets/ontocord/MixtureVitae-fineweb-permissive-multilingual-2m). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.

## Dataset Details

### Dataset Description

This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).

- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset

### Dataset Sources

- **Repository:** [ontocord/MixtureVitae-fineweb-permissive-multilingual-2m](https://huggingface.co/datasets/ontocord/MixtureVitae-fineweb-permissive-multilingual-2m)

## Uses

This dataset is intended to let researchers and practitioners analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.

### Direct Use

The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:
- Identifying the most-used websites
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website
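The first of these analyses can be sketched in a few lines. This is a minimal, hedged example: the `urls` list below is a made-up sample standing in for the dataset's `url` column (which you could stream with `datasets.load_dataset`), and hostnames are tallied with the standard library rather than the dataset's own `domain` column.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical sample of the dataset's `url` column; with the real data you
# would iterate over the column instead of this hard-coded list.
urls = [
    "https://en.wikipedia.org/wiki/Dune",
    "https://en.wikipedia.org/wiki/Arrakis",
    "https://blog.example.com/post/1",
]

# Tally hostnames to surface the most-used websites.
counts = Counter(urlparse(u).netloc for u in urls)
print(counts.most_common(2))
```

For large datasets, the same tally can be computed incrementally while streaming, since `Counter` updates in constant memory per distinct hostname.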

### Out-of-Scope Use

This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.

## Dataset Structure

This dataset contains one row for every record in the source dataset that has a URL, with two columns:
- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract`
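To illustrate roughly what the `domain` column holds, here is a stdlib-only approximation. Note this is not the actual pipeline code: the real extraction uses `tldextract`, which consults the Public Suffix List and therefore handles multi-part suffixes like `.co.uk` correctly, whereas this naive last-two-labels split does not.

```python
from urllib.parse import urlparse

def naive_domain(url: str) -> str:
    """Rough stand-in for tldextract: keep the last two labels of the host.

    Unlike tldextract, this does not consult the Public Suffix List, so it
    mishandles suffixes such as `.co.uk`.
    """
    host = urlparse(url).netloc
    return ".".join(host.split(".")[-2:])

print(naive_domain("https://forums.example.com/thread/42"))  # example.com
```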

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]