BrightData committed cae8836 (verified, parent 02bfe7f): Update README.md
---
license: other
license_name: bright-data-master-service-agreement
license_link: https://brightdata.com/license
language:
- en
---
![Bright Data Logo](https://brightdata.com/wp-content/uploads/2024/06/Bright-Data-logo-removebg-preview.png)

# Dataset Card for "BrightData/Wikipedia-Articles"

If you are using this dataset, we would love your feedback: [Link to form](https://docs.google.com/forms/d/e/1FAIpQLScbpGZ4qYipuVRplYrBO13gNJStuiA3dz2vEt9XzZ14pgUdZA/viewform?usp=sf_link).
## Dataset Summary

Explore a collection of Wikipedia articles with structured records covering each article's full content and link structure.

Continuously updated and verified for accuracy, this dataset provides invaluable insights for researchers, analysts, and data enthusiasts alike.

Each entry includes all major data points, such as the article URL, title, table of contents, raw and cataloged text, in-text links, images, "see also" recommendations, references, and external links.

For a complete list of data points, please refer to the full "Data Dictionary" provided below.

To explore additional free and premium datasets, visit our website [brightdata.com](https://www.brightdata.com).
## Data Dictionary

| Column name | Description | Data type |
|---------------------|--------------------------------------------------|-----------|
| url | URL of the article | Url |
| title | Title of the article | Text |
| table_of_contents | Table of contents of the article | Array |
| raw_text | Raw article text | Text |
| cataloged_text | Article text cataloged by section titles | Array |
| *> title* | Title of a cataloged section | Text |
| *> sub_title* | Subtitle within a cataloged section | Text |
| *> text* | Text content within a cataloged section | Text |
| *> links_in_text* | Links within the text content | Array |
| *>> link_name* | Name or description of the link | Text |
| *>> url* | URL of the link | Url |
| images | Images in the article | Array |
| *> image_text* | Text description under an image | Text |
| *> image_url* | URL of the image | Url |
| see_also | Other recommended articles | Array |
| *> title* | Recommended article title | Text |
| *> url* | URL of the recommended article | Url |
| references | References in the article | Array |
| *> reference* | Reference in the article | Text |
| *>> urls* | URLs cited within the reference | Array |
| *>>> url_text* | Text description of the referenced URL | Text |
| *>>> url* | URL of the referenced article or source | Url |
| external_links | External links referenced in the article | Array |
| *> external_links_name* | Name or description of the external link | Text |
| *> link* | External link URL | Url |
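As an illustration, a record shaped like the data dictionary above can be traversed as follows. This is a minimal sketch: the record literal and its field values are hypothetical, constructed only from the column layout documented here.

```python
# Hypothetical record shaped like the data dictionary above (values invented
# for illustration; only the field names follow the documented schema).
record = {
    "url": "https://en.wikipedia.org/wiki/Example",
    "title": "Example",
    "table_of_contents": ["History", "Usage"],
    "raw_text": "Example is ...",
    "cataloged_text": [
        {
            "title": "History",
            "sub_title": "",
            "text": "The history section ...",
            "links_in_text": [
                {"link_name": "related page", "url": "https://en.wikipedia.org/wiki/Related"}
            ],
        }
    ],
    "images": [],
    "see_also": [],
    "references": [],
    "external_links": [],
}

# Collect every in-text link, keyed by the cataloged section it appears in.
links_by_section = {
    section["title"]: [link["url"] for link in section["links_in_text"]]
    for section in record["cataloged_text"]
}
print(links_by_section)
```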
## Dataset Creation

### Data Collection and Processing
The data collection process involved extracting information directly from Wikipedia, ensuring comprehensive coverage of the required attributes. Once collected, the data underwent several stages of processing:
- Parsing: Extracted raw data was parsed to convert it into a structured format.
- Cleaning: Irrelevant or erroneous entries were removed to enhance data quality.

### Validation
To ensure data integrity, a validation process was implemented. Each entry was checked across various attributes, including:
- Uniqueness: Each record was checked to ensure it was unique, eliminating any duplicates.
- Completeness: The dataset was examined to confirm that all necessary fields were populated, with missing data addressed appropriately.
- Consistency: Cross-validation checks were conducted to ensure consistency across attributes, including comparison with historical records.
- Data Type Verification: All data types were checked to confirm they were correctly assigned and consistent with expected formats.
- Fill Rates and Duplicate Checks: Comprehensive checks verified fill rates, ensuring no significant gaps in the data, and rigorously screened for duplicates.

This ensures that the dataset meets the high standards of quality necessary for analysis, research, and modeling.
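The uniqueness and fill-rate checks described above can be sketched in a few lines. This is an illustrative outline, not Bright Data's actual validation pipeline; the sample records are hypothetical, with field names taken from the data dictionary.

```python
# Illustrative sketch of uniqueness and fill-rate checks (not the actual
# Bright Data pipeline; sample records are hypothetical).
records = [
    {"url": "https://en.wikipedia.org/wiki/A", "title": "A", "raw_text": "..."},
    {"url": "https://en.wikipedia.org/wiki/B", "title": "B", "raw_text": None},
    {"url": "https://en.wikipedia.org/wiki/A", "title": "A", "raw_text": "..."},
]

# Uniqueness: flag any URL that appears more than once.
seen, duplicates = set(), []
for rec in records:
    if rec["url"] in seen:
        duplicates.append(rec["url"])
    seen.add(rec["url"])

# Completeness: fill rate per field (share of records with a non-empty value).
fields = ["url", "title", "raw_text"]
fill_rates = {f: sum(1 for r in records if r.get(f)) / len(records) for f in fields}

print(duplicates)
print(fill_rates)
```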
## Example JSON
<div style="max-height: 300px; overflow-y: auto; border: 1px solid #ccc; padding: 10px;">

```json
[
  {
    "timestamp": "2024-05-09",
    "url": "https://www.imdb.com/title/tt1533087/",
    "title": "Soda Springs",
    "popularity": null,
    "genres": [
      "Drama"
    ]
  }
]
```

</div>
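Records delivered as JSON can be parsed with the standard library. The sketch below, using a made-up minimal record, checks that a small subset of the top-level fields from the data dictionary is present; the payload and the chosen required fields are assumptions for illustration.

```python
import json

# Hypothetical minimal payload, used only to illustrate parsing delivered JSON.
payload = '[{"url": "https://en.wikipedia.org/wiki/Example", "title": "Example", "raw_text": "..."}]'

# A subset of the data dictionary's top-level columns, chosen for this example.
EXPECTED_FIELDS = {"url", "title"}

records = json.loads(payload)
for rec in records:
    missing = EXPECTED_FIELDS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('url')} is missing fields: {missing}")

print(f"parsed {len(records)} record(s), all required fields present")
```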