## Dataset Creation

### Data Collection and Processing

The data collection process involved extracting information directly from Goodreads, ensuring comprehensive coverage of the required attributes. Once collected, the data underwent several stages of processing (a brief sketch follows the list):

- **Parsing**: Raw extracted data was parsed into a structured format.
- **Cleaning**: Irrelevant or erroneous entries were removed to enhance data quality.
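
A minimal sketch of these two stages, assuming the raw scrape is line-delimited JSON; the file name and the `title`/`rating` fields are illustrative assumptions, not the dataset's actual schema:

```python
import json
import pandas as pd

def parse_records(raw_lines):
    """Parsing: convert raw scraped lines (JSON strings) into structured dicts."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # skip lines that fail to parse
    return records

def clean(df):
    """Cleaning: drop irrelevant or erroneous entries."""
    df = df.dropna(subset=["title"])      # a record without a title is unusable
    df = df[df["rating"].between(0, 5)]   # Goodreads ratings fall within 0-5
    return df.reset_index(drop=True)

# Hypothetical input file; field names are for illustration only.
with open("goodreads_raw.jsonl", encoding="utf-8") as f:
    books = pd.DataFrame(parse_records(f.readlines()))
books = clean(books)
```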
### Validation

To ensure data integrity, a validation process was implemented; each entry was checked across various attributes (see the sketch after this list):

- **Uniqueness**: Each record was checked to ensure it was unique, eliminating any duplicates.
- **Completeness**: The dataset was examined to confirm that all necessary fields were populated, with missing data addressed appropriately.
- **Consistency**: Cross-validation checks were conducted to ensure consistency across attributes, including comparison with historical records.
- **Data Type Verification**: All data types were confirmed to be correctly assigned and consistent with expected formats.
- **Fill Rates and Duplicate Checks**: Comprehensive checks verified fill rates, ensuring no significant gaps in the data, and rigorously screened for duplicates.
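
As a rough illustration (not the exact pipeline used), the sketch below continues the example above and automates the duplicate, fill-rate, and data-type checks; the `url` key and the expected dtypes are assumptions:

```python
import pandas as pd

def validate(df, key="url", expected_dtypes=None, min_fill_rate=0.95):
    """Run basic integrity checks and return a summary report."""
    report = {
        # Uniqueness / duplicate screening on the record key
        "duplicates": int(df.duplicated(subset=[key]).sum()),
        # Completeness: fill rate (share of non-null values) per column
        "fill_rates": df.notna().mean().to_dict(),
        # Data type verification against the expected formats
        "dtypes": df.dtypes.astype(str).to_dict(),
    }
    report["low_fill_columns"] = [
        col for col, rate in report["fill_rates"].items() if rate < min_fill_rate
    ]
    if expected_dtypes:
        report["dtype_mismatches"] = {
            col: report["dtypes"].get(col)
            for col, want in expected_dtypes.items()
            if report["dtypes"].get(col) != want
        }
    return report

# `books` comes from the parsing/cleaning sketch; expectations are illustrative.
report = validate(books, expected_dtypes={"title": "object", "rating": "float64"})
assert report["duplicates"] == 0, "duplicate records found"
assert not report["low_fill_columns"], f"sparse columns: {report['low_fill_columns']}"
```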
This ensures that the dataset meets the high standards of quality necessary for analysis, research, and modeling.

## Example JSON