Junseong Park committed
Commit 44a2889 · 1 Parent(s): 4a8e831

Upload dataset with dataset card

README.md ADDED
@@ -0,0 +1,49 @@
# Dataset Card for Custom Text Dataset

## Dataset Name
Custom Text Dataset

## Overview
This dataset contains text data for training summarization models.
The data is collected from the CNN/DailyMail dataset.

## Composition

- **Number of records**: 100
- **Fields**: `text`, `label`

## Collection Process

The examples were collected from the CNN/DailyMail news dataset.

## Preprocessing

No preprocessing was applied; the collected text is used as-is.

## How to Use
```python
from datasets import load_dataset

dataset = load_dataset("path_to_dataset")

for example in dataset["train"]:
    print(example["text"], example["label"])
```
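
This repository also stores the `train` and `test` splits in the on-disk format written by `DatasetDict.save_to_disk` (the `dataset_dict.json`, `state.json`, and `.arrow` files added below), so a downloaded copy can be opened with `load_from_disk` as well. A minimal sketch, assuming the split folders have been fetched to local directories named `train` and `test`:

```python
from datasets import load_from_disk

# Each split directory has its own dataset_dict.json listing a single split.
train = load_from_disk("train")["train"]  # assumed local path to the train/ folder
test = load_from_disk("test")["test"]     # assumed local path to the test/ folder

print(train.features)  # stored columns are named 'sentence' and 'labels'
print(len(test))
```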

## Evaluation

This dataset is designed for evaluating summarization models.
Common evaluation metrics for summarization are ROUGE and BLEU.
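
A minimal scoring sketch using the Hugging Face `evaluate` library (the `rouge_score` package must be installed); the prediction and reference strings below are placeholders standing in for model summaries and the dataset's `label` field:

```python
import evaluate

# Placeholder model output and gold summary (the `label` field of this dataset).
predictions = ["the article describes a new summarization dataset"]
references = ["the article introduces a new dataset for summarization"]

rouge = evaluate.load("rouge")  # ROUGE is the usual summarization metric; BLEU is also used
print(rouge.compute(predictions=predictions, references=references))
```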

## Limitations
The dataset may contain outdated or biased information.
Users should be aware of these limitations when using the data.

## Ethical Considerations
- **Privacy**: Ensure that the data does not contain personal information.
- **Bias**: Be aware of potential biases in the data.
test/dataset_dict.json ADDED
@@ -0,0 +1 @@
{"splits": ["test"]}
test/test/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e6aa13a3e10a33624931f6c220c9618528323886bd7b7ac334af681b8dc0646
size 346576
test/test/dataset_info.json ADDED
@@ -0,0 +1,22 @@
{
  "citation": "",
  "description": "",
  "features": {
    "sentence": {
      "feature": {
        "dtype": "string",
        "_type": "Value"
      },
      "_type": "Sequence"
    },
    "labels": {
      "feature": {
        "dtype": "string",
        "_type": "Value"
      },
      "_type": "Sequence"
    }
  },
  "homepage": "",
  "license": ""
}
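
Note that the stored schema uses the column names `sentence` and `labels` (sequences of strings in this test split) rather than the `text`/`label` fields named in the card above. A sketch of the equivalent `datasets.Features` declaration for this file:

```python
from datasets import Features, Sequence, Value

# Schema of the test split as recorded in dataset_info.json above.
test_features = Features({
    "sentence": Sequence(Value("string")),
    "labels": Sequence(Value("string")),
})
```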
test/test/state.json ADDED
@@ -0,0 +1,13 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "a966e5e39a3a551f",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}
train/dataset_dict.json ADDED
@@ -0,0 +1 @@
{"splits": ["train"]}
train/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3b84a293ed7afd9641f578c760558feab774e12174775ffef3bd6d130873903
size 1400
train/train/dataset_info.json ADDED
@@ -0,0 +1,16 @@
{
  "citation": "",
  "description": "",
  "features": {
    "sentence": {
      "dtype": "string",
      "_type": "Value"
    },
    "labels": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": ""
}
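
In contrast to the test split above, the train split stores `sentence` and `labels` as single strings rather than sequences. The equivalent declaration, for comparison:

```python
from datasets import Features, Value

# Schema of the train split: plain string columns rather than sequences.
train_features = Features({
    "sentence": Value("string"),
    "labels": Value("string"),
})
```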
train/train/state.json ADDED
@@ -0,0 +1,13 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00001.arrow"
    }
  ],
  "_fingerprint": "a1df46296853828f",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}