  - split: validation
    path: data/validation-*
---

# Dataset Card for hep-ph_gr-qc_primary Dataset

## Dataset Description

- **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
- **Repository:** [hepthLlama](https://github.com/Paul-Richmond/hepthLlama)
- **Paper:** [tbd](tbd)
- **Point of Contact:** [Paul Richmond](mailto:[email protected])

### Dataset Summary
This dataset contains the metadata of arXiv submissions whose primary category is 'hep-ph' or 'gr-qc'.

## Dataset Structure

### Languages

The text in the `abstract` field of the dataset is in English; however, there may be examples
where the abstract also contains a translation into another language.

## Dataset Creation

### Curation Rationale
The starting point was to load v193 of the Kaggle arXiv Dataset, which includes arXiv submissions up to 23rd August 2024.
The arXiv dataset contains the following data fields:
- `id`: arXiv ID (can be used to access the paper)
- `submitter`: Who submitted the paper
- `authors`: Authors of the paper
- `title`: Title of the paper
- `comments`: Additional info, such as number of pages and figures
- `journal-ref`: Information about the journal the paper was published in
- `doi`: [Digital Object Identifier](https://www.doi.org)
- `report-no`: Report number
- `abstract`: The abstract of the paper
- `categories`: Categories / tags in the arXiv system

To arrive at the hep-ph_gr-qc_primary dataset, the full arXiv data
was filtered so that only records whose `categories` included 'hep-ph' or 'gr-qc' were retained.
This resulted in papers that were either primarily classified as 'hep-ph' or 'gr-qc' or appeared cross-listed.
For this dataset, the decision was made to focus only on papers primarily classified as either 'hep-ph' or 'gr-qc'.
This meant keeping only those records where the first characters in `categories` were either 'hep-ph' or 'gr-qc'
(see [here](https://info.arxiv.org/help/arxiv_identifier_for_services.html#indications-of-classification) for more details).
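
As a rough illustration, the loading and primary-category filtering could be done along the following lines; the file name `arxiv-metadata-oai-snapshot.json` and the use of pandas are assumptions, not a record of the exact code used.

```python
import pandas as pd

# Load the Kaggle arXiv metadata snapshot (assumed here to be the
# JSON-lines file shipped with the Kaggle dataset).
df = pd.read_json("arxiv-metadata-oai-snapshot.json", lines=True)

# Keep records whose categories mention 'hep-ph' or 'gr-qc' at all ...
df = df[df["categories"].str.contains("hep-ph|gr-qc", regex=True)]

# ... then keep only records whose primary (first-listed) category is
# 'hep-ph' or 'gr-qc', i.e. the categories string starts with one of them.
is_primary = (
    df["categories"].str.startswith("hep-ph")
    | df["categories"].str.startswith("gr-qc")
)
df = df[is_primary]
```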

We also dropped entries whose `abstract` or `comments` contained the word 'Withdrawn' or 'withdrawn', and we removed the five records which appear in the repo `LLMsForHepth/arxiv_hepth_first_overfit`.
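
Continuing from the `df` in the sketch above, this step might look as follows; the empty `ids_to_remove` set is a placeholder, since the five overlapping arXiv ids are not listed in this card.

```python
# Drop entries whose abstract or comments mention 'Withdrawn' / 'withdrawn'.
withdrawn = (
    df["abstract"].str.contains("Withdrawn|withdrawn", na=False)
    | df["comments"].str.contains("Withdrawn|withdrawn", na=False)
)
df = df[~withdrawn]

# Remove the five records shared with LLMsForHepth/arxiv_hepth_first_overfit.
ids_to_remove = set()  # placeholder: the five overlapping arXiv ids go here
df = df[~df["id"].isin(ids_to_remove)]
```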

In addition, we cleaned the data appearing in `abstract` by first replacing all occurrences of '\n' with a whitespace and then removing any leading and trailing whitespace.
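
The abstract clean-up can be sketched as a single pandas expression, again continuing from `df` above:

```python
# Replace newlines inside each abstract with spaces, then strip
# leading and trailing whitespace.
df["abstract"] = df["abstract"].str.replace("\n", " ", regex=False).str.strip()
```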

### Data Splits

The dataset is split into training, validation and test sets with split percentages of 70%, 15% and 15% respectively. This was done by applying `train_test_split` twice (both times with `seed=42`).
The final split sizes are as follows:

| Train | Test | Validation |
|:-------:|:------:|:----------:|
| 137,136 | 29,387 | 29,386 |
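
A minimal sketch of producing such a 70/15/15 split with the `datasets` library, applying `train_test_split` twice with `seed=42`; the exact proportions passed and which half becomes validation versus test are assumptions consistent with the sizes above, not necessarily the original code.

```python
from datasets import Dataset, DatasetDict

# Convert the cleaned pandas DataFrame from the sketches above.
ds = Dataset.from_pandas(df, preserve_index=False)

# First split: 70% train, 30% held out.
first = ds.train_test_split(test_size=0.3, seed=42)

# Second split: divide the held-out 30% equally into validation and test.
second = first["test"].train_test_split(test_size=0.5, seed=42)

dataset = DatasetDict(
    {
        "train": first["train"],
        "validation": second["train"],
        "test": second["test"],
    }
)
```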
|