---
dataset_info:
  config_name: year_2024
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: ups
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: int64
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: updated
    dtype: bool
  - name: new
    dtype: bool
  splits:
  - name: train
    num_bytes: 14827259
    num_examples: 38375
  download_size: 6975806
  dataset_size: 14827259
configs:
- config_name: year_2024
  data_files:
  - split: train
    path: year_2024/train-*
---
--- Generated Part of README Below ---
## Dataset Overview
The goal is to maintain an open dataset of [r/CanadianInvestor](https://www.reddit.com/r/CanadianInvestor/) submissions, collected with PRAW and the Reddit API.
Each API call is capped at 1000 results and search functionality is limited, so the job runs hourly to pick up new submissions.
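The hourly collection step can be sketched roughly as follows. This is a hypothetical sketch, not the space's actual code: the helper names (`submission_to_row`, `fetch_new_submissions`) are made up for illustration, and a configured `praw.Reddit` instance with valid API credentials is assumed.

```python
# Hypothetical sketch of the hourly collection job.
# Assumes `reddit` is a praw.Reddit instance with valid credentials.
from datetime import datetime, timezone


def submission_to_row(submission):
    """Map a submission object onto (part of) the dataset's schema."""
    return {
        "id": submission.id,
        "content": submission.selftext,
        "score": submission.score,
        "poster": str(submission.author),
        "date_utc": datetime.fromtimestamp(submission.created_utc, tz=timezone.utc),
        "flair": submission.link_flair_text,
        "ups": submission.ups,
        "permalink": submission.permalink,
    }


def fetch_new_submissions(reddit, limit=1000):
    """Pull the newest submissions; a single listing is capped at ~1000."""
    subreddit = reddit.subreddit("CanadianInvestor")
    return [submission_to_row(s) for s in subreddit.new(limit=limit)]
```

Because `new()` only reaches back about 1000 submissions, running the fetch every hour (and de-duplicating on `id`) is what keeps the dataset complete over time.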
## Creation Details
This dataset was created by [alvanlii/dataset-creator-reddit-CanadianInvestor](https://huggingface.co/spaces/alvanlii/dataset-creator-reddit-CanadianInvestor)
## Update Frequency
The dataset is updated hourly; the most recent update was at `2024-12-07 15:00:00 UTC+0000`, when **10 new rows** were added.
## Licensing
[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:
[License information]
## Opt-out
To opt out of this dataset, please open a pull request with your justification and add your ids to filter_ids.json:
1. Go to [filter_ids.json](https://huggingface.co/spaces/reddit-tools-HF/dataset-creator-reddit-bestofredditorupdates/blob/main/filter_ids.json)
2. Click Edit
3. Add your ids, one per row
4. Comment with your justification
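The filtering side of the opt-out could look something like the sketch below. This is an assumption about how filter_ids.json is consumed, not the space's actual implementation; it assumes the file holds a JSON list of submission ids.

```python
import json


def load_filter_ids(path="filter_ids.json"):
    """Load opted-out ids; assumes the file is a JSON list of id strings."""
    with open(path) as f:
        return set(json.load(f))


def apply_opt_out(rows, filter_ids):
    """Drop any row whose id appears in the opt-out list."""
    return [row for row in rows if row["id"] not in filter_ids]
```

Keeping the ids in a set makes each membership check O(1), so the filter stays cheap even as the opt-out list grows.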