Forecasting Future World Events with Neural Networks

This is an (unofficial) repository for "Forecasting Future World Events with Neural Networks"
by Andy Zou, Tristan Xiao, Ryan Jia, Joe Kwon, Mantas Mazeika, Richard Li, Dawn Song, Jacob Steinhardt, Owain Evans, and Dan Hendrycks.

Introduction

Forecasting future world events is a challenging but valuable task. Forecasts of climate, geopolitical conflict, pandemics and economic indicators help shape policy and decision making. In these domains, the judgment of expert humans contributes to the best forecasts. Given advances in language modeling, can these forecasts be automated? To this end, we introduce Autocast, a dataset containing thousands of forecasting questions and an accompanying news corpus. Questions are taken from forecasting tournaments, ensuring high quality, real-world importance, and diversity. The news corpus is organized by date, allowing us to precisely simulate the conditions under which humans made past forecasts (avoiding leakage from the future). We test language models on our forecasting task and find that performance is far below a human expert baseline. However, performance improves with increased model size and incorporation of relevant information from the news corpus. In sum, Autocast poses a novel challenge for large language models and improved performance could bring large practical benefits.

Autocast Dataset

The original version of the Autocast dataset can be downloaded here. For more details on how to use the Autocast dataset and news articles, please refer to our short demonstration in usage.ipynb.

Each question has the following fields:

{
  "id":                "unique identifier (str)",
  "question":          "question body (str)",
  "background":        "question context/details (str)",
  "qtype":             "question type (str)",
  "status":            "question status (str)",
  "choices":           "choices or possible ranges (List or Dict)",
  "answer":            "question resolution (str or float)",
  "crowd":             "human crowd forecasts over time (List)",
  "publish_time":      "publish timestamp (str)",
  "close_time":        "close timestamp (str)",
  "prediction_count":  "number of crowd predictions (int)",
  "forecaster_count":  "number of crowd forecasters (int)",
  "tags":              "question category (List)",
  "source_links":      "source links from comments (List)"
}
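
As a minimal illustration, the sketch below loads the question file with plain Python and reads a few of the fields listed above. The filename and the exact label strings used in the filter (the "t/f" qtype and "Resolved" status) are assumptions for demonstration; check the downloaded file and usage.ipynb for the actual values.

import json

# Load the Autocast questions (the filename is illustrative; point this at
# the JSON file you downloaded).
with open("autocast_questions.json") as f:
    questions = json.load(f)

# Inspect one question and a few of its fields.
q = questions[0]
print(q["id"], q["qtype"], q["status"])
print(q["question"])
print(q["publish_time"], "->", q["close_time"])

# Example filter: keep only resolved true/false questions
# (the exact qtype/status strings are assumptions here).
resolved_tf = [q for q in questions
               if q["qtype"] == "t/f" and q["status"] == "Resolved"]
print(len(resolved_tf), "resolved true/false questions")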

The authors obtained permission from Metaculus to host the dataset on GitHub for research purposes only.

IntervalQA Dataset

Motivated by the difficulty of forecasting numbers across orders of magnitude (e.g. global cases of COVID-19 in 2022), we also curate IntervalQA, a dataset of numerical questions and metrics for calibration.

Download the IntervalQA dataset here.
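
To give a rough sense of what interval calibration means in this setting, the sketch below checks how often true values fall inside a model's predicted [lower, upper] intervals. This is only an illustrative coverage check, not necessarily the exact metric used in the paper; the numbers are toy data.

import numpy as np

# Fraction of true values covered by the predicted intervals.
# At a target confidence level of, say, 80%, a well-calibrated model's
# intervals should cover roughly 80% of the true values.
def interval_coverage(lowers, uppers, truths):
    lowers, uppers, truths = map(np.asarray, (lowers, uppers, truths))
    hits = (truths >= lowers) & (truths <= uppers)
    return hits.mean()

# Toy example: 2 of 3 true values land inside their intervals.
lo = [1e5, 2.0, 30.0]
hi = [5e6, 8.0, 90.0]
y  = [3.2e6, 9.5, 45.0]
print(interval_coverage(lo, hi, y))  # -> 0.666...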

Citation

If you find this useful in your research, please consider citing:

@article{zouforecasting2022,
  title={Forecasting Future World Events with Neural Networks},
  author={Andy Zou and Tristan Xiao and Ryan Jia and Joe Kwon and Mantas Mazeika and Richard Li and Dawn Song and Jacob Steinhardt and Owain Evans and Dan Hendrycks},
  journal={NeurIPS},
  year={2022}
}