
Pegasus XSUM Gigaword

Model description

A Pegasus XSUM model fine-tuned on the Gigaword summarization task. It performs significantly better than google/pegasus-gigaword, but still does not match the results reported in the original PEGASUS paper.

Intended uses & limitations

Produces short, headline-style summaries with the coherence of the XSUM model.

How to use

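A minimal usage sketch with the Transformers library. The Hub repository ID below is a placeholder, since this card does not record the model's Hub path; substitute the actual repo name or a local checkpoint directory (e.g. pegasus-xsum/checkpoint-11500/).

```python
# Minimal usage sketch (assumes the transformers library is installed).
# NOTE: "your-username/pegasus-xsum-gigaword" is a placeholder -- replace it
# with this model's actual Hub repo ID or a local checkpoint path.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "your-username/pegasus-xsum-gigaword"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# Illustrative input; Gigaword articles are lowercased with digits masked as '#'.
article = (
    "australia 's current account deficit shrank by a record #.## billion "
    "dollars in the june quarter due to soaring commodity prices ."
)
inputs = tokenizer(article, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```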

Limitations and bias

Still has all the biases inherent to abstractive summarization models, but it appears somewhat less prone to hallucination.

Training data

Trained on the Gigaword dataset, with weights initialized from google/pegasus-xsum.

Training procedure

Trained for 11,500 iterations on the Gigaword corpus using the out-of-the-box Hugging Face seq2seq fine-tuning script (run_summarization.py) with its default parameters.
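The exact training invocation is not recorded in this card; the sketch below is a plausible reconstruction that mirrors the evaluation commands in the next section, with --do_train enabled.

```bash
# Hypothetical reconstruction -- the card only states 11,500 iterations with
# default parameters; the flags below mirror the evaluation commands.
python run_summarization.py \
  --model_name_or_path google/pegasus-xsum \
  --do_train \
  --dataset_name gigaword \
  --dataset_config "3.0.0" \
  --source_prefix "summarize: " \
  --output_dir pegasus-xsum \
  --per_device_train_batch_size=8 \
  --overwrite_output_dir
```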

Eval results

Evaluated on the Gigaword test set using the Hugging Face run_summarization.py script with its default parameters:

```bash
python run_summarization.py \
  --model_name_or_path pegasus-xsum/checkpoint-11500/ \
  --do_predict \
  --dataset_name gigaword \
  --dataset_config "3.0.0" \
  --source_prefix "summarize: " \
  --output_dir pegasus-xsum \
  --per_device_train_batch_size=8 \
  --per_device_eval_batch_size=8 \
  --overwrite_output_dir \
  --predict_with_generate
```

| Metric      | Score   |
|-------------|---------|
| eval_rouge1 | 34.1958 |
| eval_rouge2 | 15.4033 |
| eval_rougeL | 31.4488 |

For comparison, google/pegasus-gigaword was evaluated with the same script and parameters:

```bash
python run_summarization.py \
  --model_name_or_path google/pegasus-gigaword \
  --do_predict \
  --dataset_name gigaword \
  --dataset_config "3.0.0" \
  --source_prefix "summarize: " \
  --output_dir pegasus-xsum \
  --per_device_train_batch_size=8 \
  --per_device_eval_batch_size=8 \
  --overwrite_output_dir \
  --predict_with_generate
```

| Metric      | Score   |
|-------------|---------|
| eval_rouge1 | 20.8111 |
| eval_rouge2 | 8.766   |
| eval_rougeL | 18.4431 |
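For reference, the ROUGE metrics reported above can also be computed outside the script; a minimal sketch using the `evaluate` library (run_summarization.py computes the same metrics internally), with illustrative placeholder texts:

```python
# Minimal ROUGE-scoring sketch using the `evaluate` library.
# The prediction/reference pair here is an illustrative placeholder.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["australian current account deficit narrows"]
references = ["australia 's current account deficit shrinks"]
print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```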

BibTeX entry and citation info

```bibtex
@inproceedings{...,
  year={2020}
}
```