
# mBART fine-tuned model for Czech abstractive summarization (HT2A-CS)

This model is a fine-tuned checkpoint of facebook/mbart-large-cc25, trained on a large Czech news dataset to produce Czech abstractive summaries.

## Task

The model addresses the Headline + Text to Abstract (HT2A) task: given the headline and full text of a Czech news article, it generates a multi-sentence summary that serves as the article's abstract. An illustrative input/output pair is sketched below.
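For illustration only, an HT2A pair might look like the following. The example sentences are hypothetical, and the exact way the headline is joined to the body text (separator, ordering) is an assumption, not something documented in this card:

```python
# Hypothetical HT2A example pair. The separator between headline and
# body text is an assumption; the real preprocessing is defined by the
# training pipeline, not by this card.
headline = "Vláda schválila nový rozpočet"
body = "Vláda dnes na svém zasedání schválila návrh státního rozpočtu..."

source = f"{headline} {body}"  # model input: Headline + Text
target = "Vláda na dnešním zasedání schválila rozpočet na příští rok."  # reference Abstract
```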

## Dataset

The model was trained on a large Czech news dataset created by concatenating two sources: the private CNC dataset provided by Czech News Center and the SumeCzech dataset. The combined dataset contains around 1.75M Czech news documents, each consisting of Headline, Abstract, and Full-text sections. Truncation and padding were set to 512 tokens for the encoder and 128 tokens for the decoder.
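The 512/128 truncation-and-padding setup can be reproduced with the standard Hugging Face tokenizer API. This is a minimal sketch under that assumption (the actual preprocessing code used for training is not published in this card; the example strings are placeholders):

```python
# Sketch of the 512/128 truncation-and-padding setup described above.
# Assumes a recent transformers version with text_target support.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("krotima1/mbart-ht2a-cs")

source = "Titulek zprávy. Plný text zprávy..."  # Headline + Text (placeholder)
target = "Vícevětný abstrakt..."                # Abstract (placeholder)

# Encoder side: truncate/pad to 512 tokens.
model_inputs = tokenizer(
    source, max_length=512, truncation=True,
    padding="max_length", return_tensors="pt",
)

# Decoder side: truncate/pad to 128 tokens.
labels = tokenizer(
    text_target=target, max_length=128, truncation=True,
    padding="max_length", return_tensors="pt",
)
```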

## Training

The model was trained on 1x NVIDIA Tesla A100 40GB for 60 hours and on 4x NVIDIA Tesla A100 40GB for 40 hours. During training, the model saw 12896K documents, corresponding to roughly 8.4 epochs.

## Use

The following example assumes you are using the Summarizer.ipynb notebook provided with the model, which defines the Summarizer helper class.

```python
from collections import OrderedDict

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def summ_config():
    cfg = OrderedDict([
        # summarization model - checkpoint from this model card
        ("model_name", "krotima1/mbart-ht2a-cs"),
        # generation settings passed through to the model
        ("inference_cfg", OrderedDict([
            ("num_beams", 4),
            ("top_k", 40),
            ("top_p", 0.92),
            ("do_sample", True),
            ("temperature", 0.89),
            ("repetition_penalty", 1.2),
            ("no_repeat_ngram_size", None),
            ("early_stopping", True),
            ("max_length", 128),
            ("min_length", 10),
        ])),
        # texts to summarize
        ("text",
            [
                "Input your Czech text",
            ]
        ),
    ])
    return cfg

cfg = summ_config()

# load model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(cfg["model_name"])
tokenizer = AutoTokenizer.from_pretrained(cfg["model_name"])

# init summarizer (the Summarizer class is defined in Summarizer.ipynb)
summarize = Summarizer(model, tokenizer, cfg["inference_cfg"])
summarize(cfg["text"])
```
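If you are not working from Summarizer.ipynb, summaries can also be generated with the transformers API directly. This is a minimal sketch, not the notebook's implementation: the generation settings mirror the config above, and the input string is a placeholder:

```python
# Sketch of direct inference without the Summarizer class.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "krotima1/mbart-ht2a-cs"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "Input your Czech text"  # placeholder
inputs = tokenizer(text, max_length=512, truncation=True, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        num_beams=4,
        do_sample=True,
        top_k=40,
        top_p=0.92,
        temperature=0.89,
        repetition_penalty=1.2,
        early_stopping=True,
        max_length=128,
        min_length=10,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```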