Korean News Summarization Model

Demo

A live demo runs as a Hugging Face Space: https://huggingface.co/spaces/gogamza/kobart-summarization

How to use

import torch
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

# Load the pretrained tokenizer and model from the Hugging Face Hub
tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-summarization')
model = BartForConditionalGeneration.from_pretrained('gogamza/kobart-summarization')

text = "κ³Όκ±°λ₯Ό λ– μ˜¬λ €λ³΄μž. 방솑을 보던 우리의 λͺ¨μŠ΅μ„. 독보적인 λ§€μ²΄λŠ” TVμ˜€λ‹€. 온 가쑱이 λ‘˜λŸ¬μ•‰μ•„ TVλ₯Ό λ΄€λ‹€. κ°„ν˜Ή 가쑱듀끼리 λ‰΄μŠ€μ™€ λ“œλΌλ§ˆ, 예λŠ₯ ν”„λ‘œκ·Έλž¨μ„ λ‘˜λŸ¬μ‹Έκ³  리λͺ¨μ»¨ μŸνƒˆμ „μ΄ λ²Œμ–΄μ§€κΈ°λ„  ν–ˆλ‹€. 각자 μ„ ν˜Έν•˜λŠ” ν”„λ‘œκ·Έλž¨μ„ β€˜λ³Έλ°©β€™μœΌλ‘œ 보기 μœ„ν•œ μ‹Έμ›€μ΄μ—ˆλ‹€. TVκ°€ ν•œ λŒ€μΈμ§€ 두 λŒ€μΈμ§€ 여뢀도 κ·Έλž˜μ„œ μ€‘μš”ν–ˆλ‹€. μ§€κΈˆμ€ μ–΄λ–€κ°€. β€˜μ•ˆλ°©κ·Ήμž₯β€™μ΄λΌλŠ” 말은 μ˜›λ§μ΄ 됐닀. TVκ°€ μ—†λŠ” 집도 λ§Žλ‹€. λ―Έλ””μ–΄μ˜ 혜 택을 λˆ„λ¦΄ 수 μžˆλŠ” 방법은 λŠ˜μ–΄λ‚¬λ‹€. 각자의 λ°©μ—μ„œ 각자의 νœ΄λŒ€ν°μœΌλ‘œ, λ…ΈνŠΈλΆμœΌλ‘œ, νƒœλΈ”λ¦ΏμœΌλ‘œ μ½˜ν…μΈ  λ₯Ό 즐긴닀."

# Tokenize and add the BOS/EOS special tokens the model expects around the input
raw_input_ids = tokenizer.encode(text)
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]

# Summarize with beam search, capping the output length
summary_ids = model.generate(torch.tensor([input_ids]), num_beams=4, max_length=512,
                             eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(summary_ids.squeeze().tolist(), skip_special_tokens=True))
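
For quick experiments, the same checkpoint can also be driven through the transformers summarization pipeline, which handles tokenization and decoding for you. A minimal sketch; the generation keyword arguments here are illustrative choices, not values prescribed by this model card:

from transformers import pipeline

# Build a summarization pipeline backed by the same checkpoint
summarizer = pipeline('summarization', model='gogamza/kobart-summarization')

# num_beams/max_length are illustrative, not fixed by the model card
result = summarizer(text, num_beams=4, max_length=512)
print(result[0]['summary_text'])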