# Korean News Summarization Model
## Demo

https://huggingface.co/spaces/gogamza/kobart-summarization
## How to use
```python
import torch
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration

# Load the KoBART summarization tokenizer and model
tokenizer = PreTrainedTokenizerFast.from_pretrained('gogamza/kobart-summarization')
model = BartForConditionalGeneration.from_pretrained('gogamza/kobart-summarization')

text = "과거를 떠올려보자. 방송을 보던 우리의 모습을. 독보적인 매체는 TV였다. 온 가족이 둘러앉아 TV를 봤다. 간혹 가족들끼리 뉴스와 드라마, 예능 프로그램을 둘러싸고 리모컨 쟁탈전이 벌어지기도 했다. 각자 선호하는 프로그램을 '본방'으로 보기 위한 싸움이었다. TV가 한 대인지 두 대인지 여부도 그래서 중요했다. 지금은 어떤가. '안방극장'이라는 말은 옛말이 됐다. TV가 없는 집도 많다. 미디어의 혜택을 누릴 수 있는 방법은 늘어났다. 각자의 방에서 각자의 휴대폰으로, 노트북으로, 태블릿으로 콘텐츠를 즐긴다."

# Encode the article and wrap it with the BOS/EOS tokens the model expects
raw_input_ids = tokenizer.encode(text)
input_ids = [tokenizer.bos_token_id] + raw_input_ids + [tokenizer.eos_token_id]

# Generate and decode the summary
summary_ids = model.generate(torch.tensor([input_ids]))
print(tokenizer.decode(summary_ids.squeeze().tolist(), skip_special_tokens=True))
```
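The manual BOS/EOS framing step can be factored into a small helper, which makes the preprocessing easier to reuse and test. This is a minimal sketch: the ids `0` and `1` in the example call are placeholders for illustration only; in practice pass `tokenizer.bos_token_id` and `tokenizer.eos_token_id`.

```python
def add_special_tokens(token_ids, bos_id, eos_id):
    """Frame a token-id sequence as <s> ... </s>, the input layout
    BartForConditionalGeneration expects for this model."""
    return [bos_id] + list(token_ids) + [eos_id]

# Illustrative ids only; use tokenizer.bos_token_id / tokenizer.eos_token_id
framed = add_special_tokens([517, 42, 9], bos_id=0, eos_id=1)
print(framed)  # → [0, 517, 42, 9, 1]
```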