# ABSOSUM Phase 1 - Multi-Answer Summarization

A T5-based model fine-tuned for multi-answer summarization on Vietnamese Q&A data (ABSOSUM Phase 1).
## Model Description
- Base Model: T5-base
- Task: Multi-answer summarization
- Language: Vietnamese
- Phase: Phase 1 (Baseline)
## Training Details
- Model trained using TensorFlow/Keras
- Fine-tuned on ABSOSUM dataset
- Optimized for Vietnamese question-answer summarization
## Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load model and tokenizer
model = T5ForConditionalGeneration.from_pretrained("HuyTran1301/ABSOSUM_Phase1")
tokenizer = T5Tokenizer.from_pretrained("HuyTran1301/ABSOSUM_Phase1")

# Prepare the input with the "summarize:" task prefix
input_text = "summarize: Your input text here"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# Generate summary
outputs = model.generate(input_ids, max_length=150, num_beams=4, early_stopping=True)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(summary)
```
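Because the task is multi-answer summarization, several answers to the same question typically need to be merged into one input sequence before generation. A minimal sketch, reusing the `summarize:` prefix from the example above; the `build_input` helper and the `" | "` separator are assumptions for illustration, since the exact input format used during fine-tuning is not documented here:

```python
# Hypothetical helper: join a question and its answers into one prompt.
# The " | " separator is an assumption, not part of the model card.
def build_input(question: str, answers: list[str]) -> str:
    joined = " | ".join(a.strip() for a in answers)
    return f"summarize: {question} {joined}"

prompt = build_input(
    "Your question here",
    ["First answer text.", "Second answer text."],
)

# Then tokenize and generate exactly as in the example above:
# input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# outputs = model.generate(input_ids, max_length=150, num_beams=4, early_stopping=True)
```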
## Citation
If you use this model, please cite:
```bibtex
@misc{absosum_phase1,
  title={ABSOSUM Phase 1: Multi-Answer Summarization},
  author={Huy Tran},
  year={2025},
  url={https://huggingface.co/HuyTran1301/ABSOSUM_Phase1}
}
```
## Model Architecture
This is a standard T5 encoder-decoder architecture fine-tuned for the multi-answer summarization task.
## Training Date
November 28, 2025
## Notes
- This is the Phase 1 baseline model.
- For Phase 2, which adds weight-aware cross-attention, see `HuyTran1301/ABSOSUM_Phase2_v1.0`.