ayushsinha committed
Commit 64f9113 · verified · 1 Parent(s): fa1d908

Create README.md

# Text-to-Text Transfer Transformer Quantized Model for Drug Report Summarization

This repository hosts a quantized version of the T5 model, fine-tuned for text summarization tasks. The model has been optimized for efficient deployment while maintaining high accuracy, making it suitable for resource-constrained environments.

## Model Details

- **Model Architecture:** T5
- **Task:** Drug Report Summarization
- **Dataset:** Hugging Face's `cnn_dailymail`
- **Quantization:** Float16
- **Fine-tuning Framework:** Hugging Face Transformers

## Usage

### Installation

```sh
pip install transformers torch
```

### Loading the Model

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/t5-summarization-for-drug-reports"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(device)

def test_summarization(model, tokenizer):
    user_text = input("\nEnter your text for summarization:\n")
    input_text = "summarize: " + user_text
    inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512).to(device)

    output = model.generate(
        **inputs,
        max_new_tokens=100,
        num_beams=5,
        length_penalty=0.8,
        early_stopping=True
    )

    summary = tokenizer.decode(output[0], skip_special_tokens=True)
    return summary

print("\n📝 **Model Summary:**")
print(test_summarization(model, tokenizer))
```
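For scripted or batch use, the same pipeline can be wrapped in a small helper and called without the interactive prompt. The `summarize` function and the sample report below are purely illustrative, a minimal sketch built on the `model`, `tokenizer`, and `device` objects loaded above:

```python
# Illustrative helper reusing the model, tokenizer, and device defined above.
def summarize(text: str, max_new_tokens: int = 100) -> str:
    inputs = tokenizer("summarize: " + text, return_tensors="pt",
                       truncation=True, max_length=512).to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        num_beams=5,
        length_penalty=0.8,
        early_stopping=True,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Hypothetical drug-report text, used only to demonstrate the call.
sample_report = (
    "The patient was started on metformin 500 mg twice daily. Mild gastrointestinal "
    "discomfort was reported during the first week and resolved without intervention. "
    "No serious adverse events were observed during the follow-up period."
)
print(summarize(sample_report))
```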
## 📊 ROUGE Evaluation Results

After fine-tuning the **T5-Small** model for text summarization, we obtained the following **ROUGE** scores:

| **Metric** | **Score** | **Meaning** |
|-------------|-----------|-------------|
| **ROUGE-1** | **0.3061** (~30%) | Overlap of **unigrams (single words)** between the reference and generated summary. |
| **ROUGE-2** | **0.1241** (~12%) | Overlap of **bigrams (two-word phrases)**, indicating coherence and fluency. |
| **ROUGE-L** | **0.2233** (~22%) | Length of the **longest common subsequence** of words, reflecting how well sentence structure is preserved. |
| **ROUGE-Lsum** | **0.2620** (~26%) | ROUGE-L computed summary by summary, making it better suited to multi-sentence summarization. |
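The numbers above are the authors' reported results. As a hedged sketch of how comparable scores could be computed independently, the snippet below uses the Hugging Face `evaluate` library together with the `summarize` helper sketched in the usage section; it assumes `pip install evaluate rouge_score datasets` and the `3.0.0` dataset configuration, neither of which is confirmed by this repository:

```python
import evaluate
from datasets import load_dataset

# Load the ROUGE metric and a small validation slice for a quick, illustrative check.
rouge = evaluate.load("rouge")
eval_data = load_dataset("cnn_dailymail", "3.0.0", split="validation[:100]")

# Generate summaries with the helper defined in the usage section above.
predictions = [summarize(article) for article in eval_data["article"]]
references = eval_data["highlights"]

# Returns aggregated rouge1, rouge2, rougeL, and rougeLsum F1 scores.
results = rouge.compute(predictions=predictions, references=references)
print(results)
```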

## Fine-Tuning Details

### Dataset

Hugging Face's `cnn_dailymail` dataset was used, containing articles paired with reference summaries.
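For reference, the dataset can be pulled directly with the `datasets` library. The `3.0.0` configuration below is the standard non-anonymized release and is an assumption, since the exact configuration used for fine-tuning is not stated:

```python
from datasets import load_dataset

# "3.0.0" is the standard non-anonymized configuration; assumed, not confirmed by this repo.
dataset = load_dataset("cnn_dailymail", "3.0.0")

print(dataset)                    # train / validation / test splits
example = dataset["train"][0]
print(example["article"][:300])   # source text
print(example["highlights"])      # reference summary
```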

### Training

- Number of epochs: 3
- Batch size: 4
- Evaluation strategy: epoch
- Learning rate: 3e-5
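A minimal sketch of how these hyperparameters could be wired into a Hugging Face `Seq2SeqTrainer` run, reusing the `dataset` object from the snippet above. The output directory, tokenization lengths, and column handling are assumptions for illustration, not the exact training script:

```python
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
    T5Tokenizer,
)

# Fine-tuning presumably starts from the full-precision T5-Small base, not the quantized release.
base_model = T5ForConditionalGeneration.from_pretrained("t5-small")
base_tokenizer = T5Tokenizer.from_pretrained("t5-small")

def preprocess(batch):
    # T5 expects a task prefix on the input side.
    model_inputs = base_tokenizer(
        ["summarize: " + article for article in batch["article"]],
        max_length=512, truncation=True,
    )
    labels = base_tokenizer(text_target=batch["highlights"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-drug-report-summarization",  # illustrative output path
    num_train_epochs=3,                         # number of epochs: 3
    per_device_train_batch_size=4,              # batch size: 4
    per_device_eval_batch_size=4,
    eval_strategy="epoch",                      # older transformers versions: evaluation_strategy="epoch"
    learning_rate=3e-5,                         # learning rate: 3e-5
)

trainer = Seq2SeqTrainer(
    model=base_model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(base_tokenizer, model=base_model),
)
trainer.train()
```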

### Quantization

Post-training quantization was applied using PyTorch's built-in support for half-precision (Float16) weights to reduce the model size and improve inference efficiency.
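A plausible sketch of this conversion step is shown below; the checkpoint paths are illustrative and this is not necessarily the exact procedure used:

```python
from transformers import T5ForConditionalGeneration

# Load the fine-tuned full-precision checkpoint (path is illustrative).
fp32_model = T5ForConditionalGeneration.from_pretrained("t5-drug-report-summarization")

# Post-training cast of all weights to torch.float16.
fp16_model = fp32_model.half()

# Persist the Float16 weights in safetensors format.
fp16_model.save_pretrained("t5-drug-report-summarization-fp16", safe_serialization=True)
```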

## Repository Structure

```
.
├── model/               # Contains the quantized model files
├── tokenizer_config/    # Tokenizer configuration and vocabulary files
├── model.safetensors    # Quantized model weights
└── README.md            # Model documentation
```

## Limitations

- The model may not generalize well to domains outside the fine-tuning dataset.
- Quantization may result in minor accuracy degradation compared to full-precision models.

## Contributing

Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.