# Model Overview

This model is a fine-tuned version of the Helsinki-NLP OPUS-MT model for multiple language pairs. It has been fine-tuned on the Tatoeba dataset for the following language pairs:

- English to Marathi (en-mr)
- Esperanto to Dutch (eo-nl)
- Spanish to Portuguese (es-pt)
- French to Russian (fr-ru)
- Spanish to Galician (es-gl)

The model supports sequence-to-sequence translation and has been optimized for performance using FP16 quantization.
# Model Details

```
Base Model: Helsinki-NLP/opus-mt-en-roa
Training Dataset: Tatoeba dataset
Fine-tuned Language Pairs: en-mr, eo-nl, es-pt, fr-ru, es-gl
Evaluation Metric: BLEU score (using sacreBLEU)
Training Framework: Hugging Face Transformers

Training Configuration
Optimizer: AdamW
Learning Rate: 2e-5
Batch Size: 16 (per device)
Weight Decay: 0.01
Epochs: 3
Precision: FP32 (initial training), converted to FP16 for inference
```
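
The training script itself is not included in this repository. The following is a minimal sketch of how a single pair (en-mr) could be fine-tuned with the configuration above using `Seq2SeqTrainer`; the tiny in-memory dataset, column names, and output paths are illustrative placeholders rather than the actual Tatoeba preprocessing:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_model = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

# Tiny illustrative parallel sample standing in for a Tatoeba en-mr split
raw = Dataset.from_dict({
    "src": ["Hello, how are you?", "Good morning."],
    "tgt": ["नमस्कार, तुम्ही कसे आहात?", "सुप्रभात."],
})

def preprocess(batch):
    # Tokenize source and target sides; `text_target` produces the `labels` field
    return tokenizer(batch["src"], text_target=batch["tgt"], max_length=128, truncation=True)

train_dataset = raw.map(preprocess, batched=True, remove_columns=["src", "tgt"])

# Hyperparameters mirror the configuration above
training_args = Seq2SeqTrainingArguments(
    output_dir="fine_tuned_models/en-mr",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    weight_decay=0.01,
    num_train_epochs=3,
    fp16=False,  # train in FP32; FP16 conversion happens afterwards (see below)
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)

trainer.train()
trainer.save_model("fine_tuned_models/en-mr/final/")
```

AdamW is the default optimizer in the Hugging Face `Trainer`, so it does not need to be set explicitly, and `fp16=False` keeps training in FP32 as listed above.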

# Quantization and FP16 Conversion

To improve inference efficiency, models were converted to FP16:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# List of fine-tuned models, one per language pair
models = [
    "fine_tuned_models/en-mr/final/",
    "fine_tuned_models/es-pt/final/",
    "fine_tuned_models/eo-nl/final/",
    "fine_tuned_models/fr-ru/final/",
    "fine_tuned_models/es-gl/final/",
]

output_fp16_dir = "fine_tuned_models_fp16"

# Convert each model to FP16
for model_path in models:
    print(f"Quantizing {model_path} to FP16...")

    # Load model and tokenizer in half precision
    model = AutoModelForSeq2SeqLM.from_pretrained(model_path, torch_dtype=torch.float16)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Define save path
    save_path = model_path.replace("fine_tuned_models", output_fp16_dir)

    # Save quantized model and tokenizer
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)

    print(f"Saved quantized model to: {save_path}\n")
```
# Inference Example

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

model = AutoModelForSeq2SeqLM.from_pretrained("fine_tuned_models_fp16/en-mr/final/", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("fine_tuned_models_fp16/en-mr/final/")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
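
The model details above list BLEU (via sacreBLEU) as the evaluation metric; one way to score generated translations against references with the `sacrebleu` package is sketched below (the hypothesis and reference sentences are placeholders):

```python
import sacrebleu

# Placeholder system outputs and single reference translations, in the same order
hypotheses = ["नमस्कार, तुम्ही कसे आहात?", "सुप्रभात."]
references = ["नमस्कार, तुम्ही कसे आहात?", "शुभ सकाळ."]

# corpus_bleu expects a list of reference streams, hence the extra list around `references`
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```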
# Usage
The models can be used for translation tasks in various NLP applications, including chatbots, document translation, and real-time communication.
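
For quick integration into such applications, a checkpoint can also be wrapped in the standard transformers translation pipeline; this is a sketch, and the local path assumes the FP16 directory layout produced by the conversion step above:

```python
from transformers import pipeline

# Assumes the FP16 checkpoint layout produced by the conversion script above
translator = pipeline(
    "translation",
    model="fine_tuned_models_fp16/en-mr/final/",
    device=0,  # set to -1 to run on CPU
)

print(translator("Hello, how are you?", max_length=128)[0]["translation_text"])
```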
# Limitations

- May not generalize well to domain-specific text.
- FP16 quantization may lead to a minor loss in precision.
- Translation accuracy depends on the quality of the training data.
# Citation
If you use this model, please cite the original OPUS-MT paper and acknowledge the fine-tuning process conducted using the Tatoeba dataset.