Commit 3429f35 (verified) by aymanbakiri
Parent(s): b252e59

Add model card

Files changed (1): README.md (+55, -0)

README.md ADDED

---
language: en
license: apache-2.0
tags:
- text-generation
- question-answering
- mcqa
- merged
- sft
- lora
base_model: AnnaelleMyriam/SFT_M3_model
---

# MNLP M3 MCQA Merged Model

This model is a merged version of:
- **Base SFT Model**: `AnnaelleMyriam/SFT_M3_model`
- **LoRA Adapter**: `aymanbakiri/MNLP_M3_mcqa_model_test`

## Model Description

This is a specialized model for Multiple Choice Question Answering (MCQA) tasks, created by:
1. Starting from the SFT model `AnnaelleMyriam/SFT_M3_model`
2. Fine-tuning it with LoRA adapters on MCQA data
3. Merging the LoRA weights back into the base model (a sketch of this step follows)
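
The merge script itself is not included in this repo; the following is a minimal sketch of how the merge step is typically done, assuming the adapter was trained with the PEFT library (the tooling and the output path are assumptions, not confirmed here):

```python
# Hypothetical merge sketch (assumes the adapter was trained with PEFT)
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the SFT base model, then apply the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("AnnaelleMyriam/SFT_M3_model")
model = PeftModel.from_pretrained(base, "aymanbakiri/MNLP_M3_mcqa_model_test")

# Fold the LoRA deltas into the base weights, yielding a plain
# transformers model with no PEFT dependency at inference time
merged = model.merge_and_unload()
merged.save_pretrained("./merged_model")  # hypothetical output path
```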

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("aymanbakiri/MNLP_M3_mcqa_merged_model_test")
tokenizer = AutoTokenizer.from_pretrained("aymanbakiri/MNLP_M3_mcqa_merged_model_test")

# Example usage for MCQA
prompt = """Question: What is the capital of France?
Options: (A) London (B) Berlin (C) Paris (D) Madrid
Answer:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens so the prompt is not echoed back
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```

## Training Details

- Base Model: SFT model fine-tuned for instruction following
- LoRA Configuration: r=16, alpha=32, dropout=0.1 (see the sketch after this list)
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, lm_head
- Training Data: MNLP M2 MCQA Dataset
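
The configuration above corresponds roughly to the following PEFT `LoraConfig` — a reconstruction from the bullet points, not the actual training script:

```python
# Hypothetical LoraConfig rebuilt from the hyperparameters listed above
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                 # LoRA rank
    lora_alpha=32,        # scaling factor
    lora_dropout=0.1,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
```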

## Performance

Quality-wise, this merged model should match the base model with the LoRA adapter applied, since merging folds the same weights into the checkpoint (up to floating-point rounding). Compared with loading the adapter separately, it should be slightly faster at inference (no extra adapter computation) and easier to deploy, as it is a single standard `transformers` checkpoint with no PEFT dependency.