smita1988 committed on
Commit 4fb4958 · verified · 1 Parent(s): 0f1a90f

Model save

README.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ library_name: peft
+ license: apache-2.0
+ base_model: unsloth/tinyllama-chat-bnb-4bit
+ tags:
+ - unsloth
+ - generated_from_trainer
+ model-index:
+ - name: english-hindi-colloquial-translator
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # english-hindi-colloquial-translator
+
+ This model is a fine-tuned version of [unsloth/tinyllama-chat-bnb-4bit](https://huggingface.co/unsloth/tinyllama-chat-bnb-4bit) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: AdamW (ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 3
+ - mixed_precision_training: Native AMP
+
+ ### Framework versions
+
+ - PEFT 0.14.0
+ - Transformers 4.48.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.3.0
+ - Tokenizers 0.21.0
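
The hyperparameters listed in the new README map onto a `transformers.TrainingArguments` configuration roughly like the sketch below. This is an illustration only: the training script is not part of this commit, and `output_dir` is a placeholder.

```python
# Hypothetical reconstruction of the trainer configuration from the model card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="english-hindi-colloquial-translator",  # placeholder, not from the commit
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed precision
)
```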
adapter_config.json CHANGED
@@ -23,10 +23,10 @@
 "rank_pattern": {},
 "revision": null,
 "target_modules": [
- "q_proj",
- "o_proj",
 "v_proj",
- "k_proj"
+ "q_proj",
+ "k_proj",
+ "o_proj"
 ],
 "task_type": "CAUSAL_LM",
 "use_dora": false,
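
Note that this hunk only reorders `target_modules`; the same four attention projections (`q_proj`, `k_proj`, `v_proj`, `o_proj`) are adapted before and after, and PEFT treats the list as an unordered set, so the change is cosmetic. A minimal `peft.LoraConfig` sketch matching the new file, with `r` and `lora_alpha` as hypothetical placeholders since they are not visible in this diff:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    target_modules=["v_proj", "q_proj", "k_proj", "o_proj"],  # from the diff
    task_type="CAUSAL_LM",                                    # from the diff
    use_dora=False,                                           # from the diff
    r=16,                                                     # hypothetical placeholder
    lora_alpha=16,                                            # hypothetical placeholder
)
```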
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2bc26cf0bf9d69c98b604a4fb05567e51ab37cc2c76fa04102c07a3ae2a3201e
+ oid sha256:6c35827cf44c3b94ae86131540d96c7fbf3618ce06329b3fa55360772a24ab69
 size 18045856
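
The weights file is stored as a Git LFS pointer: the `oid` (the SHA-256 of the blob) changes while the size stays at 18045856 bytes, consistent with retrained adapter weights of identical shape. A minimal loading sketch, assuming the repo id `smita1988/english-hindi-colloquial-translator` (inferred from the commit author and model name, not stated in the diff):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The base model is 4-bit quantized; loading it requires bitsandbytes.
base = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama-chat-bnb-4bit")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama-chat-bnb-4bit")

# Repo id inferred, not confirmed by this commit.
model = PeftModel.from_pretrained(base, "smita1988/english-hindi-colloquial-translator")
```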
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:d23f3991a9c871882ddd9a1f2bf6daf11226f18e359c7ce210a1ffa0ae21625a
+ oid sha256:a2a4e77c081a803cf152dcd65366c31bb68f17ed5cf1ad2f57a132a105d2e29a
 size 5368
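
`training_args.bin` is likewise an LFS pointer; the underlying file is a pickled `TrainingArguments` object. A sketch for inspecting it locally: PyTorch 2.6 (the version listed in the card) defaults `torch.load` to `weights_only=True`, which must be disabled for pickled objects, so only do this for files you trust.

```python
import torch

# Load the pickled TrainingArguments; weights_only=False is required on PyTorch 2.6+.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.num_train_epochs)
```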