justinj92 committed
Commit bbf8699 · verified · 1 Parent(s): a148273

Update README.md

Files changed (1)
  1. README.md +4 -6
README.md CHANGED
@@ -1,12 +1,10 @@
 ---
 language:
-- ml
 - en
 base_model: justinj92/Delphermes-8B
 library_name: transformers
 pipeline_tag: text-generation
 tags:
-- malayalam
 - text-generation
 - lora
 - merged
@@ -15,12 +13,12 @@ license: apache-2.0
 
 # Delphermes-8B-RL-epoch2
 
-This is a merged LoRA model based on justinj92/Delphermes-8B, fine-tuned for Malayalam language tasks.
+This is a merged LoRA model based on justinj92/Delphermes-8B, fine-tuned.
 
 ## Model Details
 
 - **Base Model**: justinj92/Delphermes-8B
-- **Language**: Malayalam (ml), English (en)
+- **Language**: English (en)
 - **Type**: Merged LoRA model
 - **Library**: transformers
 
@@ -39,7 +37,7 @@ model = AutoModelForCausalLM.from_pretrained(
 )
 
 # Example usage
-text = "നമസ്കാരം"
+text = "Hi"
 inputs = tokenizer(text, return_tensors="pt")
 outputs = model.generate(**inputs, max_length=100)
 response = tokenizer.decode(outputs[0], skip_special_tokens=True)
@@ -48,4 +46,4 @@ print(response)
 
 ## Training Details
 
-This model was created by merging a LoRA adapter trained for Malayalam language understanding and generation.
+This model was created by merging a LoRA adapter trained for language understanding and generation.
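The diff only shows part of the README's usage snippet (the `from_pretrained` call is cut off by the hunk boundary). A minimal, self-contained sketch of the intended usage with transformers follows; the repo id `justinj92/Delphermes-8B-RL-epoch2` is an assumption inferred from the README title and committer name, and `device_map`/`torch_dtype` settings are illustrative defaults, not taken from the commit.

```python
# Minimal sketch of the README's usage snippet, completed.
# The repo id below is assumed from the README title; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "justinj92/Delphermes-8B-RL-epoch2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers automatically (requires accelerate)
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Example usage (mirrors the snippet shown in the diff)
text = "Hi"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```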
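The Training Details line states the model was created by merging a LoRA adapter into the base model, but the commit does not show how the merge was done. A common way to do this with the peft library is sketched below; the adapter path and output directory are hypothetical placeholders, not details from this repository.

```python
# Hedged sketch of merging a LoRA adapter into the base model with peft.
# Adapter path and output directory are placeholders; the commit does not
# specify how the actual merge was performed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "justinj92/Delphermes-8B"
adapter_path = "path/to/lora-adapter"  # hypothetical placeholder

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
peft_model = PeftModel.from_pretrained(base, adapter_path)

# Fold the LoRA weights into the base weights and drop the adapter wrappers
merged = peft_model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("Delphermes-8B-RL-epoch2")     # placeholder output dir
tokenizer.save_pretrained("Delphermes-8B-RL-epoch2")  # placeholder output dir
```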