LLMMINE committed
Commit 9ff0355 · verified · 1 parent: 3d5d3f4

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

@@ -7,7 +7,7 @@ license: apache-2.0
 <!-- Provide a quick summary of what the model is/does. -->
 base_model: Qwen/Qwen2.5-7B-Instruct
 
-This is an MTIPA-7B merged LoRA version. If you are unable to directly use [MTIPA-7B-LoRA](https://huggingface.co/LLMMINE/MTIPA-7B-PositionTask/tree/main), please load the model directly
+**This is an MTIPA-7B merged LoRA version.** If you are unable to directly use [MTIPA-7B-LoRA](https://huggingface.co/LLMMINE/MTIPA-7B-PositionTask/tree/main), please load the model directly
 
 It should be noted that the MTIPA, TIPA, and training data for this model are all from Chinese, and support for other languages may not be sufficient. If you need to train a model specific to a particular language or for a general purpose, please refer to our paper and GitHub
 
@@ -17,7 +17,7 @@ This model is trained on the MTIPA dataset, and its function is to predict the p
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 base_model = AutoModelForCausalLM.from_pretrained(
-    "MTIPA-7B-POSITION-MERGE",
+    "LLMMINE/MTIPA-7B-POSITION-MERGE",
     trust_remote_code=True,
     torch_dtype="auto",
     device_map="auto")
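For readers who want to copy the result rather than the diff, here is the updated load snippet as a complete, runnable sketch. The model call is taken verbatim from the new README; the tokenizer call is an assumption, since the hunk ends before any tokenizer usage is shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Merged checkpoint: the LoRA weights are already folded into the base model,
# so a plain from_pretrained on the org-qualified repo id (the fix in this
# commit) is sufficient.
base_model = AutoModelForCausalLM.from_pretrained(
    "LLMMINE/MTIPA-7B-POSITION-MERGE",
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)

# Assumption: the diff shows no tokenizer call; loading it from the same
# repo is the usual pattern for merged checkpoints.
tokenizer = AutoTokenizer.from_pretrained(
    "LLMMINE/MTIPA-7B-POSITION-MERGE",
    trust_remote_code=True,
)
```

The README's preferred route is the adapter-only [MTIPA-7B-LoRA](https://huggingface.co/LLMMINE/MTIPA-7B-PositionTask/tree/main) repo. A minimal sketch of that route, assuming a standard PEFT adapter layout on top of the stated Qwen/Qwen2.5-7B-Instruct base, might look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the stated base model, then attach the MTIPA LoRA adapter.
# Repo ids come from the README; the adapter layout is an assumption.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "LLMMINE/MTIPA-7B-PositionTask")
```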