Push model using huggingface_hub

- README.md +3 -3
- pytorch_model.bin +1 -1

README.md CHANGED
````diff
@@ -24,7 +24,7 @@ You can then generate text as follows:
 ```python
 from transformers import pipeline
 
-generator = pipeline("text-generation", model="Hermi2023//mnt/hdd0/home/buwei/sample/tmp/
+generator = pipeline("text-generation", model="Hermi2023//mnt/hdd0/home/buwei/sample/tmp/tmp5nx4qcp3/Hermi2023/doc2query-ppo-msmarco-43520-121")
 outputs = generator("Hello, my llama is cute")
 ```
@@ -34,8 +34,8 @@ If you want to use the model for training or to obtain the outputs from the valu
 from transformers import AutoTokenizer
 from trl import AutoModelForCausalLMWithValueHead
 
-tokenizer = AutoTokenizer.from_pretrained("Hermi2023//mnt/hdd0/home/buwei/sample/tmp/
-model = AutoModelForCausalLMWithValueHead.from_pretrained("Hermi2023//mnt/hdd0/home/buwei/sample/tmp/
+tokenizer = AutoTokenizer.from_pretrained("Hermi2023//mnt/hdd0/home/buwei/sample/tmp/tmp5nx4qcp3/Hermi2023/doc2query-ppo-msmarco-43520-121")
+model = AutoModelForCausalLMWithValueHead.from_pretrained("Hermi2023//mnt/hdd0/home/buwei/sample/tmp/tmp5nx4qcp3/Hermi2023/doc2query-ppo-msmarco-43520-121")
 
 inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
 outputs = model(**inputs, labels=inputs["input_ids"])
````
pytorch_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:7eebeed948c902c1395f1a93ac6857e73265c56353a737e518e64cd65b35636d
 size 891706649
```
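The `pytorch_model.bin` entry above is not the 891 MB weights file itself but a Git LFS pointer: three key/value lines recording the spec version, the SHA-256 oid of the blob, and its byte size, with the actual payload stored in LFS. As a minimal sketch of that format (the `parse_lfs_pointer` helper is hypothetical, not part of any library; the pointer text is copied from the new side of the diff):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content as committed in this push (new side of the diff above).
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7eebeed948c902c1395f1a93ac6857e73265c56353a737e518e64cd65b35636d
size 891706649
"""

fields = parse_lfs_pointer(pointer)
print(fields["oid"])   # the sha256 oid recorded by the commit
print(fields["size"])  # 891706649 (bytes)
```

Only the oid line changed in this commit, which is why the diff shows a one-line `-`/`+` pair: a new weights blob was uploaded, while the spec version and (coincidentally) the file size stayed the same.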