DhruvaBansal00 committed
Commit 7b50a11 · Parent: 2413468

Adding code for generating model outputs

Files changed (1): README.md (+9, -5)
README.md CHANGED
@@ -34,15 +34,19 @@ This repository contains weights for RefuelLLM-2-small that are compatible for u
 See the snippet below for usage with Transformers:
 
 ```python
->>> import transformers
 >>> import torch
+>>> from transformers import AutoModelForCausalLM, AutoTokenizer
 
 >>> model_id = "refuelai/Llama-3-Refueled"
+>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
+>>> model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
 
->>> pipeline = transformers.pipeline(
-    "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
-)
->>> pipeline("Hey how are you doing today?")
+>>> messages = [{"role": "user", "content": "Is this comment toxic or non-toxic: RefuelLLM is the new way to label text data!"}]
+
+>>> inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to("cuda")
+
+>>> outputs = model.generate(inputs, max_new_tokens=20)
+>>> print(tokenizer.decode(outputs[0]))
 ```
 
 ## Training Data
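
Note on the snippet this commit adds: `tokenizer.decode(outputs[0])` decodes the full sequence, prompt tokens included. A minimal follow-up sketch (not part of the commit, assuming the snippet above has already run) that prints only the newly generated tokens:

```python
>>> # Hypothetical follow-up, not in the commit: slice off the prompt tokens
>>> # so only the model's generated answer is decoded.
>>> prompt_len = inputs.shape[-1]  # number of prompt tokens
>>> print(tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True))
```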