Buy me a coffee if you like this project ;)

<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGML format model files for [likenneth/honest_llama2_chat_7B](https://huggingface.co/likenneth/honest_llama2_chat_7B/tree/main).
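GGML is the quantized tensor format used by llama.cpp-style runtimes, so these files can run on CPU (with optional GPU offload) through libraries such as `ctransformers`.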

### Inference

```python
from ctransformers import AutoModelForCausalLM

# Placeholders: the local directory (or Hub repo id) holding the GGML
# weights, and the name of the quantized model file inside it.
output_dir = "PATH_TO_GGML_DIR"
ggml_file = "GGML_FILENAME"

llm = AutoModelForCausalLM.from_pretrained(
    output_dir,
    model_file=ggml_file,
    gpu_layers=32,
    model_type="llama",
)

manual_input: str = "Tell me about your last dream, please."

print(llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7))
```
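`gpu_layers` controls how many transformer layers `ctransformers` offloads to the GPU; set it to `0` (the default) for CPU-only inference.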

### Original model card

---
license: mit
---

Ever wondered about a less hallucinating LLaMA-2? Using the inference-time intervention (ITI) discussed in my recent preprint (https://arxiv.org/pdf/2306.03341.pdf), I baked the intervention learned from TruthfulQA into a LLaMA-2 7B model.
I don't have a big enough GPU to bake ITI into the larger LLaMA-2 models, but the code to do so is all released at https://github.com/likenneth/honest_llama. Let me know if you are interested in doing that :)
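For intuition, here is a minimal sketch of what an ITI-style intervention looks like mechanically. This is not the released implementation: the layer index, direction, and strength `alpha` below are random placeholders, whereas ITI learns per-head directions from linear probes trained on TruthfulQA, and the released model folds the shift into the weights rather than using a hook.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative base model (gated on the Hub); any LLaMA-style model works.
name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).cuda()

# Hypothetical intervention parameters: ITI learns directions for selected
# attention heads from TruthfulQA probes; here the direction is random.
alpha = 15.0   # intervention strength
layer_idx = 10
direction = torch.randn(model.config.hidden_size).to(model.device, model.dtype)
direction = direction / direction.norm()

def iti_hook(module, inputs, output):
    # output[0] holds the attention output (batch, seq_len, hidden_size);
    # shift it along the "truthful" direction at every decoding step.
    return (output[0] + alpha * direction,) + tuple(output[1:])

handle = model.model.layers[layer_idx].self_attn.register_forward_hook(iti_hook)

prompt = "I ate a cherry seed. Will a cherry tree grow in my stomach?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
generated = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(generated[0, input_ids.shape[-1]:], skip_special_tokens=True))

handle.remove()  # remove the hook to restore the unmodified model
```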
You can load the baked model and play around with it starting from the snippet below:
41
+
42
+ ```python
43
+ import torch
44
+ from pprint import pprint
45
+ from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM
46
+
47
+ model_name_new = "likenneth/honest_llama2_chat_7B"
48
+ tokenizer_new = AutoTokenizer.from_pretrained(model_name_new, trust_remote_code=True)
49
+ model_new = AutoModelForCausalLM.from_pretrained(model_name_new, low_cpu_mem_usage = True, torch_dtype=torch.float16, trust_remote_code=True)
50
+ _ = model_new.cuda()
51
+
52
+ q = "I ate a cherry seed. Will a cherry tree grow in my stomach?"
53
+ encoded_new = tokenizer_new(q, return_tensors = "pt")["input_ids"]
54
+ generated_new = model_new.generate(encoded_new.cuda())[0, encoded_new.shape[-1]:]
55
+ decoded_new = tokenizer_new.decode(generated_new, skip_special_tokens=True).strip()
56
+ pprint(decoded_new)
57
+ ```
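Note that `generate` stops after only a handful of new tokens by default; pass e.g. `max_new_tokens=128` for a fuller answer.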