tingyuansen committed (verified)
Commit a5e286c · 1 Parent(s): f24d417

Update README.md

Files changed (1): README.md (+31 -1)
README.md CHANGED
@@ -37,7 +37,37 @@ AstroLLaMA-3-8B is a specialized base language model for astronomy, developed by
 
 ## Generating text from a prompt
 
-[The code example remains the same as in the previous version]
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+
+# Load the model and tokenizer
+tokenizer = AutoTokenizer.from_pretrained("AstroMLab/astrollama-2-7b-base_aic")
+model = AutoModelForCausalLM.from_pretrained("AstroMLab/astrollama-2-7b-base_aic", device_map="auto")
+
+# Create the pipeline with explicit truncation
+from transformers import pipeline
+generator = pipeline(
+    "text-generation",
+    model=model,
+    tokenizer=tokenizer,
+    device_map="auto",
+    truncation=True,
+    max_length=512
+)
+
+# Example prompt from an astronomy paper
+prompt = "In this letter, we report the discovery of the highest redshift, " \
+         "heavily obscured, radio-loud QSO candidate selected using JWST NIRCam/MIRI, " \
+         "mid-IR, sub-mm, and radio imaging in the COSMOS-Web field. "
+
+# Set seed for reproducibility
+torch.manual_seed(42)
+
+# Generate text
+generated_text = generator(prompt, do_sample=True)
+print(generated_text[0]['generated_text'])
+```
 
 ## Model Limitations and Biases
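The committed example calls `torch.manual_seed(42)` before generating with `do_sample=True`: seeding the RNG is what makes a sampled continuation reproducible, since the same seed yields the same sequence of sampled token ids. A minimal stdlib sketch of that effect (Python's `random` module standing in for torch's generator; the 4-token distribution here is made up for illustration):

```python
import random

# A made-up next-token distribution over a toy 4-token vocabulary.
weights = [0.1, 0.2, 0.3, 0.4]

# Seed, then sample 5 "tokens" in sequence.
random.seed(42)
first_run = [random.choices(range(4), weights=weights)[0] for _ in range(5)]

# Re-seeding replays the exact same sampling path.
random.seed(42)
second_run = [random.choices(range(4), weights=weights)[0] for _ in range(5)]

assert first_run == second_run
print(first_run)
```

Without the re-seed, a second call would in general draw a different sequence, which is why the README's example fixes the seed before invoking the generator.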