We used the Hugging Face transformers library to recreate the TinyStories models on a consumer GPU, using the GPT-2 architecture instead of the GPT-Neo architecture originally used in the paper (https://arxiv.org/abs/2305.07759). The resulting model is about 15 MB and has roughly 3 million parameters.
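To check the parameter count on your own machine, you can load the checkpoint and inspect it; this is a small illustrative snippet, and the config values it prints come from the hub, not from this card.
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("segestic/Tinystories-gpt-0.1-3m")
# Print the GPT-2 config (layer count, hidden size, etc.) pulled from the hub
print(model.config)
# Count the parameters to verify the ~3 million figure
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")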
------ EXAMPLE USAGE 1 ------
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("segestic/Tinystories-gpt-0.1-3m")
model = AutoModelForCausalLM.from_pretrained("segestic/Tinystories-gpt-0.1-3m")
prompt = "Once upon a time there was"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Generate the completion
output = model.generate(input_ids, max_length=1000, num_beams=1)
# Decode the completion
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
# Print the generated text
print(output_text)
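The example above uses greedy decoding (num_beams=1). For more varied stories, the standard transformers sampling arguments can be passed to generate; the values below are only illustrative and have not been tuned for this model.
# Sample a completion instead of decoding greedily
output = model.generate(
    input_ids,
    max_length=1000,
    do_sample=True,    # enable sampling
    temperature=0.8,   # illustrative value, not tuned for this model
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))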
------ EXAMPLE USAGE 2 ------
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="segestic/Tinystories-gpt-0.1-3m")
# Prompt
prompt = "where is the little girl"
# Generate the completion
output = pipe(prompt, max_length=1000, num_beams=1)
# Extract the generated text
generated_text = output[0]['generated_text']
# Print the generated text
print(generated_text)
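The model is small enough to run comfortably on CPU, but if a GPU is available you can pass a device to the pipeline. This is optional; the max_length below is arbitrary.
import torch
from transformers import pipeline
# device=0 selects the first CUDA GPU; -1 (the default) keeps the pipeline on CPU
pipe = pipeline(
    "text-generation",
    model="segestic/Tinystories-gpt-0.1-3m",
    device=0 if torch.cuda.is_available() else -1,
)
print(pipe("Once upon a time there was", max_length=200)[0]["generated_text"])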