# Model Card for alexvumnov/yaml_completion_8bit

A model fine-tuned to autocomplete YAML files. This is an 8-bit quantized version of https://huggingface.co/alexvumnov/yaml_completion.
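The card does not document how the quantization was performed. As an illustration only (not necessarily the procedure used for this checkpoint), on-the-fly 8-bit quantization of the full-precision base model with `bitsandbytes` typically looks like this:

```python
# Illustration only: load the full-precision base model in 8 bit with bitsandbytes.
# Requires the bitsandbytes and accelerate packages.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_8bit = AutoModelForCausalLM.from_pretrained(
    "alexvumnov/yaml_completion",                               # full-precision base model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights at load time
    device_map="auto",
)
```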

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

## Uses

The model expects a specific prompt format, so use it for best performance:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained("alexvumnov/yaml_completion")
tokenizer = AutoTokenizer.from_pretrained("alexvumnov/yaml_completion", padding_side='left')

prompt_format = """
# Here's a yaml file to offer a completion for
# Lines after the current one
{text_after}
# Lines before the current one
{text_before}
# Completion:
"""

input_prefix = """
name: my_awesome_env
dependencies:

"""

input_suffix = ""

generator = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda')

generator(prompt_format.format(text_after=input_suffix, text_before=input_prefix), max_new_tokens=64)

# [{'generated_text': "\n# Here's a yaml file to offer a completion for\n# Lines after the current one\n\n# Lines before the current one\n\nname: my_awesome_env\ndependencies:\n\n\n# Completion:\n- deploy"}]
```
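The pipeline returns the prompt together with the generated continuation. A small post-processing sketch (illustrative, not part of the original example) keeps only the text after the `# Completion:` marker:

```python
# Illustrative helper: keep only the model's suggestion after the "# Completion:" marker.
def extract_completion(generated_text: str) -> str:
    marker = "# Completion:"
    return generated_text.rsplit(marker, 1)[-1].strip()

outputs = generator(prompt_format.format(text_after=input_suffix, text_before=input_prefix),
                    max_new_tokens=64)
print(extract_completion(outputs[0]['generated_text']))  # e.g. "- deploy"
```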