TokenButler

The collection of TokenButler models can be found here. To run the DeepSeek-R1-Distill-Llama-8B model, use the following:

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

question = "If millionaires have butlers, why don't million dollar language models have a butler too? I think its because "

# Load the TokenButler checkpoint; trust_remote_code is required for the custom attention modules.
model_name = "akhauriyash/DeepSeek-R1-Distill-Llama-8B-Butler"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate a continuation with standard sampling settings.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
response = generator(question, max_new_tokens=200, do_sample=True, top_p=0.95, temperature=0.7)

# Print only the newly generated text (strip the prompt).
print(response[0]['generated_text'][len(question):])

Note that the default configured sparsity is 50%, with a sliding window of 128 tokens and 8 anchor tokens. To change the sparsity, use the following function after loading the model. At the moment, 'fixed' is the only supported strategy: it fixes the sparsity of every layer (except the first) at the given percentage. This function can also be found in test_hf.py. The sliding window and anchor tokens can be changed in a similar manner; see the sketch after the snippet below.

def set_sparsity(model, sparsity):
    # Update every TokenButler attention module (class name contains "AttentionExperimental").
    for module in model.modules():
        if "AttentionExperimental" in module.__class__.__name__:
            module.token_sparse_method = sparsity
            module.set_token_sparsity()
    return model

model = set_sparsity(model, "fixed_60pc")
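
The sliding window and anchor-token counts live on the same attention modules, so they can be updated with the same traversal pattern. A minimal sketch is shown below; the attribute names sliding_window and num_anchor_tokens are assumptions for illustration, so check the module definitions shipped with the checkpoint for the exact names.

def set_window_and_anchors(model, sliding_window, num_anchor_tokens):
    # Hypothetical helper: the attribute names below are assumed, not confirmed by the model card.
    for module in model.modules():
        if "AttentionExperimental" in module.__class__.__name__:
            module.sliding_window = sliding_window          # assumed attribute name
            module.num_anchor_tokens = num_anchor_tokens    # assumed attribute name
    return model

model = set_window_and_anchors(model, sliding_window=128, num_anchor_tokens=8)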

Predictor Architecture

[Figure: TokenButler predictor architecture]

Custom Synthetic Task

[Figure: Custom synthetic task results]