SLIM-BOOLEAN

slim-boolean is an experimental model designed to implement a boolean question-answering function call using a 2.7B parameter specialized model. As input, the model takes a context passage, a yes/no question, and an optional (explain) parameter; as output, it generates a Python dictionary with two keys: 'answer', which contains the 'yes/no' classification, and 'explanation', which provides a text snippet from the passage that was the basis for the classification, e.g.:

    {'answer': ['yes'], 'explanation': ['the results exceeded expectations by 3%'] }

This model is fine-tuned on top of llmware/bling-stable-lm-3b-4e1t-v0, which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.

For fast inference, we recommend using the quantized 'tool' version, e.g., 'slim-boolean-tool'.

Prompt format:

function = "boolean"
params = "{insert yes-no-question} (explain)"
prompt = "<human> " + {text} + "\n" +
                      "<{function}> " + {params} + "</{function}>" + "\n<bot>:"

Transformers Script

import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-boolean")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-boolean")

function = "boolean"
params = "did tesla stock price increase? (explain) "

text = "Tesla stock declined yesterday 8% in premarket trading after a poorly-received event in San Francisco yesterday, in which the company indicated a likely shortfall in revenue."  

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)  

# here's the fun part - convert the llm string output into a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)

Using as Function Call in LLMWare
from llmware.models import ModelCatalog
slim_model = ModelCatalog().load_model("llmware/slim-boolean")
response = slim_model.function_call(text,params=["did the stock price increase? (explain)"], function="boolean")

print("llmware - llm_response: ", response)

Model Card Contact

Darren Oberst & llmware team

Join us on Discord
