# AL40b-dev Model Card
AL40b-dev is a highly multilingual, aligned model with 40B parameters.
**DISCLAIMER:** This model is an experimental version and is provided for research purposes only. Its use is subject to the terms of the research-only license governing the data used in its post-training, which prohibits commercial use. Access is not public and is currently restricted to select partners.
## How to use
The instruction-following models use the commonly adopted ChatML template:
```jinja
{%- if messages[0]['role'] == 'system' %}{%- set system_message = messages[0]['content'] %}{%- set loop_messages = messages[1:] %}{%- else %}{%- set system_message = 'SYSTEM MESSAGE' %}{%- set loop_messages = messages %}{%- endif %}{%- if not date_string is defined %}{%- set date_string = '2024-09-30' %}{%- endif %}{{ '<|im_start|>system\n' + system_message + '<|im_end|>\n' }}{% for message in loop_messages %}{%- if (message['role'] != 'user') and (message['role'] != 'assistant')%}{{ raise_exception('Only user and assistant roles are supported after the initial optional system message.') }}{% endif %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}
```
where `system_message` is used to guide the model during generation and `date_string` can be set so that the model can respond with the current date. If `date_string` is not supplied, the template falls back to the default value `'2024-09-30'`.
The exact same chat template should be used for the best conversational experience. The easiest way to apply it is with the tokenizer's built-in `apply_chat_template` method, as shown in the following snippet.
```python
from datetime import datetime

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BSC-LT/AL40b-dev"
text = "At what temperature does water boil?"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Build a single-turn conversation and render it with the chat template.
message = [{"role": "user", "content": text}]
date_string = datetime.today().strftime("%Y-%m-%d")

prompt = tokenizer.apply_chat_template(
    message,
    tokenize=False,
    add_generation_prompt=True,
    date_string=date_string,
)

# Tokenize without adding special tokens (the template already includes them).
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
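The same approach extends to multi-turn conversations with a custom system message. Below is a minimal sketch reusing the `tokenizer`, `model`, and `date_string` from the snippet above; the system prompt text is an illustrative placeholder, not an officially recommended prompt for this model.

```python
# Minimal multi-turn sketch; the system prompt is an illustrative placeholder.
messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "At what temperature does water boil?"},
    {"role": "assistant", "content": "At sea level, water boils at 100 °C (212 °F)."},
    {"role": "user", "content": "And on top of Mount Everest?"},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    date_string=date_string,
)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the template requires roles to strictly alternate user/assistant after the optional system message, and raises an exception otherwise.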
Using this template, each turn is preceded by the `<|im_start|>` delimiter and the role of the entity (either `user`, for content supplied by the user, or `assistant`, for LLM responses), and ends with the `<|im_end|>` token.
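For illustration, the single-turn snippet above renders to a prompt along these lines (when no system message is supplied, the template falls back to its `'SYSTEM MESSAGE'` placeholder):

```
<|im_start|>system
SYSTEM MESSAGE<|im_end|>
<|im_start|>user
At what temperature does water boil?<|im_end|>
<|im_start|>assistant
```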