[Curiosity MARS model logo]

MARS-v0.2

MARS-v0.2 is the second iteration of Curiosity Technology models, built on the foundation of Llama 3.1 8B. This version expands on the initial MARS model by fine-tuning it on a more comprehensive dataset, with an increased emphasis on mathematical data to strengthen its reasoning and problem-solving capabilities.

We've continued our commitment to Turkish language processing, utilizing both in-house Turkish datasets and a broader selection of translated open-source datasets. We believe this version will serve the community with even more versatility and depth.

MARS-v0.2 was trained for 3 days on 4×A100 GPUs.

Model Details

  • Base Model: Meta Llama 3.1 8B Instruct
  • Training Dataset: In-house & Translated Open Source Turkish Datasets
  • Training Method: LoRA fine-tuning (see the configuration sketch below)
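
The exact LoRA hyperparameters have not been published, but the sketch below shows what such a setup typically looks like with the peft library. The rank, alpha, dropout, and target modules are illustrative assumptions, not the values used to train MARS-v0.2.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model that MARS-v0.2 builds on.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct"
)

# Illustrative LoRA configuration; the actual training values are assumptions.
lora_config = LoraConfig(
    r=16,                      # assumed adapter rank
    lora_alpha=32,             # assumed scaling factor
    lora_dropout=0.05,         # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable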

How to use

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the generate() function. Let's see examples of both.

Transformers pipeline

import transformers
import torch

model_id = "curiositytech/MARS-v0.2"

# Load the model in bfloat16 and map layers automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # System prompt (Turkish): "You are a pirate chatbot that talks like a pirate!"
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    # User message (Turkish): "Who are you?"
    {"role": "user", "content": "Sen kimsin?"},
]

# Stop generation at either the standard EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# generated_text holds the whole conversation; the last element is the assistant's reply.
print(outputs[0]["generated_text"][-1])
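
Because the pipeline takes the full message list as input, you can keep the conversation going by appending the assistant's reply and a new user turn before calling the pipeline again. A minimal sketch; the follow-up question is only an illustrative example:

# Append the assistant's reply, then add a new user turn.
messages.append(outputs[0]["generated_text"][-1])
# Follow-up question (Turkish): "Where do you hide your treasure?"
messages.append({"role": "user", "content": "Hazineni nerede saklıyorsun?"})

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1]["content"])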

Transformers AutoModelForCausalLM

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "curiositytech/MARS-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the model in bfloat16 and map layers automatically across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # System prompt (Turkish): "You are a pirate chatbot that talks like a pirate!"
    {"role": "system", "content": "Sen korsan gibi konuşan bir korsan chatbotsun!"},
    # User message (Turkish): "Who are you?"
    {"role": "user", "content": "Sen kimsin?"},
]

# Build the prompt with the model's chat template and append the generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop generation at either the standard EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# Strip the prompt tokens and decode only the newly generated reply.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
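
If you would rather see tokens as they are produced instead of waiting for the full completion, generate() also accepts a streamer. A minimal sketch using the stock TextStreamer from transformers:

from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    streamer=streamer,
)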