
LwQ-10B-Instruct

LwQ-10B-Instruct (Llama with Questions) is built on the Llama 3.1 collection of multilingual large language models (LLMs), a family of pre-trained and instruction-tuned generative models optimized for multilingual dialogue use cases that outperform many available open-source alternatives.

Model Architecture: Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions undergo supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to better align with human preferences for helpfulness and safety. LwQ-10B is trained on synthetic reasoning datasets for mathematical reasoning and context-based problem-solving, with a focus on following instructions or keywords embedded in the input.

Use with transformers

With transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function; both approaches are shown below.

Make sure to update your transformers installation via pip install --upgrade transformers.

import transformers
import torch

model_id = "prithivMLmods/LwQ-10B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
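
For finer control over tokenization and decoding, the same conversation can also be run through the Auto classes with generate(), as mentioned above. The sketch below mirrors the pipeline example (same prompt and generation settings):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/LwQ-10B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Format the chat with the model's chat template and tokenize
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens (the assistant's reply)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))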

Intended Use

  1. Multilingual Conversational Agents:
    LwQ-10B-Instruct is well-suited for building multilingual chatbots and virtual assistants, providing accurate and context-aware responses in various languages.

  2. Instruction-Following Applications:
    The model is ideal for tasks where adherence to specific instructions is critical, such as task automation, guided workflows, and structured content generation.

  3. Mathematical and Logical Reasoning:
    Trained on synthetic reasoning datasets, LwQ-10B can handle mathematical problem-solving, logical reasoning, and step-by-step explanations, making it suitable for education platforms and tutoring systems (a short example follows this list).

  4. Contextual Problem-Solving:
    The model is optimized for solving contextually rich problems by understanding and processing inputs with embedded instructions or keywords, making it useful for complex decision-making and recommendation systems.

  5. Content Creation and Summarization:
    LwQ-10B can generate high-quality content, including articles, reports, and summaries, across different languages and domains.
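
As an illustration of the reasoning use case in item 3, the pipeline defined earlier can be reused with a step-by-step prompt. The system prompt and question below are hypothetical examples chosen for this card, not prompts from the model's training setup:

messages = [
    {"role": "system", "content": "You are a helpful tutor. Explain your reasoning step by step."},
    {"role": "user", "content": "A train travels 180 km in 2.5 hours. What is its average speed in km/h?"},
]

outputs = pipeline(messages, max_new_tokens=512)

# The final message in generated_text is the assistant's step-by-step answer
print(outputs[0]["generated_text"][-1]["content"])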

Limitations

  1. Limited Context Window:
    The model has a finite context length, which may affect its ability to handle tasks requiring extensive context or long conversations effectively.

  2. Performance Variability Across Languages:
    While it supports multiple languages, performance may vary, with higher accuracy in languages that are better represented in the training data.

  3. Accuracy in Complex Reasoning:
    Despite being trained on reasoning datasets, the model may occasionally produce incorrect or incomplete answers for highly complex or multi-step reasoning tasks.

  4. Bias and Ethical Risks:
    Since the model is trained on large datasets from diverse sources, it may exhibit biases present in the training data, potentially leading to inappropriate or biased outputs.

  5. Dependency on Clear Instructions:
    The model’s ability to generate accurate outputs relies heavily on the clarity and specificity of user instructions. Ambiguous or vague instructions may result in suboptimal responses.

  6. Resource Requirements:
    As a large language model with 10 billion parameters, it requires significant computational resources for both training and inference, limiting its deployment in low-resource environments (a quantized-loading sketch follows this list).

  7. Lack of Real-Time Understanding:
    LwQ-10B lacks real-time understanding of current events or data beyond its training, so it may not provide accurate responses for highly recent or dynamic information.
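
Regarding the resource requirements noted in item 6, one common mitigation is loading the weights in 4-bit precision with bitsandbytes. The snippet below is a minimal sketch assuming bitsandbytes is installed and a CUDA GPU is available; it is a generic quantization recipe, not a configuration validated for this model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "prithivMLmods/LwQ-10B-Instruct"

# 4-bit quantization roughly quarters the memory needed to hold the weights
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)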

Model size: 10.7B params · Tensor type: BF16 (Safetensors)
