---
library_name: transformers
tags:
- text-generation-inference
- code
- math
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Stablelm-3b-abliterated
Stablelm-3b-abliterated is a multilingual large language model (LLM) designed for text-based generative AI applications. It is a 3-billion-parameter model optimized for dialogue-based interactions, including summarization, retrieval-augmented generation, and creative writing. The model is based on the `StableLmForCausalLM` architecture and is instruction-tuned to handle a variety of conversational and agentic tasks.
## Features
- Multilingual Capabilities: Supports multiple languages for diverse use cases.
- Optimized for Dialogue: Trained for natural, context-aware conversation.
- Instruction-Tuned: Fine-tuned for task-specific instructions and prompt adherence.
- Lightweight & Efficient: Designed for fast inference with an optimized transformer-based architecture.
- Agentic Retrieval & Summarization: Performs well in knowledge retrieval and text summarization tasks.
## Installation & Setup
Ensure you have the latest version of `transformers` installed:

```bash
pip install --upgrade transformers
```
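If you want to confirm the environment before loading the model, you can check the installed `transformers` version and whether a GPU is visible. This is an optional sanity check, not part of the model's own setup:

```python
import torch
import transformers

# Report the installed transformers version and CUDA availability
print(transformers.__version__)
print(torch.cuda.is_available())
```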
## Usage with Transformers
You can load and use the model via the `transformers` library:
```python
import torch
from transformers import pipeline

model_id = "stabilityai/Stablelm-3b-abliterated"

# Load the model in bfloat16 with automatic device placement
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input: a system prompt followed by a user question
messages = [
    {"role": "system", "content": "You are a scientific assistant who provides precise, well-researched answers."},
    {"role": "user", "content": "Explain quantum entanglement in simple terms."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The pipeline returns the full conversation; the last entry is the assistant's reply
print(outputs[0]["generated_text"][-1])
```
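For finer-grained control over tokenization and decoding, the same chat flow can be reproduced with `AutoTokenizer` and `AutoModelForCausalLM`. This is a minimal sketch, assuming the repository ships a chat template in its tokenizer config; the sampling settings are illustrative rather than recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/Stablelm-3b-abliterated"

# Load the tokenizer and model directly
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a scientific assistant who provides precise, well-researched answers."},
    {"role": "user", "content": "Explain quantum entanglement in simple terms."},
]

# Format the conversation with the tokenizer's chat template (assumed to be present)
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a reply; temperature/top_p are illustrative sampling choices
output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```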
## Intended Use
### Primary Applications
- Conversational AI: Virtual assistants, chatbots, and interactive AI systems.
- Content Generation: Creative writing, storytelling, and ideation.
- Knowledge Retrieval: Summarization and information extraction from large datasets.
- Code Assistance: Generating code snippets and debugging suggestions (see the example after this list).
- Multilingual NLP: Applications requiring language understanding across multiple languages.
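As a sketch of the code-assistance use case, the chat pipeline shown above can be prompted for a code snippet; the prompt itself is a hypothetical example:

```python
import torch
from transformers import pipeline

# Same pipeline setup as in the usage example above
pipe = pipeline(
    "text-generation",
    model="stabilityai/Stablelm-3b-abliterated",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical code-assistance prompt
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

outputs = pipe(messages, max_new_tokens=256)

# The last message in the returned conversation holds the assistant's reply
print(outputs[0]["generated_text"][-1]["content"])
```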
### Limitations
- Not Suitable for Real-Time Decision-Making: Should not be used where human safety is critical.
- May Produce Incorrect or Biased Outputs: Like all LLMs, its outputs reflect the limitations and biases of its training data.
- Requires Computational Resources: While optimized for efficiency, it still needs a GPU for fast inference.