# 🤖 smolified-extractor
Intelligence, Distilled.
This is a Domain Specific Language Model (DSLM) generated by the Smolify Foundry.
It has been synthetically distilled from SOTA reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) or in low-VRAM environments.
## 📦 Asset Details
- Origin: Smolify Foundry (Job ID: 13d088d3)
- Architecture: DSLM-Micro (270M Parameter Class)
- Training Method: Proprietary Neural Distillation
- Optimization: 4-bit Quantized / FP16 Mixed (see the loading sketch after this list)
- Dataset: Link to Dataset
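
Because the asset targets 4-bit weights with FP16 compute, one option for running it in a low-VRAM environment is 4-bit loading through `bitsandbytes` in Transformers. The snippet below is a minimal sketch under that assumption; the quantization settings shown are illustrative defaults, not an official recipe from this card.

```python
# Minimal low-VRAM loading sketch. Assumes the `bitsandbytes` package is installed;
# the exact quantization settings are illustrative, not the card's official recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "smolify/smolified-extractor"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # keep weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # run compute in FP16 (mixed precision)
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```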
## 🚀 Usage (Inference)
This model is compatible with standard inference backends such as Hugging Face Transformers (shown below) and vLLM (see the sketch after the example).
````python
# Example: Running your Sovereign Model
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "smolify/smolified-extractor"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A structured-extraction prompt: classify the user's message and return JSON.
messages = [
    {'role': 'system', 'content': '''Your task is to classify the user's input into one of the following categories: 'Booking Request', 'Cancellation', 'Inquiry', or 'Complaint'. Provide a confidence score for your classification. Use the following JSON schema: ```json { "type": "object", "properties": { "category": { "type": "string", "enum": ["Booking Request", "Cancellation", "Inquiry", "Complaint"] }, "confidence_score": { "type": "number", "minimum": 0, "maximum": 1 } }, "required": ["category", "confidence_score"] } ```'''},
    {'role': 'user', 'content': '''I need to reserve a table for two for tonight at 7 PM under 'Smith'.'''},
]

# Build the prompt text; strip the template's leading <bos> so it is not
# duplicated when the text is tokenized again below.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
).removeprefix('<bos>')

# Stream the generated JSON classification to stdout.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=1000,
    do_sample=True,
    temperature=1.0, top_p=0.95, top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
````
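
Since the card lists vLLM as a compatible backend, the sketch below shows how the same `messages` list could be run through vLLM's offline `LLM.chat` API. It assumes a recent vLLM release that ships the `chat` helper; treat it as an illustration rather than an officially supported path.

```python
# Hypothetical vLLM sketch (not an official recipe); assumes a recent vLLM release
# that provides the LLM.chat helper. Reuses the `messages` list defined above.
from vllm import LLM, SamplingParams

llm = LLM(model="smolify/smolified-extractor")
params = SamplingParams(temperature=1.0, top_p=0.95, top_k=64, max_tokens=1000)

outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)  # should contain the JSON classification
```

Because the system prompt requests a JSON object, the returned text can then be passed through `json.loads` (with error handling, since output formatting is not guaranteed) before downstream use.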
## ⚖️ License & Ownership
These model weights are a sovereign asset owned by smolify. Generated via Smolify.ai.
