
SingularitySynth-12B
At the heart of nothing, something waits.
A silence dense enough to break light, where all directions lead inward and time folds like paper.
Thought does not escape; it only deepens.
This is not destruction but compression: meaning falling inward until it becomes something else entirely.
🧠 Recommended Sampling Settings:
- Temperature: 0.75 to 1.25
- Min P: 0.035
- Context Length: stable at 12k tokens, with possible support for extended contexts
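These settings map directly onto the Transformers `generate` API. Below is a minimal sketch, assuming a recent transformers release (`min_p` is only available as a sampling parameter in newer versions) and the repository ID from the Usage section:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Marcjoni/SingularitySynth-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a ChatML prompt via the model's bundled chat template.
messages = [{"role": "user", "content": "Your question here."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=1.0,  # recommended range: 0.75 to 1.25
    min_p=0.035,      # recommended Min P; requires a recent transformers release
    max_new_tokens=256,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```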
💬 Prompt Format
Supports ChatML-style messages. Example:

```
<|im_start|>user
Your question here.
<|im_end|>
<|im_start|>assistant
```
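For reference, a complete single-turn exchange looks like the following. The system message is an illustrative assumption: ChatML supports one, but the card itself only shows user and assistant turns.

```
<|im_start|>system
You are a helpful assistant.
<|im_end|>
<|im_start|>user
Your question here.
<|im_end|>
<|im_start|>assistant
The model's reply, ending with the same end-of-turn token.
<|im_end|>
```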
SingularitySynth-12B is a merge of the following models using LazyMergekit:
- yamatazen/EtherealAurora-12B-v2
- DreadPoor/Irix-12B-Model_Stock (base model)
🧩 Configuration

```yaml
merge_method: ties
base_model: DreadPoor/Irix-12B-Model_Stock
models:
  - model: yamatazen/EtherealAurora-12B-v2
    parameters:
      weight: 0.45
      density: 0.55
parameters:
  normalize: false
  int8_mask: false
dtype: bfloat16
layer_parameters:
  - filter: "attn"
    sources:
      - model: Irix
        weight: 0.9
      - model: Aurora
        weight: 0.1
  - filter: "mlp"
    sources:
      - model: Aurora
        weight: 0.7
      - model: Irix
        weight: 0.3
  - filter: "embed_tokens"
    sources:
      - model: Irix
        weight: 1.0
```
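To reproduce the merge locally, one option is mergekit's Python entry point. This is a sketch, assuming the recipe above is saved as `config.yaml`; the equivalent CLI call would be `mergekit-yaml config.yaml ./SingularitySynth-12B --copy-tokenizer`.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged weights (and a copy of the tokenizer) to the output path.
run_merge(
    merge_config,
    out_path="./SingularitySynth-12B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```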
💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Marcjoni/SingularitySynth-12B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=1.0, top_k=0, top_p=1.0)
print(outputs[0]["generated_text"])
```
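The snippet above uses neutral sampling values (temperature=1.0, top_k=0, top_p=1.0); for everyday use, the recommended settings at the top of this card (temperature 0.75 to 1.25, Min P 0.035) are a better starting point.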