~ Feel the vibes 🪲 ~
Pinecone-sage-24b
Recommended SillyTavern (ST) preset for RP:
☕ Support My Work
If you like my work, consider buying me a coffee to support future merges, GPU time, and experiments.
This is a merge of pre-trained language models created using mergekit.
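The merged weights load like any other Mistral-family causal LM through transformers. A minimal sketch of standard usage; the prompt and generation settings below are placeholders, and the dtype matches the merge's bfloat16 output:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Entropicengine/Pinecone-sage-24b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",
)

prompt = "Write a short scene set in a pine forest."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```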
Merge Details
Merge Method
This model was merged using the DARE TIES merge method, with darkc0de/XortronCriminalComputingConfig as the base.
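For intuition: DARE TIES first sparsifies each model's delta against the base (DARE: randomly drop entries with probability 1 − density, rescale survivors by 1/density), then resolves per-parameter sign conflicts between models by majority vote before summing (TIES). The sketch below is an illustrative simplification for a single parameter tensor, not mergekit's actual implementation; the function name is hypothetical, and the real dare_ties can additionally normalize by the weights of the agreeing models:

```python
import torch

def dare_ties(base: torch.Tensor,
              model_weights: list[torch.Tensor],
              densities: list[float],
              weights: list[float]) -> torch.Tensor:
    """Merge one parameter tensor from several fine-tunes onto a base tensor."""
    weighted_deltas = []
    for w_model, density, weight in zip(model_weights, densities, weights):
        delta = w_model - base  # task vector relative to the base model
        # DARE: drop each entry with probability (1 - density), rescale
        # survivors by 1/density so the expected delta is unchanged.
        mask = torch.bernoulli(torch.full_like(delta, density))
        weighted_deltas.append(weight * delta * mask / density)

    # TIES sign election: the majority sign of the summed weighted deltas.
    elected_sign = torch.stack(weighted_deltas).sum(dim=0).sign()

    # Keep only the contributions whose sign agrees with the elected sign.
    merged = base.clone()
    for wd in weighted_deltas:
        merged += wd * (wd.sign() == elected_sign)
    return merged
```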
Models Merged
The following models were included in the merge:
- Entropicengine/DarkTriad-24b
- Entropicengine/Trifecta-Max-24b
Configuration
The following YAML configuration was used to produce this model:
base_model: darkc0de/XortronCriminalComputingConfig
chat_template: auto
merge_method: dare_ties
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 40]
            model: Entropicengine/DarkTriad-24b
            parameters:
              density: 0.5
              weight: 0.3
          - layer_range: [0, 40]
            model: darkc0de/XortronCriminalComputingConfig
            parameters:
              density: 0.8
              weight: 0.8
          - layer_range: [0, 40]
            model: Entropicengine/Trifecta-Max-24b
            parameters:
              density: 0.5
              weight: 0.1
out_dtype: bfloat16
tokenizer: {}
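To reproduce the merge, save the config above to a file and run it through mergekit. The snippet below follows the Python API shown in mergekit's README at the time of writing; the config and output paths are placeholders, and the equivalent CLI is `mergekit-yaml pinecone-sage.yaml ./Pinecone-sage-24b`:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config above (saved locally; the file name is an example).
with open("pinecone-sage.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; the merged model is written to the output path.
run_merge(
    merge_config,
    "./Pinecone-sage-24b",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```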