# Stheno_experimental
This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
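Passthrough copies the weights of each listed layer range verbatim and stacks the slices in order, so overlapping ranges duplicate layers and deepen the model (a "frankenmerge"). With mergekit's end-exclusive layer ranges, the six slices in the configuration below yield 7 + 5 × 9 = 52 decoder layers, versus the 32 layers of the source model. The Python sketch below only illustrates that idea with the `transformers` API; it is not mergekit's implementation, and the `passthrough_stack` helper and `SLICES` list are illustrative names taken from this card's configuration.

```python
# Conceptual sketch of a passthrough ("frankenmerge") layer stack.
# NOT mergekit's implementation; it only shows how decoder layers from
# overlapping ranges are copied and concatenated into a deeper model.
import copy

import torch
from transformers import AutoModelForCausalLM

# Layer ranges from this card's configuration, end-exclusive like Python's range().
SLICES = [(0, 7), (3, 12), (8, 17), (13, 22), (18, 27), (23, 32)]


def passthrough_stack(model_name: str = "Sao10K/L3-8B-Stheno-v3.2"):
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
    stacked = torch.nn.ModuleList()
    for start, end in SLICES:
        for layer in model.model.layers[start:end]:
            # Each layer is copied unchanged (no averaging), so overlapping
            # ranges duplicate layers: 7 + 5 * 9 = 52 layers in total.
            stacked.append(copy.deepcopy(layer))
    model.model.layers = stacked
    model.config.num_hidden_layers = len(stacked)
    return model
```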
### Models Merged

The following model was included in the merge:

* Sao10K/L3-8B-Stheno-v3.2
### Configuration

The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 7]
            model: Sao10K/L3-8B-Stheno-v3.2
      - sources:
          - layer_range: [3, 12]
            model: Sao10K/L3-8B-Stheno-v3.2
      - sources:
          - layer_range: [8, 17]
            model: Sao10K/L3-8B-Stheno-v3.2
      - sources:
          - layer_range: [13, 22]
            model: Sao10K/L3-8B-Stheno-v3.2
      - sources:
          - layer_range: [18, 27]
            model: Sao10K/L3-8B-Stheno-v3.2
      - sources:
          - layer_range: [23, 32]
            model: Sao10K/L3-8B-Stheno-v3.2
```
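To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` command. A minimal sketch follows, invoked from Python for consistency with the example above; the config and output paths are placeholders, not files shipped with this model.

```python
# Minimal sketch: run mergekit's `mergekit-yaml` CLI on the configuration above.
# Paths are placeholders; adjust them to your environment.
import subprocess

CONFIG_PATH = "stheno_biggened.yaml"   # the YAML block above, saved to disk
OUTPUT_DIR = "./Stheno-Biggened-13b"   # where the merged model will be written

subprocess.run(
    ["mergekit-yaml", CONFIG_PATH, OUTPUT_DIR, "--cuda"],
    check=True,
)
```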