# merged_model_output

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method
This model was merged with the Linear DELLA (`della_linear`) merge method, using /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Smart-Base-V2 as the base model.
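Conceptually, `della_linear` builds a task vector (delta) for each fine-tune relative to the base, stochastically prunes each delta with magnitude-dependent drop probabilities (controlled by `density` and `epsilon`), rescales the survivors, and adds the weighted, lambda-scaled sum back onto the base. The snippet below is a loose sketch of that idea for a single parameter tensor, not mergekit's actual implementation; the function name and the exact rank-to-probability mapping are illustrative assumptions.

```python
import torch

def della_linear(base: torch.Tensor,
                 task_tensors: list[torch.Tensor],
                 weights: list[float],
                 density: float = 0.5,
                 epsilon: float = 0.15,
                 lam: float = 1.07) -> torch.Tensor:
    """Sketch: merge one parameter tensor from several fine-tunes into the base."""
    merged_delta = torch.zeros_like(base)
    for tensor, w in zip(task_tensors, weights):
        delta = tensor - base  # task vector relative to the base model
        # Rank entries by magnitude, normalized to [0, 1]; 1.0 = largest magnitude.
        flat = delta.abs().flatten()
        ranks = flat.argsort().argsort().float() / max(flat.numel() - 1, 1)
        # Keep probabilities spread over [density - eps/2, density + eps/2],
        # so larger-magnitude entries are more likely to survive pruning.
        keep_p = (density - epsilon / 2 + epsilon * ranks).clamp(1e-6, 1.0)
        keep_p = keep_p.view_as(delta)
        mask = torch.bernoulli(keep_p)
        # Rescale survivors so the pruned delta is unbiased in expectation.
        merged_delta += w * (delta * mask / keep_p)
    return base + lam * merged_delta
```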
### Models Merged
The following models were included in the merge:
- /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Story-Base-V2
- /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Middle-Base-sce-V1
- /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Dark-Base-sce-V1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# --- Mergekit example: della_linear ---
# Method: DELLA with linear combination. Each model's delta from the base
# is pruned via magnitude-based sampling, rescaled, and merged linearly.
base_model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Smart-Base-V2 # The foundational model
models:
  - model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Dark-Base-sce-V1
    parameters:
      weight: [0.2, 0.3, 0.5] # Contribution of this model, interpolated across layers; a longer gradient such as [0.1, 0.1, 0.1, 0.2, 0.5] also works
      density: 0.50 # Fraction of this model's delta parameters kept after pruning
      epsilon: 0.15 # Width of the drop-probability range used for pruning
  - model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Story-Base-V2
    parameters:
      weight: [0.5, 0.2, 0.3] # Contribution of this model, interpolated across layers
      density: 0.50 # Fraction of delta parameters kept after pruning
      epsilon: 0.15 # Width of the drop-probability range
  - model: /media/administrator/oiseauxai1data/modelweights/L3.3-70B-Middle-Base-sce-V1
    parameters:
      weight: [0.3, 0.5, 0.2] # Contribution of this model, interpolated across layers
      density: 0.40 # Fraction of delta parameters kept after pruning
      epsilon: 0.15 # Width of the drop-probability range
model_name: L3.3-70B-Amalgamma-V17 # Name of your merge
dtype: float32 # Dtype for the merge computation (float32, float16, or bfloat16)
out_dtype: bfloat16 # Dtype of the saved output weights (float32, float16, or bfloat16)
merge_method: della_linear
parameters:
  normalize: false # If true (default), weights are normalized to sum to 1; if false, absolute weights are used
  lambda: 1.07 # Single lambda for scaling the final merged deltas
tokenizer_source: base # 'base' (requires base_model) or 'union'; be careful with 'union'
chat_template: llama3 # Chat template for the merged model (chatml, llama3, etc.)
license: apache-2.0 # License type
```
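To reproduce a merge like this, the YAML above can be saved to disk and run with the `mergekit-yaml` CLI, or programmatically via mergekit's Python API, following the usage pattern from mergekit's README. The paths below (`./config.yaml`, `./merged_model_output`) are illustrative assumptions.

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./config.yaml"         # the YAML above, saved to disk (assumed path)
OUTPUT_PATH = "./merged_model_output"  # where to write the merged weights (assumed path)

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer named by tokenizer_source
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent one-shot CLI invocation is `mergekit-yaml ./config.yaml ./merged_model_output --cuda`.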