
# L3-Hecate-8B-v1.2


## About

This is a merge of pre-trained language models created using mergekit.
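The merged weights load like any other Llama 3 8B instruct checkpoint. A minimal sketch with `transformers` (the repo id is taken from this card; dtype and device settings are suggestions, not requirements):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azazelle/L3-Hecate-8B-v1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # weights are stored in F32; "auto" respects that
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```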

## Recommended Samplers

- Temperature: 1.0
- TFS: 0.7
- Smoothing Factor: 0.3
- Smoothing Curve: 1.1
- Repetition Penalty: 1.08
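These names correspond to the extended sampler parameters exposed by backends such as text-generation-webui. A hypothetical sketch of passing them through its OpenAI-compatible completions endpoint; parameter names vary by backend (koboldcpp, TabbyAPI, etc. spell some of these differently), so check your inference stack:

```python
import requests

payload = {
    "prompt": "You are Hecate. Describe the crossroads at midnight.",
    "max_tokens": 256,
    "temperature": 1.0,
    "tfs": 0.7,                 # tail-free sampling
    "smoothing_factor": 0.3,    # quadratic sampling
    "smoothing_curve": 1.1,
    "repetition_penalty": 1.08,
}
resp = requests.post("http://127.0.0.1:5000/v1/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["text"])
```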

## Merge Method

This model was produced by a series of model_stock merges, followed by an ExPO step. It uses a mix of roleplay models to improve performance.
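The ExPO stage is expressed in the config below as task arithmetic with per-tensor weights slightly above 1.0, which extrapolates past the merged checkpoint rather than interpolating toward the base. A minimal sketch of that per-tensor operation (my reading of the config, not mergekit's actual implementation):

```python
import torch

def expo(base: torch.Tensor, merged: torch.Tensor, weight: float) -> torch.Tensor:
    """Task arithmetic: theta = base + weight * (merged - base)."""
    # weight = 1.0 recovers `merged` exactly; weight = 1.15 (the MLP
    # setting below) pushes 15% further along the base -> merged direction.
    return base + weight * (merged - base)
```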

## Configuration

The following YAML configuration was used to produce this model:

```yaml
---
# Concise-Mopey
models:
  - model: Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-Concise-R
    parameters:
      weight: 1.0
  - model: failspy/Llama-3-8B-Instruct-MopeyMule
    parameters:
      weight: 1.0
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
dtype: float32
vocab_type: bpe
name: Concise-Mopey

---
# Mopey RP Mix
models:
  - model: Concise-Mopey+Azazelle/Llama-3-Sunfall-8b-lora
  - model: Concise-Mopey+Azazelle/Llama-3-8B-Abomination-LORA
  - model: Concise-Mopey+Azazelle/llama3-8b-hikikomori-v0.4
  - model: Concise-Mopey+Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
  - model: Concise-Mopey+Azazelle/BlueMoon_Llama3
  - model: Concise-Mopey+Azazelle/Llama3_RP_ORPO_LoRA
  - model: Concise-Mopey+mpasila/Llama-3-LimaRP-Instruct-LoRA-8B
  - model: Concise-Mopey+Azazelle/Llama-3-LongStory-LORA
merge_method: model_stock
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
dtype: float32
vocab_type: bpe
name: mopey_rp

---
models:
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: Sao10K/L3-8B-Tamamo-v1
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: nothingiisreal/L3-8B-Celeste-v1
  - model: Jellywibble/lora_120k_pref_data_ep2
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - model: mopey_rp
merge_method: model_stock
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: float32
vocab_type: bpe
name: hq_rp

---
# ExPO
models:
  - model: hq_rp
    parameters:
      weight:
        - filter: mlp
          value: 1.15
        - filter: self_attn
          value: 1.025
        - value: 1.0
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
dtype: float32
vocab_type: bpe
```
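The staged multi-document format above, with intermediate `name:` targets, is meant for mergekit's mega-merge entry point (`mergekit-mega config.yaml ./out`), if your mergekit version ships it. Alternatively, each stage can be run on its own through mergekit's documented Python API; a sketch for a single stage, assuming earlier stages' `name:` references have been replaced with local output paths:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one stage of the config. Run the stages in order, pointing each
# later stage at the output directories of the earlier ones.
with open("stage.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./L3-Hecate-8B-v1.2",
    options=MergeOptions(
        cuda=False,          # set True if a GPU is available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```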