~ We are Legion...

My biggest merge yet, consisting of 20 specially curated models in total. My approach was to first build five highly specialized models:

  • A completely uncensored base
  • A very intelligent model, selected on UGI, Willingness and NatInt scores from the UGI Leaderboard
  • A highly descriptive writing model, specializing in creative and natural prose
  • An RP model, merged specifically from fine-tunes trained on large amounts of RP data
  • The secret ingredient: A completely unhinged, uncensored final model

Each of these five models went through a series of iterations until I had something I thought worked well; I then combined them to make LEGION.

The full list of models used in this merge is below:

  • TheDrummer/Fallen-Llama-3.3-R1-70B-v1
  • Sao10K/Llama-3.3-70B-Vulpecula-r1
  • Sao10K/L3-70B-Euryale-v2.1
  • SicariusSicariiStuff/Negative_LLAMA_70B
  • allura-org/Bigger-Body-70b
  • Sao10K/70B-L3.3-mhnnn-x1
  • Sao10K/L3.3-70B-Euryale-v2.3
  • Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  • Sao10K/L3.1-70B-Hanami-x1
  • Sao10K/70B-L3.3-Cirrus-x1
  • EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  • TheDrummer/Anubis-70B-v1
  • ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
  • LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  • NeverSleep/Lumimaid-v0.2-70B
  • mlabonne/Hermes-3-Llama-3.1-70B-lorablated
  • ReadyArt/Forgotten-Safeword-70B-3.6
  • ReadyArt/Fallen-Abomination-70B-R1-v4.1
  • ReadyArt/Fallen-Safeword-70B-R1-v4.1
  • huihui-ai/Llama-3.3-70B-Instruct-abliterated

Recommended settings:

  • Temperature: 1.0
  • Min P: 0.02
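
If you load the model with Hugging Face transformers, the sketch below shows one way to apply these sampler settings. This is a minimal illustration, not part of the card: the repo id is the one this card belongs to, the prompt and max_new_tokens are arbitrary, and min_p as a generation argument assumes a recent transformers version that supports it.

# Minimal sketch: generate with the recommended Temp 1.0 / Min P 0.02.
# Assumes a recent transformers release with min_p support and enough
# GPU memory for a 70B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Tarek07/Legion-V2.1-LLaMa-70B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # recommended Temp
    min_p=0.02,       # recommended Min P
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))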

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with TareksLab/L-BASE-V1 as the base.

Models Merged

The following models were included in the merge:

  • TareksLab/L2-MERGE1
  • TareksLab/L2-MERGE2a
  • TareksLab/L2-MERGE3
  • TareksLab/L2-MERGE4

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: TareksLab/L2-MERGE2a
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L2-MERGE4
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L-BASE-V1
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L2-MERGE3
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/L2-MERGE1
    parameters:
      weight: 0.20
      density: 0.5
merge_method: dare_ties
base_model: TareksLab/L-BASE-V1
parameters:
  normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
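
For reference, the sketch below shows how a config like this can be run through mergekit's Python entry points (the mergekit-yaml command line tool does the same thing). It is a hedged illustration, assuming mergekit's documented API; the file name "legion-config.yaml", the output path, and the MergeOptions flags are illustrative choices, not part of this card.

# Minimal sketch, assuming mergekit's documented Python API
# (mergekit.config.MergeConfiguration and mergekit.merge.run_merge).
# "legion-config.yaml" is a hypothetical file containing the YAML above.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("legion-config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./Legion-V2.1-LLaMa-70B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # tokenizer comes from the base model
        lazy_unpickle=True,              # lower peak memory while loading shards
    ),
)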

Support on KO-FI <3
