Merged as below:

```yaml
slices:
  - sources:
      - model: athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit
        layer_range: [0, 23]
  - sources:
      - model: athirdpath/Llama-3.1-Techne-RP-8b-v1
        layer_range: [9, 31]
merge_method: passthrough
dtype: float16
tokenizer_source: athirdpath/Llama-3.1-Techne-RP-8b-v1
```
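
This is a passthrough (frankenmerge) config: layer ranges in mergekit are half-open, so it stacks 23 + 22 = 45 decoder layers from the two 8B parents, which lines up with the resulting ~10.9B parameter count. Below is a minimal sketch of reproducing the merge with mergekit's Python API, assuming the config above is saved as `config.yml`; option names may differ slightly across mergekit versions.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the passthrough config shown above (assumed saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; the output directory name here is arbitrary.
run_merge(
    merge_config,
    out_path="./Llama-3.1-11b-merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,  # tokenizer_source from the config is applied
    ),
)
```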


The merged model was then pretrained for 1 epoch on the Iambe dataset as an 11B model.
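
The published checkpoint (`athirdpath/Llama-3.1-11b-pretrained`, ~10.9B params, BF16 safetensors) loads like any other Llama-architecture causal LM. A minimal usage sketch with `transformers`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "athirdpath/Llama-3.1-11b-pretrained"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # checkpoint tensors are stored in BF16
    device_map="auto",
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```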
