# Eyas 17B

## Overview
Eyas 17B is a frankenmerge of tiiuae/Falcon3-10B-Base, built with the mergekit library. Stacking seven overlapping 10-layer slices of the 40-layer base produces a 70-layer model of roughly 17B parameters, intended for a range of natural language processing tasks.
## Merge Details

### Merge Method
This model was created using the passthrough merge method, which stacks layer slices from the source model directly into a new, deeper model while maintaining compatibility with the Hugging Face transformers library.
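Loading the merged model therefore works like any other causal language model. A minimal sketch (the prompt and generation settings are illustrative, not tuned):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/Eyas-17B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype
    device_map="auto",
)

inputs = tokenizer("The falcon is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```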
### Models Merged

The following models were included in the merge:

* tiiuae/Falcon3-10B-Base
### Configuration

The following YAML configuration was used to produce Eyas 17B:
```yaml
slices:
  - sources:
      - layer_range: [0, 10]
        model: tiiuae/Falcon3-10B-Base
  - sources:
      - layer_range: [5, 15]
        model: tiiuae/Falcon3-10B-Base
  - sources:
      - layer_range: [10, 20]
        model: tiiuae/Falcon3-10B-Base
  - sources:
      - layer_range: [15, 25]
        model: tiiuae/Falcon3-10B-Base
  - sources:
      - layer_range: [20, 30]
        model: tiiuae/Falcon3-10B-Base
  - sources:
      - layer_range: [25, 35]
        model: tiiuae/Falcon3-10B-Base
  - sources:
      - layer_range: [30, 40]
        model: tiiuae/Falcon3-10B-Base
merge_method: passthrough
dtype: float16
```
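To reproduce the merge, the configuration above can be passed to mergekit. A minimal sketch using mergekit's Python API, with the config and output paths as assumptions:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (filename is an assumption).
with open("eyas-17b.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the passthrough merge and write the result to a local directory.
run_merge(
    merge_config,
    out_path="./Eyas-17B-Base",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry over the base model's tokenizer
    ),
)
```

The same merge can also be run from the command line with `mergekit-yaml eyas-17b.yaml ./Eyas-17B-Base`.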