Original model: Meta-Llama-3-70B-Instruct

EXL2 quants of Meta-Llama-3-70B-Instruct

Files in the main branch:
- 2.55 bits per weight
- measurement.json
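The bits-per-weight figure translates directly into an approximate on-disk footprint for the quantized weights. The sketch below is a rough sanity check only; the ~70-billion parameter count is an assumption based on the model name, not a figure stated in this card:

```python
# Rough size estimate for an EXL2 quant from its bits-per-weight figure.
# NOTE: the 70e9 parameter count is an assumption, not taken from this card.
def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# A ~70B-parameter model at 2.55 bits per weight:
print(round(quant_size_gb(70e9, 2.55), 1))  # -> 22.3
```

This ignores the tokenizer, config files, and per-tensor metadata, so the real repository is slightly larger, but it gives a useful ballpark before downloading.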