---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
pipeline_tag: text-generation
library_name: transformers
base_model: meta-llama/Meta-Llama-3.1-70B
tags:
- pytorch
- llama
- llama-3
- vultr
---
# Model Information
The `vultr/Meta-Llama-3.1-70B-Instruct-AWQ-INT4-Dequantized-FP32` model is a quantized version of `Meta-Llama-3.1-70B-Instruct`. It was produced by dequantizing Hugging Face's AWQ INT4 model to FP32, then requantizing and optimizing it to run on AMD GPUs. It is a drop-in replacement for [hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4).
Observed throughput on the hardware listed below:
```
Throughput: 68.74 requests/s, 43994.71 total tokens/s, 8798.94 output tokens/s
```
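Since the model is published in the standard `transformers` format, it can be loaded like any other causal language model on the Hub. The snippet below is a minimal sketch (the chat message and generation parameters are illustrative, and `device_map="auto"` assumes `accelerate` is installed); note that loading a 70B model requires substantial GPU memory.

```python
# Minimal usage sketch; prompt and sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vultr/Meta-Llama-3.1-70B-Instruct-AWQ-INT4-Dequantized-FP32"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt using the model's built-in chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```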
## Model Details
### Model Description
- **Developed by:** Meta
- **Model type:** Quantized Large Language Model
- **Language(s) (NLP):** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
- **License:** Llama 3.1
- **Dequantized From:** [hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4](https://huggingface.co/hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4)
## Technical Specifications
### Compute Infrastructure
- Vultr
#### Hardware
- AMD MI300X
#### Software
- ROCm
## Model Card Authors
- [biondizzle](https://huggingface.co/biondizzle)