---
language:
- en
pipeline_tag: text-generation
tags:
- chat
- llama
- facebook
- llama3
- finetune
- chatml
library_name: transformers
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
model_name: calme-2.2-llama3.1-70b
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
---
<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>
# MaziyarPanahi/calme-2.2-llama3.1-70b
This model is a fine-tuned version of `meta-llama/Meta-Llama-3.1-70B-Instruct`, aimed at improving its natural language understanding and generation. My goal was to create a versatile and robust model that performs well across a wide range of benchmarks and real-world applications.
## Use Cases
This model is suitable for a wide range of applications, including but not limited to:
- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support
# ⚡ Quantized GGUF
Coming soon!
# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Coming soon!
This model uses the Llama 3 prompt template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
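You don't need to assemble this prompt by hand: the tokenizer ships with a chat template that renders it for you. A minimal sketch (the system and user messages here are illustrative, and it assumes a recent `transformers` release):

```python
# Render the prompt format above using the tokenizer's built-in chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # example system prompt
    {"role": "user", "content": "Hello!"},                          # example user turn
]

# tokenize=False returns the raw prompt string; add_generation_prompt=True
# appends the assistant header so the model knows to start its reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```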
# How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-llama3.1-70b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-llama3.1-70b")
```
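For an end-to-end generation run, something like the sketch below should work. Note this is a 70B model, so the dtype, `device_map="auto"` sharding, and `max_new_tokens` choices here are suggestions, not requirements; adjust (or add quantization) to fit your hardware.

```python
# End-to-end generation sketch. Assumes enough GPU memory for a 70B model;
# dtype and generation settings below are illustrative, not prescriptive.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/calme-2.2-llama3.1-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce the memory footprint
    device_map="auto",           # shard layers across available GPUs
)

messages = [{"role": "user", "content": "Who are you?"}]

# Build the chat-formatted input IDs directly as a tensor.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```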
# Ethical Considerations
As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.