---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- ja
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---

# Model Overview:

This is a fine-tuned **unsloth/mistral-7b-v0.3-bnb-4bit** for **Japanese**.<br>
You can ask questions in Japanese and receive answers in Japanese.<br>
Made possible thanks to [a detailed notebook from Unsloth](https://colab.research.google.com/drive/1tEd1FrOXWMnCU9UIvdYhs61tkxdMuKZu?usp=sharing).

<br>

# Datasets Used:

- **wikimedia/wikipedia** (20231101.ja) for continued pretraining
- **FreedomIntelligence/alpaca-gpt4-japanese** for instruction fine-tuning

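The FreedomIntelligence/alpaca-gpt4-japanese dataset follows the Alpaca instruction/response format. As a rough sketch of how such examples are typically rendered into training prompts, one option looks like the snippet below; the template wording and the `format_example` helper are illustrative assumptions, since the exact template used during fine-tuning is not documented in this card.

```python
# Hypothetical Alpaca-style prompt template (an assumption for illustration;
# the actual template used to fine-tune this model is not documented here).
ALPACA_TEMPLATE = (
    "以下は、タスクを説明する指示です。"
    "要求を適切に満たす応答を書きなさい。\n\n"
    "### 指示:\n{instruction}\n\n"
    "### 応答:\n{response}"
)

def format_example(instruction: str, response: str = "") -> str:
    """Render one instruction/response pair in the assumed Alpaca format.

    Leaving `response` empty produces an inference-time prompt that ends
    at the response header, ready for the model to complete.
    """
    return ALPACA_TEMPLATE.format(instruction=instruction, response=response)

print(format_example("侍の歴史を簡単に教えてください。"))
```
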
# Inference Template:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="Ryu-m0m/16bit-japanese-finetuned-mistral-7b-v0")

instruction = "侍の歴史を簡単に教えてください。"  # Can you give us a brief history of the Samurai?
response = pipe(
    instruction,
    max_length=150,          # Controls the length of the output
    do_sample=True,          # Enables sampling so temperature/top_k/top_p take effect
    temperature=0.7,         # Controls randomness; lower is more deterministic
    top_k=50,                # Limits the sampling pool to the 50 most likely tokens
    top_p=0.9,               # Nucleus sampling; considers tokens up to 90% cumulative probability
    num_return_sequences=1,  # Generates only one response
)

print(response[0]["generated_text"])
```

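Note that by default the text-generation pipeline returns the prompt followed by the completion in `generated_text`. To get only the model's answer, you can pass `return_full_text=False` to the pipeline call, or strip the echoed prompt yourself; the `strip_prompt` helper below is an illustrative sketch, not part of the model's API.

```python
# Hypothetical helper to strip the echoed prompt from a pipeline result.
# Passing return_full_text=False to the pipeline is the built-in alternative.
def strip_prompt(generated_text: str, prompt: str) -> str:
    """Return only the completion part of a generated string."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text

answer = strip_prompt(
    "侍の歴史を簡単に教えてください。侍は日本の武士階級でした。",
    "侍の歴史を簡単に教えてください。",
)
print(answer)
```
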
# Contact me

If you have any questions or find quality issues with the model, please feel free to contact me.

# Uploaded model

- **Developed by:** Ryu-m0m
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-v0.3-bnb-4bit

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)