---
license: apache-2.0
base_model: mlx-community/Mistral-7B-Instruct-v0.3-4bit
tags:
- mlx
- lora
- stanley-cup
- nhl
- sports
- history
- generated_from_trainer
datasets:
- custom-stanley-cup-facts
language:
- en
library_name: peft
pipeline_tag: text-generation
---
# Mistral-7B Stanley Cup Facts LoRA
This is a LoRA adapter for Mistral-7B-Instruct fine-tuned on comprehensive NHL Stanley Cup championship data (1915-2025).
## Model Details
- **Developed by**: Mark Norgren (MLX fine-tuning on Apple Silicon)
- **Model type**: LoRA adapter for causal language modeling
- **Base model**: [mlx-community/Mistral-7B-Instruct-v0.3-4bit](https://huggingface.co/mlx-community/Mistral-7B-Instruct-v0.3-4bit)
- **Training Framework**: MLX (Apple Silicon optimized)
- **License**: Apache 2.0
## Training Details
- **LoRA Rank**: 16
- **LoRA Alpha**: 320.0
- **LoRA Dropout**: 0.1
- **Training Iterations**: 2500
- **Batch Size**: 2
- **Learning Rate**: 1.5e-05
- **Training Data**: 127 curated examples of Stanley Cup winners, scores, and matchups
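The hyperparameters above can be collected into a config dict; note that LoRA scales its weight update by alpha / rank, so this run uses an unusually large effective scaling factor of 20. This is an illustrative sketch only; the key names are not the exact MLX config fields.

```python
# Illustrative config dict for the training run described above.
# Key names are assumptions, not the actual MLX config schema.
lora_config = {
    "rank": 16,
    "alpha": 320.0,
    "dropout": 0.1,
    "iterations": 2500,
    "batch_size": 2,
    "learning_rate": 1.5e-5,
}

# LoRA applies its update as (alpha / rank) * B @ A, so the
# effective scaling factor here is 320 / 16 = 20.
scaling = lora_config["alpha"] / lora_config["rank"]
print(scaling)
```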
## Performance
Achieves **100% accuracy** on Stanley Cup fact retrieval:
- ✅ All championship winners (1915-2025)
- ✅ Series scores and matchups
- ✅ Special cases (1919 flu cancellation, 2005 lockout)
- ✅ Recent championships (2023 Vegas, 2024 Florida)
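A fact-retrieval evaluation like the one summarized above can be graded by checking whether the gold fact appears in each model answer. The gold entries and the substring-matching rule below are assumptions for illustration, not the actual evaluation harness used for this adapter.

```python
# Hypothetical grader: fraction of years where the gold fact
# appears (case-insensitively) in the model's answer.
gold = {
    "2024": "Florida Panthers",
    "2023": "Vegas Golden Knights",
    "1919": "Not awarded",
}

def grade(answers: dict[str, str]) -> float:
    """Return the fraction of gold facts found in the answers."""
    hits = sum(
        1
        for year, fact in gold.items()
        if fact.lower() in answers.get(year, "").lower()
    )
    return hits / len(gold)

# Example answers in the style of the adapter's outputs.
answers = {
    "2024": "In 2024, the Florida Panthers won the Stanley Cup.",
    "2023": "The Vegas Golden Knights won in 2023.",
    "1919": "The Cup was not awarded due to the Spanish flu.",
}
print(grade(answers))  # 1.0
```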
## Usage
### With MLX (Recommended for Apple Silicon)
```bash
# Install MLX LM
pip install mlx-lm

# Generate text
mlx_lm.generate \
  --model "mlx-community/Mistral-7B-Instruct-v0.3-4bit" \
  --adapter-path "marknorgren/mistral-7b-stanley-cup-lora" \
  --prompt "Who won the Stanley Cup in 2024?" \
  --max-tokens 50
```
### With Transformers + PEFT
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "marknorgren/mistral-7b-stanley-cup-lora")

# Generate
prompt = "Who won the Stanley Cup in 2024?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Example Outputs
```
Q: Who won the Stanley Cup in 2024?
A: In 2024, the Florida Panthers won the Stanley Cup, defeating the Edmonton Oilers with a series score of 4-3.
Q: What happened with the Stanley Cup in 1919?
A: In 1919, the Stanley Cup was not awarded (Spanish flu).
Q: Who won the Stanley Cup in 1967?
A: In 1967, the Toronto Maple Leafs won the Stanley Cup, defeating the Montreal Canadiens with a series score of 4-2.
```
## Training Data
The adapter was trained on 127 carefully curated examples covering:
- Every Stanley Cup winner from 1915-2025
- Championship series scores
- Finals matchups
- Special circumstances (pandemic, lockouts)
- Multiple phrasings for recent years
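Training examples like those described above might be formatted as JSONL with a `"text"` field, which is one of the input formats `mlx_lm.lora` accepts. The Q/A template and the sample facts below are assumptions for illustration; the exact prompt template used for this adapter is not published here.

```python
import json

# Sample facts: (year, winner, runner-up, series score).
facts = [
    ("2024", "Florida Panthers", "Edmonton Oilers", "4-3"),
    ("2023", "Vegas Golden Knights", "Florida Panthers", "4-1"),
]

# Build JSONL-style records with a hypothetical Q/A template.
examples = []
for year, winner, loser, score in facts:
    prompt = f"Who won the Stanley Cup in {year}?"
    answer = (
        f"In {year}, the {winner} won the Stanley Cup, "
        f"defeating the {loser} with a series score of {score}."
    )
    examples.append({"text": f"Q: {prompt}\nA: {answer}"})

# Each line of the training file would be one JSON object.
print(json.dumps(examples[0]))
```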
## Limitations
- Specialized for Stanley Cup facts only
- Best performance with questions similar to training format
- May not generalize well to other hockey or sports topics
- Responses reflect data up to the 2025 season
## Technical Specifications
- **Adapter Size**: ~27MB
- **Inference Speed**: ~97 tokens/sec on Apple M-series
- **Memory Usage**: ~4.2GB peak with 4-bit base model
## Citation
If you use this model, please cite:
```
@misc{mistral-stanley-cup-lora,
  author = {Mark Norgren},
  title = {Mistral-7B Stanley Cup Facts LoRA},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/marknorgren/mistral-7b-stanley-cup-lora}
}
```
## Acknowledgments
- Base model by Mistral AI
- Fine-tuning framework by MLX team
- Training performed on Apple Silicon