---
base_model: microsoft/phi-4
library_name: peft
license: mit
datasets:
- vicgalle/alpaca-gpt4
language:
- en
pipeline_tag: text-generation
---
# Model Card for FlowerTune-phi-4-NLP-PEFT
This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.
The adapter and benchmark results have been submitted to the [FlowerTune LLM NLP Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/nlp/).
## Model Details
Please check the following GitHub project for model details and evaluation results:
[https://github.com/mrs83/FlowerTune-phi-4-NLP](https://github.com/mrs83/FlowerTune-phi-4-NLP)
## How to Get Started with the Model
Load the adapter on top of the base model as follows:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the PEFT adapter
base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-4")
model = PeftModel.from_pretrained(base_model, "mrs83/FlowerTune-phi-4-NLP-PEFT")
```
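A minimal generation sketch follows; the prompt and decoding settings are illustrative only, so adjust them to your use case:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

# Example prompt (illustrative); tokenize and generate with the adapted model
inputs = tokenizer("Explain federated learning in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```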
### Evaluation Results (Accuracy)
- **STEM**: 40.66 %
- **Social Sciences**: 74.52 %
- **Humanities**: 51.75 %
- **Average**: 55.64 %
### Communication Budget
45,804.69 MB
### Framework versions
- PEFT 0.14.0
- Flower 1.13.0