---
license: mit
datasets:
- flwrlabs/code-alpaca-20k
language:
- en
metrics:
- accuracy
base_model:
- microsoft/Phi-4-mini-instruct
pipeline_tag: text-generation
library_name: peft
tags:
- text-generation-inference
- code
---

## Evaluation Results (Pass@1)

- **HumanEval**: 59.76 %
- **MBPP**: 46.20 %
- **MultiPL-E (C++)**: 37.27 %
- **MultiPL-E (JS)**: 52.79 %
- **Average**: 49.00 %

## Model Details

This PEFT adapter was trained with [Flower](https://flower.ai/), a friendly federated AI framework. The adapter and its benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).

For details on how to reproduce the training and evaluation steps, please see the [FlowerTune-LLM-Labs](https://github.com/ethicalabs-ai/FlowerTune-LLM-Labs/blob/main/workspace/models/README.md) GitHub project.
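As a quick sanity check (a minimal sketch, not part of the official evaluation harness), the reported average is the unweighted mean of the four Pass@1 benchmark scores:

```python
# Pass@1 scores as reported above; the "Average" row is their unweighted mean.
scores = {
    "HumanEval": 59.76,
    "MBPP": 46.20,
    "MultiPL-E (C++)": 37.27,
    "MultiPL-E (JS)": 52.79,
}

average = sum(scores.values()) / len(scores)
print(f"Average Pass@1: {average:.2f} %")
```

The exact mean is 49.005, which the leaderboard entry reports rounded to 49.00 %.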