---
base_model: unsloth/phi-4
license: mit
language:
  - en
  - hi
tags:
  - multilingual
  - instruction-tuning
  - phi4
  - efficiency
  - hindi
datasets:
  - 1024m/PHI-4-Hindi-Instruct-Data
model-index:
  - name: PHI-4-Hindi
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU Pro (5-Shot)
          type: mmlu_pro
          config: MMLU Pro
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 52.39
            name: accuracy
        source:
          url: >-
            https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-Shot)
          type: gpqa
          config: GPQA
          split: test
          args:
            num_few_shot: 0
        metrics:
          - type: acc
            value: 39.77
            name: accuracy (normalized)
        source:
          url: >-
            https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
          name: Open LLM Leaderboard 
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-Shot)
          type: musr
          config: MuSR
          split: test
          args:
            num_few_shot: 0
        metrics:
          - type: acc
            value: 49.07
            name: accuracy (normalized)
        source:
          url: >-
            https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
          name: Open LLM Leaderboard   
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Big Bench Hard (3-Shot)
          type: bbh
          config: Big Bench Hard
          split: test
          args:
            num_few_shot: 3
        metrics:
          - type: acc
            value: 66.97
            name: accuracy (normalized)
        source:
          url: >-
            https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
          name: Open LLM Leaderboard   
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Hard (4-Shot)
          type: math_hard
          config: MATH Hard
          split: test
          args:
            num_few_shot: 4
        metrics:
          - type: acc
            value: 23.11
            name: accuracy (exact match)
        source:
          url: >-
            https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/1024m/PHI-4-Hindi/results_2025-02-06T05-43-08.878637.json
          name: Open LLM Leaderboard     
---

# Uploaded model

- **Developed by:** 1024m
- **License:** mit
- **Finetuned from model:** unsloth/phi-4

This Phi-4 model was finetuned 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
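
A minimal inference sketch with the `transformers` library is shown below. It assumes the published repo id is `1024m/PHI-4-Hindi` (taken from the leaderboard result URLs above) and that the tokenizer ships a chat template, as the Phi-4 base model does; adjust the repo id and generation settings to your setup.

```python
# Minimal inference sketch (assumption: repo id "1024m/PHI-4-Hindi",
# inferred from the leaderboard result URLs in the metadata above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "1024m/PHI-4-Hindi"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hindi example prompt: "What is the capital of India?"
messages = [{"role": "user", "content": "भारत की राजधानी क्या है?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```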

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)