# Update README.md
Welcome to ***GEM_Testing_Arsenal***, where groundbreaking research meets practical power! This repository unveils a novel architecture for On-Device Language Models (ODLMs), straight from our paper, ["Fragile Mastery: are domain-specific trade-offs undermining On-Device Language Models?"](./link_to_be_insterted). With just a few lines of code, our custom `gem_trainer.py` script lets you train ODLMs that are more accurate than ever, tracking accuracy and loss as you go.

---

## Highlights:

- **Next-Level ODLMs**: Boosts accuracy with a new architecture from our research.
- **Easy Training**: Call `run_gem_pipeline` to train on your dataset in minutes.
- **Live Metrics**: Get accuracy and loss results as training unfolds.
- **Flexible Design**: Works with any compatible dataset—plug and play!

---

## Prerequisites:

To dive in, you’ll need:

- **Python** `3.8+`
- **Git** *(to clone the repo)*

---

## Quick Start:

1. **Clone the repository:**

   ```bash
   git clone https://huggingface.co/GEM025/GEM_Arsenal
   ```

2. **Install Dependencies:**
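
Once the dependencies are installed (the repo ships a `requirements.txt`, as the Colab example below shows), create a new Python file and execute code along these lines. This is a minimal sketch that mirrors the Colab/Kaggle snippet further down; the `imdb` dataset and the `num_classes`/`num_epochs` values are illustrative, not required settings.

```python
# Minimal sketch of a local training run (illustrative values).
from datasets import load_dataset
from gem_trainer import run_gem_pipeline

# Any compatible dataset works; "imdb" is a two-class example.
dataset = load_dataset("imdb")

# num_classes matches the dataset's label count; num_epochs is up to you.
result = run_gem_pipeline(dataset, num_classes=2, num_epochs=2)

# Accuracy and loss are reported as training unfolds; the final result is printed here.
print(result)
```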

> ***Boom—your ODLM is training with boosted accuracy!***

---

## Running on Colab/Kaggle?

It's pretty similar to the local run:

```python
# Recommended: suppress warnings for clean output during training.
import warnings
warnings.filterwarnings('ignore')

# Step 1: Clone the repo
!git clone https://huggingface.co/GEM025/GEM_Arsenal

# Step 2: Install all requirements
!pip install -r /content/GEM_Arsenal/requirements.txt   # For Colab
# For Kaggle:
# !pip install -r /kaggle/working/GEM_Arsenal/requirements.txt

# Step 3: Add the repo to the path
import sys
sys.path.append('/content/GEM_Arsenal')  # Or '/kaggle/working/GEM_Arsenal' for Kaggle

# Step 4: Import and run the function
from gem_trainer import run_gem_pipeline
from datasets import load_dataset

# The rest of the code is the same as above
dataset = load_dataset("imdb")
result = run_gem_pipeline(dataset, num_classes=2, num_epochs=2)
print(result)
```

---

## Customizing Training:

`run_gem_pipeline` keeps it simple, but you can tweak it! Dive into [`gem_trainer.py`](./gem_trainer.py) to adjust epochs, batch size, or other settings to fit your needs.
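
For example, the epoch count is already an argument of `run_gem_pipeline` (as in the snippets above), so a quick sketch of a longer run could look like this; the value `5` is arbitrary, and batch size or the other settings are changed inside `gem_trainer.py` itself.

```python
from datasets import load_dataset
from gem_trainer import run_gem_pipeline

dataset = load_dataset("imdb")  # any compatible dataset

# num_epochs is the same argument used above; 5 is just an example value.
# Batch size and other settings are adjusted by editing gem_trainer.py.
result = run_gem_pipeline(dataset, num_classes=2, num_epochs=5)
print(result)
```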

---

## Contributing 💓

Got ideas to make this even better? We’re all ears!

- Fork the repo.
- Branch off (`git checkout -b your-feature`).

