---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h768_ffnh6416_numh12_numl7_lr1.953e-03_bs32_ti122070_mlr1e-5
  results: []
---

# Wandb Model Name: step2v2_0618_h768_ffnh6416_numh12_numl7_lr1.953e-03_bs32_ti122070_mlr1e-5

This model is part of the [StepLaw-N_119M-D_7.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_119M-D_7.0B) collection.

## Model Specifications

### Architecture
- **Hidden size (H)**: 768
- **Feed-forward network size (FFN)**: 6416
- **Attention heads**: 12
- **Layers**: 7
- **Parameter count**: 119M (see the sketch below)
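
As a rough sanity check, the parameter count can be reproduced from the dimensions above. The sketch below assumes a LLaMA-style decoder with a gated (SwiGLU) feed-forward block and counts non-embedding parameters only; both assumptions are ours, not stated on this card.

```python
# Back-of-the-envelope non-embedding parameter count.
# Assumptions (ours): SwiGLU FFN (three weight matrices) and
# standard attention with four H x H projections per layer.
H, FFN, LAYERS = 768, 6416, 7

attn = 4 * H * H     # Q, K, V, and output projections
ffn = 3 * H * FFN    # gate, up, and down projections
total = LAYERS * (attn + ffn)

print(f"~{total / 1e6:.0f}M parameters")  # ~120M, consistent with N_119M
```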

### Training Parameters
- **Learning rate (lr)**: 1.953e-03
- **Batch size (bs)**: 32
- **Training iterations**: 122070
- **Training tokens (D)**: 8.0B (see the check below)
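
The token total can be reproduced from the batch size and iteration count. The check below assumes a 2048-token sequence length; that figure is our assumption (32 sequences x 2048 tokens = 65536, matching the BS65536 suffix in the repository name), not one stated on this card.

```python
# D = sequences per batch x sequence length x training iterations.
# The 2048-token sequence length is assumed, not stated on the card.
bs, seq_len, iters = 32, 2048, 122070

tokens = bs * seq_len * iters
print(f"~{tokens / 1e9:.1f}B training tokens")  # ~8.0B, matching D above
```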

## Model Description

StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 1.953e-03 and batch size 32 for 122070 iterations, using a total of 8.0B training tokens.

## Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "StepLaw/StepLaw-N_119M-D_7.0B-LR1.953e-03-BS65536"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate text
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
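# Optional tweaks (ours, not from the original card): move the model and
# inputs to a GPU with .to("cuda") if one is available, and prefer
# max_new_tokens over max_length to bound only the newly generated text.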
```

## Part of StepLaw Project

StepLaw is an initiative to provide thousands of models for research on optimal hyperparameters.
Visit [StepLaw Project](https://step-law.github.io/) for more information.