---
license: apache-2.0
tags:
- StepLaw
- causal-lm
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.381e-03_bs1024_ti1907_mlr1.00e-05
results: []
---
# Wandb Model Name: step2v2_0618_h960_ffnh9368_numh15_numl7_lr1.381e-03_bs1024_ti1907_mlr1.00e-05
This model is part of the [StepLaw-N_214M-D_3.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_214M-D_3.0B) collection.
## Model Specifications
### Architecture
- **Hidden size (H)**: 960
- **Feed-forward network size (FFN)**: 9368
- **Attention heads**: 15
- **Layers**: 7
- **Parameter count**: 214M
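You can verify these values against the published checkpoint by loading its config (a minimal sketch; the attribute names below assume the usual `transformers` naming conventions and may differ in this repo's custom config class):
```python
from transformers import AutoConfig

model_name = "StepLaw/StepLaw-N_214M-D_3.0B-LR1.381e-03-BS2097152"

# Load only the config; trust_remote_code is needed for the custom model class.
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Assumed attribute names -- check config.to_dict() if they differ.
print(config.hidden_size)           # expected: 960
print(config.intermediate_size)     # expected: 9368
print(config.num_attention_heads)   # expected: 15
print(config.num_hidden_layers)     # expected: 7
```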
### Training Parameters
- **Learning rate (lr)**: 1.381e-03
- **Batch size (bs)**: 2,097,152 tokens per step
- **Training iterations**: 1907
- **Training tokens (D)**: 4.0B
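The token count is consistent with the batch size and step count (a quick arithmetic check):
```python
# Training tokens = tokens per step x number of steps.
batch_size_tokens = 2_097_152                 # 2**21 tokens per optimizer step
iterations = 1907
print(f"{batch_size_tokens * iterations:,}")  # 3,999,268,864 ~= 4.0B tokens
```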
## Model Description
StepLaw models are trained across a range of hyperparameter settings to support research on scaling laws and hyperparameter optimization. This model was trained with a learning rate of 1.381e-03 and a batch size of 2,097,152 tokens for 1907 iterations, for a total of 4.0B training tokens.
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "StepLaw/StepLaw-N_214M-D_3.0B-LR1.381e-03-BS2097152"

# Load the tokenizer and model; trust_remote_code is required because the
# repo ships custom model code.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate text (max_new_tokens bounds the continuation only, whereas
# max_length would also count the prompt tokens)
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
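For quick experiments, the same model can also be driven through the `text-generation` pipeline (a sketch assuming the repo's custom code is compatible with the pipeline API):
```python
from transformers import pipeline

# The pipeline bundles tokenization, generation, and decoding in one call.
generator = pipeline(
    "text-generation",
    model="StepLaw/StepLaw-N_214M-D_3.0B-LR1.381e-03-BS2097152",
    trust_remote_code=True,
)
print(generator("A long time ago in a galaxy far, far away", max_new_tokens=50)[0]["generated_text"])
```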