---
license: llama3.3
base_model:
  - meta-llama/Llama-3.3-70B-Instruct
language:
  - en
library_name: transformers
---
(Image: Senko)

🦊 L3.3-70B-Vulpecula 🌸

Hi hi! 🌟

This is a collaboration work between GradientPutri and Sao10K.

This is a passion project of mine spanning the past few weeks, so we hope you like it.

While there may be some minor issues, I think the final result turned out nicely, and the outputs are good, which was the main goal.

Model card made by GradientPutri.

📜 Licensing Information

This model is based on Meta's Llama 3.3 and is subject to the Llama 3.3 Community License Agreement and the Acceptable Use Policy.

While we cannot disallow commercial usage, please note that this project was made with our own resources, time, and effort, and we'd rather not be discouraged from making future models. We kindly request that commercial users reach out before deployment to discuss usage and proper attribution. We appreciate users who help maintain transparency in the AI ecosystem by keeping us informed of how our work is being used. The same goes for any merges or derivatives, hopefully :)

🚀 Model Details

📚 Dataset Composition

Total token count: ~270M Tokens (210M Trainable), over 2 epochs.

🎨 Formatting and Samplers

Instruct Format: Llama-3-Instruct

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

Note that the newlines shown in the example above are part of the format.
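Since this is the standard Llama-3-Instruct format, the prompt can also be built with the tokenizer's chat template. Below is a minimal sketch using the usual `transformers` workflow; the `model_id` value is a placeholder, not the official repository path, so replace it with this model's actual Hugging Face ID.

```python
from transformers import AutoTokenizer

# Placeholder ID: replace with this model's actual Hugging Face repository path.
model_id = "path/to/L3.3-70B-Vulpecula"

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi hi! Tell me about foxes."},
]

# Renders the Llama-3-Instruct special tokens shown above and appends the
# assistant header so the model knows it should respond next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```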

✨ Sampler Recommendations

```yaml
temperature: 0.75
min_p: 0.1
repetition_penalty: 1.1
presence_penalty: 1.1
```
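For reference, here is a minimal sketch of applying these samplers with `transformers` generation. It assumes a recent `transformers` release that supports `min_p`; presence penalty is not a native `transformers` argument, so it appears only as a comment (OpenAI-compatible backends such as vLLM expose it directly). The `model_id` value is again a placeholder for the actual repo path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder ID: replace with this model's actual Hugging Face repository path.
model_id = "path/to/L3.3-70B-Vulpecula"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene about a fox shrine maiden."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.75,        # recommended above
    min_p=0.1,               # requires a recent transformers version
    repetition_penalty=1.1,  # recommended above
    # presence_penalty=1.1 is set on OpenAI-compatible backends (e.g. vLLM),
    # not as a transformers generate() argument.
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```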

⚙️ Training Details

```yaml
# Iterations
num_epochs: 2

# Batching - Global Batch = 4 GPUs × micro_batch 2 × grad_accum 4 = 32
gradient_accumulation_steps: 4
micro_batch_size: 2

# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.00002
max_grad_norm: 1
weight_decay: 0.01
```
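As a quick sanity check on the batching comment above, the effective global batch size works out as follows (a small illustration, not part of the training config itself):

```python
# Effective global batch size for the run described above.
num_gpus = 4
micro_batch_size = 2
gradient_accumulation_steps = 4

global_batch_size = num_gpus * micro_batch_size * gradient_accumulation_steps
print(global_batch_size)  # 4 * 2 * 4 = 32 sequences per optimizer step
```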

🦊 Thank you for visiting! May the foxes bring you good fortune! 🌸