---
license: apache-2.0
language:
- en
tags:
- code
---

This is a fine-tune of the Llama 3 8B base model on an Alpaca-like instruction-tuning dataset generated with GPT-4. My aim was to compare the instruction-tuned performance of this model against the official instruct version of Llama 3.
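
Below is a minimal sketch of how the model could be loaded and prompted with the `transformers` library. The repo id and the Alpaca-style prompt template are placeholders/assumptions, since this card does not specify them.

```python
# Minimal usage sketch. The repo id below is a placeholder; replace it with
# the actual path of this model on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llama3-8b-alpaca-gpt4"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Alpaca-style prompt format (assumed; the card does not state the exact template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what instruction tuning is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```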