Full finetuning and LoRA adapters for Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K
Checkpoints and LoRA adapters for Llama-2-7B trained on the Python subset of StarCoder for up to 20 billion tokens
Model and LoRA adapter checkpoints for Llama-2-7B finetuned on MetaMathQA
Model and LoRA adapter checkpoints for Llama-2-7B trained on OpenWebMath for up to 20 billion tokens
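A minimal sketch of how a LoRA adapter from one of these collections could be attached to the Llama-2-7B base model using the Hugging Face transformers and peft libraries. The adapter repository id below is a placeholder, not an actual repo name from these collections; substitute the id of the adapter checkpoint you want to load.

```python
# Sketch only: assumes transformers, peft, and torch are installed and that you have
# access to the gated meta-llama/Llama-2-7b-hf base model on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "your-org/your-lora-adapter"  # placeholder for an adapter repo from these collections

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter; merge_and_unload() folds the low-rank weights into the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model = model.merge_and_unload()

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Full-model (fully finetuned) checkpoints, by contrast, can be loaded directly with `AutoModelForCausalLM.from_pretrained` and do not require the `PeftModel` step.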