Argilla 2.4: Easily Build Fine-Tuning and Evaluation datasets on the Hub — No Code Required
SFT + quantisation + Unsloth is a super easy way of squeezing extra performance out of an LLM at low latencies. Here are some handy resources to bootstrap your projects:
- A filtered dataset from HelpSteer2 with the most correct and coherent samples: burtenshaw/helpsteer-2-plus
- An SFT-finetuned model: https://huggingface.co/burtenshaw/gemma-help-tiny-sft
- The notebook I use to train the model: https://colab.research.google.com/drive/17oskw_5lil5C3jCW34rA-EXjXnGgRRZw?usp=sharing
- A set of Unsloth notebooks on finetuning and inference: https://docs.unsloth.ai/get-started/unsloth-notebooks
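For orientation, here is a minimal sketch of the kind of Unsloth SFT setup the training notebook above uses. The hyperparameters, the base checkpoint name, and the assumption that the dataset exposes a single "text" column are illustrative, not a copy of the notebook.

```python
# Minimal Unsloth SFT sketch (assumed hyperparameters, not the exact notebook).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load Gemma 2 2B in 4-bit to keep memory low.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-2-2b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The filtered HelpSteer2 dataset mentioned above.
dataset = load_dataset("burtenshaw/helpsteer-2-plus", split="train")

# Assumes a single "text" column; if the data is stored as prompt/response
# pairs, format each pair into one string before training.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="gemma-help-tiny-sft",
    ),
)
trainer.train()
```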
Open Image Generation Models: a collection of models that are open-source equivalents of flux-schnell and flux-dev.
- ostris/OpenFLUX.1 (Text-to-Image)
- OnomaAIResearch/Illustrious-xl-early-release-v0 (Text-to-Image)
- black-forest-labs/FLUX.1-schnell (Text-to-Image)
- TencentARC/PhotoMaker-V2 (Text-to-Image)
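As a quick way to try the collection, here is a minimal diffusers sketch for running FLUX.1-schnell; the prompt and the CPU-offload choice are illustrative assumptions.

```python
# Minimal FLUX.1-schnell inference sketch with diffusers (illustrative settings).
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM use

# schnell is a distilled model: a handful of steps, no classifier-free guidance.
image = pipe(
    "a watercolor painting of a lighthouse at dawn",  # assumed example prompt
    guidance_scale=0.0,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("flux-schnell.png")
```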
Gemma HelpSteer: a work-in-progress collection of resources for a project to finetune Gemma 2 2b for helpfulness with HelpSteer2.
- nvidia/HelpSteer2 (Dataset)
- google/gemma-2-2b (Text Generation)
- HelpSteer2: Open-source dataset for training top-performing reward models (Paper, arXiv 2406.08673)
- HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM (Paper, arXiv 2311.09528)
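Since the project builds on HelpSteer2's per-attribute ratings, here is a small sketch of how one might filter the dataset down to its most correct and coherent samples, in the spirit of the helpsteer-2-plus dataset above; the >= 4 thresholds are an assumption, not the published recipe.

```python
# Sketch: filter HelpSteer2 to its most correct and coherent samples.
# The >= 4 thresholds are an assumption, not the recipe behind
# burtenshaw/helpsteer-2-plus.
from datasets import load_dataset

ds = load_dataset("nvidia/HelpSteer2", split="train")

# HelpSteer2 rates each response 0-4 on helpfulness, correctness,
# coherence, complexity, and verbosity.
filtered = ds.filter(
    lambda row: row["correctness"] >= 4 and row["coherence"] >= 4
)

print(f"kept {len(filtered)} of {len(ds)} samples")
```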