โ›๏ธ Andy-4-micro-LoRA โ›๏ธ

Andy-4-micro is a 1.5B test version of Andy-4; it shows how capable the Andy-4 dataset can be even in a tiny package.

## Model Details

This is a small, 1.5B-parameter version of Andy-4. It will be released in all formats about a week after the full Andy-4 release.

This model was trained in an environment optimized for smaller models, using the same datasets as the full Andy-4 model.

- Andy-4: 7B parameters
- Andy-4-micro: 1.5B parameters

## Additional Details

In preliminary testing of Andy-4-base and Andy-4-micro-base, Andy-4-micro-base proved only slightly worse than Andy-4-base, and sometimes on par with it on non-building examples.

Andy-4-micro plays best when allowed to reason, because the model can walk through what it should do step by step instead of relying on what it "remembers" it should do.

Both non-reasoning and reasoning modes were used in the preliminary testing. Again, this testing used Andy-4-micro-base and Andy-4-base, not the final versions.

Using this LoRA to continue training a model falls under the Andy License: the final model's name must include "Andy" somewhere, and credit to me, Sweaterdog, must be posted in the model repo.
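As a minimal sketch of how this LoRA could be applied (not the official setup): the adapter repo ID below is taken from this page, but the base checkpoint is an assumption, so substitute whichever 1.5B base model this adapter was actually trained from. Loading it with Hugging Face `transformers` and `peft` might look like:

```python
# Sketch: attach the Andy-4-micro LoRA adapter to its base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-1.5B-Instruct"  # assumed base model; verify before use
adapter_id = "Sweaterdog/Andy-4-micro-base-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Wrap the base model with the LoRA adapter weights.
model = PeftModel.from_pretrained(model, adapter_id)

# Optionally fold the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```

Continuing to train from here (which the license terms above cover) would mean passing the wrapped model to your usual fine-tuning loop instead of merging.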

## License
This model is licensed under the Andy 1.0 License.  
Credit: https://huggingface.co/Sweaterdog  
Acknowledgment: This work uses data and models created by @Sweaterdog.