---
base_model: CopyleftCultivars/llama-3.1-natural-farmer-16bit
language:
- en
license: llama3.1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- agriculture
- climate
- farming
- llama-cpp
- gguf-my-repo
datasets:
- CopyleftCultivars/Natural-Farming-Real-QandA-Conversations-Q1-2024-Update
---

# Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF
This model was converted to GGUF format from [`CopyleftCultivars/llama-3.1-natural-farmer-16bit`](https://huggingface.co/CopyleftCultivars/llama-3.1-natural-farmer-16bit) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CopyleftCultivars/llama-3.1-natural-farmer-16bit) for more details on the model.

# Llama 3.1 Natural Farmer V1 by Copyleft Cultivars (8B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/654527ce2a13610acc25d921/MBzPSJ5cWgfCmqGo4pLy-.png)

- **Developed by:** Caleb DeLeeuw (Solshine), Copyleft Cultivars (a nonprofit protecting and preserving vulnerable plants)
- **License:** Llama 3.1
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

Using real-world user data from a previous farmer-assistant chatbot service and additional curated datasets (prioritizing sustainable, regenerative, organic farming practices), this LLM was iteratively fine-tuned and tested against our previous releases (Gemma 2B Natural Farmer and Mistral 7B Natural Farmer), along with basic benchmarking. The model was then uploaded to the Hugging Face Hub in the hope that it will help farmers everywhere and inspire future work. Shout out to roger j (bhugxer) for help with the dataset and training framework.

Testing and further compiling to integrate into on-device app interfaces are ongoing. This project was created by Copyleft Cultivars, a nonprofit, in partnership with the Open Nutrient Project and Evergreen State College.
This project serves to democratize access to farming knowledge and support the protection of vulnerable plants.

This is V1 beta. It runs locally on Ollama with some experimental configuration, so you can use it off the grid and in places where internet is not accessible (i.e., most farms I've been on).

This Llama 3.1 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. It is a fine-tune of Llama 3.1 and inherits all use terms and licensing from the base model; please review the original release by Meta for more details.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -c 2048
```
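
## Use with Ollama

The notes above mention that this model runs locally on Ollama. A minimal Modelfile sketch for that setup, assuming you have Ollama installed and have already downloaded the GGUF file to the current directory (the model name `natural-farmer` below is an arbitrary choice, not part of the original release):

```
# Modelfile — point Ollama at the local GGUF weights
FROM ./llama-3.1-natural-farmer-16bit-q8_0.gguf

# Optional: a lower temperature keeps agronomy answers more conservative
PARAMETER temperature 0.7
```

Then register and run it with `ollama create natural-farmer -f Modelfile` followed by `ollama run natural-farmer`.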