---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
license: apache-2.0
language:
- en
tags:
- Finetuning
- PEFT
- NLP
- LLM
- text-generation-inference
- transformers
- QLoRA
- LoRA
---

# Model Card for phi3-finetuned-20250414-0740

This is a fine-tuned model trained on agricultural datasets for crop disease remedies.

## Model Details

phi3-finetuned-20250414-0740 can be used to suggest remedies for crop diseases. It was trained with supervised fine-tuning using PEFT (LoRA/QLoRA) and the Hugging Face `transformers` library, with a LoRA dropout of 0.1 and a LoRA rank (`r`) of 16. Training ran on the Google Colab free tier, which provides a T4 GPU with 15 GB of VRAM for sessions of up to 12 hours.

### Model Description

This is an agricultural chatbot built by fine-tuning Microsoft's Phi-3-mini-4k-instruct, a compact yet powerful instruction-tuned LLM (~3.8B parameters), for agriculture-related tasks using curated, domain-specific datasets. The model was trained on thousands of real-world examples from the agricultural domain, covering topics from crop disease symptoms and soil health tips to pesticide usage and sustainable farming practices.

- **Developed by:** Satyam Kahali (reach out on [LinkedIn](https://www.linkedin.com/in/satyam-kahali-883098235/))
- **Model type:** Causal Language Model (CausalLM)
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** microsoft/Phi-3-mini-4k-instruct

### Model Sources

- **Repository:** Satyam66/phi3-finetuned-20250414-0740

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
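The snippet below is a minimal sketch, assuming the repository hosts a PEFT (LoRA) adapter that is loaded on top of the base `microsoft/Phi-3-mini-4k-instruct` model; the example question and generation settings are illustrative placeholders and may need to be adjusted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "Satyam66/phi3-finetuned-20250414-0740"  # fine-tuned LoRA adapter

# Load the tokenizer and base model (trust_remote_code may be needed on older transformers versions)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the fine-tuned adapter weights
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Build a chat-style prompt with the model's chat template and generate a response
messages = [
    {"role": "user", "content": "My tomato leaves have yellow spots with brown edges. What disease is this and how can I treat it?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)

print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the adapter has instead been merged into the base weights and pushed as a standalone model, the same repository ID can be passed directly to `AutoModelForCausalLM.from_pretrained` without PEFT.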
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]
- `lora_alpha`: 32
- `lora_bias`: false
- `lora_dropout`: 0.05
- `r`: 16
- `fp16`: True
- `bf16`: False

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.1