---
base_model:
- WinKawaks/vit-tiny-patch16-224
library_name: transformers
license: mit
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- vision transformer
- agriculture
- plant disease detection
- smart farming
- image classification
---
+
17
+ # Model Card for Smart Farming Disease Detection Transformer
18
+
19
+ This model is a Vision Transformer (ViT) designed to identify plant diseases in crops as part of a smart agricultural farming system. It has been trained on a diverse dataset of plant images, including different disease categories affecting crops such as corn, potato, rice, and wheat. The model aims to provide farmers and agronomists with real-time disease detection for better crop management.
20
+
21
+ ## Model Details
22
+
23
+ ### Model Description
24
+
25
+ This Vision Transformer model has been fine-tuned to classify various plant diseases commonly found in agricultural settings. The model can classify diseases in crops such as corn, potato, rice, and wheat, identifying diseases like rust, blight, leaf spots, and others. The goal is to enable precision farming by helping farmers detect diseases early and take appropriate actions.
26
+
27
+ - **Developed by:** Wambugu Kinyua
28
+ - **Model type:** Vision Transformer (ViT)
29
+ - **Languages (NLP):** N/A (Computer Vision Model)
30
+ - **License:** Apache 2.0
31
+ - **Finetuned from model:** (WinKawaks/vit-tiny-patch16-224)[https://huggingface.co/WinKawaks/vit-tiny-patch16-224]
32
+ - **Input:** Images of crops (RGB format)
33
+ - **Output:** Disease classification labels (healthy or diseased categories)

## Diseases covered by the model

| Crop   | Diseases Identified |
|--------|---------------------|
| Corn   | Common Rust         |
| Corn   | Gray Leaf Spot      |
| Corn   | Healthy             |
| Corn   | Leaf Blight         |
| -      | Invalid             |
| Potato | Early Blight        |
| Potato | Healthy             |
| Potato | Late Blight         |
| Rice   | Brown Spot          |
| Rice   | Healthy             |
| Rice   | Leaf Blast          |
| Wheat  | Brown Rust          |
| Wheat  | Healthy             |
| Wheat  | Yellow Rust         |

## Uses

### Direct Use

This model can be used directly to classify crop images and detect plant diseases. It is especially useful for precision farming, enabling users to monitor crop health and intervene early based on the detected disease.

### Downstream Use

This model can be fine-tuned on other agricultural datasets for specific crops or regions to improve its performance, or be integrated into larger precision-farming systems that include features such as weather prediction and irrigation control.

It can be quantized, or deployed in full precision, on edge devices thanks to its small parameter count, without compromising precision and accuracy.
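As a rough illustration of post-training dynamic quantization (a sketch using a stand-in linear stack, not the actual ViT weights), PyTorch can quantize `nn.Linear` layers, which hold most of a transformer's parameters:

```python
import torch
from torch import nn

# Stand-in for the classifier; dynamic quantization targets nn.Linear layers,
# which account for the bulk of a ViT's parameters.
model = nn.Sequential(nn.Linear(192, 64), nn.ReLU(), nn.Linear(64, 14))

# Weights are stored as int8; activations stay in floating point.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

out = quantized(torch.randn(1, 192))
```

The quantized copy runs on CPU and is substantially smaller on disk, which is what makes edge deployment practical.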

### Out-of-Scope Use

This model is not designed for non-agricultural image classification tasks or for environments with insufficient or very noisy data. Misuse includes applying the model in regions whose agricultural conditions differ substantially from those it was trained on.

## Bias, Risks, and Limitations

- The model may exhibit bias toward the crops and diseases present in the training dataset, leading to lower performance on unrepresented diseases or crop varieties.
- False negatives (failing to detect a disease) may result in untreated crop damage, while false positives could lead to unnecessary interventions.

### Recommendations

Users should evaluate the model on their specific crops and farming conditions. Regular updates and retraining with local data are recommended for optimal performance.

## How to Get Started with the Model

```python
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# Load the image processor and the fine-tuned classifier.
processor = ViTImageProcessor.from_pretrained('wambugu71/crop_leaf_diseases_vit')
model = ViTForImageClassification.from_pretrained(
    'wambugu1738/crop_leaf_diseases_vit',
    ignore_mismatched_sizes=True
)

# Replace <image_path> with the path to your crop image.
image = Image.open('<image_path>').convert('RGB')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# The predicted class is the label with the highest logit.
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
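`argmax` alone returns the most likely class index; if a confidence score is also wanted, the logits can be turned into probabilities with a softmax. A minimal pure-Python sketch (the logit values below are made up for illustration):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize the exponentials.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 3-class logits, not real model output.
probs = softmax([2.0, 0.5, -1.0])
best = max(range(len(probs)), key=probs.__getitem__)
```

The highest probability can then be reported alongside the predicted label as a rough confidence measure.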

## Training Details

### Training Data

The model was trained on a dataset containing images of various crops with labeled diseases, including the following categories:

- **Corn**: Common Rust, Gray Leaf Spot, Leaf Blight, Healthy
- **Potato**: Early Blight, Late Blight, Healthy
- **Rice**: Brown Spot, Hispa, Leaf Blast, Healthy
- **Wheat**: Brown Rust, Yellow Rust, Healthy

The dataset includes images captured under various lighting conditions and angles, from both controlled and uncontrolled environments, to simulate real-world farming scenarios. We used publicly available datasets as well as our own private data.

### Training Procedure

The model was fine-tuned using a Vision Transformer architecture pre-trained on the ImageNet dataset. The dataset was preprocessed by resizing the images and normalizing the pixel values.
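The resize-and-normalize step can be sketched as follows (a mean and std of 0.5 is a common convention for ViT checkpoints; confirm against the checkpoint's preprocessing config before relying on it):

```python
import numpy as np
from PIL import Image

def preprocess(img, size=224, mean=0.5, std=0.5):
    # Resize to the ViT input resolution, scale pixels to [0, 1], then normalize.
    img = img.convert("RGB").resize((size, size))
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - mean) / std
    return x.transpose(2, 0, 1)  # HWC -> CHW, as PyTorch expects
```

In practice the image processor shipped with the checkpoint performs this step, so the sketch is only meant to show what happens under the hood.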

#### Training Hyperparameters

- **Batch size:** 32
- **Learning rate:** 2e-5
- **Epochs:** 4
- **Optimizer:** AdamW
- **Precision:** fp16
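With these hyperparameters, a single fine-tuning step looks roughly like this (a sketch using a stand-in linear head and random tensors, not the actual training script):

```python
import torch
from torch import nn

model = nn.Linear(192, 14)  # stand-in for the ViT classification head
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
criterion = nn.CrossEntropyLoss()

# One optimization step on a dummy batch of 32 feature vectors and labels.
features = torch.randn(32, 192)
labels = torch.randint(0, 14, (32,))

optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```

The real run additionally wraps this loop in fp16 autocasting and iterates over the image dataset for 4 epochs.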

### Evaluation

![Confusion matrix](disease_classification_metrics.png)

#### Testing Data, Factors & Metrics

The model was evaluated on a validation set consisting of 20% of the original dataset, with the following results:

- **Accuracy:** 98%
- **Precision:** 97%
- **Recall:** 97%
- **F1 Score:** 96%
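For reference, precision, recall, and F1 follow from per-class true/false positive and negative counts (the counts below are made up to illustrate the formulas, not taken from the model's actual confusion matrix):

```python
def prf(tp, fp, fn):
    # Precision: of everything predicted positive, how much was right.
    precision = tp / (tp + fp)
    # Recall: of everything actually positive, how much was found.
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a single class.
p, r, f1 = prf(tp=97, fp=3, fn=3)
```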

## Environmental Impact

Carbon emissions during model training can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware Type:** NVIDIA L40S
- **Hours used:** 1 hour
- **Cloud Provider:** Lightning AI

## Technical Specifications

### Model Architecture and Objective

The model uses a Vision Transformer architecture to learn image representations and classify them into disease categories. Its self-attention mechanism captures global contextual information across the image, making it well suited to agricultural disease detection.
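That global context comes from scaled dot-product self-attention, in which every patch token attends to every other token. A single-head NumPy sketch (dimensions chosen to match ViT-tiny's 192-wide, 196-patch layout; the random weights are placeholders):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    # Project tokens to queries, keys, and values.
    q, k, v = x @ wq, x @ wk, x @ wv
    # Scaled dot-product scores between every pair of tokens.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax: each token's attention weights sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
tokens, dim = 197, 192  # 196 patch tokens + [CLS], at ViT-tiny's hidden size
x = rng.standard_normal((tokens, dim))
wq, wk, wv = (rng.standard_normal((dim, dim)) * 0.02 for _ in range(3))
out = self_attention(x, wq, wk, wv)
```

Because every token sees every other token, a lesion anywhere on the leaf can influence the [CLS] representation used for classification.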

### Compute Infrastructure

#### Hardware

- NVIDIA L40S GPUs
- 48 GB RAM
- SSD storage for fast I/O

#### Software

- Python 3.9
- PyTorch 2.4.1+cu121
- pytorch_lightning
- Transformers library by Hugging Face

## Citation

If you use this model in your research or applications, please cite it as:

**BibTeX:**

```
@misc{kinyua2024smartfarming,
  title={Smart Farming Disease Detection Transformer},
  author={Wambugu Kinyua},
  year={2024},
  publisher={Hugging Face},
}
```

**APA:**

Kinyua, W. (2024). Smart Farming Disease Detection Transformer. Hugging Face.

## Model Card Contact

For further inquiries, contact: wambugukinyua@proton.me