---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: gemma
language:
- en
---

# Model card
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.*
This Plant Disease detection Gemma model is a fine-tuned version of Gemma 3n E4B, trained on a plant disease dataset. It specializes in the scientific analysis of plant diseases in images of plants, an area where most models lack accurate information.
It also produces text output very quickly when run through the `pipeline` API.

### Inputs and outputs*

-   **Input:**
    -   Text string, such as a question, a prompt, or a document to be
        summarized
    -   Images, normalized to 256x256, 512x512, or 768x768 resolution
        and encoded to 256 tokens each
    -   Audio data encoded to 6.25 tokens per second from a single channel
    -   Total input context of 32K tokens
-   **Output:**
    -   Generated text in response to the input, such as an answer to a
        question, analysis of image content, or a summary of a document
    -   Total output length up to 32K tokens, subtracting the request
        input tokens

### Usage

Below are some code snippets to help you get started quickly with running
the model. First, install the Transformers library. Gemma 3n is supported
starting from transformers 4.53.0.

```sh
$ pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

You can initialize the model and processor for inference with `pipeline` as
follows.

```python
from transformers import pipeline
import torch
pipe = pipeline(
    "image-text-to-text",
    model="EpistemeAI/PD_gemma-3n-E4B-v2",
    device="cuda",
    torch_dtype=torch.bfloat16,
)
```

With instruction-tuned models, you need to use chat templates to process your
inputs first. Then, you can pass them to the pipeline.

```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]
output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```
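
Since the model specializes in plant disease analysis, the same chat-template pattern applies to photos of affected plants. The snippet below is a sketch: the image URL and the system prompt are placeholders for illustration, not part of the released model card.

```python
# Reuses the `pipe` created above. The URL below is a placeholder --
# substitute a real photo of the plant you want analyzed.
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a plant pathology assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/tomato_leaf.jpg"},  # placeholder image URL
            {"type": "text", "text": "What disease is affecting this leaf, and how can it be treated?"}
        ]
    }
]
output = pipe(text=messages, max_new_tokens=300)
print(output[0]["generated_text"][-1]["content"])
```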
        
Try the model in this [demo](https://huggingface.co/spaces/legolasyiu/Gemma3N-challenge).

## Model parameters
- Model size: 8.39B parameters
- Tensor type: BF16
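
As a rough sizing estimate (not from the original card): at BF16 precision each parameter occupies 2 bytes, so the weights alone need about 8.39B × 2 ≈ 16.8 GB of accelerator memory, before activations and KV cache.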


## Training Dataset
- Dataset name: minhhungg/plant-disease-dataset
  - 70,295 rows
  - 70,295 24-bit, 256x256 images of plant diseases, with questions and answers

## LoRA and Training Parameters
- LoRA Adapter Parameters
  - r = 32, lora_alpha = 32, lora_dropout = 0, bias = "none", random_state = 3407
- Training Parameters (see the sketch after this list for how they map onto a training run)
  - per_device_train_batch_size = 1, gradient_accumulation_steps = 4, gradient_checkpointing = True, gradient_checkpointing_kwargs = {"use_reentrant": False}
  - max_grad_norm = 0.3, warmup_ratio = 0.03, max_steps = 60, learning_rate = 2e-4, logging_steps = 1, save_strategy = "steps", optim = "adamw_torch_fused", weight_decay = 0.01
  - lr_scheduler_type = "cosine", seed = 3407
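Below is a minimal sketch of how these hyperparameters map onto an Unsloth + TRL fine-tuning run. The exact training script was not published, and the image/question/answer preprocessing is omitted, so treat the overall structure and `output_dir` as assumptions that illustrate the configuration rather than the authoritative recipe.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastModel

# Load the 4-bit base model this card lists as its starting point.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit",
    load_in_4bit=True,
)

# Attach LoRA adapters using the adapter parameters listed above.
model = FastModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    lora_dropout=0,
    bias="none",
    random_state=3407,
)

# The fine-tuning dataset named in the Training Dataset section.
dataset = load_dataset("minhhungg/plant-disease-dataset", split="train")

# Training parameters listed above; output_dir is an assumption.
args = SFTConfig(
    output_dir="outputs",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    max_steps=60,
    learning_rate=2e-4,
    logging_steps=1,
    save_strategy="steps",
    optim="adamw_torch_fused",
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    seed=3407,
)

trainer = SFTTrainer(model=model, tokenizer=tokenizer, train_dataset=dataset, args=args)
trainer.train()
```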
## Usage and Limitations*

These models have certain limitations that users should be aware of.

### Intended Usage*

Open generative models have a wide range of applications across various
industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

-   Content Creation and Communication
    -   **Text Generation**: Generate creative text formats such as
        poems, scripts, code, marketing copy, and email drafts.
    -   **Chatbots and Conversational AI**: Power conversational
        interfaces for customer service, virtual assistants, or interactive
        applications.
    -   **Text Summarization**: Generate concise summaries of a text
        corpus, research papers, or reports.
    -   **Image Data Extraction**: Extract, interpret, and summarize
        visual data for text communications.
    -   **Audio Data Extraction**: Transcribe spoken language, translate speech
        to text in other languages, and analyze sound-based data.
-   Research and Education
    -   **Natural Language Processing (NLP) and Generative Model
        Research**: These models can serve as a foundation for researchers to
        experiment with generative models and NLP techniques, develop
        algorithms, and contribute to the advancement of the field.
    -   **Language Learning Tools**: Support interactive language
        learning experiences, aiding in grammar correction or providing writing
        practice.
    -   **Knowledge Exploration**: Assist researchers in exploring large
        bodies of data by generating summaries or answering questions about
        specific topics.

### Limitations*

-   Training Data
    -   The quality and diversity of the training data significantly
        influence the model's capabilities. Biases or gaps in the training data
        can lead to limitations in the model's responses.
    -   The scope of the training dataset determines the subject areas
        the model can handle effectively.
-   Context and Task Complexity
    -   Models are better at tasks that can be framed with clear
        prompts and instructions. Open-ended or highly complex tasks might be
        challenging.
    -   A model's performance can be influenced by the amount of context
        provided (longer context generally leads to better outputs, up to a
        certain point).
-   Language Ambiguity and Nuance
    -   Natural language is inherently complex. Models might struggle
        to grasp subtle nuances, sarcasm, or figurative language.
-   Factual Accuracy
    -   Models generate responses based on information they learned
        from their training datasets, but they are not knowledge bases. They
        may generate incorrect or outdated factual statements.
-   Common Sense
    -   Models rely on statistical patterns in language. They might
        lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks*

The development of generative models raises several ethical concerns. In
creating an open model, we have carefully considered the following:

-   Bias and Fairness
    -   Generative models trained on large-scale, real-world text and image data
        can reflect socio-cultural biases embedded in the training material.
        These models underwent careful scrutiny, with input data pre-processing
        described and posterior evaluations reported in this card.
-   Misinformation and Misuse
    -   Generative models can be misused to generate text that is
        false, misleading, or harmful.
    -   Guidelines are provided for responsible use with the model, see the
        [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
-   Transparency and Accountability
    -   This model card summarizes details on the models' architecture,
        capabilities, limitations, and evaluation processes.
    -   A responsibly developed open model offers the opportunity to
        share innovation by making generative model technology accessible to
        developers and researchers across the AI ecosystem.

Risks identified and mitigations:

-   **Perpetuation of biases**: It's encouraged to perform continuous monitoring
    (using evaluation metrics, human review) and the exploration of de-biasing
    techniques during model training, fine-tuning, and other use cases.
-   **Generation of harmful content**: Mechanisms and guidelines for content
    safety are essential. Developers are encouraged to exercise caution and
    implement appropriate content safety safeguards based on their specific
    product policies and application use cases.
-   **Misuse for malicious purposes**: Technical limitations and developer
    and end-user education can help mitigate against malicious applications of
    generative models. Educational resources and reporting mechanisms for users
    to flag misuse are provided. Prohibited uses of Gemma models are outlined
    in the
    [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
-   **Privacy violations**: Models were trained on data filtered for removal of
    certain personal information and other sensitive data. Developers are
    encouraged to adhere to privacy regulations with privacy-preserving
    techniques.

Reference:
\* Sections marked with an asterisk are adapted from the Gemma-3n-E4B model card.

## Benchmark

### mmlu_prox_en_biology

`hf (pretrained=EpistemeAI/PD_gemma-3n-E4B-v2), gen_kwargs: (None), limit: None, num_fewshot: 1, batch_size: 8`
| Tasks |Version|    Filter    |n-shot|  Metric   |   |PD Gemma| Jamba 1.6 Mini |
|-------|------:|--------------|-----:|-----------|---|----:|----:|
|biology|      0|custom-extract|     1|exact_match|↑  |0.4786| 0.279|
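
These tables follow the output format of EleutherAI's lm-evaluation-harness. Assuming that harness produced them (the header line above matches its `hf` backend), a command along the following lines should reproduce the first row; the task name is taken from the heading above and requires a harness version that includes the MMLU-ProX tasks.

```sh
$ pip install lm-eval
$ lm_eval --model hf \
    --model_args pretrained=EpistemeAI/PD_gemma-3n-E4B-v2 \
    --tasks mmlu_prox_en_biology \
    --num_fewshot 1 \
    --batch_size 8
```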

### mmlu_pro_plus_en_biology

| Tasks |Version|    Filter    |n-shot|  Metric   |   |Value | 
|-------|------:|--------------|-----:|-----------|---|-----:|
|biology|      1|custom-extract|     5|exact_match|↑  |0.3453|

### gpqa_diamond_zeroshot

|        Tasks        |Version|Filter|n-shot| Metric |   |PD-Gemma 3n-E4B|E4B IT|Change|
|---------------------|------:|------|-----:|--------|---|--------------:|-----:|------|
|gpqa_diamond_zeroshot|      1|none  |     0|acc     |↑  |0.3824|0.237|61.35% increase|


![image/png](https://cdn-uploads.huggingface.co/production/uploads/651def66d0656f67a5f431b4/1qqvsTHnlKkyNvZqLbyHT.png)

## Models
Links to the models:
- [PD_gemma-3n-E4B-v2](https://huggingface.co/EpistemeAI/PD_gemma-3n-E4B-v2)
- [PD_gemma-3n-E2B](https://huggingface.co/EpistemeAI/PD_gemma-3n-E2B)


## Demo
[Demo](https://huggingface.co/spaces/legolasyiu/Gemma3N-challenge)

## Thanks
Thanks to minhhungg for allowing me to fine-tune on the Hugging Face dataset minhhungg/plant-disease-dataset.

## Citation
This model was fine-tuned for Google's Gemma 3n Impact Challenge.

## Update
Fine-tuned for 400 steps.
Pipeline output is significantly faster than in the previous PD Gemma E4B version.

# Uploaded finetuned model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)