---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- gemma-3-27b-it
---
Quantizations of https://huggingface.co/google/gemma-3-27b-it

**Note**: you will need llama.cpp [b4875](https://github.com/ggml-org/llama.cpp/releases/tag/b4875) or later to run the model.

### Open source inference clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [ollama](https://github.com/ollama/ollama)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [jan](https://github.com/janhq/jan)
* [GPT4All](https://github.com/nomic-ai/gpt4all)

### Closed source inference clients/UIs
* [LM Studio](https://lmstudio.ai/)
* [Msty](https://msty.app/)
* [Backyard AI](https://backyard.ai/)
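
All of the clients above consume these GGUF files directly. As one concrete illustration, here is a minimal sketch that loads a quant from Python through the llama-cpp-python bindings for llama.cpp; the bindings are not covered elsewhere in this card, the filename is a placeholder for whichever quant you download, and the context/offload settings are only examples. Per the note above, the underlying llama.cpp build must be b4875 or newer.

```python
# Minimal sketch (not from the original card): run one of the GGUF quants via the
# llama-cpp-python bindings for llama.cpp. The filename below is a placeholder --
# point it at whichever quant file you downloaded from this repository.
# The underlying llama.cpp build must be b4875 or newer (see the note above).
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-27b-it-Q4_K_M.gguf",  # placeholder quant filename
    n_ctx=8192,        # context window to allocate; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of Gemma 3."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The GUI clients listed above wrap the same loading step, so no code is needed if you prefer them.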

---

# From original readme

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.

### Inputs and outputs

- **Input:**
  - Text string, such as a question, a prompt, or a document to be summarized
  - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
  - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size

- **Output:**
  - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
  - Total output context of 8192 tokens

### Usage

Below are some code snippets to help you get started quickly with running the model. First, install the Transformers library with the version made for Gemma 3:

```sh
$ pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
```
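
Because Gemma 3 support comes from this pinned preview build, it can be worth confirming that the installed transformers actually exposes the Gemma 3 classes before moving on; the check below is an optional sketch, not part of the original card.

```python
# Optional sanity check (not from the original card): confirm the installed
# transformers build exposes the Gemma 3 classes used in the snippets below.
import transformers
from transformers import Gemma3ForConditionalGeneration  # fails on builds without Gemma 3 support

print(transformers.__version__)
```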

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

You can initialize the model and processor for inference with `pipeline` as follows.

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-27b-it",
    device="cuda",
    torch_dtype=torch.bfloat16
)
```

With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.

```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0][0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```

#### Running the model on a single/multi GPU

```python
# pip install accelerate

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3-27b-it"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```
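
The full-precision 27B checkpoint needs substantially more memory than most single GPUs provide. If you want to stay in Transformers on limited VRAM rather than use the GGUF quants in this repository, one option is on-the-fly 4-bit loading with bitsandbytes. The following is a sketch of that variant, not part of the original card: it assumes `bitsandbytes` is installed and changes only the loading step of the snippet above.

```python
# pip install accelerate bitsandbytes
# Sketch (not from the original card): load the Transformers checkpoint in 4-bit
# with bitsandbytes to fit tighter VRAM budgets. Only the loading step changes;
# the processor, chat template, and generate() calls stay the same as above.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-27b-it"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=bnb_config,
).eval()

processor = AutoProcessor.from_pretrained(model_id)
```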