gghfez committed
Commit b350a9c · verified · 1 Parent(s): 2da85cc

Create README.md

Files changed (1): README.md (+64 -0)
---
base_model:
- google/gemma-3-4b-it
license: gemma
pipeline_tag: text-generation
library_name: transformers
---

# Gemma-3-4b Text-Only

This model is a text-only version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it), converted from the multimodal Gemma3ForConditionalGeneration architecture to the text-only Gemma3ForCausalLM architecture.

## Model Description

- **Original Model**: The original Gemma-3-4b-it is a multimodal model released by Google that can process both text and images
- **This Version**: This version has been modified to use the same Gemma3ForCausalLM architecture as the text-only 1b model, with the vision components removed
- **Parameters**: ~4 billion
- **Conversion Process**: Vision-related components were stripped while preserving the text-generation weights (a sketch of this process is shown below)

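For reference, the conversion can be sketched roughly as follows. This is a hypothetical reconstruction, not the exact script used to produce this checkpoint; it assumes the multimodal wrapper exposes its text decoder as a `language_model` submodule, as `Gemma3ForConditionalGeneration` did at release, and the output path is illustrative:

```python
# Hypothetical sketch of the conversion -- NOT the exact script used
# to produce this checkpoint.
import torch
from transformers import AutoTokenizer, Gemma3ForConditionalGeneration

OUT_DIR = "gemma-3-4b-novision"  # illustrative output path

# Load the full multimodal checkpoint.
mm_model = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it", torch_dtype=torch.bfloat16
)

# The wrapper holds the decoder as `language_model` (a Gemma3ForCausalLM);
# saving only that submodule drops the vision tower and projector.
mm_model.language_model.save_pretrained(OUT_DIR)

# The tokenizer is unchanged by the conversion.
AutoTokenizer.from_pretrained("google/gemma-3-4b-it").save_pretrained(OUT_DIR)
```

Because only the decoder weights are saved, the resulting checkpoint loads directly as `Gemma3ForCausalLM`.
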
## Usage

You can load and use this model the same way you would use the text-only [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it) version:

```python
from transformers import AutoTokenizer, BitsAndBytesConfig, Gemma3ForCausalLM
import torch

model_id = "gghfez/gemma-3-4b-novision"

# Optional: load the weights in 8-bit to reduce memory usage.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

model = Gemma3ForCausalLM.from_pretrained(
    model_id, quantization_config=quantization_config
).eval()

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a helpful assistant."}],
        },
        {
            "role": "user",
            "content": [{"type": "text", "text": "Write a poem on Hugging Face, the company"}],
        },
    ],
]

# Apply the Gemma 3 chat template. The resulting token ids are integer
# tensors, so they are only moved to the model's device (not cast to bfloat16).
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=64)

outputs = tokenizer.batch_decode(outputs)
print(outputs[0])
```
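
Since the checkpoint is saved as a standard text-generation model, the high-level `pipeline` API should also work. A minimal sketch, assuming a recent transformers version that accepts chat-format messages in pipelines and enough memory for bfloat16 weights:

```python
# Minimal sketch using the high-level pipeline API; prompt text is illustrative.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="gghfez/gemma-3-4b-novision",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Summarize what a text-only Gemma 3 model is."},
]

# The pipeline applies the model's chat template automatically and returns
# the full conversation; the last message is the assistant's reply.
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])
```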