mradermacher committed
Commit c90141e · verified · 1 Parent(s): 5103c8a

auto-patch README.md

Files changed (1): README.md (+7 -5)
README.md CHANGED

@@ -5,12 +5,9 @@ datasets:
 language: fa
 library_name: transformers
 license: apache-2.0
+mradermacher:
+  readme_rev: 1
 quantized_by: mradermacher
-tags:
-- persian
-- text-generation
-- qlora
-- 4-bit-quantization
 ---
 ## About
 
@@ -22,7 +19,12 @@ tags:
 weighted/imatrix quants of https://huggingface.co/mshojaei77/gemma-3-4b-persian-v0
 
 <!-- provided-files -->
+
+***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gemma-3-4b-persian-v0-i1-GGUF).***
+
 static quants are available at https://huggingface.co/mradermacher/gemma-3-4b-persian-v0-GGUF
+
+**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/gemma-3-4b-persian-v0-GGUF).**
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
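
The patched Usage section only points readers elsewhere for how to work with GGUF files. As a minimal sketch (not part of this commit), fetching one imatrix quant from this repository and running it with llama-cpp-python might look like the following; the quant filename is an assumed example and should be checked against the actual file list on the model page.

```python
# Minimal sketch: download one imatrix quant and run it with llama-cpp-python.
# The filename below is an assumption; pick a real one from the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/gemma-3-4b-persian-v0-i1-GGUF",
    filename="gemma-3-4b-persian-v0.i1-Q4_K_M.gguf",  # assumed example filename
)

llm = Llama(model_path=model_path, n_ctx=4096)  # load the quantized model
out = llm("سلام! خودت را معرفی کن.", max_tokens=128)  # short Persian prompt
print(out["choices"][0]["text"])
```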