Commit 5d0bbfe (verified) by prithivMLmods · Parent(s): ea3e198

Update README.md

Files changed (1): README.md (+31 -1)
README.md CHANGED

@@ -11,4 +11,34 @@ tags:
- text-generation-inference
- document
- ocr
---

# **docscopeOCR-7B-050425-exp-GGUF**

> The **docscopeOCR-7B-050425-exp** model is a fine-tuned version of **Qwen/Qwen2.5-VL-7B-Instruct**, optimized for **Document-Level Optical Character Recognition (OCR)**, **long-context vision-language understanding**, and **accurate image-to-text conversion with mathematical LaTeX formatting**. Built on top of the Qwen2.5-VL architecture, this model significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats.

## Model Files

| File Name                              | Size    | Format        | Description                      |
|----------------------------------------|---------|---------------|----------------------------------|
| docscopeOCR-7B-050425-exp.IQ4_XS.gguf  | 4.25 GB | GGUF (IQ4_XS) | Int4 extra-small quantized model |
| docscopeOCR-7B-050425-exp.Q2_K.gguf    | 3.02 GB | GGUF (Q2_K)   | 2-bit quantized model            |
| docscopeOCR-7B-050425-exp.Q3_K_L.gguf  | 4.09 GB | GGUF (Q3_K_L) | 3-bit large quantized model      |
| docscopeOCR-7B-050425-exp.Q3_K_M.gguf  | 3.81 GB | GGUF (Q3_K_M) | 3-bit medium quantized model     |
| docscopeOCR-7B-050425-exp.Q3_K_S.gguf  | 3.49 GB | GGUF (Q3_K_S) | 3-bit small quantized model      |
| docscopeOCR-7B-050425-exp.Q4_K_M.gguf  | 4.68 GB | GGUF (Q4_K_M) | 4-bit medium quantized model     |
| docscopeOCR-7B-050425-exp.Q5_K_M.gguf  | 5.44 GB | GGUF (Q5_K_M) | 5-bit medium quantized model     |
| docscopeOCR-7B-050425-exp.Q5_K_S.gguf  | 5.32 GB | GGUF (Q5_K_S) | 5-bit small quantized model      |
| docscopeOCR-7B-050425-exp.Q6_K.gguf    | 6.25 GB | GGUF (Q6_K)   | 6-bit quantized model            |
| docscopeOCR-7B-050425-exp.Q8_0.gguf    | 8.1 GB  | GGUF (Q8_0)   | 8-bit quantized model            |
| config.json                            | 36 B    | JSON          | Configuration file               |
| .gitattributes                         | 2.25 kB | Text          | Git attributes configuration     |

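As a minimal sketch, any of the quantized files above can be fetched with `huggingface_hub`. The repo id below is an assumption inferred from the model name and uploader; adjust it if the repository lives under a different namespace.

```python
# Hypothetical download sketch; only the filenames come from the table above.
from huggingface_hub import hf_hub_download

REPO_ID = "prithivMLmods/docscopeOCR-7B-050425-exp-GGUF"  # assumed repo id
FILENAME = "docscopeOCR-7B-050425-exp.Q4_K_M.gguf"        # 4-bit medium quant from the table

# Downloads the file into the local Hugging Face cache and returns its path.
model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
print(model_path)
```
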
## Quants Usage

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![quant comparison graph](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
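
Below is a rough, text-only loading sketch with `llama-cpp-python`, assuming your build's bundled llama.cpp supports the Qwen2.5-VL architecture; image input would additionally require a multimodal projector (mmproj) file, which is not listed in the table above and is not covered here.

```python
# Assumed workflow, not an official usage guide for this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="docscopeOCR-7B-050425-exp.Q4_K_M.gguf",  # e.g. the path returned by hf_hub_download
    n_ctx=8192,        # long documents benefit from a generous context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

# Text-only prompt as a smoke test of the quantized weights.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Return the LaTeX for the quadratic formula."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```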