merve (HF Staff) committed
Commit 7bffede · verified · Parent(s): e14b3cf

Add usage example

Files changed (1): README.md (+57, -0)
README.md CHANGED
@@ -32,7 +32,64 @@ This model expects as input a single document image, rendered such that the long
The prompt must then contain the additional metadata from the document, and the easiest way to generate this is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
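For instance, the toolkit can render a page and extract the positional "anchor text" that fills the prompt. Below is a minimal sketch; the helper names (`render_pdf_to_base64png`, `get_anchor_text`, `build_finetuning_prompt`) and their arguments follow the toolkit's documented API and may differ between releases:

```python
# Minimal sketch of prompt generation with the olmOCR toolkit
# (pip install olmocr); assumes a local paper.pdf to process.
from olmocr.data.renderpdf import render_pdf_to_base64png
from olmocr.prompts import build_finetuning_prompt
from olmocr.prompts.anchor import get_anchor_text

# Render page 1 of the PDF to a base64-encoded PNG; set the target
# dimension to match the rendering size described above
image_base64 = render_pdf_to_base64png("paper.pdf", 1, target_longest_image_dim=1288)

# Extract the raw text with position metadata ("anchor text") for page 1
anchor_text = get_anchor_text("paper.pdf", 1, pdf_engine="pdfreport", target_length=4000)

# Fill the metadata into the model's prompt template
prompt = build_finetuning_prompt(anchor_text)
```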
A simple way to run inference with transformers is as follows:

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "allenai/olmOCR-7B-0725"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda").eval()

PROMPT = """
Below is the image of one page of a PDF document, as well as some raw textual content that
was previously extracted for it that includes position information for each image and
block of text (The origin [0x0] of the coordinates is in the lower left corner of the
image).
Just return the plain text representation of this document as if you were reading it
naturally.
Turn equations into a LaTeX representation, and tables into markdown format. Remove the
headers and footers, but keep references and footnotes.
Read any natural handwriting.
This is likely one page out of several in the document, so be sure to preserve any sentences
that come from the previous page, or continue onto the next page, exactly as they are.
If there is no text at all that you think you should read, you can output null.
Do not hallucinate.
RAW_TEXT_START
{base_text}
RAW_TEXT_END
"""

# The raw text and position metadata for the page, e.g. the anchor text
# produced by the olmOCR toolkit as sketched above.
base_text = ""

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png",
            },
            {"type": "text", "text": PROMPT.format(base_text=base_text)},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=1000)
# Drop the prompt tokens so only the newly generated text is decoded
generated_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
```
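The list comprehension slices each output sequence past the prompt length, so only the newly generated transcription is decoded. Swap the example image URL for your own rendered page, and set `base_text` to the metadata extracted for that page (for example, the anchor text produced by the toolkit sketch above).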
 
## License and use