ruke1ire committed · verified
Commit ce95606 · Parent(s): 3d71f3e

Update README.md

Files changed (1): README.md (+3 −2)

README.md CHANGED
@@ -81,8 +81,9 @@ from Japanese documents and texts.
 
  ## Extraction Quality
 
 - We evaluated several models including GPT5 and a 32B parameter Qwen3 model with thinking mode enabled, on 1k random samples taken from [finepdf](https://huggingface.co/datasets/HuggingFaceFW/finepdfs).
 - LFM2-350M-PII-Extract-JP boasts GPT5-level performance on a tiny 350M parameter footprint, bringing cloud level performance on device!
 + We evaluated several models, including GPT-5 and a 32B-parameter Qwen3 model with thinking mode enabled.
 + The image below shows the average recall score on 1,000 random samples, chunked into segments of 100–1,000 characters, taken from [finepdf](https://huggingface.co/datasets/HuggingFaceFW/finepdfs).
 + Overall, we found **LFM2-350M-PII-Extract-JP** to achieve GPT-5–level performance with only 350 million parameters—bringing cloud-grade performance to on-device applications!
 
   ![Model Size vs Recall Score](https://cdn-uploads.huggingface.co/production/uploads/65d6b6c1a07ad79084a0d214/ToNbcFk-7Pyr-aReyErEm.png)
 
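The updated README text describes an evaluation of average entity-level recall over character-bounded text chunks. As a rough illustration of what such a metric computes, here is a minimal sketch; the function names, the chunking bounds, and the set-based entity matching are all assumptions for illustration, not the authors' actual evaluation harness.

```python
# Hypothetical sketch of average PII-extraction recall over text chunks.
# All names and parameters here are illustrative assumptions.

def chunk_text(text, min_len=100, max_len=1000):
    """Split text into segments of at most max_len characters,
    dropping trailing fragments shorter than min_len."""
    chunks = [text[i:i + max_len] for i in range(0, len(text), max_len)]
    return [c for c in chunks if len(c) >= min_len]

def recall(predicted, gold):
    """Entity-level recall: fraction of gold PII entities recovered."""
    if not gold:
        return 1.0
    return len(set(predicted) & set(gold)) / len(set(gold))

def average_recall(samples):
    """samples: list of (predicted_entities, gold_entities) pairs."""
    scores = [recall(p, g) for p, g in samples]
    return sum(scores) / len(scores)

# Toy example with made-up Japanese PII entities:
samples = [
    (["山田太郎", "090-1234-5678"], ["山田太郎", "090-1234-5678"]),  # both found
    (["東京都港区"], ["東京都港区", "taro@example.com"]),            # one missed
]
print(average_recall(samples))  # 0.75
```

A set-based exact match is the simplest scoring choice; a real harness might instead use span-level or normalized matching, which the README does not specify.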