Update README.md
README.md (CHANGED)
@@ -81,8 +81,9 @@ from Japanese documents and texts.
 
 ## Extraction Quality
 
-We evaluated several models including
-
+We evaluated several models, including GPT-5 and a 32B-parameter Qwen3 model with thinking mode enabled.
+The image below shows the average recall score on 1,000 random samples, chunked into segments of 100–1,000 characters, taken from [finepdf](https://huggingface.co/datasets/HuggingFaceFW/finepdfs).
+Overall, we found **LFM2-350M-PII-Extract-JP** to achieve GPT-5-level performance with only 350 million parameters, bringing cloud-grade performance to on-device applications!
 
 [figure: average recall score comparison on finepdfs samples]
 
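
For readers who want to see how the chunked recall metric described above could be computed, here is a minimal sketch. It is an illustration under stated assumptions, not the evaluation harness behind the plot: the `recall` helper and the toy gold/predicted PII sets are hypothetical.

```python
# Minimal sketch (assumptions only, not the authors' evaluation harness):
# recall = fraction of annotated PII values recovered per chunk, then
# averaged over chunks, mirroring the "average recall score" above.

def recall(gold: set[str], predicted: set[str]) -> float:
    """Fraction of gold PII values that the model extracted."""
    return len(gold & predicted) / len(gold) if gold else 1.0

# Toy (gold, predicted) PII sets for three hypothetical chunks.
chunks = [
    ({"山田太郎", "03-1234-5678"}, {"山田太郎", "03-1234-5678"}),
    ({"taro@example.com"}, {"taro@example.com"}),
    ({"東京都港区1-2-3"}, set()),
]

average_recall = sum(recall(g, p) for g, p in chunks) / len(chunks)
print(f"average recall: {average_recall:.2f}")  # 0.67 on this toy data
```

In the actual evaluation, each 100–1,000 character chunk would carry its own PII annotations and model predictions in place of the toy data shown here.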