fyaronskiy committed
Commit ca53b04 · verified · Parent(s): 10b1733

Update README.md

Files changed (1)
1. README.md +1 -1
README.md CHANGED
@@ -163,7 +163,7 @@ Full precision ONNX model (onnx/model.onnx) - 1.5x faster than Transformer model
 
  INT8 quantized model (onnx/model_quantized.onnx) - 2.5x faster than Transformer model, quality is almost the same.
 
- In table below results of tests of inference of 6329 samples of test_set.
+ In table below results of tests of inference of 5427 samples of test_set.
  I tested inference with batch_size 1 on Intel Xeon CPU with 2 vCPUs (Google Colab).
 
 |Model |Size |f1 macro|acceleration|Time of inference|
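
For context, a minimal sketch of how the batch_size-1 benchmark described in this diff might be reproduced, assuming a local clone of the model repo (containing onnx/model_quantized.onnx) plus the onnxruntime and transformers packages. `REPO_DIR` and `texts` are placeholders and not part of the original commit.

```python
# Sketch of a batch_size-1 inference benchmark over the quantized ONNX model.
# REPO_DIR and `texts` are hypothetical placeholders.
import time

import onnxruntime as ort
from transformers import AutoTokenizer

REPO_DIR = "."  # placeholder: path to a local clone of the model repo

tokenizer = AutoTokenizer.from_pretrained(REPO_DIR)
session = ort.InferenceSession(f"{REPO_DIR}/onnx/model_quantized.onnx")
input_names = {i.name for i in session.get_inputs()}

texts = ["..."]  # placeholder: the test_set samples

start = time.perf_counter()
for text in texts:  # one sample at a time, matching the batch_size-1 setup
    enc = tokenizer(text, return_tensors="np")
    # Feed only the tensors the ONNX graph actually declares as inputs
    logits = session.run(None, {k: v for k, v in enc.items() if k in input_names})[0]
elapsed = time.perf_counter() - start
print(f"{len(texts)} samples in {elapsed:.2f} s")
```

Swapping in onnx/model.onnx under the same loop would give the full-precision timing for the comparison table.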