Updated Readme
README.md CHANGED
@@ -423,7 +423,7 @@ dataset.save_to_disk(f"ComicsPAP_{skill}_{split}_single_images")
 
 The evaluation metric for all tasks is the accuracy of the model's predictions. The overall accuracy is calculated as the weighted average of the accuracy of each subtask, with the weights being the number of examples in each subtask.
 
-To evaluate on the test set you must
+To evaluate on the test set you must submit your predictions to the [Robust Reading Competition website](https://rrc.cvc.uab.es/?ch=31&com=introduction), as a json file with the following structure:
 
 ```json
 [
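The weighted average described in the changed paragraph reduces to total correct predictions over total examples. A minimal Python sketch of that computation; the function name, input layout, and subtask names/numbers are illustrative, not part of the ComicsPAP tooling:

```python
def overall_accuracy(subtask_results: dict[str, tuple[float, int]]) -> float:
    """Overall accuracy as the example-count-weighted average of subtask accuracies.

    subtask_results maps subtask name -> (accuracy, num_examples).
    """
    total_examples = sum(n for _, n in subtask_results.values())
    weighted_sum = sum(acc * n for acc, n in subtask_results.values())
    return weighted_sum / total_examples

# Example with made-up numbers: a 0.80-accuracy subtask with 400 examples
# and a 0.60-accuracy subtask with 100 examples give (320 + 60) / 500 = 0.76.
print(overall_accuracy({"sequence_filling": (0.80, 400),
                        "char_coherence": (0.60, 100)}))  # -> 0.76
```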