Llabres committed b7c73d3 (parent: c56be29): Updated Readme

Files changed (1): README.md (+1 -1)
@@ -423,7 +423,7 @@ dataset.save_to_disk(f"ComicsPAP_{skill}_{split}_single_images")
 
 The evaluation metric for all tasks is the accuracy of the model's predictions. The overall accuracy is calculated as the weighted average of the accuracy of each subtask, with the weights being the number of examples in each subtask.
 
-To evaluate on the test set you must summit your predictions to the [Robust Reading Competition website](https://rrc.cvc.uab.es/?ch=31&com=introduction), as a json file with the following structure:
+To evaluate on the test set you must submit your predictions to the [Robust Reading Competition website](https://rrc.cvc.uab.es/?ch=31&com=introduction), as a json file with the following structure:
 
 ```json
 [