Updated Readme
README.md CHANGED
@@ -177,6 +177,8 @@ configs:

This is the dataset for the [ICDAR 2025 Competition on Comics Understanding in the Era of Foundational Models](https://rrc.cvc.uab.es/?ch=31&com=introduction)

The competition is hosted on the [Robust Reading Competition website](https://rrc.cvc.uab.es/?ch=31&com=introduction) and the leaderboard is available [here](https://rrc.cvc.uab.es/?ch=31&com=evaluation).

The dataset contains five subtasks, or skills:

### Sequence Filling

@@ -423,8 +425,73 @@ dataset.save_to_disk(f"ComPAP_{skill}_{split}_single_images")

</details>

## Evaluation

The evaluation metric for all tasks is the accuracy of the model's predictions. The overall accuracy is calculated as the weighted average of the per-subtask accuracies, with the weights being the number of examples in each subtask.
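Equivalently, writing $n_i$ for the number of examples in subtask $i$ and $a_i$ for its accuracy:

$$\text{overall accuracy} = \frac{\sum_i n_i \, a_i}{\sum_i n_i}$$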

To evaluate on the test set, you must submit your predictions to the [Robust Reading Competition website](https://rrc.cvc.uab.es/?ch=31&com=introduction) as a JSON file with the following structure:

```json
[
    { "sample_id" : "sample_id_0", "correct_panel_id" : 3},
    { "sample_id" : "sample_id_1", "correct_panel_id" : 1},
    { "sample_id" : "sample_id_2", "correct_panel_id" : 4},
    ...,
]
```

Here, `sample_id` is the id of the sample and `correct_panel_id` is your model's prediction, given as the index of the correct panel among the options.
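For illustration, a minimal sketch of how such a file could be written; the `predictions` list and the output filename are hypothetical, not part of any official submission tooling:

```python
import json

# Hypothetical (sample_id, predicted_index) pairs produced by your model
predictions = [("sample_id_0", 3), ("sample_id_1", 1), ("sample_id_2", 4)]

# Write the submission file in the expected structure
with open("predictions.json", "w") as f:
    json.dump(
        [{"sample_id": sid, "correct_panel_id": idx} for sid, idx in predictions],
        f,
        indent=2,
    )
```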

<details>
<summary>Pseudocode for evaluation on the val set; adapt it to your model:</summary>

```python
from datasets import load_dataset

split = "val"
skills = {
    "sequence_filling": {"num_examples": 262},
    "char_coherence": {"num_examples": 143},
    "visual_closure": {"num_examples": 300},
    "text_closure": {"num_examples": 259},
    "caption_relevance": {"num_examples": 262},
}

for skill in skills:
    dataset = load_dataset("VLR-CVC/ComPAP", skill, split=split)
    correct = 0
    total = 0
    for example in dataset:
        # Your model prediction; `model` and `post_process` are placeholders for your own code
        prediction = model.generate(example)
        prediction = post_process(prediction)
        if prediction == example["solution_index"]:
            correct += 1
        total += 1
    accuracy = correct / total
    print(f"Accuracy for {skill}: {accuracy}")

    assert total == skills[skill]["num_examples"]
    skills[skill]["accuracy"] = accuracy

# Calculate the overall accuracy, weighted by the number of examples per subtask
total_examples = sum(skill["num_examples"] for skill in skills.values())
overall_accuracy = sum(skill["num_examples"] * skill["accuracy"] for skill in skills.values()) / total_examples
print(f"Overall accuracy: {overall_accuracy}")
```

</details>
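The `model.generate` and `post_process` calls in the pseudocode above are placeholders. As one hypothetical example, a post-processing step for a model that answers in free text might look like this sketch:

```python
import re

def post_process(prediction: str) -> int:
    """Hypothetical helper: extract the first integer in the model's
    free-text output and return it as the predicted panel index
    (-1 if no integer is found)."""
    match = re.search(r"\d+", prediction)
    return int(match.group()) if match else -1
```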

## Citation

_coming soon_