ConTEB evaluation datasets
Evaluation datasets of the ConTEB benchmark. Use the "test" split where available, otherwise "validation", otherwise "train".
- illuin-conteb/covid-qa — 4.46k rows, updated Jun 2
- illuin-conteb/geography — 11.4k rows, updated May 30
- illuin-conteb/esg-reports — 3.74k rows, updated May 30
- illuin-conteb/insurance — 180 rows, updated May 30
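The split-fallback rule above (test, else validation, else train) can be sketched as a small helper; `pick_split` is a hypothetical name, and the commented usage assumes the standard Hugging Face `datasets` library:

```python
# Helper encoding ConTEB's split-fallback rule: use "test" where available,
# otherwise "validation", otherwise "train".
SPLIT_PRIORITY = ("test", "validation", "train")

def pick_split(available_splits):
    """Return the highest-priority split present in `available_splits`."""
    for split in SPLIT_PRIORITY:
        if split in available_splits:
            return split
    raise ValueError(f"no usable split among {sorted(available_splits)!r}")

# Usage with the Hugging Face `datasets` library (requires network access):
#   from datasets import load_dataset
#   splits = load_dataset("illuin-conteb/covid-qa")  # returns a DatasetDict
#   data = splits[pick_split(splits.keys())]
```

The helper is pure Python on purpose: it can be reused with any loader that exposes the available split names.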
ConTEB training datasets
Training data for the InSeNT method.
- illuin-conteb/narrative-qa — 47.3k rows, updated Jun 2
- illuin-conteb/squad-conteb-train — 91.8k rows, updated Jun 2
- illuin-conteb/mldr-conteb-train — 566k rows, updated Jun 2
ConTEB models
Our models trained with the InSeNT approach. These are the checkpoints we used to run the evaluations reported in our paper.
- illuin-conteb/modern-colbert-insent — feature extraction, 0.1B params, updated Jun 2
- illuin-conteb/modernbert-large-insent — sentence similarity, 0.4B params, updated Jun 2