Dataset Viewer

The auto-converted Parquet preview shows the qrels table with three columns: query-id (string), corpus-id (string), and score (a single class, always 1). Each row marks a corpus document as relevant to a query, e.g. (query-id 110554, corpus-id 21138, score 1).

Dataset Summary

CQADupstack-physics-Fa is a Persian (Farsi) dataset developed for the Retrieval task, with a focus on duplicate question detection in community question-answering (CQA) platforms. This dataset is a translated version of the "Physics" StackExchange subforum from the English CQADupstack collection and is part of the FaMTEB benchmark under the BEIR-Fa suite.

  • Language(s): Persian (Farsi)
  • Task(s): Retrieval (Duplicate Question Retrieval)
  • Source: Translated from English using Google Translate
  • Part of FaMTEB: Yes — under BEIR-Fa

Supported Tasks and Leaderboards

This dataset evaluates text embedding models on their ability to retrieve semantically equivalent or duplicate questions in specialized domains like physics. Results can be viewed on the Persian MTEB Leaderboard (filter by language: Persian).
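As a rough illustration, such an evaluation can be run with the `mteb` Python package and any multilingual sentence-embedding model. The task name and model checkpoint below are assumptions, not official identifiers; consult the FaMTEB documentation for the exact task name.

```python
# Hedged sketch: scoring an embedding model on this retrieval task with MTEB.
# The task name and model checkpoint are placeholders -- substitute the
# identifiers listed in the FaMTEB documentation.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-base")  # example model only

evaluation = MTEB(tasks=["CQADupstackPhysicsRetrieval-Fa"])   # assumed task name
results = evaluation.run(model, output_folder="results/famteb")
print(results)
```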

Construction

The dataset was created by:

  • Translating the "physics" subforum from the English CQADupstack dataset using the Google Translate API (a minimal sketch of this step follows the list)
  • Preserving the CQA structure for evaluating duplicate retrieval tasks
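
Purely as an illustration of that translation step (not the authors' actual pipeline), a minimal sketch with the official Google Cloud Translation v2 client might look as follows; it assumes valid Google Cloud credentials are configured.

```python
# Illustrative sketch of machine-translating CQA text from English to Persian
# with the Google Cloud Translation API (v2 client). Not the authors' exact
# pipeline; assumes GOOGLE_APPLICATION_CREDENTIALS is set up.
from google.cloud import translate_v2 as translate

client = translate.Client()

def translate_to_persian(text: str) -> str:
    """Translate a single English string into Persian (Farsi)."""
    result = client.translate(text, source_language="en", target_language="fa")
    return result["translatedText"]

print(translate_to_persian("Why is the sky blue?"))
```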

According to the FaMTEB paper, the BEIR-Fa datasets underwent translation quality evaluation using:

  • BM25 retrieval performance comparisons (English vs. Persian); a toy sketch follows this list
  • GEMBA-DA evaluation, leveraging LLMs for quality control
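
The exact quality-evaluation setup is described in the FaMTEB paper. Purely as a toy sketch of the BM25 idea, one can index a tiny corpus and score a query with the `rank_bm25` package; whitespace tokenization is a simplification (especially for Persian), and a real comparison would index the full English and Persian corpora and compare metrics such as nDCG@10.

```python
# Toy BM25 sketch (not the paper's setup): score a Persian query against a
# tiny Persian corpus. A real check would do this for both the English and
# Persian versions and compare the retrieval metrics.
from rank_bm25 import BM25Okapi

corpus = [
    "نور آبی بیشتر از نورهای دیگر در جو زمین پراکنده می‌شود",
    "قانون دوم نیوتن رابطه بین نیرو و شتاب را بیان می‌کند",
]
tokenized_corpus = [doc.split() for doc in corpus]  # naive whitespace tokenization
bm25 = BM25Okapi(tokenized_corpus)

query_tokens = "چرا آسمان آبی است".split()
print(bm25.get_scores(query_tokens))             # one BM25 score per document
print(bm25.get_top_n(query_tokens, corpus, n=1)) # best-matching document
```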

Data Splits

As per the FaMTEB paper (Table 5), all CQADupstack-Fa sub-datasets share a pooled evaluation set:

  • Train: 0 samples
  • Dev: 0 samples
  • Test: 480,902 samples (aggregate)

The cqadupstack-physics-fa subset itself contains approximately 41.3k examples (as reported by the dataset provider). For more granular split information, consult the dataset provider or the FaMTEB documentation.
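
A hedged loading example with the Hugging Face `datasets` library is shown below; the repository id is a placeholder, and depending on the BEIR-Fa layout the corpus and queries may live in companion configs or repositories rather than alongside the qrels.

```python
# Sketch of inspecting the qrels split sizes. The repo id is a placeholder;
# replace it with the actual Hugging Face dataset id.
from datasets import load_dataset

repo_id = "<namespace>/cqadupstack-physics-fa"

qrels = load_dataset(repo_id)  # columns: query-id, corpus-id, score
print(qrels)
print({split: ds.num_rows for split, ds in qrels.items()})
```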
