---
pretty_name: RusConText Bench Dataset
size_categories:
- 1K<n<10K
---
# RusConText Benchmark
**RusConText** is a Russian-language benchmark designed to evaluate the ability of large language models to understand short contexts. It consists of four interrelated tasks: coreference resolution, discourse analysis, idiom interpretation, and ellipsis resolution. The benchmark assesses models' skills in recovering omitted information, resolving referential dependencies, and correctly interpreting meaning within narrow contexts. Evaluation results show that modern LLMs still struggle with fine-grained contextual understanding.
## Coreference
This task includes two subtasks: anaphora resolution (selecting the antecedent of a pronoun) and coreference detection between two noun phrases (True/False).
Data is sourced from RuCoCo (Dobrovolskii et al., 2022), a manually annotated corpus of news texts. The first subtask contains **500 examples** and the second **300**.
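
An item for each subtask might look like the sketch below; the field names and examples are hypothetical illustrations, not the dataset's actual schema (see the repository for the real format).

```python
# Hypothetical record shapes for the two coreference subtasks.
# All field names here are illustrative assumptions, not the real schema.
anaphora_item = {
    "text": "Мальчик взял книгу. Она оказалась интересной.",
    # "The boy took a book. It turned out to be interesting."
    "pronoun": "Она",                    # the pronoun to resolve
    "candidates": ["Мальчик", "книгу"],  # candidate antecedents
    "answer": "книгу",                   # grammatical gender disambiguates
}

coreference_item = {
    "text": "Президент выступил с речью. Глава государства ответил на вопросы.",
    # "The president gave a speech. The head of state answered questions."
    "phrase_1": "Президент",
    "phrase_2": "Глава государства",
    "label": True,  # the two noun phrases refer to the same entity
}
```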
## Discourse
The task is to identify the semantic relation between two sentences (e.g., cause-effect, concession).
Data comes from two sources: the Russian subset of DISRPT (Braud et al., 2024) and the RuDABank dataset (Elena Vasileva, 2024), both containing annotated sentence pairs. The combined corpus consists of **2,738 samples** (2,238 from RuDABank and 500 from DISRPT) with **37 relation tags** (15 from RuDABank and 22 from DISRPT).
## Idioms
The idiom evaluation module consists of three subtasks with a total of **1,500 annotated examples**, designed to assess LLMs’ ability to interpret figurative language and resolve meaning from context.
**Literal vs. Idiomatic Meaning** – Classifies whether an expression is used literally or figuratively.
**500 samples** from a corpus of Russian Potentially Idiomatic Expressions.
**Idiom Disambiguation Across Texts** – Selects the text where an idiom matches a given meaning.
**500 contexts** from the Russian National Corpus.
**Polysemous Idiom Resolution** – Identifies the correct idiomatic meaning from multiple options.
**500 entries** from a dictionary of Russian idioms.
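
As an illustration of the first subtask, consider the sketch below. The field names are hypothetical, and the example uses the potentially idiomatic expression "умыть руки" ("to wash one's hands"), which can be read literally or figuratively.

```python
# Hypothetical items for the literal-vs-idiomatic subtask.
# Field names are illustrative assumptions, not the dataset's actual schema.
literal_item = {
    "expression": "умыть руки",  # "to wash one's hands"
    "context": "Вернувшись с огорода, он умыл руки перед обедом.",
    # "Coming back from the garden, he washed his hands before lunch."
    "label": "literal",
}

idiomatic_item = {
    "expression": "умыть руки",
    "context": "Директор умыл руки и переложил проект на заместителя.",
    # "The director washed his hands of it and shifted the project to his deputy."
    "label": "idiomatic",  # figurative: to disclaim responsibility
}
```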
## Ellipsis
The task is to recover omitted information in elliptical constructions.
This corpus consists of **626 sentences** covering gapping, NP ellipsis, VP ellipsis, sluicing, answer ellipsis, and polarity ellipsis (100 sentences each), stripping (14 sentences), verb-stranding ellipsis (3 sentences), and 9 sentences combining several ellipsis types. The sentences were collected from linguistic papers and the Russian National Corpus, or created or elicited by the author.
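
For example, in a gapping construction the shared verb is omitted in the second conjunct and must be restored. A hypothetical item (field names are illustrative, not the real schema) might look like this:

```python
# Hypothetical item for the ellipsis task, using a classic gapping construction.
# Field names are illustrative assumptions, not the dataset's actual schema.
gapping_item = {
    "sentence": "Ваня купил книгу, а Маша — журнал.",
    # "Vanya bought a book, and Masha [bought] a magazine."
    "ellipsis_type": "gapping",
    "recovered": "Ваня купил книгу, а Маша купила журнал.",  # elided verb restored
}
```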
---
All data and instructions are available in the [repository](https://huggingface.co/datasets/askatasuna/RusConTextBench).
The benchmark is provided in JSON/CSV formats and can be used to evaluate models via LangChain or similar frameworks.
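
As a minimal loading sketch: the repo id comes from the link above, but the split name and column names below are assumptions, so check the repository files before relying on them.

```python
# Minimal sketch for loading the benchmark from the Hugging Face Hub and
# scoring a model with exact-match accuracy. The split and column names
# ("train", "question", "answer") are assumptions, not the confirmed schema.
from datasets import load_dataset

bench = load_dataset("askatasuna/RusConTextBench", split="train")

def accuracy(answer_fn):
    """Score a callable that maps a question string to an answer string."""
    correct = sum(
        answer_fn(row["question"]).strip() == row["answer"].strip()
        for row in bench
    )
    return correct / len(bench)
```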