---
license: apache-2.0
---

# Konkani Figurative Language Corpus (Idioms + Metaphors) πŸ“š

**A dataset for idiom and metaphor classification in low-resource Konkani.**

---

## πŸš€ Overview

This dataset extends the Konidioms Corpus (Shaikh et al., 2024) by adding metaphor annotations. It supports binary classification for both idioms and metaphors in Konkani, a multi-script language spoken by approximately 2.5 million people.

---

## πŸ“Š Dataset Description

- **Language**: Konkani (Devanagari script)
- **Tasks**: 
  - Idiom Detection (Yes/No)
  - Metaphor Detection (Yes/No)
- **Format**: CSV with the following columns (a loading sketch follows this list):
  - `id`: Sentence identifier
  - `sentence`: The Konkani sentence
  - `idiom`: `Yes` or `No`
  - `metaphor`: `Yes` or `No`
  - `split`: `train` or `test`
- **Size**:
  - 6,520 idiom-annotated sentences
  - 500 metaphor-annotated sentences
- **Splits**:
  - ~80% training
  - ~20% testing

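A minimal loading sketch in Python using `pandas`, assuming the CSV has been downloaded locally. The filename `konkani_figurative.csv` is a placeholder, not the actual file name in this repository.

```python
# A minimal loading sketch; "konkani_figurative.csv" is a placeholder filename.
import pandas as pd

df = pd.read_csv("konkani_figurative.csv")

# Map the Yes/No annotations to binary labels for the two tasks.
df["idiom_label"] = (df["idiom"] == "Yes").astype(int)
df["metaphor_label"] = (df["metaphor"] == "Yes").astype(int)

# Use the provided split column for the ~80/20 train/test partition.
train_df = df[df["split"] == "train"]
test_df = df[df["split"] == "test"]

print(f"train: {len(train_df)}, test: {len(test_df)}")
```
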
---

## 🎯 Motivation

Konkani is a low-resource language with significant dialect and script variation. Figurative language, particularly metaphors, remains understudied. This dataset allows exploration of idioms and metaphors within a single corpus and supports efficient modeling efforts in underrepresented languages.

---

## 🧰 Baseline Model

From the paper: [Pruning for Performance: Efficient Idiom and Metaphor Classification in Low-Resource Konkani Using mBERT](https://arxiv.org/abs/2506.02005)

- **Model**: mBERT encoder with a BiLSTM classification head (sketched below)
- **Optimization**: Gradient-based attention head pruning (second sketch below)
- **Result**: Performance comparable to the full mBERT model with fewer parameters
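
The following is a minimal sketch of an mBERT + BiLSTM binary classifier of the kind described above, not the authors' exact implementation; the LSTM hidden size and the absence of dropout are assumptions.

```python
# Sketch of an mBERT + BiLSTM sentence classifier (hyperparameters assumed).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MBertBiLSTMClassifier(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased",
                 lstm_hidden=256, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from mBERT.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # BiLSTM over the token sequence; concatenate the final
        # forward and backward hidden states as the sentence vector.
        _, (h_n, _) = self.lstm(hidden)
        sentence_repr = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.classifier(sentence_repr)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = MBertBiLSTMClassifier()
batch = tokenizer(["Example Konkani sentence"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 2)
```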

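Below is a rough illustration of gradient-based attention head importance scoring followed by pruning, in the spirit of the optimization step above but not the paper's exact procedure; the toy objective and the 10% pruning ratio are assumptions.

```python
# Illustrative gradient-based head pruning (toy objective, assumed ratio).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

num_layers = encoder.config.num_hidden_layers
num_heads = encoder.config.num_attention_heads

# A differentiable mask over all heads; the gradient of the loss with respect
# to each entry serves as that head's importance score.
head_mask = torch.ones(num_layers, num_heads, requires_grad=True)

batch = tokenizer(["Example sentence"], return_tensors="pt")
outputs = encoder(**batch, head_mask=head_mask)

# Toy objective for illustration only; a real run would backpropagate the
# classification loss accumulated over the training set.
loss = outputs.last_hidden_state.mean()
loss.backward()

importance = head_mask.grad.abs()  # shape: (num_layers, num_heads)

# Prune, e.g., the 10% of heads with the lowest importance scores.
k = max(1, int(0.1 * num_layers * num_heads))
threshold = importance.flatten().kthvalue(k).values
to_prune = {
    layer: [h for h in range(num_heads) if importance[layer, h] <= threshold]
    for layer in range(num_layers)
}
encoder.prune_heads(to_prune)
```
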
---

## πŸ§ͺ Example Use Cases

- Figurative language analysis in Indic languages
- Efficient multilingual transformer training
- Evaluation of pruning strategies for model compression
- Cross-lingual transfer learning in low-resource contexts

---

## πŸ“š Citation

If you use this dataset, please cite:

```bibtex
@misc{do2025pruningperformanceefficientidiom,
      title={Pruning for Performance: Efficient Idiom and Metaphor Classification in Low-Resource Konkani Using mBERT}, 
      author={Timothy Do and Pranav Saran and Harshita Poojary and Pranav Prabhu and Sean O'Brien and Vasu Sharma and Kevin Zhu},
      year={2025},
      eprint={2506.02005},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.02005}, 
}
```

---

## πŸ“ License

Released under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by/4.0/).

---

## ⚠️ Limitations

- Metaphor annotations are limited (500 examples)
- Only the Devanagari script is covered
- Possible domain-specific bias in the source sentences

---

## πŸ™Œ Contributions

Contributions welcome! Please open an issue or pull request for improvements or additional data.