BlackKakapo committed · Commit 93304a4 · verified · 1 Parent(s): b6e20e0

Update README.md

Files changed (1):
  1. README.md (+79 -3)
README.md CHANGED
@@ -1,3 +1,79 @@
- ---
- license: apache-2.0
- ---
 
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- ro
language_creators:
- machine-generated
dataset:
- ro_sts
license: apache-2.0
datasets:
- BlackKakapo/RoSTSC
base_model:
- sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
---
# 🔥 cupidon-small-ro

Say hello to cupidon-small-ro, the bigger sibling of the tiny variant, but still firmly on the lightweight side. Fine-tuned from `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2`, this sentence-transformers model maps Romanian sentences into sleek dense vectors for tasks like semantic search, clustering, and textual similarity.

It’s living proof that sometimes a little more size is just right: still fast, still efficient, and definitely charming enough to handle your STS needs without hogging your hardware. 😎💡
## Usage (Sentence-Transformers)

Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:

```bash
pip install -U sentence-transformers
```
Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('BlackKakapo/cupidon-small-ro')
embeddings = model.encode(sentences)
print(embeddings)
```
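Since the model targets semantic textual similarity, a natural next step is scoring sentence pairs with cosine similarity. Below is a minimal sketch using `util.cos_sim` from sentence-transformers; the Romanian sentence pairs are illustrative examples I chose, not taken from the model card or its training data.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('BlackKakapo/cupidon-small-ro')

# Illustrative Romanian sentences (hypothetical examples, not from the card)
sentences1 = ["O femeie cântă la chitară.", "Un bărbat aleargă în parc."]
sentences2 = ["Cineva cântă la un instrument muzical.", "Pisica doarme pe canapea."]

# Encode both lists and compute the pairwise cosine-similarity matrix
embeddings1 = model.encode(sentences1, convert_to_tensor=True)
embeddings2 = model.encode(sentences2, convert_to_tensor=True)
scores = util.cos_sim(embeddings1, embeddings2)  # shape: [2, 2]

for i, s1 in enumerate(sentences1):
    for j, s2 in enumerate(sentences2):
        print(f"{s1} <-> {s2}: {scores[i][j].item():.4f}")
```
Higher scores indicate closer meaning; related pairs should score well above unrelated ones.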
## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BlackKakapo/cupidon-small-ro')
model = AutoModel.from_pretrained('BlackKakapo/cupidon-small-ro')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Apply mean pooling to get one fixed-size vector per sentence
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
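If you also want similarity scores from this raw-transformers path, you can L2-normalize the pooled embeddings and take a dot product, which equals cosine similarity. This is a minimal sketch that continues from the variables in the block above; the normalization step is standard practice rather than something this card prescribes.

```python
import torch.nn.functional as F

# L2-normalize so the dot product between rows is the cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine_scores = normalized @ normalized.T  # shape: [num_sentences, num_sentences]
print(cosine_scores)
```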
## License

This model is licensed under **Apache 2.0**.

## Citation

If you use BlackKakapo/cupidon-small-ro in your research, please cite this model as follows:
```
@misc{cupidon_small_ro,
  title={BlackKakapo/cupidon-small-ro},
  author={BlackKakapo},
  year={2025},
}
```