|
--- |
|
pipeline_tag: sentence-similarity |
|
tags: |
|
- sentence-transformers |
|
- feature-extraction |
|
- sentence-similarity |
|
- transformers |
|
language: |
|
- ro |
|
language_creators: |
|
- machine-generated |
|
license: apache-2.0

datasets:

- ro_sts

- BlackKakapo/RoSTSC
|
base_model: |
|
- sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 |
|
--- |
|
|
|
# 🔥 cupidon-small-ro
|
|
|
Here comes cupidon-small-ro: small in name, but ready to play with the big models. Fine-tuned from the powerful sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2, this sentence-transformers model captures Romanian sentence meaning with impressive accuracy.

It's compact enough to stay efficient, but packs a semantic punch that hits deep. Think of it as the model that proves "small" can still break hearts, especially in semantic textual similarity, search, or clustering.
|
|
|
## Usage (Sentence-Transformers) |
|
|
|
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
|
|
|
```bash |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can use the model like this: |
|
|
|
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('BlackKakapo/cupidon-small-ro')
embeddings = model.encode(sentences)
print(embeddings)
```
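
Once the embeddings are computed, semantic textual similarity is just a cosine similarity between them. Below is a minimal sketch using the `util.cos_sim` helper from sentence-transformers; the Romanian sentence pair is an illustrative example of my own, not taken from the model card or its training data:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('BlackKakapo/cupidon-small-ro')

# Illustrative Romanian sentence pair (not from the training data)
sentences = ["O femeie taie o ceapă.", "O femeie toacă o ceapă."]

embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two embeddings: higher means more similar
score = util.cos_sim(embeddings[0], embeddings[1])
print(score.item())
```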
|
|
|
## Usage (HuggingFace Transformers) |
|
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized token embeddings.
|
|
|
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BlackKakapo/cupidon-small-ro')
model = AutoModel.from_pretrained('BlackKakapo/cupidon-small-ro')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform mean pooling to get sentence embeddings
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
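
If you need a similarity score rather than raw embeddings, the pooled vectors can be compared directly with plain PyTorch. A minimal sketch, assuming the snippet above has already been run so that `sentence_embeddings` is defined:

```python
import torch.nn.functional as F

# Cosine similarity between the first two sentence embeddings (closer to 1.0 = more similar)
score = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(score.item())
```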
|
|
|
## License |
|
This model is licensed under **Apache 2.0**.
|
|
|
## Citation |
|
If you use BlackKakapo/cupidon-small-ro in your research, please cite this model as follows:
|
```bibtex
@misc{cupidon-small-ro,
  title={BlackKakapo/cupidon-small-ro},
  author={BlackKakapo},
  year={2025},
}
```