---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: 'search_query: i love autotrain'
  sentences:
  - 'search_query: huggingface auto train'
  - 'search_query: hugging face auto train'
  - 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
- loss: 9.164422988891602
- validation_pearson_cosine: -0.10073561135203735
- validation_spearman_cosine: -0.05129891760425771
- validation_pearson_manhattan: -0.07223520049199797
- validation_spearman_manhattan: -0.05129891760425771
- validation_pearson_euclidean: -0.056592337170460805
- validation_spearman_euclidean: -0.05129891760425771
- validation_pearson_dot: -0.1007351930231386
- validation_spearman_dot: -0.05129891760425771
- validation_pearson_max: -0.056592337170460805
- validation_spearman_max: -0.05129891760425771
- runtime: 0.1267
- samples_per_second: 39.454
- steps_per_second: 7.891
- epoch: 3.0
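
These figures are Pearson and Spearman correlations between the model's similarity scores (cosine, Manhattan, Euclidean, dot product) and the gold labels of the validation split. As a minimal sketch of how such correlations can be reproduced, the snippet below runs the Sentence Transformers `EmbeddingSimilarityEvaluator` on a hypothetical scored pair set; the sentence pairs, scores, and the `sentence_transformers_model_id` placeholder are assumptions, not the actual validation data.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # replace with this repository's model id

# Hypothetical validation pairs with gold similarity labels in [0, 1]
sentences1 = ["search_query: i love autotrain", "search_query: autotrain"]
sentences2 = ["search_query: hugging face auto train", "search_query: auto train"]
gold_scores = [0.9, 0.8]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores, name="validation")
results = evaluator(model)
print(results)
```

Depending on your sentence-transformers version, the evaluator returns either a dict of Pearson/Spearman correlations per similarity function or a single main score.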
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("sentence_transformers_model_id")  # replace with this repository's model id
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 384): 384-dimensional embeddings from the all-MiniLM-L6-v2 base model
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # (3, 3): pairwise similarity matrix
```
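
The widget example in the metadata pairs a source query with candidate sentences. The sketch below mirrors that setup by ranking the candidates against the query. It assumes sentence-transformers v3+ (where `model.similarity` is available, defaulting to cosine similarity) and that `sentence_transformers_model_id` is replaced with this repository's model id.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # replace with this repository's model id

query = "search_query: i love autotrain"
candidates = [
    "search_query: huggingface auto train",
    "search_query: hugging face auto train",
    "search_query: i love autotrain",
]

query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)

# Cosine similarity between the query and each candidate; shape (1, 3)
scores = model.similarity(query_embedding, candidate_embeddings)

# Print candidates from most to least similar
for candidate, score in sorted(zip(candidates, scores[0].tolist()), key=lambda pair: pair[1], reverse=True):
    print(f"{score:.4f}  {candidate}")
```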