zhichao-geng committed
Commit 269e663 · verified · 1 parent: 46ef2bf

sentence_transformers_support (#7)


- Add support for Sentence Transformer (29e47dc8ab28efee643a6d1d1df110e937c91eb7)

1_SpladePooling/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+     "pooling_strategy": "max",
+     "activation_function": "relu",
+     "word_embedding_dimension": null
+ }
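For orientation, here is a minimal sketch (not part of this commit) of how the values in `1_SpladePooling/config.json` map onto the `SpladePooling` module in sentence-transformers >= 5.0; the constructor keyword names are assumed to mirror the config keys.

```python
# Sketch only: assumes SpladePooling accepts the same keys that are stored in
# 1_SpladePooling/config.json (pooling_strategy, activation_function).
from sentence_transformers.sparse_encoder.models import SpladePooling

pooling = SpladePooling(
    pooling_strategy="max",       # keep the max score per vocabulary token across the sequence
    activation_function="relu",   # ReLU on the MLM logits, as in SPLADE
)
print(pooling)
```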
README.md CHANGED
@@ -10,6 +10,12 @@ tags:
  - query-expansion
  - document-expansion
  - bag-of-words
+ - sentence-transformers
+ - sparse-encoder
+ - sparse
+ - splade
+ pipeline_tag: feature-extraction
+ library_name: sentence-transformers
 ---

 # opensearch-neural-sparse-encoding-v2-distill
@@ -39,6 +45,91 @@ The training datasets includes MS MARCO, eli5_question_answer, squad_pairs, Wiki

 OpenSearch neural sparse feature supports learned sparse retrieval with lucene inverted index. Link: https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/. The indexing and search can be performed with OpenSearch high-level API.

+ ## Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from sentence_transformers.sparse_encoder import SparseEncoder
+
+ # Download from the 🤗 Hub
+ model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
+
+ query = "What's the weather in ny now?"
+ document = "Currently New York is rainy."
+
+ query_embed = model.encode_query(query)
+ document_embed = model.encode_document(document)
+
+ sim = model.similarity(query_embed, document_embed)
+ print(f"Similarity: {sim}")
+ # Similarity: tensor([[38.6113]])
+
+ decoded_query = model.decode(query_embed)
+ decoded_document = model.decode(document_embed)
+
+ for i in range(len(decoded_query)):
+     query_token, query_score = decoded_query[i]
+     doc_score = next((score for token, score in decoded_document if token == query_token), 0)
+     if doc_score != 0:
+         print(f"Token: {query_token}, Query score: {query_score:.4f}, Document score: {doc_score:.4f}")
+
+ # Token: york, Query score: 2.7273, Document score: 2.9088
+ # Token: now, Query score: 2.5734, Document score: 0.9208
+ # Token: ny, Query score: 2.3895, Document score: 1.7237
+ # Token: weather, Query score: 2.2184, Document score: 1.2368
+ # Token: current, Query score: 1.8693, Document score: 1.4146
+ # Token: today, Query score: 1.5888, Document score: 0.7450
+ # Token: sunny, Query score: 1.4704, Document score: 0.9247
+ # Token: nyc, Query score: 1.4374, Document score: 1.9737
+ # Token: currently, Query score: 1.4347, Document score: 1.6019
+ # Token: climate, Query score: 1.1605, Document score: 0.9794
+ # Token: upstate, Query score: 1.0944, Document score: 0.7141
+ # Token: forecast, Query score: 1.0471, Document score: 0.5519
+ # Token: verve, Query score: 0.9268, Document score: 0.6692
+ # Token: huh, Query score: 0.9126, Document score: 0.4486
+ # Token: greene, Query score: 0.8960, Document score: 0.7706
+ # Token: picturesque, Query score: 0.8779, Document score: 0.7120
+ # Token: pleasantly, Query score: 0.8471, Document score: 0.4183
+ # Token: windy, Query score: 0.8079, Document score: 0.2140
+ # Token: favorable, Query score: 0.7537, Document score: 0.4925
+ # Token: rain, Query score: 0.7519, Document score: 2.1456
+ # Token: skies, Query score: 0.7277, Document score: 0.3818
+ # Token: lena, Query score: 0.6995, Document score: 0.8593
+ # Token: sunshine, Query score: 0.6895, Document score: 0.2410
+ # Token: johnny, Query score: 0.6621, Document score: 0.3016
+ # Token: skyline, Query score: 0.6604, Document score: 0.1933
+ # Token: sasha, Query score: 0.6117, Document score: 0.2197
+ # Token: vibe, Query score: 0.5962, Document score: 0.0414
+ # Token: hardly, Query score: 0.5381, Document score: 0.7560
+ # Token: prevailing, Query score: 0.4583, Document score: 0.4243
+ # Token: unpredictable, Query score: 0.4539, Document score: 0.5073
+ # Token: presently, Query score: 0.4350, Document score: 0.8463
+ # Token: hail, Query score: 0.3674, Document score: 0.2496
+ # Token: shivered, Query score: 0.3324, Document score: 0.5506
+ # Token: wind, Query score: 0.3281, Document score: 0.1964
+ # Token: rudy, Query score: 0.3052, Document score: 0.5785
+ # Token: looming, Query score: 0.2797, Document score: 0.0357
+ # Token: atmospheric, Query score: 0.2712, Document score: 0.0870
+ # Token: vicky, Query score: 0.2471, Document score: 0.3490
+ # Token: sandy, Query score: 0.2247, Document score: 0.2383
+ # Token: crowded, Query score: 0.2154, Document score: 0.5737
+ # Token: chilly, Query score: 0.1723, Document score: 0.1857
+ # Token: blizzard, Query score: 0.1700, Document score: 0.4110
+ # Token: ##cken, Query score: 0.1183, Document score: 0.0613
+ # Token: unrest, Query score: 0.0923, Document score: 0.6363
+ # Token: russ, Query score: 0.0624, Document score: 0.2127
+ # Token: blackout, Query score: 0.0558, Document score: 0.5542
+ # Token: kahn, Query score: 0.0549, Document score: 0.1589
+ # Token: 2020, Query score: 0.0160, Document score: 0.0566
+ # Token: nighttime, Query score: 0.0125, Document score: 0.3753
+ ```

 ## Usage (HuggingFace)
 This model is supposed to run inside OpenSearch cluster. But you can also use it outside the cluster, with HuggingFace models API.
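As a follow-up to the README section added above, here is a rough sketch of ranking several documents against one query. It only reuses the `encode_query`, `encode_document`, and `similarity` calls shown in the diff; the extra document strings are made up for illustration.

```python
# Sketch only: ranks a few documents for one query with the same API the
# README's "Usage (Sentence Transformers)" section introduces.
from sentence_transformers.sparse_encoder import SparseEncoder

model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")

query = "What's the weather in ny now?"
documents = [
    "Currently New York is rainy.",
    "The OpenSearch project builds search and analytics suites.",
    "Tomorrow will be sunny in Los Angeles.",
]

query_embed = model.encode_query(query)
doc_embeds = model.encode_document(documents)

# similarity() returns a 1 x len(documents) matrix of dot-product scores;
# higher means a better match for sparse retrieval.
scores = model.similarity(query_embed, doc_embeds)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:8.4f}  {doc}")
```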
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "model_type": "SparseEncoder",
+     "__version__": {
+         "sentence_transformers": "5.0.0",
+         "transformers": "4.50.3",
+         "pytorch": "2.6.0+cu124"
+     },
+     "prompts": {
+         "query": "",
+         "document": ""
+     },
+     "default_prompt_name": null,
+     "similarity_fn_name": "dot"
+ }
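A small sketch (an assumption, not part of the commit) of what this config means at load time: the empty `query`/`document` prompts leave the input text unchanged, and `"similarity_fn_name": "dot"` makes `similarity()` a plain dot product between the sparse vectors.

```python
# Sketch only: reads back the settings that config_sentence_transformers.json
# feeds into the loaded SparseEncoder.
from sentence_transformers.sparse_encoder import SparseEncoder

model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")

print(model.similarity_fn_name)  # expected: "dot", from similarity_fn_name above
print(model.prompts)             # expected: {'query': '', 'document': ''}
```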
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.sparse_encoder.models.MLMTransformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_SpladePooling",
+         "type": "sentence_transformers.sparse_encoder.models.SpladePooling"
+     }
+ ]
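`modules.json` declares a two-stage pipeline: module "0" is an `MLMTransformer` (the backbone plus its masked-language-model head) and module "1" is the `SpladePooling` step stored under `1_SpladePooling/`. Below is a rough, assumed-equivalent sketch of composing the same stack by hand; it assumes `SparseEncoder` accepts a `modules=[...]` list the way `SentenceTransformer` does.

```python
# Sketch only: mirrors the two entries in modules.json by hand.
from sentence_transformers.sparse_encoder import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

repo = "opensearch-project/opensearch-neural-sparse-encoding-v2-distill"
mlm = MLMTransformer(repo)                        # module "0": vocab-sized MLM logits
pooling = SpladePooling(pooling_strategy="max")   # module "1": 1_SpladePooling
model = SparseEncoder(modules=[mlm, pooling])

embedding = model.encode_query("What's the weather in ny now?")
print(embedding.shape)  # one weight per vocabulary token, mostly zeros
```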
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 512,
+     "do_lower_case": false
+ }
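Finally, a minimal sketch (not part of the commit) of where `max_seq_length` from `sentence_bert_config.json` surfaces at runtime; it assumes the loaded `SparseEncoder` exposes the usual `max_seq_length` attribute.

```python
# Sketch only: inputs longer than max_seq_length tokens are truncated before encoding.
from sentence_transformers.sparse_encoder import SparseEncoder

model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
print(model.max_seq_length)  # expected: 512, from sentence_bert_config.json
model.max_seq_length = 256   # optionally truncate earlier for faster encoding
```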