arthurbr11 committed on
Commit 2d0e4be · 1 Parent(s): 0f718e0

Add support for Sentence Transformers
1_SpladePooling/config.json ADDED
@@ -0,0 +1,5 @@
+{
+  "pooling_strategy": "max",
+  "activation_function": "relu",
+  "word_embedding_dimension": null
+}
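As a minimal sketch of what this configuration describes: SPLADE-style pooling commonly applies a ReLU (matching `"activation_function": "relu"`) followed by a `log(1 + x)` saturation to the per-token MLM logits, then takes the maximum over the sequence (matching `"pooling_strategy": "max"`). The shapes and random values below are toy stand-ins, not the real model's 30522-token vocabulary.

```python
import numpy as np

def splade_pool(logits: np.ndarray) -> np.ndarray:
    """Toy SPLADE pooling: logits (seq_len, vocab) -> sparse vector (vocab,)."""
    activated = np.log1p(np.maximum(logits, 0.0))  # ReLU, then log(1 + x)
    return activated.max(axis=0)                   # max over the sequence

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))  # 4 tokens, toy vocabulary of 10
vec = splade_pool(logits)
print(vec.shape)         # (10,)
print((vec >= 0).all())  # True: all weights non-negative, so the vector is sparse-friendly
```

Because ReLU zeroes negative logits and `log1p(0) = 0`, entries stay at exactly zero unless some token activates that vocabulary dimension, which is what makes the output sparse.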
README.md CHANGED
@@ -8,6 +8,11 @@ tags:
 - bag-of-words
 - passage-retrieval
 - knowledge-distillation
+- sentence-transformers
+- sparse-encoder
+- sparse
+pipeline_tag: feature-extraction
+library_name: sentence-transformers
 datasets:
 - ms_marco
 ---
@@ -22,6 +27,61 @@ SPLADE model for passage retrieval. For additional details, please visit:
 | --- | --- | --- |
 | `splade-cocondenser-selfdistil` | 37.6 | 98.4 |
 
+## Model Details
+
+This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
+
+### Model Description
+- **Model Type:** SPLADE Sparse Encoder
+- **Base model:** [Luyu/co-condenser-marco](https://huggingface.co/Luyu/co-condenser-marco)
+- **Maximum Sequence Length:** 512 tokens (256 for evaluation reproduction)
+- **Output Dimensionality:** 30522 dimensions
+- **Similarity Function:** Dot Product
+
+### Full Model Architecture
+
+```
+SparseEncoder(
+  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
+  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+)
+```
+
+## Usage
+
+### Direct Usage (Sentence Transformers)
+
+First install the Sentence Transformers library:
+
+```bash
+pip install -U sentence-transformers
+```
+
+Then you can load this model and run inference:
+```python
+from sentence_transformers import SparseEncoder
+
+# Download from the 🤗 Hub
+model = SparseEncoder("naver/splade-cocondenser-selfdistil")
+# Run inference
+queries = ["what causes aging fast"]
+documents = [
+    "UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again – single words and multiple bullets.",
+    "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly — or who experiences a sudden decline — should see his or her doctor.",
+    "Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
+]
+query_embeddings = model.encode_query(queries)
+document_embeddings = model.encode_document(documents)
+print(query_embeddings.shape, document_embeddings.shape)
+# [1, 30522] [3, 30522]
+
+# Get the similarity scores for the embeddings
+similarities = model.similarity(query_embeddings, document_embeddings)
+print(similarities)
+# tensor([[ 8.5555, 12.8504,  3.5990]])
+```
+
 ## Citation
 
 If you use our checkpoint, please cite our work:
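The README lists "Similarity Function: Dot Product", so the scores `model.similarity(...)` returns are plain dot products between the sparse query and document vectors. A toy illustration with 8-dimensional vectors standing in for the real 30522-dimensional outputs (all weights made up):

```python
import numpy as np

# Sparse "token weight" vectors: most entries are exactly zero.
query = np.zeros(8)
query[[1, 3]] = [0.5, 2.0]

docs = np.zeros((3, 8))
docs[0, [1, 5]] = [1.0, 0.7]   # overlaps query on index 1
docs[1, [1, 3]] = [0.4, 1.5]   # overlaps on indices 1 and 3
docs[2, [2, 6]] = [0.9, 0.3]   # no overlap with the query

scores = docs @ query          # one dot product per document
print(scores)                  # [0.5 3.2 0. ]
```

Only dimensions where both the query and a document are non-zero contribute, which is why sparse retrieval systems can evaluate these dot products efficiently with an inverted index.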
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+{
+  "model_type": "SparseEncoder",
+  "__version__": {
+    "sentence_transformers": "5.0.0",
+    "transformers": "4.50.3",
+    "pytorch": "2.6.0+cu124"
+  },
+  "prompts": {
+    "query": "",
+    "document": ""
+  },
+  "default_prompt_name": null,
+  "similarity_fn_name": "dot"
+}
modules.json ADDED
@@ -0,0 +1,14 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.sparse_encoder.models.MLMTransformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_SpladePooling",
+    "type": "sentence_transformers.sparse_encoder.models.SpladePooling"
+  }
+]
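This `modules.json` wires a two-stage pipeline: module `"0"` (the MLMTransformer, loaded from the repo root) produces per-token vocabulary logits, and module `"1"` (SpladePooling, loaded from `1_SpladePooling/`) collapses them into one sparse vector. A toy stand-in for both stages, chained in `idx` order; the shapes and the random logit table are invented for illustration, not taken from the real model.

```python
import numpy as np

TOY_VOCAB = 30  # stand-in for the real 30522-entry vocabulary

def toy_mlm_transformer(token_ids):
    """Stand-in for module 0: token ids -> (seq_len, vocab) logits."""
    rng = np.random.default_rng(42)
    table = rng.normal(size=(100, TOY_VOCAB))  # toy vocab of 100 input ids
    return table[np.asarray(token_ids)]

def toy_splade_pooling(logits):
    """Stand-in for module 1: log(1 + ReLU), then max over the sequence."""
    return np.log1p(np.maximum(logits, 0.0)).max(axis=0)

modules = [toy_mlm_transformer, toy_splade_pooling]  # applied in idx order
out = [3, 17, 42]  # toy token ids
for module in modules:
    out = module(out)
print(out.shape)  # (30,)
```

The empty `"path"` on module 0 means its weights live at the repository root, while module 1's `"path"` points at the `1_SpladePooling/` directory whose `config.json` is added in this same commit.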
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 512,
+  "do_lower_case": false
+}