tformal committed
Commit 81a219f · verified · 1 Parent(s): cf51559

sentence_transformers_support (#2)

- Add support for Sentence Transformers (3e2ff53c1b920e3589d474d2477017eb4cf6e805)
1_SpladePooling/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+     "pooling_strategy": "max",
+     "activation_function": "relu",
+     "word_embedding_dimension": null
+ }
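As a footnote on this config: with `pooling_strategy: "max"` and `activation_function: "relu"`, the SpladePooling step scores each vocabulary term as the maximum over token positions of log(1 + relu(logit)), following the SPLADE formulation. A minimal pure-Python sketch (illustrative only, not code from this repo):

```python
import math

def splade_max_pool(logits):
    """Toy SPLADE max pooling: `logits` holds one vocabulary-sized row
    of MLM logits per token position; returns one weight per vocabulary
    term, computed as max_i log(1 + relu(logits[i][j]))."""
    vocab_size = len(logits[0])
    return [
        max(math.log1p(max(row[j], 0.0)) for row in logits)
        for j in range(vocab_size)
    ]

# toy example: 2 token positions, vocabulary of 3 terms
weights = splade_max_pool([[1.0, -2.0, 0.5],
                           [0.0,  3.0, -1.0]])
print([round(w, 4) for w in weights])  # [0.6931, 1.3863, 0.4055]
```

The relu zeroes out negative logits, and the log dampens very large ones, which is what keeps the resulting term weights sparse and bounded.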
README.md CHANGED
@@ -1,3 +1,91 @@
  ---
  license: cc-by-nc-sa-4.0
+ language: "en"
+ tags:
+ - splade
+ - query-expansion
+ - document-expansion
+ - bag-of-words
+ - passage-retrieval
+ - sentence-transformers
+ - sparse-encoder
+ - sparse
+ pipeline_tag: feature-extraction
+ library_name: sentence-transformers
+ datasets:
+ - ms_marco
  ---
+
+ ## SPLADE_v2_max
+
+ SPLADE model for passage retrieval. For additional details, please visit:
+ * paper: https://arxiv.org/abs/2109.10086
+ * code: https://github.com/naver/splade
+
+ ## Model Details
+
+ This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
+
+ ### Model Description
+ - **Model Type:** SPLADE Sparse Encoder
+ - **Base model:** [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased)
+ - **Maximum Sequence Length:** 512 tokens (256 for evaluation reproduction)
+ - **Output Dimensionality:** 30522 dimensions
+ - **Similarity Function:** Dot Product
+
+ ### Full Model Architecture
+
+ ```
+ SparseEncoder(
+   (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: BertForMaskedLM
+   (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from sentence_transformers import SparseEncoder
+
+ # Download from the 🤗 Hub
+ model = SparseEncoder("naver/splade_v2_max")
+
+ # Run inference
+ queries = ["what causes aging fast"]
+ documents = [
+     "UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again – single words and multiple bullets.",
+     "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly — or who experiences a sudden decline — should see his or her doctor.",
+     "Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
+ ]
+ query_embeddings = model.encode_query(queries)
+ document_embeddings = model.encode_document(documents)
+ print(query_embeddings.shape, document_embeddings.shape)
+ # [1, 30522] [3, 30522]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(query_embeddings, document_embeddings)
+ print(similarities)
+ # tensor([[12.3349, 7.0284, 2.5650]])
+ ```
+
+ ## Citation
+
+ If you use our checkpoint, please cite our work:
+
+ ```
+ @article{formal2021splade,
+     title={SPLADE v2: Sparse lexical and expansion model for information retrieval},
+     author={Formal, Thibault and Lassance, Carlos and Piwowarski, Benjamin and Clinchant, St{\'e}phane},
+     journal={arXiv preprint arXiv:2109.10086},
+     year={2021}
+ }
+ ```
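Since the model card above lists Dot Product as the similarity function, here is a toy sketch (an illustration, not part of the model card) of why dot products over sparse embeddings are cheap: only terms with nonzero weight in both vectors contribute to the score.

```python
def sparse_dot(query_vec, doc_vec):
    """Dot product of two sparse vectors stored as {term_id: weight}
    dicts; only terms present in both dicts contribute."""
    return sum(w * doc_vec[t] for t, w in query_vec.items() if t in doc_vec)

# toy vectors: only term 101 appears in both
query = {101: 1.2, 2054: 0.8}
doc = {101: 0.5, 999: 2.0}
print(sparse_dot(query, doc))  # 0.6
```

This is the same reason SPLADE vectors can be served from an inverted index: scoring a query touches only the posting lists of its nonzero terms.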
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "model_type": "SparseEncoder",
+     "__version__": {
+         "sentence_transformers": "5.0.0",
+         "transformers": "4.50.3",
+         "pytorch": "2.6.0+cu124"
+     },
+     "prompts": {
+         "query": "",
+         "document": ""
+     },
+     "default_prompt_name": null,
+     "similarity_fn_name": "dot"
+ }
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.sparse_encoder.models.MLMTransformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_SpladePooling",
+         "type": "sentence_transformers.sparse_encoder.models.SpladePooling"
+     }
+ ]
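The two entries above run in `idx` order, with the MLMTransformer output feeding SpladePooling. A toy sketch of that chaining (hypothetical stand-ins, not the actual sentence-transformers loader):

```python
def run_pipeline(modules, features):
    """Apply a modules.json-style module list in order: each module's
    output becomes the next module's input."""
    for module in modules:
        features = module(features)
    return features

# hypothetical stand-ins for the two real modules
mlm_transformer = lambda text: [len(tok) for tok in text.split()]  # fake per-token logits
splade_pooling = lambda logits: max(logits)                        # fake max pooling
print(run_pipeline([mlm_transformer, splade_pooling], "a toy example"))  # 7
```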
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 512,
+     "do_lower_case": false
+ }