tomaarsen and alvarobartt committed
Commit 3958fb5 · verified · Parent: d7e9539

Add `text-embeddings-inference` tag & snippet (#4)


- Add `text-embeddings-inference` tag & snippet (d21f62f2ced8493148c5417b392bb39cb698d1e1)
- Update README.md (683fd8e6b35c64bd99cbd5f8d845f6098d98087d)
- embeddings models -> embedding models (0b27657c66732a78fc7a3960e54794adca3cfe1d)


Co-authored-by: Alvaro Bartolome <[email protected]>

Files changed (1): README.md +35 -8
README.md CHANGED
@@ -7,6 +7,7 @@ tags:
 - feature-extraction
 - sentence-similarity
 - transformers
+- text-embeddings-inference
 pipeline_tag: sentence-similarity
 new_version: sentence-transformers/all-mpnet-base-v2
 ---
@@ -71,6 +72,33 @@ print("Sentence embeddings:")
 print(sentence_embeddings)
 ```
 
+## Usage (Text Embeddings Inference (TEI))
+
+[Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference) is a blazing fast inference solution for text embedding models.
+
+- CPU:
+```bash
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest --model-id sentence-transformers/all-mpnet-base-v1 --pooling mean --dtype float16
+```
+
+- NVIDIA GPU:
+```bash
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cuda-latest --model-id sentence-transformers/all-mpnet-base-v1 --pooling mean --dtype float16
+```
+
+Send a request to `/v1/embeddings` to generate embeddings via the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create):
+```bash
+curl http://localhost:8080/v1/embeddings \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "sentence-transformers/all-mpnet-base-v1",
+        "input": ["This is an example sentence", "Each sentence is converted"]
+    }'
+```
+
+Or check the [Text Embeddings Inference API specification](https://huggingface.github.io/text-embeddings-inference/) instead.
+
 ------
 
 ## Background
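For convenience, the same request can also be issued from Python. This is a minimal sketch, assuming the TEI container from the hunk above is running locally on port 8080; it uses the generic `requests` library rather than any TEI-specific client:

```python
import requests

# Same payload as the curl example above: two sentences in, two vectors out.
response = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={
        "model": "sentence-transformers/all-mpnet-base-v1",
        "input": ["This is an example sentence", "Each sentence is converted"],
    },
)
response.raise_for_status()

# The OpenAI-compatible response carries one entry per input under "data".
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), len(embeddings[0]))  # 2 vectors, 768 dimensions each
```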
@@ -79,19 +107,18 @@ The project aims to train sentence embedding models on very large sentence level
 contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a
 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
 
-We developped this model during the
+We developed this model during the
 [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
-organized by Hugging Face. We developped this model as part of the project:
-[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks.
+organized by Hugging Face. We developed this model as part of the project:
+[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
 
 ## Intended uses
 
-Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures
+Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
 the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
 
 By default, input text longer than 128 word pieces is truncated.
 
-
 ## Training procedure
 
 ### Pre-training
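To make the "Intended uses" paragraph in the hunk above concrete, here is a small illustrative sketch (not part of the README itself) that scores sentence similarity with the encoder, reusing the `sentence_transformers` API from the usage snippet earlier in the file:

```python
from sentence_transformers import SentenceTransformer, util

# Encode two short texts with the model described above.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v1")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Pairwise cosine similarity in [-1, 1]; higher means more semantically similar.
print(util.cos_sim(embeddings[0], embeddings[1]))
```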
@@ -103,9 +130,9 @@ We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/
 We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
 We then apply the cross entropy loss by comparing with true pairs.
 
-#### Hyper parameters
+#### Hyperparameters
 
-We trained ou model on a TPU v3-8. We train the model during 920k steps using a batch size of 512 (64 per TPU core).
+We trained our model on a TPU v3-8. We train the model during 920k steps using a batch size of 512 (64 per TPU core).
 We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
 a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
 
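The fine-tuning hunk above describes the objective only in prose: cosine similarity over every sentence pair in the batch, then cross-entropy against the true pairings. As a rough illustration (a sketch, not the repository's `train_script.py`; the scale factor of 20 is an assumption), that loss can be written as:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchors: torch.Tensor,
                              positives: torch.Tensor,
                              scale: float = 20.0) -> torch.Tensor:
    """Cosine-similarity matrix over the batch, cross-entropy on the diagonal."""
    # Normalize so dot products equal cosine similarities.
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # scores[i, j] = cos(anchor_i, positive_j); true pairs lie on the diagonal.
    scores = anchors @ positives.T * scale
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# Toy batch of 4 pairs with 768-dimensional embeddings (the model's width).
print(in_batch_contrastive_loss(torch.randn(4, 768), torch.randn(4, 768)))
```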
@@ -136,7 +163,7 @@ We sampled each dataset given a weighted probability which configuration is deta
 | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
 | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
 | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
-| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
+| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
 | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
 | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
 | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
 