alvarobartt (HF Staff) committed · verified
Commit d21f62f · 1 Parent(s): d7e9539

Add `text-embeddings-inference` tag & snippet


## Description

- Add `text-embeddings-inference` tag to improve discoverability
- Add a sample snippet showing how to run Text Embeddings Inference (TEI) via Docker

⚠️ **This PR has been generated automatically, so please review it before merging.**

Files changed (1)
  1. README.md +36 -8
README.md CHANGED
@@ -7,6 +7,7 @@ tags:
 - feature-extraction
 - sentence-similarity
 - transformers
+- text-embeddings-inference
 pipeline_tag: sentence-similarity
 new_version: sentence-transformers/all-mpnet-base-v2
 ---
@@ -79,14 +80,14 @@ The project aims to train sentence embedding models on very large sentence level
 contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned in on a
 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
 
-We developped this model during the
+We developed this model during the
 [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
-organized by Hugging Face. We developped this model as part of the project:
-[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Googles Flax, JAX, and Cloud team member about efficient deep learning frameworks.
+organized by Hugging Face. We developed this model as part of the project:
+[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPUs v3-8, as well as intervention from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.
 
 ## Intended uses
 
-Our model is intented to be used as a sentence and short paragraph encoder. Given an input text, it ouptuts a vector which captures
+Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
 the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
 
 By default, input text longer than 128 word pieces is truncated.
@@ -103,9 +104,9 @@ We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/
 We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity from each possible sentence pairs from the batch.
 We then apply the cross entropy loss by comparing with true pairs.
 
-#### Hyper parameters
+#### Hyperparameters
 
-We trained ou model on a TPU v3-8. We train the model during 920k steps using a batch size of 512 (64 per TPU core).
+We trained our model on a TPU v3-8. We train the model during 920k steps using a batch size of 512 (64 per TPU core).
 We use a learning rate warm up of 500. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
 a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
 
@@ -136,7 +137,7 @@ We sampled each dataset given a weighted probability which configuration is deta
 | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
 | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
 | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
-| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
+| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
 | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
 | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
 | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
@@ -147,4 +148,31 @@ We sampled each dataset given a weighted probability which configuration is deta
 | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
 | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
 | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
-| **Total** | | **1,124,818,467** |
+| **Total** | | **1,124,818,467** |
+
+## Usage (Text Embeddings Inference (TEI))
+
+[Text Embeddings Inference (TEI)](https://github.com/huggingface/text-embeddings-inference) is a blazing fast inference solution for text embeddings models.
+
+- CPU:
+```bash
+docker run -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-latest --model-id sentence-transformers/all-mpnet-base-v1 --pooling mean --dtype float16
+```
+
+- NVIDIA GPU:
+```bash
+docker run --gpus all -p 8080:80 -v hf_cache:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cuda-latest --model-id sentence-transformers/all-mpnet-base-v1 --pooling mean --dtype float16
+```
+
+Send a request to `/v1/embeddings` to generate embeddings via the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings/create):
+```bash
+curl http://localhost:8080/v1/embeddings \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "sentence-transformers/all-mpnet-base-v1",
+        "input": ["This is an example sentence", "Each sentence is converted"]
+    }'
+```
+
+Or check the [Text Embeddings Inference API specification](https://huggingface.github.io/text-embeddings-inference/) instead.
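
The added `curl` snippet can also be exercised from Python. The following is a minimal sketch, not part of the committed snippet: it assumes one of the TEI containers from the diff is already serving the model on `localhost:8080`, that the `requests` package is installed, and that the response follows the OpenAI Embeddings API shape referenced above.

```python
# Hedged sketch: assumes a TEI container (see the Docker commands in the
# diff above) is serving this model on localhost:8080.
import math

import requests

response = requests.post(
    "http://localhost:8080/v1/embeddings",
    headers={"Content-Type": "application/json"},
    json={
        "model": "sentence-transformers/all-mpnet-base-v1",
        "input": ["This is an example sentence", "Each sentence is converted"],
    },
    timeout=30,
)
response.raise_for_status()

# Per the OpenAI Embeddings API, vectors live under data[i]["embedding"].
first, second = (item["embedding"] for item in response.json()["data"])

# Cosine similarity between the two example sentences.
dot = sum(x * y for x, y in zip(first, second))
norms = math.sqrt(sum(x * x for x in first)) * math.sqrt(sum(y * y for y in second))
print(f"cosine similarity: {dot / norms:.4f}")
```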
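
The README text carried along in this diff also describes the training objective: cosine similarity computed across all possible sentence pairs in a batch, followed by cross-entropy against the true pairings. Below is an illustrative sketch of that in-batch contrastive objective. It is not the repository's `train_script.py`; the function name, batch shapes, and similarity scale factor are assumptions chosen for the example.

```python
# Illustrative in-batch contrastive objective (a common formulation of the
# scheme the README describes); NOT taken from train_script.py.
import torch
import torch.nn.functional as F


def in_batch_contrastive_loss(
    anchors: torch.Tensor,    # (batch, dim) embeddings of the first sentences
    positives: torch.Tensor,  # (batch, dim) embeddings of their true pairs
    scale: float = 20.0,      # assumed temperature; sharpens the softmax
) -> torch.Tensor:
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    # (batch, batch) matrix of cosine similarities between every possible pair.
    scores = anchors @ positives.T * scale
    # Row i's true pair sits at column i; all other columns act as negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)


# Toy usage with random stand-ins for the model's 768-dim embeddings.
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```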