Pringled committed on
Commit 3c4fd6f · verified · 1 Parent(s): edf27ba

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -31,7 +31,7 @@ WordSim 55.15
 </div>


- This [Model2Vec](https://github.com/MinishLab/model2vec) model is pre-trained using [Tokenlearn](https://github.com/MinishLab/tokenlearn). It is a distilled version of the [baai/bge-base-en-v1.5](https://huggingface.co/baai/bge-base-en-v1.5) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. It uses a larger vocabulary size than the [potion-science-8M](https://huggingface.co/minishlab/potion-base-8M) model which can be beneficial for tasks that require a larger vocabulary.
+ This [Model2Vec](https://github.com/MinishLab/model2vec) model is pre-trained using [Tokenlearn](https://github.com/MinishLab/tokenlearn). It is a distilled version of the [baai/bge-base-en-v1.5](https://huggingface.co/baai/bge-base-en-v1.5) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical. It uses a larger vocabulary size than the [potion-base-8M](https://huggingface.co/minishlab/potion-base-8M) model which can be beneficial for tasks that require a larger vocabulary.


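The paragraph updated above describes a static-embedding model loaded through the model2vec library. A minimal usage sketch, assuming the standard `StaticModel.from_pretrained` / `encode` API; the repo ID `minishlab/potion-base-8M` below is only an illustrative stand-in, since this commit does not name the repo ID of the model the README belongs to.

```python
from model2vec import StaticModel

# Load a static Model2Vec embedding model from the Hugging Face Hub.
# "minishlab/potion-base-8M" is a placeholder; substitute the repo ID of
# the model this README actually describes.
model = StaticModel.from_pretrained("minishlab/potion-base-8M")

# Encoding with static embeddings is essentially a lookup plus pooling,
# so it runs quickly on CPU as well as GPU.
embeddings = model.encode([
    "Model2Vec produces static sentence embeddings.",
    "Static embeddings make inference much faster.",
])

print(embeddings.shape)  # (2, embedding_dim)
```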