
Ceyda Cinarel

ceyda

AI & ML interests

NLP & CV ~ Location: Seoul

Organizations

Notebooks-explorers · Flax Community · CVPR Demo Track · HugGAN Community · Hugging Face Fellows · Gradio-Blocks-Party · Kakao Style · fastai X Hugging Face Group 2022 · BigLAM: BigScience Libraries, Archives and Museums · ECCV 2022 · NAACL 2022 · Kornia AI · Stable Diffusion Dreambooth Concepts Library · Blog-explorers · Social Post Explorers · Nerdy Face

ceyda's activity

upvoted 2 articles 2 days ago

Visualize and understand GPU memory in PyTorch • 162

Mastering Tensor Dimensions in Transformers • By not-lain • 33
reacted to tomaarsen's post with ❤️ 2 days ago
🏎️ Today I'm introducing a method to train static embedding models that run 100x to 400x faster on CPU than common embedding models, while retaining 85%+ of the quality! This includes 2 fully open models, with training scripts, datasets, and metrics.

We apply our recipe to train 2 Static Embedding models that we release today:
2️⃣ an English Retrieval model and a general-purpose Multilingual similarity model (e.g. for classification, clustering, etc.), both Apache 2.0
🧠 my modern training strategy: ideation -> dataset choice -> implementation -> evaluation
📜 my training scripts, using the Sentence Transformers library
📊 my Weights & Biases reports with losses & metrics
📕 my list of 30 training and 13 evaluation datasets

The 2 Static Embedding models have the following properties:
🏎️ Extremely fast, e.g. 107,500 sentences per second on a consumer CPU, compared to 270 for 'all-mpnet-base-v2' and 56 for 'gte-large-en-v1.5'
0️⃣ Zero active parameters: No Transformer blocks, no attention, not even a matrix multiplication. Super speed! (A minimal sketch follows this list.)
📏 No maximum sequence length! Embed texts at any length (note: longer texts may embed worse)
📐 Linear instead of quadratic complexity: 2x longer text takes 2x longer, instead of 2.5x or more.
🪆 Matryoshka support: allows you to truncate embeddings with minimal performance loss (e.g. 4x smaller with a 0.56% perf. decrease for English Similarity tasks)
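
To make the "no Transformer blocks, no attention" property concrete, here is a minimal PyTorch sketch of the static-embedding idea: a fixed per-token vector lookup followed by mean pooling. The class name, vocabulary size, and token IDs below are hypothetical toys for illustration, not the released models' actual code:

```python
import torch
import torch.nn as nn

class StaticEmbedder(nn.Module):
    """Toy static embedding model: token lookup + mean pooling, no attention."""
    def __init__(self, vocab_size: int, dim: int = 1024):
        super().__init__()
        # EmbeddingBag fuses the per-token lookup and the mean pooling,
        # so a forward pass is table indexing plus an average.
        self.bag = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        # token_ids: all token IDs in the batch, concatenated into one 1-D tensor
        # offsets:   start position of each text within token_ids
        return self.bag(token_ids, offsets)

model = StaticEmbedder(vocab_size=30_000)  # hypothetical vocab size
ids = torch.tensor([101, 2023, 2003, 102, 101, 2178, 102])  # two toy "texts"
offsets = torch.tensor([0, 4])  # each text's start index in `ids`
embeddings = model(ids, offsets)  # shape: (2, 1024)
```

Since the cost is one table lookup per token plus an average, 2x more tokens means 2x more work, which is the linear complexity noted above.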

Check out the full blogpost if you'd like to 1) use these lightning-fast models or 2) learn how to train them with consumer-level hardware: https://huggingface.co/blog/static-embeddings

The blogpost contains a lengthy list of possible advancements; I'm very confident that our 2 models are only the tip of the iceberg, and we may be able to get even better performance.

Alternatively, check out the models:
* sentence-transformers/static-retrieval-mrl-en-v1
* sentence-transformers/static-similarity-mrl-multilingual-v1
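
A brief usage sketch for the released models, assuming the standard Sentence Transformers loading API (`SentenceTransformer` and `encode` are the library's documented entry points, and `truncate_dim` is its documented Matryoshka truncation option); the example sentences are made up:

```python
from sentence_transformers import SentenceTransformer

# Load the English retrieval model; it runs comfortably on CPU.
model = SentenceTransformer(
    "sentence-transformers/static-retrieval-mrl-en-v1", device="cpu"
)

sentences = [
    "Visualize and understand GPU memory in PyTorch",
    "How do static embedding models pool tokens?",
]
embeddings = model.encode(sentences)  # shape: (2, embedding_dim)

# Matryoshka truncation: keep only the leading dimensions for a smaller
# index, at the small quality cost quoted in the post.
small = SentenceTransformer(
    "sentence-transformers/static-retrieval-mrl-en-v1", truncate_dim=256
)
small_embeddings = small.encode(sentences)  # shape: (2, 256)
```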
upvoted an article 2 days ago

You could have designed state of the art positional encoding • 127
β€’ 127