---
library_name: transformers
license: mit
language:
- multilingual
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
---
# `NLLB-LLM2Vec`: Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages
- **Repository:** https://github.com/fdschmidt93/trident-nllb-llm2vec
- **Paper:** https://arxiv.org/abs/2406.12739
`NLLB-LLM2Vec` extends [LLM2Vec](https://github.com/McGill-NLP/llm2vec) to 200+ languages via efficient self-supervised distillation. We train the up-projection and the LoRA adapters of `NLLB-LLM2Vec` by forcing its mean-pooled token embeddings to match (via mean-squared error) the output of the original LLM2Vec.

This model has only been trained on self-supervised data and has not yet been fine-tuned on any downstream task. This version is expected to perform better than the self-supervised adaptation reported in the original paper, because the LoRA adapters are merged into the model prior to task fine-tuning. The backbone of this model is [LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse](https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse). We use the encoder of [NLLB-600M](https://huggingface.co/facebook/nllb-200-distilled-600M).
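
Conceptually, the distillation objective is a simple embedding-matching loss: mean-pooled `NLLB-LLM2Vec` embeddings are regressed onto the frozen LLM2Vec teacher's mean-pooled embeddings with mean-squared error. The sketch below is illustrative only (hypothetical function and variable names; the actual training code lives in the linked repository):

```python
import torch
import torch.nn.functional as F

def mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).type_as(hidden_states)
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

def distillation_loss(student_hidden, student_mask, teacher_hidden, teacher_mask):
    # Student: NLLB encoder -> up-projection -> LoRA-adapted LLM2Vec layers.
    # Teacher: the original, frozen LLM2Vec on the same input.
    student_emb = mean_pool(student_hidden, student_mask)
    teacher_emb = mean_pool(teacher_hidden, teacher_mask)
    return F.mse_loss(student_emb, teacher_emb.detach())
```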
## Usage
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Load the NLLB tokenizer and the NLLB-LLM2Vec model. `trust_remote_code=True`
# pulls in the custom modeling code that stacks the NLLB encoder and LLM2Vec.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModel.from_pretrained(
    "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="cuda" if torch.cuda.is_available() else "cpu",
)

# Encode queries with an instruction prefix
instruction = (
    "Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
    [instruction, "how much protein should a female eat"],
    [instruction, "summit define"],
]
q_reps = model.encode(queries)

# Encode documents; instructions are not required for documents
documents = [
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = model.encode(documents)

# Compute cosine similarity between query and document embeddings
q_reps_norm = F.normalize(q_reps, p=2, dim=1)
d_reps_norm = F.normalize(d_reps, p=2, dim=1)
cos_sim = q_reps_norm @ d_reps_norm.T
print(cos_sim)
"""
tensor([[0.7740, 0.5580],
        [0.4845, 0.4993]])
"""
```
## Fine-tuning
You should fine-tune the model on labelled data unless you are using the model for unsupervised retrieval-style tasks.
`NLLB-LLM2Vec` supports both `AutoModelForSequenceClassification` and `AutoModelForTokenClassification`.
```python
import torch
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Attach LoRA adapters only to the linear layers of the LLM2Vec component inside NLLB-LLM2Vec
lora_config = LoraConfig(
    lora_alpha=32,
    target_modules=r".*llm2vec.*(self_attn\.(q|k|v|o)_proj|mlp\.(gate|up|down)_proj).*",
    bias="none",
    task_type="SEQ_CLS",
)
# Use AutoModelForTokenClassification and task_type="TOKEN_CLS" for token-level tasks
model = AutoModelForSequenceClassification.from_pretrained(
    "fdschmidt93/NLLB-LLM2Vec-Meta-Llama-31-8B-Instruct-mntp-unsup-simcse",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = get_peft_model(model, lora_config)
```
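After wrapping the model with PEFT, a quick sanity check along these lines should confirm that only the LoRA parameters (and the classification head) are trainable and that a forward pass produces logits. This is a sketch continuing from the snippet above; the example inputs and the default two-label head are assumptions, not part of the original card:

```python
import torch
from transformers import AutoTokenizer

# Inspect how many parameters the LoRA adapters add (PEFT utility).
model.print_trainable_parameters()

# Hypothetical sanity check: a single forward pass with the NLLB tokenizer.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
batch = tokenizer(
    ["This movie was great!", "This movie was terrible."],
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**batch)
print(outputs.logits.shape)  # expected: (batch_size, num_labels)
```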
## Questions
If you have any questions about the code, feel free to email Fabian David Schmidt (`[email protected]`).
## Citation
If you are using `NLLB-LLM2Vec` in your work, please cite
```
@misc{schmidt2024selfdistillationmodelstackingunlocks,
title={Self-Distillation for Model Stacking Unlocks Cross-Lingual NLU in 200+ Languages},
author={Fabian David Schmidt and Philipp Borchert and Ivan Vulić and Goran Glavaš},
year={2024},
eprint={2406.12739},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2406.12739},
}
```
The work has been accepted to Findings of EMNLP 2024. The BibTeX entry will be updated once the paper is published in the ACL Anthology.