---
library_name: transformers
tags:
- custom_generate
---

## Description

Implementation of [Contrastive Search](https://huggingface.co/blog/introducing-csearch), a decoding strategy that jointly optimizes model confidence and a degeneration penalty to produce fluent, coherent, low-repetition text. At each step, the model considers the top-k candidate tokens and selects the one maximizing:

score(v) = (1 - alpha) * p(v | context) - alpha * max_cosine_similarity(h_v, H_context)

where `alpha` trades off model confidence against the degeneration penalty, `h_v` is the hidden state of candidate token `v`, and `H_context` holds the hidden states of the tokens generated so far; the penalty is the maximum cosine similarity between `h_v` and any state in `H_context`.
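
As a rough illustration, here is a minimal sketch of the per-step scoring rule in PyTorch. The function name, tensor names, and shapes are assumptions chosen for clarity, not this repository's actual implementation:

```py
import torch

def contrastive_score(probs, candidate_hidden, context_hidden, alpha):
    """Score the top-k candidates: reward model confidence, penalize
    similarity to the context (the degeneration penalty).

    probs:            (k,)   model probability of each candidate token
    candidate_hidden: (k, d) hidden state of each candidate token
    context_hidden:   (t, d) hidden states of the tokens generated so far
    """
    # Cosine similarity of every candidate against every context position
    cand = candidate_hidden / candidate_hidden.norm(dim=-1, keepdim=True)
    ctx = context_hidden / context_hidden.norm(dim=-1, keepdim=True)
    max_sim = (cand @ ctx.T).max(dim=-1).values  # (k,)
    return (1 - alpha) * probs - alpha * max_sim

# The next token is the candidate with the highest score:
# next_token = candidate_ids[contrastive_score(...).argmax()]
```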

This strategy typically:

- Reduces repetition compared to greedy and beam search
- Preserves semantic coherence better than pure sampling

---

## Base model

- `Qwen/Qwen2.5-0.5B-Instruct` (example)

---

## Model compatibility

- Decoder-only (causal LM) and encoder-decoder (seq2seq LM) transformer models (see the sketch below)
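
As a sketch of the encoder-decoder case: the call is identical to the decoder-only example further down, only the model class changes. `google/flan-t5-small` is an illustrative checkpoint, not one validated here:

```py
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "google/flan-t5-small"  # illustrative seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Translate to German: How are you?", return_tensors="pt")
gen_out = model.generate(
    **inputs,
    custom_generate="contrastive_search",
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=32,
    trust_remote_code=True,
)
print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```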

---

## Additional Arguments

- `top_k` (int): number of candidate tokens considered at each step (e.g., 4)
- `penalty_alpha` (float): weight of the degeneration penalty (e.g., 0.6)

Tips:

- Larger `top_k` explores more candidates but increases compute
- `penalty_alpha` in [0.3, 0.8] often works well; `0.0` reduces to greedy decoding (see the comparison sketch after this list)
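
As a quick way to see the penalty's effect, a sketch that sweeps `penalty_alpha`, reusing the `model`, `tokenizer`, and `inputs` defined in the example usage section below; with `penalty_alpha=0.0` the output should match greedy search:

```py
# Sweep the degeneration penalty: 0.0 reduces to greedy decoding,
# larger values trade raw confidence for less repetitive text.
for alpha in (0.0, 0.3, 0.6):
    out = model.generate(
        **inputs,
        custom_generate="contrastive_search",
        penalty_alpha=alpha,
        top_k=4,
        max_new_tokens=64,
        trust_remote_code=True,
    )
    print(f"penalty_alpha={alpha}:")
    print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```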

---

## Output Type changes

(none) — returns the same structure as standard `transformers` generation
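
For example, structured outputs should come back exactly as they do from built-in generation; the class name in the comment below is what standard `transformers` decoding returns and is assumed to carry over:

```py
# Request a structured output instead of a plain tensor of token ids.
gen_out = model.generate(
    **inputs,
    custom_generate="contrastive_search",
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=32,
    return_dict_in_generate=True,
    output_scores=True,
    trust_remote_code=True,
)
print(type(gen_out).__name__)   # e.g., GenerateDecoderOnlyOutput
print(gen_out.sequences.shape)  # (batch_size, sequence_length)
```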

---

## Example usage

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device

device = infer_device()

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

inputs = tokenizer(["DeepMind Company is"], return_tensors="pt").to(device)

# Contrastive search (requires trust_remote_code for the custom decoding loop)
gen_out = model.generate(
    **inputs,
    custom_generate="contrastive_search",
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=128,
    trust_remote_code=True,
)

print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```
|