Tom Aarsen committed
Commit 2f6ecfd · 1 Parent(s): 4ee1aa5

Add "device_map": "auto" to automatically move the model to CUDA if possible

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -72,7 +72,7 @@ model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
 # together with setting `padding_side` to "left":
 # model = SentenceTransformer(
 #     "Qwen/Qwen3-Embedding-0.6B",
-#     model_kwargs={"attn_implementation": "flash_attention_2"},
+#     model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
 #     tokenizer_kwargs={"padding_side": "left"},
 # )
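For context, a minimal sketch of the commented-out README snippet with this change applied, uncommented into runnable form. It assumes `sentence-transformers`, `accelerate` (for `device_map="auto"`), and `flash-attn` (for `flash_attention_2`) are installed on a CUDA machine; the example sentence is illustrative and not part of the diff.

```python
from sentence_transformers import SentenceTransformer

# Load Qwen3-Embedding-0.6B with flash attention and left padding;
# "device_map": "auto" lets Accelerate place the model on CUDA when available.
model = SentenceTransformer(
    "Qwen/Qwen3-Embedding-0.6B",
    model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
    tokenizer_kwargs={"padding_side": "left"},
)

# Encode a sample sentence and inspect the embedding shape.
embeddings = model.encode(["What is the capital of China?"])
print(embeddings.shape)
```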