# NMIXX-BGE
This repository contains a bge-large-en-v1.5-based SentenceTransformer model fine-tuned with a triplet-loss setup on the nmixx-fin/NMIXX_train
dataset. It produces high-quality sentence embeddings for Korean financial text, optimized for semantic similarity tasks in the finance domain.
## How to Use
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F
# 1. Load tokenizer & model from Hugging Face Hub
repo_name = "nmixx-fin/nmixx-bge"
tokenizer = AutoTokenizer.from_pretrained(repo_name)
model = AutoModel.from_pretrained(repo_name)
# 2. Move to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
# 3. Prepare input sentences
sentences = [
    # "This model provides embeddings specialized for the Korean financial domain."
    "이 모델은 한국 금융 도메인에 특화된 임베딩을 제공합니다.",
    # "It is a sentence transformer fine-tuned on the NMIXX dataset."
    "NMIXX 데이터셋으로 fine-tuning된 sentence transformer입니다.",
]
# 4. Tokenize
encoded_input = tokenizer(
    sentences,
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt"
)
input_ids = encoded_input["input_ids"].to(device)
attention_mask = encoded_input["attention_mask"].to(device)
# 5. Forward pass (token embeddings)
with torch.no_grad():
    model_output = model(input_ids=input_ids, attention_mask=attention_mask)
# 6. CLS Pooling (BGE models use CLS token)
sentence_embeddings = model_output[0][:, 0] # Use CLS token (first token)
# 7. L2 Normalization
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings shape:", sentence_embeddings.shape)
print(sentence_embeddings.cpu())
```
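Because the embeddings are L2-normalized in step 7, the cosine similarity between two sentences is simply the dot product of their embedding vectors. Continuing from the snippet above:

```python
# Cosine similarity of L2-normalized embeddings reduces to a dot product.
similarity = sentence_embeddings @ sentence_embeddings.T  # shape: (2, 2)
print("Similarity between the two example sentences:", similarity[0, 1].item())
```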
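Since the model is a SentenceTransformer fine-tune, it can also be loaded through the sentence-transformers library, which applies the pooling stored with the model. The sketch below is a minimal alternative to the manual pipeline above; `normalize_embeddings=True` is passed to mirror the explicit L2 normalization in step 7, in case normalization is not part of the saved pipeline.

```python
from sentence_transformers import SentenceTransformer

# Load the fine-tuned model; pooling comes from the saved SentenceTransformer config.
model = SentenceTransformer("nmixx-fin/nmixx-bge")

sentences = [
    "이 모델은 한국 금융 도메인에 특화된 임베딩을 제공합니다.",
    "NMIXX 데이터셋으로 fine-tuning된 sentence transformer입니다.",
]

# normalize_embeddings=True mirrors the manual L2 normalization above.
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)
```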
Base model: [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)