---
base_model: Jarbas/m2v-256-roberta-base-ca-cased-sts
library_name: model2vec
license: mit
model_name: ovos-model2vec-intents-roberta-base-ca-cased-sts
tags:
  - embeddings
  - static-embeddings
  - sentence-transformers
datasets:
  - Jarbas/ovos-llm-augmented-intents
  - Jarbas/ovos-weather-intents
  - Jarbas/music_queries_templates
  - Jarbas/OVOSGitLocalize-Intents
  - Jarbas/ovos_intent_examples
  - Jarbas/ovos-common-query-intents
language:
  - ca
---

# model_ca_m2v-256-roberta-base-ca-cased-sts Model Card

This Model2Vec model is a fine-tuned version of [Jarbas/m2v-256-roberta-base-ca-cased-sts](https://huggingface.co/Jarbas/m2v-256-roberta-base-ca-cased-sts). It also includes a classifier head on top.

## Installation

Install model2vec with the inference extras using pip:

```bash
pip install model2vec[inference]
```

## Usage

Load this model using the `from_pretrained` method:

```python
from model2vec.inference import StaticModelPipeline

# Load a pretrained Model2Vec model with its classifier head
model = StaticModelPipeline.from_pretrained("model_ca_m2v-256-roberta-base-ca-cased-sts")

# Predict intent labels for a batch of sentences
predicted = model.predict(["Example sentence"])
```
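If you need confidence scores rather than only the top label, recent model2vec versions also expose a `predict_proba` method on the pipeline, which returns one probability per class for each input sentence. The sketch below shows how such per-class probabilities can be mapped back to label names with an argmax; the label names and probability values here are made up for illustration and do not come from this model:

```python
import numpy as np

# Hypothetical intent labels and probabilities, standing in for
# model.classes_ and the output of model.predict_proba([...]).
labels = ["weather", "music", "common_query"]
probs = np.array([[0.1, 0.7, 0.2],
                  [0.6, 0.3, 0.1]])

# Pick the highest-probability label for each input sentence.
best = [labels[i] for i in np.argmax(probs, axis=1)]
print(best)  # ['music', 'weather']
```

The same argmax is what `predict` performs internally for a classifier head, so `predict_proba` is only needed when you want to threshold on confidence or inspect runner-up intents.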

## Library Authors

Model2Vec was developed by the Minish Lab team consisting of Stephan Tulkens and Thomas van Dongen.

## Citation

Please cite the Model2Vec repository if you use this model in your work.

```bibtex
@article{minishlab2024model2vec,
  author = {Tulkens, Stephan and {van Dongen}, Thomas},
  title = {Model2Vec: Fast State-of-the-Art Static Embeddings},
  year = {2024},
  url = {https://github.com/MinishLab/model2vec}
}
```