metadata
tags:
  - setfit
  - sentence-transformers
  - text-classification
  - generated_from_setfit_trainer
widget:
  - text: >-
      most of the results look perfectly healthy, but there are a few that are
      over thresholds, they are: 

       
  - text: >-
      so here's my question: is it possible to have a very slow natural
      breathing rate and be healthy?
  - text: >-
      never had an issue with reflux before, i eat very healthy....but gave it a
      go.  
  - text: >-
      does every other person at their healthy weight range feel like this all
      the time?
  - text: >-
      penis overall just looks very unhealthy compared to last year and i have
      no idea what it could be and everywhere i’ve looked suggest it is penile
      cancer.
metrics:
  - accuracy
  - precision
  - recall
  - f1
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
  - name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
    results:
      - task:
          type: text-classification
          name: Text Classification
        dataset:
          name: Unknown
          type: unknown
          split: test
        metrics:
          - type: accuracy
            value: 0.9411764705882353
            name: Accuracy
          - type: precision
            value: 0.9411764705882353
            name: Precision
          - type: recall
            value: 0.9411764705882353
            name: Recall
          - type: f1
            value: 0.9411764705882353
            name: F1

SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.
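
Concretely, the embedding body and the classification head live side by side on the loaded model object. The following is a minimal sketch for inspecting them, assuming the model_body / model_head attribute names used by recent SetFit releases and the placeholder model id from the inference example further down:

from setfit import SetFitModel

model = SetFitModel.from_pretrained("setfit_model_id")
# Sentence Transformer body that produces the embeddings
print(type(model.model_body))   # sentence_transformers.SentenceTransformer
# scikit-learn head that maps embeddings to the two labels
print(type(model.model_head))   # sklearn.linear_model.LogisticRegression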

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
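
The snippet below sketches what these two steps look like with the SetFit Trainer API. It is illustrative only: the tiny dataset, label strings, and hyperparameter values are assumptions rather than the exact data or settings used for this model (the actual settings are listed under Training Hyperparameters below).

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training data using this model's two labels
train_dataset = Dataset.from_dict({
    "text": [
        "i try to eat healthy and i work out three times a week.",
        "my symptoms have gotten worse since the diagnosis.",
    ],
    "label": ["lifestyle", "disease"],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    labels=["lifestyle", "disease"],
)

args = TrainingArguments(batch_size=16, num_epochs=10)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Step 1: contrastive fine-tuning of the embedding body,
# Step 2: fitting the LogisticRegression head on the fine-tuned embeddings
trainer.train()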

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 classes

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
  • Blogpost: SetFit blog post on Hugging Face (https://huggingface.co/blog/setfit)

Model Labels

Label Examples
lifestyle
  • 'i am 21, live a healthy lifestyle, i don’t smoke and only drink socially every once in a while.'
  • 'i know staying up all night and sleeping during the day isnt good for you, brain wise and hormonaly, i will try my best to eat healthy and have good sleep hygiene, but am i risking my health or anything ?'
  • 'i have been eating a bit more unhealthy foods like fried foods.\n\n'
disease
  • 'i was told there’s no way to know what caused it & no treatment options or ways to help fix it besides med options to help manage symptoms but my doc doesn’t want to start that yet due to me being “young & healthy”.'
  • "i gave the whole history because i've been very ill like this for 6 years now after being healthy."
  • 'no baseline medical information included, so the following assumes you are healthy.'

Evaluation

Metrics

Label   Accuracy   Precision   Recall   F1
all     0.9412     0.9412      0.9412   0.9412
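
These scores can be reproduced on a held-out split with scikit-learn. The sketch below uses hypothetical test texts and gold labels, and assumes micro-averaging, which makes precision, recall, and F1 coincide with accuracy and is consistent with the identical values reported above; the original evaluation setup is not documented here, so treat the averaging choice as an assumption.

from setfit import SetFitModel
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Hypothetical held-out texts and gold labels (illustrative only)
test_texts = [
    "i eat very healthy and run daily.",
    "i've been very ill like this for 6 years now after being healthy.",
]
y_true = ["lifestyle", "disease"]

model = SetFitModel.from_pretrained("setfit_model_id")
y_pred = model.predict(test_texts)

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print({"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1})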

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("never had an issue with reflux before, i eat very healthy....but gave it a go.  ")
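
The model also accepts a list of texts, and per-class probabilities from the LogisticRegression head are available via predict_proba. A short sketch continuing from the snippet above (the example texts are illustrative):

texts = [
    "i eat healthy and exercise daily.",
    "my symptoms have gotten worse since the diagnosis.",
]
# One predicted label per input text
preds = model.predict(texts)
# Per-class probabilities from the LogisticRegression head
probs = model.predict_proba(texts)
print(list(zip(texts, preds)))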

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     12    25.8308   60

Label       Training Sample Count
disease     30
lifestyle   35

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (10, 10)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 3786
  • eval_max_steps: -1
  • load_best_model_at_end: False
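
These values map directly onto SetFit's TrainingArguments. The sketch below reconstructs the configuration from the list above; options not passed explicitly (such as the CosineSimilarityLoss and the cosine distance metric) are the library defaults that the list reports.

from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                # (embedding phase, classifier phase)
    num_epochs=(10, 10),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=3786,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
# Passed to the trainer as: Trainer(model=model, args=args, train_dataset=...)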

Training Results

Epoch Step Training Loss Validation Loss
0.0061 1 0.2143 -
0.3067 50 0.2243 -
0.6135 100 0.0812 -
0.9202 150 0.0019 -
1.2270 200 0.0003 -
1.5337 250 0.0002 -
1.8405 300 0.0002 -
2.1472 350 0.0001 -
2.4540 400 0.0001 -
2.7607 450 0.0001 -
3.0675 500 0.0001 -
3.3742 550 0.0001 -
3.6810 600 0.0001 -
3.9877 650 0.0001 -
4.2945 700 0.0001 -
4.6012 750 0.0001 -
4.9080 800 0.0001 -
5.2147 850 0.0001 -
5.5215 900 0.0001 -
5.8282 950 0.0001 -
6.1350 1000 0.0 -
6.4417 1050 0.0 -
6.7485 1100 0.0 -
7.0552 1150 0.0 -
7.3620 1200 0.0 -
7.6687 1250 0.0 -
7.9755 1300 0.0 -
8.2822 1350 0.0 -
8.5890 1400 0.0 -
8.8957 1450 0.0 -
9.2025 1500 0.0 -
9.5092 1550 0.0 -
9.8160 1600 0.0 -

Framework Versions

  • Python: 3.11.7
  • SetFit: 1.1.1
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
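
To approximately reproduce this environment, the listed versions can be pinned at install time (the PyTorch pin may need adjusting for your platform and CUDA setup):

pip install "setfit==1.1.1" "sentence-transformers==3.3.1" "transformers==4.47.1" "torch==2.5.1" "datasets==3.2.0" "tokenizers==0.21.0"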

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}