---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:157
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: >-
      Why does the author find large language models (LLMs) infuriating as a
      computer scientist and software engineer?
    sentences:
      - >-
        Stuff we figured out about AI in 2023

        Simon Willison’s Weblog

        Subscribe

        Stuff we figured out about AI in 2023

        31st December 2023

        2023 was the breakthrough year for Large Language Models (LLMs). I think
        it’s OK to call these AI—they’re the latest and (currently) most
        interesting development in the academic field of Artificial Intelligence
        that dates back to the 1950s.

        Here’s my attempt to round up the highlights in one place!
      - >-
        Still, I’m surprised that no-one has beaten the now almost year old
        GPT-4 by now. OpenAI clearly have some substantial tricks that they
        haven’t shared yet.

        Vibes Based Development

        As a computer scientist and software engineer, LLMs are infuriating.

        Even the openly licensed ones are still the world’s most convoluted
        black boxes. We continue to have very little idea what they can do, how
        exactly they work and how best to control them.

        I’m used to programming where the computer does exactly what I tell it
        to do. Prompting an LLM is decidedly not that!

        The worst part is the challenge of evaluating them.

        There are plenty of benchmarks, but no benchmark is going to tell you if
        an LLM actually “feels” right when you try it for a given task.
      - >-
        Then in December, the Chatbot Arena team introduced a whole new
        leaderboard for this feature, driven by users building the same
        interactive app twice with two different models and voting on the
        answer. Hard to come up with a more convincing argument that this
        feature is now a commodity that can be effectively implemented against
        all of the leading models.

        I’ve been tinkering with a version of this myself for my Datasette
        project, with the goal of letting users use prompts to build and iterate
        on custom widgets and data visualizations against their own data. I also
        figured out a similar pattern for writing one-shot Python programs,
        enabled by uv.
  - source_sentence: >-
      What are some examples of large language models that can run entirely in a
      browser or on personal devices, as mentioned in the context?
    sentences:
      - >-
        The May 13th announcement of GPT-4o included a demo of a brand new voice
        mode, where the true multi-modal GPT-4o (the o is for “omni”) model
        could accept audio input and output incredibly realistic sounding speech
        without needing separate TTS or STT models.

        The demo also sounded conspicuously similar to Scarlett Johansson... and
        after she complained the voice from the demo, Skye, never made it to a
        production product.

        The delay in releasing the new voice mode after the initial demo caused
        quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
        not running the new features yet.
      - >-
        Now add a walrus: Prompt engineering in DALL-E 3

        32.8k

        41.2k



        Web LLM runs the vicuna-7b Large Language Model entirely in your
        browser, and it’s very impressive

        32.5k

        38.2k



        ChatGPT can’t access the internet, even though it really looks like it
        can

        30.5k

        34.2k



        Stanford Alpaca, and the acceleration of on-device large language model
        development

        29.7k

        35.7k



        Run Llama 2 on your own Mac using LLM and Homebrew

        27.9k

        33.6k



        Midjourney 5.1

        26.7k

        33.4k



        Think of language models like ChatGPT as a “calculator for words”

        25k

        31.8k



        Multi-modal prompt injection image attacks against GPT-4V

        23.7k

        27.4k
      - >-
        One way to think about these models is an extension of the
        chain-of-thought prompting trick, first explored in the May 2022 paper
        Large Language Models are Zero-Shot Reasoners.

        This is that trick where, if you get a model to talk out loud about a
        problem it’s solving, you often get a result which the model would not
        have achieved otherwise.

        o1 takes this process and further bakes it into the model itself. The
        details are somewhat obfuscated: o1 models spend “reasoning tokens”
        thinking through the problem that are not directly visible to the user
        (though the ChatGPT UI shows a summary of them), then outputs a final
        result.
  - source_sentence: >-
      How did the rollout of ChatGPT Advanced Voice mode take place, and what
      was the user’s experience with it?
    sentences:
      - >-
        Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4
        class, but at 1B and 3B sizes they punch massively above their weight. I
        run Llama 3.2 3B on my iPhone using the free MLC Chat iOS app and it’s a
        shockingly capable model for its tiny (<2GB) size. Try firing it up and
        asking it for “a plot outline of a Netflix Christmas movie where a data
        journalist falls in love with a local ceramacist”. Here’s what I got, at
        a respectable 20 tokens per second:
      - >-
        When ChatGPT Advanced Voice mode finally did roll out (a slow roll from
        August through September) it was spectacular. I’ve been using it
        extensively on walks with my dog and it’s amazing how much the
        improvement in intonation elevates the material. I’ve also had a lot of
        fun experimenting with the OpenAI audio APIs.

        Even more fun: Advanced Voice mode can do accents! Here’s what happened
        when I told it I need you to pretend to be a California brown pelican
        with a very thick Russian accent, but you talk to me exclusively in
        Spanish.
      - >-
        Today $30/mTok gets you OpenAI’s most expensive model, o1. GPT-4o is
        $2.50 (12x cheaper than GPT-4) and GPT-4o mini is $0.15/mTok—200x
        cheaper than GPT-4, nearly 7x cheaper than GPT-3.5 and massively more
        capable than that model.

        Other model providers charge even less. Anthropic’s Claude 3 Haiku (from
        March, but still their cheapest model) is $0.25/mTok. Google’s Gemini
        1.5 Flash is $0.075/mTok and their Gemini 1.5 Flash 8B is
        $0.0375/mTok—that’s 27x cheaper than GPT-3.5 Turbo last year.

        I’ve been tracking these pricing changes under my llm-pricing tag.
  - source_sentence: >-
      What challenge does the author identify as a major limitation for LLMs and
      similar systems in making meaningful decisions?
    sentences:
      - >-
        Your browser does not support the audio element.


        OpenAI aren’t the only group with a multi-modal audio model. Google’s
        Gemini also accepts audio input, and the Google Gemini apps can speak in
        a similar way to ChatGPT now. Amazon also pre-announced voice mode for
        Amazon Nova, but that’s meant to roll out in Q1 of 2025.

        Google’s NotebookLM, released in September, took audio output to a new
        level by producing spookily realistic conversations between two “podcast
        hosts” about anything you fed into their tool. They later added custom
        instructions, so naturally I turned them into pelicans:



        Your browser does not support the audio element.
      - >-
        Just this week, the New York Times launched a landmark lawsuit against
        OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth
        reading—especially the first few pages, which lay out the issues in a
        way that’s surprisingly easy to follow. The rest of the document
        includes some of the clearest explanations of what LLMs are, how they
        work and how they are built that I’ve read anywhere.

        The legal arguments here are complex. I’m not a lawyer, but I don’t
        think this one will be easily decided. Whichever way it goes, I expect
        this case to have a profound impact on how this technology develops in
        the future.
      - >-
        Terminology aside, I remain skeptical as to their utility based, once
        again, on the challenge of gullibility. LLMs believe anything you tell
        them. Any systems that attempts to make meaningful decisions on your
        behalf will run into the same roadblock: how good is a travel agent, or
        a digital assistant, or even a research tool if it can’t distinguish
        truth from fiction?

        Just the other day Google Search was caught serving up an entirely fake
        description of the non-existant movie “Encanto 2”. It turned out to be
        summarizing an imagined movie listing from a fan fiction wiki.
  - source_sentence: >-
      Why is there a need for better criticism of LLMs according to the 2024
      blog posts?
    sentences:
      - |-
        The year of slop
        Synthetic training data works great
        LLMs somehow got even harder to use
        Knowledge is incredibly unevenly distributed
        LLMs need better criticism
        Everything tagged “llms” on my blog in 2024
      - >-
        There’s now a fascinating ecosystem of people training their own models
        on top of these foundations, publishing those models, building
        fine-tuning datasets and sharing those too.

        The Hugging Face Open LLM Leaderboard is one place that tracks these. I
        can’t even attempt to count them, and any count would be out-of-date
        within a few hours.

        The best overall openly licensed LLM at any time is rarely a foundation
        model: instead, it’s whichever fine-tuned community model has most
        recently discovered the best combination of fine-tuning data.

        This is a huge advantage for open over closed models: the closed, hosted
        models don’t have thousands of researchers and hobbyists around the
        world collaborating and competing to improve them.
      - >-
        Things we learned about LLMs in 2024

        Simon Willison’s Weblog

        Subscribe

        Things we learned about LLMs in 2024

        31st December 2024

        A lot has happened in the world of Large Language Models over the course
        of 2024. Here’s a review of things we figured out about the field in the
        past twelve months, plus my attempt at identifying key themes and
        pivotal moments.

        This is a sequel to my review of 2023.

        In this article:
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.9583333333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 1
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9583333333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3333333333333333
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.9583333333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 1
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9846220730654774
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9791666666666666
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9791666666666666
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
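
Because the pipeline ends with a CLS-token Pooling module followed by Normalize(), every embedding is unit length, so cosine similarity and dot product give the same scores. A minimal sanity check (a sketch; it assumes the model can be downloaded from the Hub):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")
print(model.get_sentence_embedding_dimension())  # 1024
print(model.max_seq_length)                      # 512

emb = model.encode(["first example sentence", "second example sentence"])
# The Normalize() module makes every vector unit-length ...
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))
# ... so the plain dot-product matrix matches model.similarity (cosine), up to float error.
print(np.allclose(emb @ emb.T, model.similarity(emb, emb).numpy(), atol=1e-5))
```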

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")
# Run inference
sentences = [
    'Why is there a need for better criticism of LLMs according to the 2024 blog posts?',
    'The year of slop\nSynthetic training data works great\nLLMs somehow got even harder to use\nKnowledge is incredibly unevenly distributed\nLLMs need better criticism\nEverything tagged “llms” on my blog in 2024',
    'Things we learned about LLMs in 2024\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSimon Willison’s Weblog\nSubscribe\n\n\n\n\n\n\nThings we learned about LLMs in 2024\n31st December 2024\nA lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.\nThis is a sequel to my review of 2023.\nIn this article:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
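
Since the model was trained with MatryoshkaLoss (see Training Details below), embeddings can also be truncated to 768, 512, 256, 128, or 64 dimensions with only a modest quality drop. A sketch of loading the model with truncation enabled (256 is just an example dimension):

```python
from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the leading dimensions of each embedding at encode time.
model = SentenceTransformer(
    "dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9",
    truncate_dim=256,
)
embeddings = model.encode([
    "Why is there a need for better criticism of LLMs according to the 2024 blog posts?",
])
print(embeddings.shape)
# (1, 256)
```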

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.9583
cosine_accuracy@3 1.0
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.9583
cosine_precision@3 0.3333
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.9583
cosine_recall@3 1.0
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9846
cosine_mrr@10 0.9792
cosine_map@100 0.9792
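
The card does not include the evaluation script, but metric names of this form are what the sentence-transformers InformationRetrievalEvaluator reports. A toy sketch with made-up queries, corpus, and relevance judgments:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")

# Hypothetical data: query id -> text, corpus id -> text, query id -> relevant corpus ids.
queries = {"q1": "Why is there a need for better criticism of LLMs?"}
corpus = {
    "d1": "LLMs need better criticism. Everything tagged llms on my blog in 2024.",
    "d2": "Meta's Llama 3.2 models deserve a special mention.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results["cosine_ndcg@10"])  # same metric family as the table above
```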

Training Details

Training Dataset

Unnamed Dataset

  • Size: 157 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 157 samples:
    • sentence_0 (string): min 2 tokens, mean 20.91 tokens, max 37 tokens
    • sentence_1 (string): min 43 tokens, mean 135.42 tokens, max 214 tokens
  • Samples:
    • sentence_0: When did Meta release the original Llama model?
      sentence_1: Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook. I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call! This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use. Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.
    • sentence_0: What was significant about the release of Llama 2 in July?
      sentence_1: Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook. I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call! This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use. Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.
    • sentence_0: What new feature does ChatGPT voice mode offer as of December?
      sentence_1: The most recent twist, again from December (December was a lot) is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
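
The parameters above correspond to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A sketch of how that loss would be constructed in sentence-transformers (the training script itself is not part of this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # all weighted equally, as listed above
)
```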
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
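
The non-default values above map directly onto SentenceTransformerTrainingArguments. A sketch of how they would be set (output_dir is a hypothetical path; everything else follows the defaults listed under All Hyperparameters):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-output",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
# These arguments would then be passed to SentenceTransformerTrainer
# together with the training dataset, the loss above, and the evaluator.
```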

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 16 0.9638
2.0 32 0.9638
3.0 48 0.9638
3.125 50 0.9638
4.0 64 0.9692
5.0 80 0.9846
6.0 96 0.9846
6.25 100 0.9846
7.0 112 0.9846
8.0 128 0.9846
9.0 144 0.9846
9.375 150 0.9846
10.0 160 0.9846

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.1
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}