metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:157
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
Why does the author find large language models (LLMs) infuriating as a
computer scientist and software engineer?
sentences:
- >-
Stuff we figured out about AI in 2023
Simon Willison’s Weblog
Subscribe
Stuff we figured out about AI in 2023
31st December 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think
it’s OK to call these AI—they’re the latest and (currently) most
interesting development in the academic field of Artificial Intelligence
that dates back to the 1950s.
Here’s my attempt to round up the highlights in one place!
- >-
Still, I’m surprised that no-one has beaten the now almost year old
GPT-4 by now. OpenAI clearly have some substantial tricks that they
haven’t shared yet.
Vibes Based Development
As a computer scientist and software engineer, LLMs are infuriating.
Even the openly licensed ones are still the world’s most convoluted
black boxes. We continue to have very little idea what they can do, how
exactly they work and how best to control them.
I’m used to programming where the computer does exactly what I tell it
to do. Prompting an LLM is decidedly not that!
The worst part is the challenge of evaluating them.
There are plenty of benchmarks, but no benchmark is going to tell you if
an LLM actually “feels” right when you try it for a given task.
- >-
Then in December, the Chatbot Arena team introduced a whole new
leaderboard for this feature, driven by users building the same
interactive app twice with two different models and voting on the
answer. Hard to come up with a more convincing argument that this
feature is now a commodity that can be effectively implemented against
all of the leading models.
I’ve been tinkering with a version of this myself for my Datasette
project, with the goal of letting users use prompts to build and iterate
on custom widgets and data visualizations against their own data. I also
figured out a similar pattern for writing one-shot Python programs,
enabled by uv.
- source_sentence: >-
What are some examples of large language models that can run entirely in a
browser or on personal devices, as mentioned in the context?
sentences:
- >-
The May 13th announcement of GPT-4o included a demo of a brand new voice
mode, where the true multi-modal GPT-4o (the o is for “omni”) model
could accept audio input and output incredibly realistic sounding speech
without needing separate TTS or STT models.
The demo also sounded conspicuously similar to Scarlett Johansson... and
after she complained the voice from the demo, Skye, never made it to a
production product.
The delay in releasing the new voice mode after the initial demo caused
quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is
not running the new features yet.
- >-
Now add a walrus: Prompt engineering in DALL-E 3
32.8k
41.2k
Web LLM runs the vicuna-7b Large Language Model entirely in your
browser, and it’s very impressive
32.5k
38.2k
ChatGPT can’t access the internet, even though it really looks like it
can
30.5k
34.2k
Stanford Alpaca, and the acceleration of on-device large language model
development
29.7k
35.7k
Run Llama 2 on your own Mac using LLM and Homebrew
27.9k
33.6k
Midjourney 5.1
26.7k
33.4k
Think of language models like ChatGPT as a “calculator for words”
25k
31.8k
Multi-modal prompt injection image attacks against GPT-4V
23.7k
27.4k
- >-
One way to think about these models is an extension of the
chain-of-thought prompting trick, first explored in the May 2022 paper
Large Language Models are Zero-Shot Reasoners.
This is that trick where, if you get a model to talk out loud about a
problem it’s solving, you often get a result which the model would not
have achieved otherwise.
o1 takes this process and further bakes it into the model itself. The
details are somewhat obfuscated: o1 models spend “reasoning tokens”
thinking through the problem that are not directly visible to the user
(though the ChatGPT UI shows a summary of them), then outputs a final
result.
- source_sentence: >-
How did the rollout of ChatGPT Advanced Voice mode take place, and what
was the user’s experience with it?
sentences:
- >-
Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4
class, but at 1B and 3B sizes they punch massively above their weight. I
run Llama 3.2 3B on my iPhone using the free MLC Chat iOS app and it’s a
shockingly capable model for its tiny (<2GB) size. Try firing it up and
asking it for “a plot outline of a Netflix Christmas movie where a data
journalist falls in love with a local ceramacist”. Here’s what I got, at
a respectable 20 tokens per second:
- >-
When ChatGPT Advanced Voice mode finally did roll out (a slow roll from
August through September) it was spectacular. I’ve been using it
extensively on walks with my dog and it’s amazing how much the
improvement in intonation elevates the material. I’ve also had a lot of
fun experimenting with the OpenAI audio APIs.
Even more fun: Advanced Voice mode can do accents! Here’s what happened
when I told it I need you to pretend to be a California brown pelican
with a very thick Russian accent, but you talk to me exclusively in
Spanish.
- >-
Today $30/mTok gets you OpenAI’s most expensive model, o1. GPT-4o is
$2.50 (12x cheaper than GPT-4) and GPT-4o mini is $0.15/mTok—200x
cheaper than GPT-4, nearly 7x cheaper than GPT-3.5 and massively more
capable than that model.
Other model providers charge even less. Anthropic’s Claude 3 Haiku (from
March, but still their cheapest model) is $0.25/mTok. Google’s Gemini
1.5 Flash is $0.075/mTok and their Gemini 1.5 Flash 8B is
$0.0375/mTok—that’s 27x cheaper than GPT-3.5 Turbo last year.
I’ve been tracking these pricing changes under my llm-pricing tag.
- source_sentence: >-
What challenge does the author identify as a major limitation for LLMs and
similar systems in making meaningful decisions?
sentences:
- >-
Your browser does not support the audio element.
OpenAI aren’t the only group with a multi-modal audio model. Google’s
Gemini also accepts audio input, and the Google Gemini apps can speak in
a similar way to ChatGPT now. Amazon also pre-announced voice mode for
Amazon Nova, but that’s meant to roll out in Q1 of 2025.
Google’s NotebookLM, released in September, took audio output to a new
level by producing spookily realistic conversations between two “podcast
hosts” about anything you fed into their tool. They later added custom
instructions, so naturally I turned them into pelicans:
Your browser does not support the audio element.
- >-
Just this week, the New York Times launched a landmark lawsuit against
OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth
reading—especially the first few pages, which lay out the issues in a
way that’s surprisingly easy to follow. The rest of the document
includes some of the clearest explanations of what LLMs are, how they
work and how they are built that I’ve read anywhere.
The legal arguments here are complex. I’m not a lawyer, but I don’t
think this one will be easily decided. Whichever way it goes, I expect
this case to have a profound impact on how this technology develops in
the future.
- >-
Terminology aside, I remain skeptical as to their utility based, once
again, on the challenge of gullibility. LLMs believe anything you tell
them. Any systems that attempts to make meaningful decisions on your
behalf will run into the same roadblock: how good is a travel agent, or
a digital assistant, or even a research tool if it can’t distinguish
truth from fiction?
Just the other day Google Search was caught serving up an entirely fake
description of the non-existant movie “Encanto 2”. It turned out to be
summarizing an imagined movie listing from a fan fiction wiki.
- source_sentence: >-
Why is there a need for better criticism of LLMs according to the 2024
blog posts?
sentences:
- |-
The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged “llms” on my blog in 2024
- >-
There’s now a fascinating ecosystem of people training their own models
on top of these foundations, publishing those models, building
fine-tuning datasets and sharing those too.
The Hugging Face Open LLM Leaderboard is one place that tracks these. I
can’t even attempt to count them, and any count would be out-of-date
within a few hours.
The best overall openly licensed LLM at any time is rarely a foundation
model: instead, it’s whichever fine-tuned community model has most
recently discovered the best combination of fine-tuning data.
This is a huge advantage for open over closed models: the closed, hosted
models don’t have thousands of researchers and hobbyists around the
world collaborating and competing to improve them.
- >-
Things we learned about LLMs in 2024
Simon Willison’s Weblog
Subscribe
Things we learned about LLMs in 2024
31st December 2024
A lot has happened in the world of Large Language Models over the course
of 2024. Here’s a review of things we figured out about the field in the
past twelve months, plus my attempt at identifying key themes and
pivotal moments.
This is a sequel to my review of 2023.
In this article:
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9583333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9583333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9583333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9846220730654774
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9791666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9791666666666666
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
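The key properties above (512-token input limit, 1024-dimensional output, CLS-token pooling followed by normalization) can be read off the loaded model directly. A minimal sketch, assuming the Sentence Transformers install shown in the Usage section below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")

# Maximum sequence length and output dimensionality from the Model Description
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024

# The second module is the Pooling layer; CLS-token pooling is enabled
pooling = model[1]
print(pooling.pooling_mode_cls_token)            # True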
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")
# Run inference
sentences = [
'Why is there a need for better criticism of LLMs according to the 2024 blog posts?',
'The year of slop\nSynthetic training data works great\nLLMs somehow got even harder to use\nKnowledge is incredibly unevenly distributed\nLLMs need better criticism\nEverything tagged “llms” on my blog in 2024',
'Things we learned about LLMs in 2024\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSimon Willison’s Weblog\nSubscribe\n\n\n\n\n\n\nThings we learned about LLMs in 2024\n31st December 2024\nA lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.\nThis is a sequel to my review of 2023.\nIn this article:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
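For retrieval-style use, the same API can rank a small corpus against a query using cosine similarity (the model's configured similarity function). The query and documents below are made-up placeholders, so treat this as a sketch of the pattern rather than a benchmark:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")

# Hypothetical query and corpus, for illustration only
query = "What did we learn about LLMs in 2024?"
documents = [
    "A review of things we figured out about Large Language Models in 2024.",
    "Instructions for assembling flat-pack furniture.",
]

query_embedding = model.encode([query])
document_embeddings = model.encode(documents)

# Cosine similarity scores, shape [1, len(documents)]; higher is more similar
scores = model.similarity(query_embedding, document_embeddings)
best = scores.argmax().item()
print(documents[best])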
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9583 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.9583 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.9583 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9846 |
cosine_mrr@10 | 0.9792 |
cosine_map@100 | 0.9792 |
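The held-out query/passage pairs behind these numbers are not shipped with the card, but the same metrics can be computed on your own data with the evaluator named above. A minimal sketch with made-up IDs and texts:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")

# Hypothetical evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "Why is there a need for better criticism of LLMs according to the 2024 blog posts?"}
corpus = {
    "d1": "LLMs need better criticism",
    "d2": "Run Llama 2 on your own Mac using LLM and Homebrew",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries=queries, corpus=corpus, relevant_docs=relevant_docs)
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_recall@k, cosine_ndcg@10, ...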
Training Details
Training Dataset
Unnamed Dataset
- Size: 157 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 157 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 2 tokens, mean: 20.91 tokens, max: 37 tokens | min: 43 tokens, mean: 135.42 tokens, max: 214 tokens |
- Samples:

  sentence_0: When did Meta release the original Llama model?
  sentence_1: Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.
  I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!
  This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.
  Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.

  sentence_0: What was significant about the release of Llama 2 in July?
  sentence_1: Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.
  I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!
  This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.
  Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.

  sentence_0: What new feature does ChatGPT voice mode offer as of December?
  sentence_1: The most recent twist, again from December (December was a lot) is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did.
- Loss: MatryoshkaLoss with these parameters:
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
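Concretely, this configuration wraps MultipleNegativesRankingLoss (in-batch negatives over the (sentence_0, sentence_1) pairs) in MatryoshkaLoss, which applies the same objective at several truncated embedding sizes so that shortened vectors stay useful. A sketch of how such a loss is typically constructed with these parameters, not the exact training script:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch negatives: each question is pulled toward its paired passage and pushed
# away from the other passages in the same batch
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same objective at truncated embedding sizes with equal weights
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)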
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
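These map directly onto SentenceTransformerTrainingArguments. A sketch covering only the non-default values above, with a placeholder output_dir:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

# output_dir is a placeholder; the remaining values mirror the list above
args = SentenceTransformerTrainingArguments(
    output_dir="models/snowflake-arctic-embed-l-ft",
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)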
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.9638 |
2.0 | 32 | 0.9638 |
3.0 | 48 | 0.9638 |
3.125 | 50 | 0.9638 |
4.0 | 64 | 0.9692 |
5.0 | 80 | 0.9846 |
6.0 | 96 | 0.9846 |
6.25 | 100 | 0.9846 |
7.0 | 112 | 0.9846 |
8.0 | 128 | 0.9846 |
9.0 | 144 | 0.9846 |
9.375 | 150 | 0.9846 |
10.0 | 160 | 0.9846 |
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}