---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:157
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Why does the author find large language models (LLMs) infuriating
as a computer scientist and software engineer?
sentences:
- 'Stuff we figured out about AI in 2023
Simon Willison’s Weblog
Subscribe
Stuff we figured out about AI in 2023
31st December 2023
2023 was the breakthrough year for Large Language Models (LLMs). I think it’s
OK to call these AI—they’re the latest and (currently) most interesting development
in the academic field of Artificial Intelligence that dates back to the 1950s.
Here’s my attempt to round up the highlights in one place!'
- 'Still, I’m surprised that no-one has beaten the now almost year old GPT-4 by
now. OpenAI clearly have some substantial tricks that they haven’t shared yet.
Vibes Based Development
As a computer scientist and software engineer, LLMs are infuriating.
Even the openly licensed ones are still the world’s most convoluted black boxes.
We continue to have very little idea what they can do, how exactly they work and
how best to control them.
I’m used to programming where the computer does exactly what I tell it to do.
Prompting an LLM is decidedly not that!
The worst part is the challenge of evaluating them.
There are plenty of benchmarks, but no benchmark is going to tell you if an LLM
actually “feels” right when you try it for a given task.'
- 'Then in December, the Chatbot Arena team introduced a whole new leaderboard for
this feature, driven by users building the same interactive app twice with two
different models and voting on the answer. Hard to come up with a more convincing
argument that this feature is now a commodity that can be effectively implemented
against all of the leading models.
I’ve been tinkering with a version of this myself for my Datasette project, with
the goal of letting users use prompts to build and iterate on custom widgets and
data visualizations against their own data. I also figured out a similar pattern
for writing one-shot Python programs, enabled by uv.'
- source_sentence: What are some examples of large language models that can run entirely
in a browser or on personal devices, as mentioned in the context?
sentences:
- 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode,
where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio
input and output incredibly realistic sounding speech without needing separate
TTS or STT models.
The demo also sounded conspicuously similar to Scarlett Johansson... and after
she complained the voice from the demo, Skye, never made it to a production product.
The delay in releasing the new voice mode after the initial demo caused quite
a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running
the new features yet.'
- 'Now add a walrus: Prompt engineering in DALL-E 3
32.8k
41.2k
Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and
it’s very impressive
32.5k
38.2k
ChatGPT can’t access the internet, even though it really looks like it can
30.5k
34.2k
Stanford Alpaca, and the acceleration of on-device large language model development
29.7k
35.7k
Run Llama 2 on your own Mac using LLM and Homebrew
27.9k
33.6k
Midjourney 5.1
26.7k
33.4k
Think of language models like ChatGPT as a “calculator for words”
25k
31.8k
Multi-modal prompt injection image attacks against GPT-4V
23.7k
27.4k'
- 'One way to think about these models is an extension of the chain-of-thought prompting
trick, first explored in the May 2022 paper Large Language Models are Zero-Shot
Reasoners.
This is that trick where, if you get a model to talk out loud about a problem
it’s solving, you often get a result which the model would not have achieved otherwise.
o1 takes this process and further bakes it into the model itself. The details
are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the
problem that are not directly visible to the user (though the ChatGPT UI shows
a summary of them), then outputs a final result.'
- source_sentence: How did the rollout of ChatGPT Advanced Voice mode take place,
and what was the user’s experience with it?
sentences:
- 'Meta’s Llama 3.2 models deserve a special mention. They may not be GPT-4 class,
but at 1B and 3B sizes they punch massively above their weight. I run Llama 3.2
3B on my iPhone using the free MLC Chat iOS app and it’s a shockingly capable
model for its tiny (<2GB) size. Try firing it up and asking it for “a plot outline
of a Netflix Christmas movie where a data journalist falls in love with a local
ceramacist”. Here’s what I got, at a respectable 20 tokens per second:'
- 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August
through September) it was spectacular. I’ve been using it extensively on walks
with my dog and it’s amazing how much the improvement in intonation elevates the
material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.
Even more fun: Advanced Voice mode can do accents! Here’s what happened when I
told it I need you to pretend to be a California brown pelican with a very thick
Russian accent, but you talk to me exclusively in Spanish.'
- 'Today $30/mTok gets you OpenAI’s most expensive model, o1. GPT-4o is $2.50 (12x
cheaper than GPT-4) and GPT-4o mini is $0.15/mTok—200x cheaper than GPT-4, nearly
7x cheaper than GPT-3.5 and massively more capable than that model.
Other model providers charge even less. Anthropic’s Claude 3 Haiku (from March,
but still their cheapest model) is $0.25/mTok. Google’s Gemini 1.5 Flash is $0.075/mTok
and their Gemini 1.5 Flash 8B is $0.0375/mTok—that’s 27x cheaper than GPT-3.5
Turbo last year.
I’ve been tracking these pricing changes under my llm-pricing tag.'
- source_sentence: What challenge does the author identify as a major limitation for
LLMs and similar systems in making meaningful decisions?
sentences:
- 'Your browser does not support the audio element.
OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also
accepts audio input, and the Google Gemini apps can speak in a similar way to
ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s
meant to roll out in Q1 of 2025.
Google’s NotebookLM, released in September, took audio output to a new level by
producing spookily realistic conversations between two “podcast hosts” about anything
you fed into their tool. They later added custom instructions, so naturally I
turned them into pelicans:
Your browser does not support the audio element.'
- 'Just this week, the New York Times launched a landmark lawsuit against OpenAI
and Microsoft over this issue. The 69 page PDF is genuinely worth reading—especially
the first few pages, which lay out the issues in a way that’s surprisingly easy
to follow. The rest of the document includes some of the clearest explanations
of what LLMs are, how they work and how they are built that I’ve read anywhere.
The legal arguments here are complex. I’m not a lawyer, but I don’t think this
one will be easily decided. Whichever way it goes, I expect this case to have
a profound impact on how this technology develops in the future.'
- 'Terminology aside, I remain skeptical as to their utility based, once again,
on the challenge of gullibility. LLMs believe anything you tell them. Any systems
that attempts to make meaningful decisions on your behalf will run into the same
roadblock: how good is a travel agent, or a digital assistant, or even a research
tool if it can’t distinguish truth from fiction?
Just the other day Google Search was caught serving up an entirely fake description
of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
movie listing from a fan fiction wiki.'
- source_sentence: Why is there a need for better criticism of LLMs according to the
2024 blog posts?
sentences:
- 'The year of slop
Synthetic training data works great
LLMs somehow got even harder to use
Knowledge is incredibly unevenly distributed
LLMs need better criticism
Everything tagged “llms” on my blog in 2024'
- 'There’s now a fascinating ecosystem of people training their own models on top
of these foundations, publishing those models, building fine-tuning datasets and
sharing those too.
The Hugging Face Open LLM Leaderboard is one place that tracks these. I can’t
even attempt to count them, and any count would be out-of-date within a few hours.
The best overall openly licensed LLM at any time is rarely a foundation model:
instead, it’s whichever fine-tuned community model has most recently discovered
the best combination of fine-tuning data.
This is a huge advantage for open over closed models: the closed, hosted models
don’t have thousands of researchers and hobbyists around the world collaborating
and competing to improve them.'
- 'Things we learned about LLMs in 2024
Simon Willison’s Weblog
Subscribe
Things we learned about LLMs in 2024
31st December 2024
A lot has happened in the world of Large Language Models over the course of 2024.
Here’s a review of things we figured out about the field in the past twelve months,
plus my attempt at identifying key themes and pivotal moments.
This is a sequel to my review of 2023.
In this article:'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9583333333333334
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9583333333333334
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9583333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9846220730654774
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9791666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9791666666666666
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
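The `Pooling` module is configured with `pooling_mode_cls_token: True`, so each text is represented by its CLS token embedding, and the final `Normalize()` module L2-normalizes that vector. With unit-length embeddings, the plain dot product and cosine similarity coincide. A minimal sketch checking this (the example texts are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")
a, b = model.encode(["first example text", "second example text"])

# Because of the Normalize() module, each embedding has unit length,
# so the dot product equals the cosine similarity.
print(np.linalg.norm(a))  # ~1.0
print(np.dot(a, b))       # same value as the cosine similarity of a and b
```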
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")
# Run inference
sentences = [
    'Why is there a need for better criticism of LLMs according to the 2024 blog posts?',
    'The year of slop\nSynthetic training data works great\nLLMs somehow got even harder to use\nKnowledge is incredibly unevenly distributed\nLLMs need better criticism\nEverything tagged “llms” on my blog in 2024',
    'Things we learned about LLMs in 2024\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nSimon Willison’s Weblog\nSubscribe\n\n\n\n\n\n\nThings we learned about LLMs in 2024\n31st December 2024\nA lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.\nThis is a sequel to my review of 2023.\nIn this article:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
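Because this model was trained with MatryoshkaLoss (see Training Details below), its embeddings can also be truncated to one of the trained dimensionalities, trading a little retrieval quality for lower storage and search cost. A minimal sketch using the library's `truncate_dim` option; 256 is chosen here only as an example from the trained dimensions:

```python
from sentence_transformers import SentenceTransformer

# Truncate output embeddings to 256 dimensions, one of the
# matryoshka_dims this model was trained with.
model = SentenceTransformer(
    "dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9",
    truncate_dim=256,
)
embeddings = model.encode(["How much does GPT-4o mini cost per million tokens?"])
print(embeddings.shape)
# (1, 256)
```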
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [`InformationRetrievalEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9583 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9583 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9583 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9846** |
| cosine_mrr@10 | 0.9792 |
| cosine_map@100 | 0.9792 |
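The evaluation dataset is unnamed in the card metadata. To run the same evaluator on your own query/passage data, something like the following minimal sketch should work (the IDs and texts are illustrative placeholders, not the actual evaluation set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-35c151f9-26b7-4fe9-8a15-ce3914830ac9")

# Placeholder data: queries and corpus map IDs to text; relevant_docs
# maps each query ID to the set of relevant document IDs.
queries = {"q1": "When did Meta release the original Llama model?"}
corpus = {
    "d1": "Then in February, Meta released Llama...",
    "d2": "The year of slop...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="demo")
metrics = evaluator(model)
print(metrics)  # includes demo_cosine_ndcg@10, demo_cosine_accuracy@1, ...
```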
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 157 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 157 samples:
  |      | sentence_0 | sentence_1 |
  |:-----|:-----------|:-----------|
  | type | string     | string     |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>When did Meta release the original Llama model?</code> | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code> |
  | <code>What was significant about the release of Llama 2 in July?</code> | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code> |
  | <code>What new feature does ChatGPT voice mode offer as of December?</code> | <code>The most recent twist, again from December (December was a lot) is live video. ChatGPT voice mode now provides the option to share your camera feed with the model and talk about what you can see in real time. Google Gemini have a preview of the same feature, which they managed to ship the day before ChatGPT did.</code> |
* Loss: [`MatryoshkaLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        768,
        512,
        256,
        128,
        64
    ],
    "matryoshka_weights": [
        1,
        1,
        1,
        1,
        1
    ],
    "n_dims_per_step": -1
}
```
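Concretely, this applies MultipleNegativesRankingLoss (in-batch negatives) at each of the nested dimensionalities from 768 down to 64 with equal weight, which is what makes the embedding truncation shown under Usage viable. A minimal sketch of constructing such a loss with this library:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Wrap the base in-batch-negatives loss so it is optimized at every
# nested dimensionality listed in matryoshka_dims.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```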
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
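
Put together, a training run with these settings might look roughly like the sketch below; the dataset construction is an illustrative placeholder (the actual 157 question/passage pairs are not published with this card), and the evaluation wiring is omitted for brevity:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder pairs in the same (sentence_0, sentence_1) layout as the samples above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["When did Meta release the original Llama model?"],
    "sentence_1": ["Then in February, Meta released Llama..."],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-arctic-embed-l",
    per_device_train_batch_size=10,
    num_train_epochs=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```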
#### All Hyperparameters