Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
Abstract
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).
Community
Also important: could we also get the 3.3T-token dataset? Pretty please
pocket-size gpt-3.5?
14B matching GPT-4-0314 on MT-Bench?
<3
Weights or it didn't happen. Also, please make it Apache 2.0.
Weights?
Very cool! Would be awesome to increase visibility + experimentation by sharing the weights as well
This is incredible. When will we see the weights?
So great to see the successor of Phi-1.5/2! Looking forward to being able to play with the model and embed it locally everywhere!
Weights Please!
These SLMs keep getting better. Would be cool to get an APK to actually run them on mobile devices without Termux. The two existing apps that I know of have limited model support, and GPT-3.5-level model quality is a good occasion to wrap one up.
I agree that SLMs probably need more focus and have the potential to make great strides on multiple fronts, be it accessibility, deployability, inference speed, or new use cases. Of course it means putting in more effort on dataset curation and maybe even the architecture. The Phi series is proof that focused data curation alone can improve performance quite a bit.
Given recent events, I don't think the weights will be available, and forget about the dataset. Even if the weights are released, they will be taken down the next day for some testing or alignment or some other reason, never to return. Great job guys!!
I'm not sure what recent events you're referring to. I'll wait for the official statement before jumping to conclusions.
When will the model be released?
Llama was never competitive, Llama 2 got beaten by Mistral within a few weeks, and Llama 3 got beaten by Phi-3 within a few days?
Damn, if this is true Zuck might start to get seriously mad... (even if Phi is using Llama 2)
I saw weights are coming tomorrow (on Twitter, hopefully it's legit!). In any case, there's a plain-English rewrite of this paper available here if you want: https://www.aimodels.fyi/papers/arxiv/phi-3-technical-report-highly-capable-language
Surely the first reference needs fixing to say 2024 and to use capital letters in the right places?
Currently says: "References
[AI23] Meta AI. Introducing meta llama 3: The most capable openly available llm to date, 2023."
Surely it should be: "[AI23] Meta AI. Introducing Meta Llama 3: The most capable openly available LLM to date, 2024."?
@gugarosa
So looking forward to playing with this, well done all!
How did you "filter" data for Phase-1 and 2? Was it manual? How did you ensure if it was automated?
Also, what was the criteria for "inducing reasoning" on the dataset and web?
It is now available on HuggingChat: https://huggingface.co/chat/models/microsoft/Phi-3-mini-4k-instruct
For some reason the one on HuggingChat gives me crappy answers, e.g.:
Q: what files are needed for a chrome extension, what are their names?
A: To add unit tests for the URLAnalyzer class, you'll need to set up a testing framework like Jest. Here's an example of how you might write tests for the waitForClassPresence and analyzeUrl met [...] (and so on, completely unrelated junk)
I tried the GGUF Q4 version in GPT4All and got much better results; the only issue is with the stop token.
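In case it helps anyone hitting the same thing, here is a minimal sketch of running a Q4 GGUF build with llama-cpp-python (used here instead of GPT4All) while passing an explicit stop string. The file name, chat markup, and the `<|end|>` token are assumptions, so check the model card of the GGUF you actually downloaded.

```python
# Minimal sketch: run a Q4 GGUF build of phi-3-mini with llama-cpp-python and
# work around the stop-token issue by passing an explicit stop string.
# The file name, prompt markup, and "<|end|>" token are assumptions; verify
# them against the model card of the GGUF you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Phi-3-mini-4k-instruct-q4.gguf", n_ctx=4096)

prompt = (
    "<|user|>\n"
    "What files are needed for a Chrome extension, and what are their names?<|end|>\n"
    "<|assistant|>\n"
)

# stop=["<|end|>"] cuts generation before the model runs past its own turn.
out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```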
The weights just dropped - MIT license!
https://huggingface.co/microsoft/Phi-3-mini-128k-instruct
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
Would love to see all models in this family on the LMSYS Arena!
Arena is like double-blind peer review ++ randomized controlled trials in science! The gold standard for judging something. I hope some API provider like Together API will offer inference services for this family of models, both for all of us and for the Arena!
You can already try the models with transformers (https://github.com/huggingface/transformers) or TGI (https://github.com/huggingface/text-generation-inference). cc @Narsil @lysandre
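For reference, here is a minimal sketch of loading the 4k-instruct checkpoint with transformers. The generation settings, dtype/device choices, and the `trust_remote_code=True` flag are assumptions based on how instruct checkpoints on the Hub are commonly run, not instructions from the paper.

```python
# Minimal sketch: load microsoft/Phi-3-mini-4k-instruct with transformers and
# generate a reply via the tokenizer's chat template. The dtype, device_map,
# and trust_remote_code settings are assumptions; adjust for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # assumption: the checkpoint may ship custom modeling code
)

messages = [{"role": "user", "content": "What files are needed for a Chrome extension?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```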
Does it have any tuning for function calling? What dataset was used, or how can it be fine-tuned for agent applications?
Wow, people are actually starting to use HF comments now, really cool!
Being able to fine-tune it for JSON mode and to use it on mobile will have a very nice impact!
It opens so many opportunities for agents on GPU-poor devices.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Stable LM 2 1.6B Technical Report (2024)
- Nemotron-4 15B Technical Report (2024)
- Latxa: An Open Language Model and Evaluation Suite for Basque (2024)
- Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws (2024)
- LAB: Large-Scale Alignment for ChatBots (2024)
phi-4 will run on a toaster
Here is my blog showcasing this paper: https://ajithp.com/2024/04/28/the-miniature-language-model-with-massive-potential-introducing-phi-3/
This very high performance on some benchmarks (the paper claims higher performance than Mixtral 8x7B) seems suspicious, given that the model scores way lower on Chatbot Arena: it has an Elo of 1064 as of now, so it's good, but below Mistral-7B-Instruct-v0.2 (1073) and far below Mixtral (1114).
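For scale, if you read Arena ratings as standard Elo (an assumption; the leaderboard's exact scoring has its own details), the expected preference rate for phi-3-mini against Mixtral implied by a 50-point gap is roughly:

$$E_{\text{phi-3}} = \frac{1}{1 + 10^{(1114 - 1064)/400}} \approx 0.43$$

so the gap corresponds to losing a head-to-head comparison only modestly more often than winning it.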
Unleashing Phi-3-mini: Powerful AI on Your Phone
Links:
Subscribe: https://www.youtube.com/@Arxflix
Twitter: https://x.com/arxflix
LMNT (Partner): https://lmnt.com/