---
title: README
emoji: 🌍
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
----

# 🌍 Join the Pruna AI community!

[Twitter](https://twitter.com/PrunaAI) [GitHub](https://github.com/PrunaAI/pruna) [LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [Discord](https://discord.com/invite/rskEr4BZJx) [Reddit](https://www.reddit.com/r/PrunaAI/)

----

# 💜 Simply make AI models faster, cheaper, smaller, greener!

[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, and greener with the `pruna` package.

- It supports **various models, including CV, NLP, audio, and graph models for predictive and generative AI**.
- It supports **various hardware, including GPU, CPU, and Edge devices**.
- It supports **various compression algorithms, including quantization, pruning, distillation, caching, recovery, and compilation**, which can be **combined together**.
- You can either **experiment on your own** with smash/compression configurations or **let the smashing/compressing agent** find the optimal configuration **[Pro]**.
- You can **evaluate reliable quality and efficiency metrics** of your base vs. smashed/compressed models.

You can set it up in minutes and compress your first models in a few lines of code!

----

# ⏩ How to get started?

You can smash your own models by installing pruna with pip:

```
pip install pruna
```

or directly [from source](https://github.com/PrunaAI/pruna).
You can start with simple notebooks to experience efficiency gains with:

| Use Case | Free Notebooks |
|---|---|
| **3x Faster Stable Diffusion Models** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sd_deepcache.ipynb) |
| **Making your LLMs 4x smaller** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/llms.ipynb) |
| **Smash your model with a CPU only** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/cv_cpu.ipynb) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_tutorial.ipynb) |
| **100% faster Whisper transcription** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_whisper.ipynb) |
| **Run your Flux model without an A100** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/flux_small.ipynb) |
| **2x smaller Sana in action** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sana_diffusers_int8.ipynb) |

For more details about installation, free tutorials, and Pruna Pro tutorials, check the [Pruna AI documentation](https://docs.pruna.ai/).

----