---
title: README
emoji: π
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

----
# Join the Pruna AI community!

[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI/pruna)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.com/invite/rskEr4BZJx)
[Reddit](https://www.reddit.com/r/PrunaAI/)

----
# Simply make AI models faster, cheaper, smaller, greener!

[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, greener with the `pruna` package.

- It supports **various model types, including CV, NLP, audio, and graph models, for both predictive and generative AI**.
- It supports **various hardware, including GPU, CPU, and edge devices**.
- It supports **various compression algorithms, including quantization, pruning, distillation, caching, recovery, and compilation**, which can be **combined together**.
- You can either **experiment on your own** with smash/compression configurations or **let the smashing/compressing agent** find the optimal configuration **[Pro]**.
- You can **evaluate reliable quality and efficiency metrics** of your base vs. smashed/compressed models.

You can set it up in minutes and compress your first models in a few lines of code!

----
# ⏩ How to get started?

You can smash your own models by installing pruna with pip:

```
pip install pruna
```

or directly [from source](https://github.com/PrunaAI/pruna).
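
Once installed, a typical run looks roughly like the minimal sketch below. It assumes the `SmashConfig`/`smash` API from the Pruna documentation; the `"deepcache"` cacher choice and the model checkpoint name are illustrative examples, not the only options.

```python
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load any supported base model (here: a diffusers pipeline; checkpoint name is illustrative).
pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")

# Choose which compression algorithms to apply; multiple entries can be combined.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"  # example: step caching for diffusion models

# Smash the model and use the result as a drop-in replacement for the original.
smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a photo of an astronaut riding a horse").images[0]
```

The smashed model keeps the original inference interface, so existing calling code does not need to change.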

You can start with simple notebooks to experience efficiency gains with:

| Use Case | Free Notebooks |
|------------------------------------------------------------|----------------------------------------------------------------|
| **3x faster Stable Diffusion models** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sd_deepcache.ipynb) |
| **Making your LLMs 4x smaller** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/llms.ipynb) |
| **Smash your model with a CPU only** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/cv_cpu.ipynb) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_tutorial.ipynb) |
| **100% faster Whisper transcription** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_whisper.ipynb) |
| **Run your Flux model without an A100** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/flux_small.ipynb) |
| **2x smaller Sana in action** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sana_diffusers_int8.ipynb) |

For more details about installation, free tutorials, and Pruna Pro tutorials, check the [Pruna AI documentation](https://docs.pruna.ai/).

----