Commit a8d6f1d (verified) · 1 Parent(s): 0a7d1cc
johnrachwanpruna committed

Add files using upload-large-folder tool
README.md CHANGED
@@ -1,7 +1,13 @@
 ---
+datasets:
+- zzliang/GRIT
+- wanng/midjourney-v5-202304-clean
 library_name: diffusers
+license: apache-2.0
 tags:
 - pruna-ai
+- safetensors
+pinned: true
 ---
 
 # Model Card for PrunaAI/Segmind-Vega-smashed
@@ -23,12 +29,15 @@ To ensure that all optimizations are applied, use the pruna library to load the
 ```python
 from pruna import PrunaModel
 
-loaded_model = PrunaModel.from_hub(
+loaded_model = PrunaModel.from_pretrained(
     "PrunaAI/Segmind-Vega-smashed"
 )
+# we can then run inference using the methods supported by the base model
 ```
 
-After loading the model, you can use the inference methods of the original model. Take a look at the [documentation](https://pruna.readthedocs.io/en/latest/index.html) for more usage information.
+
+For inference, you can use the inference methods of the original model like shown in [the original model card](https://huggingface.co/segmind/Segmind-Vega?library=diffusers).
+Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
 
 ## Smash Configuration
 
@@ -40,6 +49,7 @@ The compression configuration of the model is stored in the `smash_config.json`
     "cacher": null,
     "compiler": null,
     "factorizer": null,
+    "kernel": null,
     "pruner": null,
     "quantizer": "hqq_diffusers",
     "hqq_diffusers_backend": "torchao_int4",
@@ -58,6 +68,7 @@ The compression configuration of the model is stored in the `smash_config.json`
     "factorizer": null,
     "pruner": null,
     "quantizer": null,
+    "kernel": null,
     "cacher": null,
     "compiler": null,
     "batcher": null
@@ -71,4 +82,4 @@ The compression configuration of the model is stored in the `smash_config.json`
 [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
 [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
 [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/rskEr4BZJx)
-[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)
+[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)
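As a minimal usage sketch building on the loading snippet in the README diff above (not part of this commit): it assumes the smashed checkpoint forwards calls to the underlying StableDiffusionXLPipeline, so the usual SDXL text-to-image arguments apply; the prompt and sampling parameters below are purely illustrative.

```python
from pruna import PrunaModel

# Load the smashed checkpoint as shown in the README diff above.
loaded_model = PrunaModel.from_pretrained("PrunaAI/Segmind-Vega-smashed")

# Assumption: the loaded wrapper exposes the base pipeline's __call__ signature,
# so standard StableDiffusionXLPipeline arguments can be passed through.
result = loaded_model(
    prompt="a watercolor painting of a lighthouse at sunrise",  # illustrative prompt
    negative_prompt="blurry, low quality",
    num_inference_steps=25,
    guidance_scale=9.0,
)
result.images[0].save("lighthouse.png")
```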
smash_config.json CHANGED
@@ -3,6 +3,7 @@
     "cacher": null,
     "compiler": null,
     "factorizer": null,
+    "kernel": null,
     "pruner": null,
     "quantizer": "hqq_diffusers",
     "hqq_diffusers_backend": "torchao_int4",
@@ -21,6 +22,7 @@
     "factorizer": null,
     "pruner": null,
     "quantizer": null,
+    "kernel": null,
     "cacher": null,
     "compiler": null,
     "batcher": null
text_encoder/config.json CHANGED
@@ -19,6 +19,6 @@
   "pad_token_id": 1,
   "projection_dim": 768,
   "torch_dtype": "float32",
-  "transformers_version": "4.52.4",
+  "transformers_version": "4.53.2",
   "vocab_size": 49408
 }
text_encoder_2/config.json CHANGED
@@ -19,6 +19,6 @@
   "pad_token_id": 1,
   "projection_dim": 1280,
   "torch_dtype": "float32",
-  "transformers_version": "4.52.4",
+  "transformers_version": "4.53.2",
   "vocab_size": 49408
 }
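Finally, a small sketch (not part of this commit) for inspecting the `smash_config.json` modified above; it only assumes the standard `huggingface_hub` download helper and Python's `json` module.

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the compression configuration file that this commit modifies.
config_path = hf_hub_download("PrunaAI/Segmind-Vega-smashed", "smash_config.json")
with open(config_path) as f:
    smash_config = json.load(f)

# After this commit, the configuration blocks shown in the diff also carry a
# "kernel": null entry alongside "quantizer": "hqq_diffusers".
print(json.dumps(smash_config, indent=2))
```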