sharpenb committed (verified)
Commit 8e30dfc · 1 Parent(s): 0bbce87

8e260d3c9b68167d931ac3fa33c35ebb87f5ce6dcb258b5aa55859bd18898973
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
-base_model: McGill-NLP/Llama-3-8B-Web
+base_model: ORIGINAL_REPO_NAME
 metrics:
 - memory_disk
 - memory_inference
@@ -31,7 +31,7 @@ tags:
 - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
 - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
 - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
-- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
+- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
 
 ## Results
 
@@ -40,7 +40,7 @@ tags:
 **Frequently Asked Questions**
 - ***How does the compression work?*** The model is compressed with llm-int8.
 - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
-- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
+- ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
 - ***What is the model format?*** We use safetensors.
 - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
 - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
@@ -52,7 +52,7 @@ tags:
 
 You can run the smashed model with these steps:
 
-0. Check requirements from the original repo McGill-NLP/Llama-3-8B-Web installed. In particular, check python, cuda, and transformers versions.
+0. Check requirements from the original repo ORIGINAL_REPO_NAME installed. In particular, check python, cuda, and transformers versions.
 1. Make sure that you have installed quantization related packages.
 ```bash
 pip install transformers accelerate bitsandbytes>0.37.0
@@ -63,7 +63,7 @@ You can run the smashed model with these steps:
 
 
 model = AutoModelForCausalLM.from_pretrained("PrunaAI/McGill-NLP-Llama-3-8B-Web-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
-tokenizer = AutoTokenizer.from_pretrained("McGill-NLP/Llama-3-8B-Web")
+tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
 
 input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
 
@@ -77,7 +77,7 @@ The configuration info are in `smash_config.json`.
 
 ## Credits & License
 
-The license of the smashed model follows the license of the original model. Please check the license of the original model McGill-NLP/Llama-3-8B-Web before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
+The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
 
 ## Want to compress other models?
 
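For reference, the usage fragments visible in the hunks above stitch together into one runnable script. This is a minimal sketch, not part of the commit: the `generate` and decode steps fall outside the diff context, so `max_new_tokens` is an assumed value, and the tokenizer repo is the concrete base model that the new README templates out as `ORIGINAL_REPO_NAME`. Note also that in a shell, `bitsandbytes>0.37.0` should be quoted to avoid being parsed as a redirection.

```python
# Minimal sketch assembling the README fragments above; the generate/decode
# parameters are assumptions, since they fall outside the diff context.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "PrunaAI/McGill-NLP-Llama-3-8B-Web-bnb-4bit-smashed",
    trust_remote_code=True,
    device_map="auto",
)
# Base repo in this commit; the new README templates it as ORIGINAL_REPO_NAME.
tokenizer = AutoTokenizer.from_pretrained("McGill-NLP/Llama-3-8B-Web")

input_ids = tokenizer("What is the color of prunes?,", return_tensors="pt").to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=64)  # assumed length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```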
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "/tmp/tmpzkd7tnsk",
+  "_name_or_path": "/tmp/models/tmpjyjfbg6q06azzt4z",
   "architectures": [
     "LlamaForCausalLM"
   ],
@@ -7,11 +7,13 @@
   "attention_dropout": 0.0,
   "bos_token_id": 128000,
   "eos_token_id": 128001,
+  "head_dim": 128,
   "hidden_act": "silu",
   "hidden_size": 4096,
   "initializer_range": 0.02,
   "intermediate_size": 14336,
   "max_position_embeddings": 8192,
+  "mlp_bias": false,
   "model_type": "llama",
   "num_attention_heads": 32,
   "num_hidden_layers": 32,
@@ -38,8 +40,8 @@
   "rope_scaling": null,
   "rope_theta": 500000.0,
   "tie_word_embeddings": false,
-  "torch_dtype": "float16",
-  "transformers_version": "4.40.0",
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.48.2",
   "use_cache": true,
   "vocab_size": 128256
 }
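The config.json hunks swap the serialized dtype from `float16` to `bfloat16` (matching the `quant_llm-int8_compute_dtype` in the smash config below) and pick up two fields, `head_dim` and `mlp_bias`, that newer transformers releases write out for Llama configs. A quick sketch, not part of the commit, to confirm the updated values from the Hub:

```python
# Minimal sketch (not part of the commit): inspect the re-saved config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("PrunaAI/McGill-NLP-Llama-3-8B-Web-bnb-4bit-smashed")
print(config.torch_dtype)  # torch.bfloat16 after this commit (was float16)
print(config.head_dim)     # 128 == hidden_size / num_attention_heads (4096 / 32)
print(config.mlp_bias)     # False; serialized by the newer transformers version
```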
generation_config.json CHANGED
@@ -5,5 +5,5 @@
     128001,
     128009
   ],
-  "transformers_version": "4.40.0"
+  "transformers_version": "4.48.2"
 }
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dc7f7c99a906f9b3130e120dd1e28803e9337de4aca7bd53a043247264cb32bd
+oid sha256:ec8e7d6d47280e40ba4e12c5f32ef4c0c83fb322e864679233deabc324d0a1da
 size 1050673280
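Only the Git LFS pointer changes here: the shard's byte size is identical, but the sha256 oid differs because the re-quantized weights differ. A minimal sketch, assuming the shard has been downloaded to the working directory, to check a local file against the new pointer:

```python
# Minimal sketch (not part of the commit): verify a downloaded shard
# against the sha256 oid in the Git LFS pointer above.
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "ec8e7d6d47280e40ba4e12c5f32ef4c0c83fb322e864679233deabc324d0a1da"
assert file_sha256("model-00002-of-00002.safetensors") == expected
```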
model.safetensors.index.json CHANGED
@@ -1,6 +1,6 @@
 {
   "metadata": {
-    "total_size": 6027779904
+    "total_size": 6027780128
   },
   "weight_map": {
     "lm_head.weight": "model-00002-of-00002.safetensors",
smash_config.json CHANGED
@@ -1,31 +1,23 @@
 {
-  "api_key": null,
-  "verify_url": "http://johnrachwan.pythonanywhere.com",
-  "smash_config": {
-    "pruners": "None",
-    "pruning_ratio": 0.0,
-    "factorizers": "None",
-    "quantizers": "['llm-int8']",
-    "weight_quantization_bits": 4,
-    "output_deviation": 0.005,
-    "compilers": "None",
-    "static_batch": true,
-    "static_shape": true,
-    "controlnet": "None",
-    "unet_dim": 4,
-    "device": "cuda",
-    "cache_dir": "/ceph/hdd/staff/charpent/.cache/models4gd1n_a8",
-    "batch_size": 1,
-    "model_name": "McGill-NLP/Llama-3-8B-Web",
-    "task": "text_text_generation",
-    "max_batch_size": 1,
-    "qtype_weight": "torch.qint8",
-    "qtype_activation": "torch.quint8",
-    "qobserver": "<class 'torch.ao.quantization.observer.MinMaxObserver'>",
-    "qscheme": "torch.per_tensor_symmetric",
-    "qconfig": "x86",
-    "group_size": 128,
-    "damp_percent": 0.1,
-    "save_load_fn": "bitsandbytes"
-  }
+  "batchers": null,
+  "cachers": null,
+  "compilers": null,
+  "distillers": null,
+  "pruners": null,
+  "quantizers": "llm-int8",
+  "recoverers": null,
+  "quant_llm-int8_compute_dtype": "bfloat16",
+  "quant_llm-int8_double_quant": false,
+  "quant_llm-int8_enable_fp32_cpu_offload": false,
+  "quant_llm-int8_has_fp16_weight": false,
+  "quant_llm-int8_quant_type": "fp4",
+  "quant_llm-int8_threshold": 6.0,
+  "quant_llm-int8_weight_bits": 4,
+  "max_batch_size": 1,
+  "device": "cuda",
+  "cache_dir": "/tmp/models/tmpjyjfbg6q",
+  "task": "",
+  "save_load_fn": "llm-int8",
+  "save_load_fn_args": {},
+  "api_key": null
 }
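The smash config is rewritten from the old nested schema to a flat one: the quantizer settings now live in `quant_llm-int8_*` keys (4-bit fp4 weights, bfloat16 compute, no double quantization, outlier threshold 6.0). As a point of reference, these fields line up with the options of transformers' `BitsAndBytesConfig`; the sketch below shows that correspondence and is an assumption about the mapping, not code from this repo:

```python
# Minimal sketch (an assumed mapping, not Pruna's actual loader): the flat
# quant_llm-int8_* keys expressed as a transformers BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quant_llm-int8_weight_bits: 4
    bnb_4bit_quant_type="fp4",               # quant_llm-int8_quant_type
    bnb_4bit_compute_dtype=torch.bfloat16,   # quant_llm-int8_compute_dtype
    bnb_4bit_use_double_quant=False,         # quant_llm-int8_double_quant
    llm_int8_threshold=6.0,                  # quant_llm-int8_threshold
    llm_int8_enable_fp32_cpu_offload=False,  # quant_llm-int8_enable_fp32_cpu_offload
    llm_int8_has_fp16_weight=False,          # quant_llm-int8_has_fp16_weight
)
```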