Commit f869420
Parent(s): 9a6f0f4

add ckpt and artifacts
This view is limited to 50 files because it contains too many changes; see the raw diff for the full change set.
- README.md +123 -3
- config.json +70 -0
- configuration_deepseek.py +210 -0
- hf_quant_config.json +258 -0
- model-00001-of-00080.safetensors +3 -0
- model-00002-of-00080.safetensors +3 -0
- model-00003-of-00080.safetensors +3 -0
- model-00004-of-00080.safetensors +3 -0
- model-00005-of-00080.safetensors +3 -0
- model-00006-of-00080.safetensors +3 -0
- model-00007-of-00080.safetensors +3 -0
- model-00008-of-00080.safetensors +3 -0
- model-00009-of-00080.safetensors +3 -0
- model-00010-of-00080.safetensors +3 -0
- model-00011-of-00080.safetensors +3 -0
- model-00012-of-00080.safetensors +3 -0
- model-00013-of-00080.safetensors +3 -0
- model-00014-of-00080.safetensors +3 -0
- model-00015-of-00080.safetensors +3 -0
- model-00016-of-00080.safetensors +3 -0
- model-00017-of-00080.safetensors +3 -0
- model-00018-of-00080.safetensors +3 -0
- model-00019-of-00080.safetensors +3 -0
- model-00020-of-00080.safetensors +3 -0
- model-00021-of-00080.safetensors +3 -0
- model-00022-of-00080.safetensors +3 -0
- model-00023-of-00080.safetensors +3 -0
- model-00024-of-00080.safetensors +3 -0
- model-00025-of-00080.safetensors +3 -0
- model-00026-of-00080.safetensors +3 -0
- model-00027-of-00080.safetensors +3 -0
- model-00028-of-00080.safetensors +3 -0
- model-00029-of-00080.safetensors +3 -0
- model-00030-of-00080.safetensors +3 -0
- model-00031-of-00080.safetensors +3 -0
- model-00032-of-00080.safetensors +3 -0
- model-00033-of-00080.safetensors +3 -0
- model-00034-of-00080.safetensors +3 -0
- model-00035-of-00080.safetensors +3 -0
- model-00036-of-00080.safetensors +3 -0
- model-00037-of-00080.safetensors +3 -0
- model-00038-of-00080.safetensors +3 -0
- model-00039-of-00080.safetensors +3 -0
- model-00040-of-00080.safetensors +3 -0
- model-00041-of-00080.safetensors +3 -0
- model-00042-of-00080.safetensors +3 -0
- model-00043-of-00080.safetensors +3 -0
- model-00044-of-00080.safetensors +3 -0
- model-00045-of-00080.safetensors +3 -0
- model-00046-of-00080.safetensors +3 -0
README.md
CHANGED
@@ -1,3 +1,123 @@
---
pipeline_tag: text-generation
base_model:
- deepseek-ai/DeepSeek-V3-0324
license: mit
---

# Model Overview

## Description:
The NVIDIA DeepSeek V3-0324 FP4 model is the quantized version of DeepSeek AI's DeepSeek V3-0324 model, which is an auto-regressive language model that uses an optimized transformer architecture. For more information, please check [here](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324). The NVIDIA DeepSeek V3-0324 FP4 model is quantized with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).
This model is ready for commercial/non-commercial use. <br>

## Third-Party Community Consideration
This model is not owned or developed by NVIDIA. This model has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA [DeepSeek V3-0324 Model Card](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324).

### License/Terms of Use:
[MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md)

## Model Architecture:
**Architecture Type:** Transformer <br>
**Network Architecture:** DeepSeek V3-0324 <br>

## Input:
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** 1D (One-Dimensional): Sequences <br>
**Other Properties Related to Input:** Context length up to 128K <br>

## Output:
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** 1D (One-Dimensional): Sequences <br>
**Other Properties Related to Output:** N/A <br>

## Software Integration:
**Supported Runtime Engine(s):** <br>
* TensorRT-LLM <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Blackwell <br>

**Preferred Operating System(s):** <br>
* Linux <br>

## Model Version(s):
The model is quantized with nvidia-modelopt **v0.27.0**. <br>

## Datasets:
* Calibration Dataset: [cnn_dailymail](https://huggingface.co/datasets/abisee/cnn_dailymail) <br>
  * Data collection method: Automated. <br>
  * Labeling method: Unknown. <br>
* Evaluation Dataset: [MMLU](https://github.com/hendrycks/test) <br>
  * Data collection method: Unknown. <br>
  * Labeling method: N/A. <br>

## Inference:
**Engine:** TensorRT-LLM <br>
**Test Hardware:** B200 <br>

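The Model Version(s) note above says this checkpoint was produced with nvidia-modelopt v0.27.0. For orientation only, here is a heavily hedged sketch of what such a post-training quantization flow typically looks like with TensorRT Model Optimizer; `mtq.quantize`, `mtq.NVFP4_DEFAULT_CFG`, `export_hf_checkpoint`, and `calib_dataloader` are assumptions drawn from modelopt's published examples, not from this commit:

```python
# Hedged sketch of a modelopt PTQ flow; NOT the script used for this checkpoint.
# Assumed APIs (from TensorRT Model Optimizer examples): mtq.quantize,
# mtq.NVFP4_DEFAULT_CFG, export_hf_checkpoint; calib_dataloader is assumed to exist.
import modelopt.torch.quantization as mtq
from modelopt.torch.export import export_hf_checkpoint

def forward_loop(model):
    # Calibration pass; this card calibrates on cnn_dailymail samples.
    for batch in calib_dataloader:
        model(**batch)

model = mtq.quantize(model, mtq.NVFP4_DEFAULT_CFG, forward_loop)
export_hf_checkpoint(model, export_dir="DeepSeek-V3-0324-FP4")
```
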
## Post Training Quantization
This model was obtained by quantizing the weights and activations of DeepSeek V3-0324 to the FP4 data type, ready for inference with TensorRT-LLM. Only the weights and activations of the linear operators within the transformer blocks are quantized. This optimization reduces the number of bits per parameter from 8 to 4, cutting disk size and GPU memory requirements by approximately 1.6x.
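To sanity-check that 1.6x figure, here is a back-of-envelope sketch (my own arithmetic, not from the card). It assumes the FP8 baseline stores 8-bit weights with one FP8 scale per 128x128 block (per config.json's `weight_block_size`) and NVFP4 stores 4-bit weights with one FP8 scale per 16-element block (per hf_quant_config.json's `group_size`), ignoring the modules excluded from quantization:

```python
# Back-of-envelope bits-per-parameter, under the assumptions stated above.
fp8_bits = 8 + 8 / (128 * 128)   # FP8 weight + amortized block scale ~= 8.0005
nvfp4_bits = 4 + 8 / 16          # FP4 weight + amortized group scale = 4.5
print(f"ideal compression: {fp8_bits / nvfp4_bits:.2f}x")  # ~1.78x
```

The gap between this ideal ~1.78x and the quoted ~1.6x is plausibly accounted for by the modules listed in hf_quant_config.json's `exclude_modules` (embeddings, `lm_head`, layernorms, router gates, several attention blocks), which stay at higher precision.
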
## Usage

### Deploy with TensorRT-LLM

To deploy the quantized FP4 checkpoint with the [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) LLM API, follow the sample code below (you need 8x B200 GPUs and TensorRT-LLM built from source against the latest main branch):

* LLM API sample usage:
```python
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch import LLM


def main():
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(max_tokens=32)

    llm = LLM(model="nvidia/DeepSeek-V3-0324-FP4", tensor_parallel_size=8, enable_attention_dp=True)

    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")


# The entry point of the program needs to be protected when spawning processes.
if __name__ == '__main__':
    main()
```

## Benchmarks

This section compares the accuracy of the original DeepSeek V3-0324 model with our FP4-quantized version across benchmarks.

| Benchmark | DeepSeek V3-0324<sup>1</sup> | DeepSeek V3-0324-FP4 |
| :---: | :---: | :---: |
| MMMU Pro | 82 | 82.9 |
| GPQA Diamond | 66 | 67.2 |
| LiveCodeBench | 41 | 52.23 |
| AIME 2024 | 52 | 49.3 |
| MATH-500 | 94 | 94.4 |
| MGSM | 92 | 92.8 |

> *<sup>1</sup> Reference scores for DeepSeek V3-0324 sourced from [artificialanalysis](https://artificialanalysis.ai/models/deepseek-v3-0324).*

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
config.json
ADDED
@@ -0,0 +1,70 @@
{
  "architectures": [
    "DeepseekV3ForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_deepseek.DeepseekV3Config",
    "AutoModel": "modeling_deepseek.DeepseekV3Model",
    "AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
  },
  "aux_loss_alpha": 0.001,
  "bos_token_id": 0,
  "eos_token_id": 1,
  "ep_size": 1,
  "first_k_dense_replace": 3,
  "hidden_act": "silu",
  "hidden_size": 7168,
  "initializer_range": 0.02,
  "intermediate_size": 18432,
  "kv_lora_rank": 512,
  "max_position_embeddings": 163840,
  "model_type": "deepseek_v3",
  "moe_intermediate_size": 2048,
  "moe_layer_freq": 1,
  "n_group": 8,
  "n_routed_experts": 256,
  "n_shared_experts": 1,
  "norm_topk_prob": true,
  "num_attention_heads": 128,
  "num_experts_per_tok": 8,
  "num_hidden_layers": 61,
  "num_key_value_heads": 128,
  "num_nextn_predict_layers": 1,
  "pretraining_tp": 1,
  "q_lora_rank": 1536,
  "qk_nope_head_dim": 128,
  "qk_rope_head_dim": 64,
  "quantization_config": {
    "activation_scheme": "dynamic",
    "fmt": "e4m3",
    "quant_method": "fp8",
    "weight_block_size": [
      128,
      128
    ]
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": {
    "beta_fast": 32,
    "beta_slow": 1,
    "factor": 40,
    "mscale": 1.0,
    "mscale_all_dim": 1.0,
    "original_max_position_embeddings": 4096,
    "type": "yarn"
  },
  "rope_theta": 10000,
  "routed_scaling_factor": 2.5,
  "scoring_func": "sigmoid",
  "seq_aux": true,
  "tie_word_embeddings": false,
  "topk_group": 4,
  "topk_method": "noaux_tc",
  "torch_dtype": "bfloat16",
  "transformers_version": "4.46.3",
  "use_cache": true,
  "v_head_dim": 128,
  "vocab_size": 129280
}
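Because config.json wires the custom classes through `auto_map`, the configuration can be loaded with transformers' remote-code path. A minimal sketch (assuming transformers is installed; the repo id is the one used in the README's LLM API example):

```python
# Minimal sketch: trust_remote_code=True lets transformers pull
# configuration_deepseek.py from the repo via the auto_map above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("nvidia/DeepSeek-V3-0324-FP4", trust_remote_code=True)
print(config.model_type, config.num_hidden_layers, config.n_routed_experts)
# expected: deepseek_v3 61 256
```
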
configuration_deepseek.py
ADDED
@@ -0,0 +1,210 @@
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

DEEPSEEK_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class DeepseekV3Config(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`DeepseekV3Model`]. It is used to instantiate a DeepSeek
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to that of DeepSeek-V3.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 129280):
            Vocabulary size of the DeepSeek model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`DeepseekV3Model`]
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        moe_intermediate_size (`int`, *optional*, defaults to 1407):
            Dimension of the MoE representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_nextn_predict_layers (`int`, *optional*, defaults to 1):
            Number of next-n predict layers in the DeepSeekV3 Model.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        n_shared_experts (`int`, *optional*, defaults to None):
            Number of shared experts; None means dense model.
        n_routed_experts (`int`, *optional*, defaults to None):
            Number of routed experts; None means dense model.
        routed_scaling_factor (`float`, *optional*, defaults to 1.0):
            Scaling factor for routed experts.
        topk_method (`str`, *optional*, defaults to `greedy`):
            Top-k method used in the routed gate.
        n_group (`int`, *optional*, defaults to None):
            Number of groups for routed experts.
        topk_group (`int`, *optional*, defaults to None):
            Number of selected groups for each token (for each token, ensuring the selected experts are only within `topk_group` groups).
        num_experts_per_tok (`int`, *optional*, defaults to None):
            Number of selected experts; None means dense model.
        moe_layer_freq (`int`, *optional*, defaults to 1):
            The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
        first_k_dense_replace (`int`, *optional*, defaults to 0):
            Number of dense layers in the shallow layers (embed -> dense -> dense -> ... -> dense -> moe -> moe -> ... -> lm_head).
        norm_topk_prob (`bool`, *optional*, defaults to False):
            Whether to normalize the weights of the routed experts.
        scoring_func (`str`, *optional*, defaults to 'softmax'):
            Method of computing expert weights.
        aux_loss_alpha (`float`, *optional*, defaults to 0.001):
            Auxiliary loss weight coefficient.
        seq_aux (`bool`, *optional*, defaults to True):
            Whether to compute the auxiliary loss for each individual sample.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by meanpooling all the original heads within that group. For more details check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-06):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        pad_token_id (`int`, *optional*):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 1):
            Beginning of stream token id.
        eos_token_id (`int`, *optional*, defaults to 2):
            End of stream token id.
        pretraining_tp (`int`, *optional*, defaults to 1):
            Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
            document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
            necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
            issue](https://github.com/pytorch/pytorch/issues/76232).
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
            `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum.
        attention_bias (`bool`, *optional*, defaults to `False`):
            Whether to use a bias in the query, key, value and output projection layers during self-attention.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.

    ```python
    >>> from transformers import DeepseekV3Model, DeepseekV3Config

    >>> # Initializing a Deepseek-V3 style configuration
    >>> configuration = DeepseekV3Config()

    >>> model = DeepseekV3Model(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "deepseek_v3"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=129280,
        hidden_size=7168,
        intermediate_size=18432,
        moe_intermediate_size=2048,
        num_hidden_layers=61,
        num_nextn_predict_layers=1,
        num_attention_heads=128,
        num_key_value_heads=128,
        n_shared_experts=1,
        n_routed_experts=256,
        ep_size=1,
        routed_scaling_factor=2.5,
        kv_lora_rank=512,
        q_lora_rank=1536,
        qk_rope_head_dim=64,
        v_head_dim=128,
        qk_nope_head_dim=128,
        topk_method='noaux_tc',
        n_group=8,
        topk_group=4,
        num_experts_per_tok=8,
        moe_layer_freq=1,
        first_k_dense_replace=3,
        norm_topk_prob=True,
        scoring_func='sigmoid',
        aux_loss_alpha=0.001,
        seq_aux=True,
        hidden_act="silu",
        max_position_embeddings=4096,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=0,
        eos_token_id=1,
        pretraining_tp=1,
        tie_word_embeddings=False,
        rope_theta=10000.0,
        rope_scaling=None,
        attention_bias=False,
        attention_dropout=0.0,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.moe_intermediate_size = moe_intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_nextn_predict_layers = num_nextn_predict_layers
        self.num_attention_heads = num_attention_heads
        self.n_shared_experts = n_shared_experts
        self.n_routed_experts = n_routed_experts
        self.ep_size = ep_size
        self.routed_scaling_factor = routed_scaling_factor
        self.kv_lora_rank = kv_lora_rank
        self.q_lora_rank = q_lora_rank
        self.qk_rope_head_dim = qk_rope_head_dim
        self.v_head_dim = v_head_dim
        self.qk_nope_head_dim = qk_nope_head_dim
        self.topk_method = topk_method
        self.n_group = n_group
        self.topk_group = topk_group
        self.num_experts_per_tok = num_experts_per_tok
        self.moe_layer_freq = moe_layer_freq
        self.first_k_dense_replace = first_k_dense_replace
        self.norm_topk_prob = norm_topk_prob
        self.scoring_func = scoring_func
        self.aux_loss_alpha = aux_loss_alpha
        self.seq_aux = seq_aux
        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.pretraining_tp = pretraining_tp
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        self.attention_bias = attention_bias
        self.attention_dropout = attention_dropout

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
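The MoE routing fields in this config (`scoring_func='sigmoid'`, `n_group=8`, `topk_group=4`, `num_experts_per_tok=8`, `norm_topk_prob=True`, `routed_scaling_factor=2.5`) describe group-limited top-k expert selection. Below is a simplified sketch of that selection logic for illustration; it is not the repo's modeling code, it scores groups by their max expert score, and it omits the learned per-expert bias that `topk_method='noaux_tc'` adds before group selection:

```python
import torch

def group_limited_topk(router_logits, n_group=8, topk_group=4,
                       num_experts_per_tok=8, routed_scaling_factor=2.5):
    """Simplified sketch of sigmoid, group-limited top-k routing.

    router_logits: [num_tokens, n_routed_experts] (256 experts -> 8 groups of 32).
    Omits the learned bias used by topk_method='noaux_tc'.
    """
    scores = router_logits.sigmoid()                      # scoring_func='sigmoid'
    num_tokens, n_experts = scores.shape
    # Score each group (here: by its best expert) and keep the top groups.
    group_scores = scores.view(num_tokens, n_group, -1).max(dim=-1).values
    group_idx = group_scores.topk(topk_group, dim=-1).indices
    group_mask = torch.zeros_like(group_scores).scatter_(1, group_idx, 1.0)
    expert_mask = group_mask.unsqueeze(-1).expand(
        num_tokens, n_group, n_experts // n_group).reshape(num_tokens, n_experts)
    # Pick the top experts only within the selected groups.
    masked_scores = scores.masked_fill(expert_mask == 0, float('-inf'))
    topk_scores, topk_idx = masked_scores.topk(num_experts_per_tok, dim=-1)
    weights = topk_scores / topk_scores.sum(dim=-1, keepdim=True)  # norm_topk_prob=True
    return topk_idx, weights * routed_scaling_factor

logits = torch.randn(2, 256)
idx, w = group_limited_topk(logits)
print(idx.shape, w.shape)  # torch.Size([2, 8]) torch.Size([2, 8])
```
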
hf_quant_config.json
ADDED
@@ -0,0 +1,258 @@
{
  "producer": {
    "name": "modelopt",
    "version": "0.27.0"
  },
  "quantization": {
    "quant_algo": "NVFP4",
    "kv_cache_quant_algo": null,
    "group_size": 16,
    "exclude_modules": [
      "model.layers.11.mlp.gate",
      "model.layers.11.input_layernorm",
      "model.layers.46.post_attention_layernorm",
      "model.layers.18.input_layernorm",
      "model.layers.21.self_attn*",
      "model.layers.44.self_attn*",
      "model.layers.16.self_attn*",
      "model.layers.21.input_layernorm",
      "model.layers.57.post_attention_layernorm",
      "model.layers.21.post_attention_layernorm",
      "model.layers.59.mlp.gate",
      "model.layers.10.post_attention_layernorm",
      "model.layers.25.post_attention_layernorm",
      "model.layers.20.input_layernorm",
      "model.layers.23.self_attn*",
      "model.layers.60.post_attention_layernorm",
      "model.layers.14.input_layernorm",
      "model.layers.1.input_layernorm",
      "model.layers.53.input_layernorm",
      "model.layers.4.post_attention_layernorm",
      "model.layers.36.input_layernorm",
      "model.layers.52.mlp.gate",
      "model.layers.35.post_attention_layernorm",
      "model.layers.43.self_attn*",
      "model.layers.38.input_layernorm",
      "model.layers.46.mlp.gate",
      "model.layers.15.mlp.gate",
      "model.layers.57.input_layernorm",
      "model.layers.39.input_layernorm",
      "model.layers.50.mlp.gate",
      "model.layers.23.mlp.gate",
      "model.layers.9.mlp.gate",
      "model.layers.18.mlp.gate",
      "model.layers.13.self_attn*",
      "model.layers.25.mlp.gate",
      "model.layers.52.input_layernorm",
      "model.layers.33.post_attention_layernorm",
      "model.layers.42.mlp.gate",
      "model.layers.13.mlp.gate",
      "model.layers.56.post_attention_layernorm",
      "model.layers.26.post_attention_layernorm",
      "model.layers.55.post_attention_layernorm",
      "model.layers.56.self_attn*",
      "model.layers.39.post_attention_layernorm",
      "model.layers.37.mlp.gate",
      "model.layers.31.input_layernorm",
      "model.layers.14.self_attn*",
      "model.layers.22.post_attention_layernorm",
      "model.layers.60.mlp.gate",
      "model.layers.48.self_attn*",
      "model.layers.52.self_attn*",
      "model.layers.43.mlp.gate",
      "model.layers.16.input_layernorm",
      "model.layers.10.input_layernorm",
      "model.layers.24.post_attention_layernorm",
      "model.layers.2.post_attention_layernorm",
      "model.layers.40.input_layernorm",
      "model.layers.9.input_layernorm",
      "model.layers.31.self_attn*",
      "model.layers.57.self_attn*",
      "model.layers.3.input_layernorm",
      "model.layers.11.self_attn*",
      "model.layers.50.input_layernorm",
      "model.layers.4.mlp.gate",
      "model.layers.58.input_layernorm",
      "model.layers.5.post_attention_layernorm",
      "model.layers.29.post_attention_layernorm",
      "model.layers.20.post_attention_layernorm",
      "model.layers.58.mlp.gate",
      "model.layers.9.post_attention_layernorm",
      "model.layers.37.self_attn*",
      "model.layers.2.input_layernorm",
      "model.layers.15.input_layernorm",
      "model.layers.57.mlp.gate",
      "model.layers.19.input_layernorm",
      "model.layers.35.self_attn*",
      "model.layers.21.mlp.gate",
      "model.layers.51.input_layernorm",
      "model.layers.41.input_layernorm",
      "model.layers.52.post_attention_layernorm",
      "model.layers.45.post_attention_layernorm",
      "model.layers.54.post_attention_layernorm",
      "lm_head",
      "model.layers.8.mlp.gate",
      "model.layers.17.post_attention_layernorm",
      "model.layers.13.post_attention_layernorm",
      "model.layers.3.mlp.gate",
      "model.layers.1.post_attention_layernorm",
      "model.layers.55.mlp.gate",
      "model.layers.34.mlp.gate",
      "model.layers.61*",
      "model.layers.37.input_layernorm",
      "model.layers.12.mlp.gate",
      "model.layers.27.mlp.gate",
      "model.layers.48.mlp.gate",
      "model.embed_tokens",
      "model.layers.3.self_attn*",
      "model.layers.12.post_attention_layernorm",
      "model.layers.49.mlp.gate",
      "model.layers.17.mlp.gate",
      "model.layers.55.self_attn*",
      "model.layers.54.input_layernorm",
      "model.layers.24.input_layernorm",
      "model.layers.32.self_attn*",
      "model.layers.23.input_layernorm",
      "model.layers.10.self_attn*",
      "model.layers.42.self_attn*",
      "model.layers.51.self_attn*",
      "model.layers.38.self_attn*",
      "model.layers.7.input_layernorm",
      "model.layers.51.mlp.gate",
      "model.layers.47.mlp.gate",
      "model.layers.28.mlp.gate",
      "model.layers.27.self_attn*",
      "model.layers.12.self_attn*",
      "model.layers.43.input_layernorm",
      "model.layers.14.post_attention_layernorm",
      "model.layers.6.post_attention_layernorm",
      "model.layers.42.input_layernorm",
      "model.layers.37.post_attention_layernorm",
      "model.layers.12.input_layernorm",
      "model.layers.32.mlp.gate",
      "model.layers.17.input_layernorm",
      "model.layers.27.post_attention_layernorm",
      "model.layers.33.mlp.gate",
      "model.layers.30.self_attn*",
      "model.layers.8.self_attn*",
      "model.layers.60.input_layernorm",
      "model.layers.41.mlp.gate",
      "model.layers.58.post_attention_layernorm",
      "model.layers.22.self_attn*",
      "model.layers.11.post_attention_layernorm",
      "model.layers.20.mlp.gate",
      "model.layers.41.self_attn*",
      "model.layers.58.self_attn*",
      "model.layers.23.post_attention_layernorm",
      "model.layers.20.self_attn*",
      "model.layers.30.mlp.gate",
      "model.layers.6.mlp.gate",
      "model.layers.56.input_layernorm",
      "model.layers.2.self_attn*",
      "model.layers.35.mlp.gate",
      "model.layers.6.self_attn*",
      "model.layers.28.input_layernorm",
      "model.layers.1.self_attn*",
      "model.norm",
      "model.layers.40.post_attention_layernorm",
      "model.layers.0.input_layernorm",
      "model.layers.16.mlp.gate",
      "model.layers.25.input_layernorm",
      "model.layers.32.post_attention_layernorm",
      "model.layers.5.input_layernorm",
      "model.layers.32.input_layernorm",
      "model.layers.0.post_attention_layernorm",
      "model.layers.29.self_attn*",
      "model.layers.29.input_layernorm",
      "model.layers.56.mlp.gate",
      "model.layers.15.self_attn*",
      "model.layers.16.post_attention_layernorm",
      "model.layers.54.mlp.gate",
      "model.layers.53.post_attention_layernorm",
      "model.layers.34.post_attention_layernorm",
      "model.layers.33.input_layernorm",
      "model.layers.8.input_layernorm",
      "model.layers.41.post_attention_layernorm",
      "model.layers.7.mlp.gate",
      "model.layers.9.self_attn*",
      "model.layers.28.self_attn*",
      "model.layers.50.self_attn*",
      "model.layers.18.post_attention_layernorm",
      "model.layers.47.input_layernorm",
      "model.layers.27.input_layernorm",
      "model.layers.25.self_attn*",
      "model.layers.6.input_layernorm",
      "model.layers.24.mlp.gate",
      "model.layers.48.input_layernorm",
      "model.layers.44.mlp.gate",
      "model.layers.46.input_layernorm",
      "model.layers.3.post_attention_layernorm",
      "model.layers.35.input_layernorm",
      "model.layers.26.input_layernorm",
      "model.layers.39.self_attn*",
      "model.layers.48.post_attention_layernorm",
      "model.layers.18.self_attn*",
      "model.layers.38.mlp.gate",
      "model.layers.5.self_attn*",
      "model.layers.42.post_attention_layernorm",
      "model.layers.8.post_attention_layernorm",
      "model.layers.19.post_attention_layernorm",
      "model.layers.49.self_attn*",
      "model.layers.59.input_layernorm",
      "model.layers.10.mlp.gate",
      "model.layers.36.mlp.gate",
      "model.layers.26.mlp.gate",
      "model.layers.45.mlp.gate",
      "model.layers.39.mlp.gate",
      "model.layers.47.post_attention_layernorm",
      "model.layers.49.post_attention_layernorm",
      "model.layers.34.self_attn*",
      "model.layers.46.self_attn*",
      "model.layers.44.post_attention_layernorm",
      "model.layers.44.input_layernorm",
      "model.layers.59.post_attention_layernorm",
      "model.layers.33.self_attn*",
      "model.layers.36.self_attn*",
      "model.layers.51.post_attention_layernorm",
      "model.layers.28.post_attention_layernorm",
      "model.layers.7.self_attn*",
      "model.layers.55.input_layernorm",
      "model.layers.40.mlp.gate",
      "model.layers.4.self_attn*",
      "model.layers.45.input_layernorm",
      "model.layers.38.post_attention_layernorm",
      "model.layers.45.self_attn*",
      "model.layers.31.mlp.gate",
      "model.layers.19.self_attn*",
      "model.layers.31.post_attention_layernorm",
      "model.layers.30.input_layernorm",
      "model.layers.29.mlp.gate",
      "model.layers.5.mlp.gate",
      "model.layers.19.mlp.gate",
      "model.layers.54.self_attn*",
      "model.layers.13.input_layernorm",
      "model.layers.40.self_attn*",
      "model.layers.7.post_attention_layernorm",
      "model.layers.36.post_attention_layernorm",
      "model.layers.53.mlp.gate",
      "model.layers.49.input_layernorm",
      "model.layers.0.self_attn*",
      "model.layers.50.post_attention_layernorm",
      "model.layers.26.self_attn*",
      "model.layers.47.self_attn*",
      "model.layers.22.input_layernorm",
      "model.layers.59.self_attn*",
      "model.layers.43.post_attention_layernorm",
      "model.layers.24.self_attn*",
      "model.layers.14.mlp.gate",
      "model.layers.60.self_attn*",
      "model.layers.30.post_attention_layernorm",
      "model.layers.53.self_attn*",
      "model.layers.15.post_attention_layernorm",
      "model.layers.17.self_attn*",
      "model.layers.34.input_layernorm",
      "model.layers.22.mlp.gate",
      "model.layers.4.input_layernorm"
    ]
  }
}
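The `exclude_modules` entries mix exact names (`lm_head`, `model.norm`) with glob-style patterns (`model.layers.21.self_attn*`, `model.layers.61*`). A small sketch of how such patterns can be checked against module names, assuming fnmatch-style glob matching (the exact matching rule used by modelopt/TensorRT-LLM is an assumption here):

```python
import fnmatch
import json

# Hypothetical check: is a given module name excluded from NVFP4 quantization?
with open("hf_quant_config.json") as f:
    quant_cfg = json.load(f)

patterns = quant_cfg["quantization"]["exclude_modules"]

def is_excluded(module_name: str) -> bool:
    # Treat each entry as a glob; exact names match themselves under fnmatch.
    return any(fnmatch.fnmatch(module_name, p) for p in patterns)

print(is_excluded("model.layers.21.self_attn.q_proj"))       # True (matches "model.layers.21.self_attn*")
print(is_excluded("model.layers.21.mlp.experts.0.up_proj"))  # False -> quantized
```
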
model-00001-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05325724002e4c8809e7a9f3d9c322c1f78b428f2cf00c14d77699bb73b69d3b
size 5367981060
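Each shard entry in this commit is a Git LFS pointer (spec v1): the repo tracks only a sha256 oid and a byte size, while the tensor data lives in LFS storage. A small sketch for verifying a downloaded shard against its pointer; the `.pointer` filename is hypothetical (in a non-LFS clone, the pointer text sits at the `.safetensors` path itself):

```python
import hashlib

# Parse the three-line LFS pointer: "version ...", "oid sha256:<hex>", "size <bytes>".
def parse_pointer(path):
    fields = dict(line.split(" ", 1) for line in open(path).read().splitlines())
    return fields["oid"].removeprefix("sha256:"), int(fields["size"])

expected_oid, expected_size = parse_pointer("model-00001-of-00080.safetensors.pointer")

# Stream the real shard and compare hash and size with the pointer's values.
h, size = hashlib.sha256(), 0
with open("model-00001-of-00080.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
        size += len(chunk)

assert (size, h.hexdigest()) == (expected_size, expected_oid), "shard does not match pointer"
```
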
model-00002-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a59b0ab0d6bf927fb8e3d09f6eb84d7207ecb04a2d4f67116fff28778e2d23d1
size 5368656744

model-00003-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:acc449ed574b291df189feb4485cff9312e87428adf8f447e481763f0e56bd8e
size 5365742592

model-00004-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08b71671fd7a2cd9bb2eaeb272640c59737fff119db6f3a0cb668adffc5ca50b
size 5365742432

model-00005-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08db83b6ad87195403ffd22252f46842a442fac5a83e86d1e70d0ab5e0256ea0
size 5365757104

model-00006-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15a954c9a53c48a23ef8a3af550dc1295d1c5e2b3ff824522b05d30acd2330cf
size 5367738752

model-00007-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd85f7574d27080bf63b02defd193abc96c8fe8f3304ea7f981ecdabc46804d9
size 5365757560

model-00008-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9a3cf365094f197904496ed9e121732394f3a08895ddde0790ef94077c7e8b67
size 5365728072

model-00009-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7701d9c04443cecd7ed6fc033bb9eb4067e3fdb4463062042450e6389c3ffad3
size 5365756864

model-00010-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a0b674b5deb8f7c330b97242a5253f5e85c18d01a9128dbe77abb6917a1b576e
size 5365758128

model-00011-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98cdb5e36258570f81693ec09bcdad454d9f5e2a38ed931a30e76200f35466e7
size 5365760056

model-00012-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:afe9d5279584a9ab8df94fba9cc3f988f63e4d6da535a2658c706d667a8c13e9
size 5367741544

model-00013-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f54f25446571678692904a37b8f1db0f7640f478f7583a4f673cd7fdea4cb3b
size 5365730424

model-00014-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb2b00547c0e35d0380decbdd6d62fec7d70c3205340e8eaef13ee895f88c35f
size 5365759416

model-00015-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:56d4f1c9ab2cb63ae605f88fa33a2752f0a7f6e4ef919d3142f834e3fda39bbe
size 5365759432

model-00016-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5cb3fd5942326cdfe05474605510bfa5d022b1c680c22bfa61fbb5ab336af7d
size 5367741488

model-00017-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e50784db149cf5290fa582637d5a1801250edc15743e54be5d7ec33509972ddf
size 5365745504

model-00018-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5da9016fc8f2ffc40312804b4553123a0af66d5238a10e6f2b67eaaae1a8ee99
size 5365744824

model-00019-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0255864eb2c59a54258ef3239d3d5d60b835a20ffa48b87b4d6e69ba75c3a7e2
size 5365759432

model-00020-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6b83c643a60d1f70047daaae3d25d6a43cda5deda546ba8767f1b195fd83ac1d
size 5148969068

model-00021-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eccd170f38c4fe26c345355f8353b8e81e48efb3d136074af1c224f6ce21a89b
size 5366666288

model-00022-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a073da2749ee973bb24d892c8c0750b1d73fe9caea7084f8b0b19669e2574b2
size 5368902860

model-00023-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6ab3158436d0861a9843fd9cda9fec060543867e04ba4c1101be40556402223f
size 5365744824

model-00024-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5ec49a58505578c37d84fed6311a28d0f261a57f10ac5eedf86ada9323acec4
size 5365759416

model-00025-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bcd8999171e24a9888869463a11497a0e6190290b15dbbd942a694b3e8b881df
size 5365759688

model-00026-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:49c0f0e6911739a55c59d1a16b4dfa8a204c37b420cd104d77c60132900c4491
size 5367741512

model-00027-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85645e0e1670b51849bbefc80f59f19ff990acd3038d9cf9c6a825a6ced8a347
size 5365745240

model-00028-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed8d08dc10db9d7e4e4478314bf1f4d6715c7d4027f1af1b1eba0c3d1a6a0819
size 5365744824

model-00029-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd19fc1a4665ac1f3f203e3f40038d3fcb383a1218664ee87053307cbad2c506
size 5365759688

model-00030-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b02e9b46dd474bd00dec3d8540513fdf4471b33d9ac1d17e6b4081c9b80af1d5
size 5367741008

model-00031-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da4992c789b331df8706c3bfee17306e9a66eb992dc50ab45a1015d246703a12
size 5361174228

model-00032-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a583aad3dd5caef1914f394d817273ee753bce0c225303192dfdb15a2f486708
size 5362072908

model-00033-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d180fdf6472820ef43e2e210db626e69122d4c949512782c35f4ca5a6b6bec7b
size 5365744824

model-00034-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d642722d928709ce6d15e91d654c1a7f08a55790b1d76250a3469c6acc1b2c8
size 5365759416

model-00035-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:886756a883956f8859c10714b2078e4e82e723e06b8799332337e33d8e812cf5
size 5365759832

model-00036-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:26e34e82e9f2a3238775d04c1b65442b4cdcb2440086bcc2058b8f337c83bbda
size 5367741528

model-00037-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c810232eb1bd8894590902e8f72efc9ddca5990be7c871f0e26c63c36632531
size 5365745072

model-00038-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:103a5c2acbc4c26267ad98bab7292d0ee035c77194be664e9f0e9c6b3f0bed92
size 5365744824

model-00039-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:732f4293d4d7c6394b88e434f6e570aa20bb53850685def7fdcf072db4974d2b
size 5365759768

model-00040-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ca14789c666f9e99fff999c62be35e8034f02b2be86b9835ef8acb9e4579a055
size 5367741088

model-00041-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7cf1e666679691c41103a8e90cabbb0d4c24b09ae48804d38786207176733ae
size 5365760000

model-00042-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a762dca2b5c25f88909705c88f07d1ce36b5cdb617e1b31aaff8925f1fb7e57f
size 5365730560

model-00043-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d13ff117174cb286d0f1b7feb4cd23bdc635db1a1a427165427292a1a3a96aed
size 5365759264

model-00044-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2bef141290bc924ba8eea6468246fc9aaf987cb8a0b8b2f8666139df0de190f4
size 5365759416

model-00045-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1cd5809b7bf24cd1779050202c49687b0a3ac6babb6e43645a08294dc629833c
size 5365759976

model-00046-of-00080.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05b064f019985465254745c7958733ad460cc43aa25918dddfc7ae927457f566
size 5367741536