---
library_name: peft
---
## Training procedure
The following GPTQ quantization config was used during training (a loading sketch follows the list):
- bits: 4
- checkpoint_format: gptq
- damp_percent: 0.01
- desc_act: True
- group_size: 128
- model_file_base_name: None
- model_name_or_path: None
- quant_method: gptq
- static_groups: False
- sym: True
- true_sequential: True
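
The key settings above map onto `transformers`' `GPTQConfig` roughly as follows. This is a minimal sketch, not taken from the training code; the remaining listed keys (`quant_method`, `checkpoint_format`, `static_groups`, `model_file_base_name`, `model_name_or_path`) are AutoGPTQ bookkeeping fields and are not passed explicitly here.

```python
from transformers import GPTQConfig

# Quantization settings mirrored from the list above.
# quant_method/checkpoint_format are implied by using GPTQConfig;
# static_groups=False matches the AutoGPTQ default, so it is omitted.
gptq_config = GPTQConfig(
    bits=4,                # 4-bit weights
    group_size=128,        # quantization group size
    damp_percent=0.01,     # Hessian dampening factor
    desc_act=True,         # act-order (descending activation) quantization
    sym=True,              # symmetric quantization
    true_sequential=True,  # quantize modules sequentially within a block
)
```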
### Framework versions
- PEFT 0.5.0
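
As a usage sketch (not part of the original card), a PEFT adapter trained on a GPTQ-quantized base model can typically be attached as shown below; the repository IDs are placeholders for the actual base model and adapter.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder IDs: substitute the actual GPTQ base model and this adapter repo.
base_model_id = "base-model-gptq"
adapter_id = "this-adapter"

# The GPTQ quantization config is read from the base checkpoint itself,
# so it does not need to be passed again when loading.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```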