CHZY-1/sqlcoder-7b-2_FineTuned_PEFT_QLORA_adapter_alpha_r_32_alpha_64
Tags: PEFT · TensorBoard · Safetensors · trl · sft · Generated from Trainer
License: cc-by-sa-4.0
Branch: main · 1 contributor · History: 2 commits
Latest commit 9572d9d (verified) by CHZY-1, 26 days ago: trained QLoRA adapter on 260 examples (5 epochs) with r=32, alpha=64, dropout=0.1, and an added [PAD] token.
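The commit description above, together with the trl/sft tags, indicates the adapter was produced by supervised fine-tuning of a 4-bit quantized base model with LoRA. The following is a minimal sketch of such a setup, not the author's actual script: the base model id defog/sqlcoder-7b-2, the NF4 quantization settings, the placeholder one-row dataset, and the output directory are assumptions; only r=32, lora_alpha=64, lora_dropout=0.1, 5 epochs, and the added [PAD] token come from this repository.

```python
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

base_id = "defog/sqlcoder-7b-2"  # assumed base model for this adapter

# Placeholder dataset; the real training set reportedly had 260 examples.
train_dataset = Dataset.from_dict({"text": ["-- example prompt\nSELECT 1;"]})

# 4-bit (QLoRA-style) quantization; the NF4 settings are an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.add_special_tokens({"pad_token": "[PAD]"})  # the added [PAD] token

model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model.resize_token_embeddings(len(tokenizer))  # account for the new [PAD] token

# LoRA hyperparameters taken from the commit description.
peft_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # older trl versions use tokenizer= instead
    train_dataset=train_dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="sqlcoder-7b-2-qlora", num_train_epochs=5),
)
trainer.train()
```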
Files in the repository:

| File | Size | Last commit | Updated |
|---|---|---|---|
| runs/ | – | Trained QLoRA adapter (9572d9d) | 26 days ago |
| .gitattributes | 1.52 kB | initial commit | 26 days ago |
| README.md | 1.33 kB | Trained QLoRA adapter (9572d9d) | 26 days ago |
| adapter_config.json | 721 Bytes | Trained QLoRA adapter (9572d9d) | 26 days ago |
| adapter_model.safetensors | 1.37 GB (LFS) | Trained QLoRA adapter (9572d9d) | 26 days ago |
| added_tokens.json | 21 Bytes | Trained QLoRA adapter (9572d9d) | 26 days ago |
| special_tokens_map.json | 717 Bytes | Trained QLoRA adapter (9572d9d) | 26 days ago |
| tokenizer.json | 1.84 MB | Trained QLoRA adapter (9572d9d) | 26 days ago |
| tokenizer.model | 500 kB (LFS) | Trained QLoRA adapter (9572d9d) | 26 days ago |
| tokenizer_config.json | 2.08 kB | Trained QLoRA adapter (9572d9d) | 26 days ago |
| training_args.bin | 5.56 kB (LFS, pickle) | Trained QLoRA adapter (9572d9d) | 26 days ago |

The file scanner marks every other file as Safe. For training_args.bin it detects 9 pickle imports: transformers.training_args.OptimizerNames, transformers.trainer_utils.SchedulerType, transformers.trainer_utils.IntervalStrategy, transformers.trainer_utils.HubStrategy, transformers.trainer_pt_utils.AcceleratorConfig, accelerate.state.PartialState, accelerate.utils.dataclasses.DistributedType, torch.device, and trl.trainer.sft_config.SFTConfig.
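To run inference with the files listed above, the adapter can be attached to the base model with PEFT. This is a hedged sketch, not an official usage snippet: the base model id defog/sqlcoder-7b-2, the float16 dtype, and the example prompt are assumptions; the adapter id is the one shown on this page, and its tokenizer files already contain the added [PAD] token.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

adapter_id = "CHZY-1/sqlcoder-7b-2_FineTuned_PEFT_QLORA_adapter_alpha_r_32_alpha_64"
base_id = "defog/sqlcoder-7b-2"  # assumed base model

# The tokenizer stored in this repo includes the added [PAD] token.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model.resize_token_embeddings(len(tokenizer))  # match the extended vocabulary
model = PeftModel.from_pretrained(model, adapter_id)  # loads adapter_model.safetensors

prompt = "-- Return the ten most recent orders\nSELECT"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The pickle flag on training_args.bin reflects that it is a pickled trl SFTConfig rather than plain tensors. A minimal sketch for inspecting it in a trusted environment, assuming a local copy of the file and that transformers, accelerate, and trl are installed so the objects can be unpickled:

```python
import torch

# Loading this file executes pickle, so only do it for files you trust.
# Recent PyTorch versions require weights_only=False for custom classes.
args = torch.load("training_args.bin", weights_only=False)
print(type(args))  # expected: trl.trainer.sft_config.SFTConfig
print(args.num_train_epochs, args.learning_rate)
```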