haidermasood99/openhermes-mistral-dpo-gptq

Tags: PEFT · TensorBoard · Safetensors · trl · dpo · Generated from Trainer
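The tags above indicate that this repository stores a PEFT adapter fine-tuned with trl's DPO trainer on top of a GPTQ-quantized Mistral base, with the tokenizer files included alongside the adapter. A minimal loading sketch, assuming the standard transformers + peft APIs; the base-model id below is a hypothetical placeholder, the actual id is recorded in this repo's adapter_config.json under "base_model_name_or_path":

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Assumption: placeholder base id; read the real one from adapter_config.json.
    base_id = "TheBloke/OpenHermes-2-Mistral-7B-GPTQ"  # hypothetical, not confirmed by this listing
    adapter_id = "haidermasood99/openhermes-mistral-dpo-gptq"

    # The tokenizer files (tokenizer.json, tokenizer.model, ...) ship with the adapter repo.
    tokenizer = AutoTokenizer.from_pretrained(adapter_id)

    # Loading GPTQ weights requires the usual GPTQ backend (e.g. optimum/auto-gptq) to be installed.
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)

    prompt = "What is Direct Preference Optimization?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))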
Files and versions
1 contributor · History: 4 commits
Latest commit: haidermasood99 · haidermasood99/dpo-final · 942f60f · verified · about 1 year ago
  • runs · haidermasood99/dpo-final · about 1 year ago
  • .gitattributes · 1.52 kB · initial commit · about 1 year ago
  • README.md · 2.9 kB · haidermasood99/dpo-final · about 1 year ago
  • adapter_config.json · 625 Bytes · haidermasood99/dpo-final · about 1 year ago
  • adapter_model.safetensors · 13.7 MB · LFS · haidermasood99/dpo-final · about 1 year ago
  • added_tokens.json · 51 Bytes · Vasanth/openhermes-mistral-dpo-gptq · about 1 year ago
  • special_tokens_map.json · 630 Bytes · Vasanth/openhermes-mistral-dpo-gptq · about 1 year ago
  • tokenizer.json · 1.8 MB · Vasanth/openhermes-mistral-dpo-gptq · about 1 year ago
  • tokenizer.model · 493 kB · LFS · Vasanth/openhermes-mistral-dpo-gptq · about 1 year ago
  • tokenizer_config.json · 1.45 kB · Vasanth/openhermes-mistral-dpo-gptq · about 1 year ago
  • training_args.bin · 5.31 kB · LFS · haidermasood99/dpo-final · about 1 year ago
    Detected Pickle imports (9):
    • "transformers.trainer_utils.IntervalStrategy"
    • "torch.device"
    • "accelerate.utils.dataclasses.DistributedType"
    • "transformers.trainer_pt_utils.AcceleratorConfig"
    • "transformers.training_args.OptimizerNames"
    • "trl.trainer.dpo_config.DPOConfig"
    • "transformers.trainer_utils.SchedulerType"
    • "transformers.trainer_utils.HubStrategy"
    • "accelerate.state.PartialState"
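The pickle-import scan above shows that training_args.bin serializes a trl DPOConfig (a subclass of transformers' TrainingArguments). A hedged sketch of recovering the training hyperparameters after downloading that file; torch.load unpickles arbitrary objects, which is exactly why the Hub scans this file, so only do this for files you trust:

    import torch

    # Assumption: the file has been downloaded locally. weights_only=False is needed on
    # recent PyTorch because this is a full pickled object, not a plain tensor checkpoint.
    args = torch.load("training_args.bin", weights_only=False)
    print(type(args))      # expected: trl.trainer.dpo_config.DPOConfig, per the scan above
    print(args.to_dict())  # learning rate, optimizer, scheduler, and other DPO training settings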