Uploaded model

  • Developed by: mervinpraison
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

UR Fall Detection Dataset

Dataset renamed to: mervinpraison/ur-fall-raw
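The training config below reads the dataset through a single `text` column (`dataset_text_field: 'text'`). As a minimal sketch of how rows might be collapsed into that column — the `instruction`/`output` column names here are assumptions for illustration, not the dataset's actual schema:

```python
# Hypothetical formatting step: collapse a prompt/response pair into the
# single "text" column the trainer reads (dataset_text_field = "text").
# The "instruction"/"output" keys are assumed names, not the real schema.
def to_text(example: dict) -> dict:
    example["text"] = (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )
    return example

row = {"instruction": "Describe the fall event.",
       "output": "Subject fell forward."}
formatted = to_text(row)
print(formatted["text"])
```

With Hugging Face `datasets`, a function like this would typically be applied once over the whole dataset with `dataset.map(to_text)` before training.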

(test) ➜  test praisonai train \
    --model unsloth/Llama-3.2-3B-Instruct-bnb-4bit \
    --dataset mervinpraison/test-dataset-2 \
    --hf mervinpraison/llama3.2-3B-instruct-test-2 \
    --ollama mervinpraison/llama3.2-3B-instruct-test-2

Conda environment 'praison_env' found.
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
🦥 Unsloth Zoo will now patch everything to make training faster!
DEBUG: Loaded config: {
    'dataset': [{'name': 'mervinpraison/test-dataset-2'}],
    'dataset_num_proc': 2,
    'dataset_text_field': 'text',
    'gradient_accumulation_steps': 2,
    'hf_model_name': 'mervinpraison/llama3.2-3B-instruct-test-2',
    'huggingface_save': 'true',
    'learning_rate': 0.0002,
    'load_in_4bit': True,
    'loftq_config': None,
    'logging_steps': 1,
    'lora_alpha': 16,
    'lora_bias': 'none',
    'lora_dropout': 0,
    'lora_r': 16,
    'lora_target_modules': ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'up_proj', 'down_proj'],
    'lr_scheduler_type': 'linear',
    'max_seq_length': 2048,
    'max_steps': 10,
    'model_name': 'unsloth/Llama-3.2-3B-Instruct-bnb-4bit',
    'model_parameters': '8b',
    'num_train_epochs': 1,
    'ollama_model': 'mervinpraison/llama3.2-3B-instruct-test-2',
    'ollama_save': 'true',
    'optim': 'adamw_8bit',
    'output_dir': 'outputs',
    'packing': False,
    'per_device_train_batch_size': 2,
    'quantization_method': ['q4_k_m'],
    'random_state': 3407,
    'seed': 3407,
    'train': 'true',
    'use_gradient_checkpointing': 'unsloth',
    'use_rslora': False,
    'warmup_steps': 5,
    'weight_decay': 0.01
}
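A few quantities can be derived from the logged config; the arithmetic below is illustrative, assuming standard single-GPU Trainer semantics:

```python
# Derived training quantities from the config values logged above.
per_device_train_batch_size = 2
gradient_accumulation_steps = 2
max_steps = 10

# Effective batch size per optimizer update (single GPU assumed).
effective_batch = per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch)  # 4

# Examples consumed over the run: max_steps caps training here,
# regardless of num_train_epochs.
total_examples = effective_batch * max_steps
print(total_examples)  # 40
```

With only 10 optimizer steps and 5 of them used for warmup, this run is a smoke test of the pipeline rather than a full finetune.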
Model details

  • Format: Safetensors
  • Model size: 3.21B params
  • Tensor type: BF16