# surrey-nlp/Et-En_Mono-AG-Llama-2-13b
Tags: PEFT · Safetensors · llama-factory · lora · Generated from Trainer

License: other
## et-en

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
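As a minimal sketch (not part of the card), the hyperparameters above can be expressed as the keyword arguments LLaMA-Factory would forward to `transformers.TrainingArguments`; the argument names follow the HF Trainer API, and the mapping from the card's labels is an assumption noted in the comments:

```python
# Hypothetical mapping of the card's hyperparameters onto
# transformers.TrainingArguments keyword names (HF Trainer API).
training_args = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 2,   # "train_batch_size" above
    "per_device_eval_batch_size": 8,    # "eval_batch_size" above
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "cosine",
    "num_train_epochs": 1.0,
    "fp16": True,                       # "Native AMP" mixed precision
    "adam_beta1": 0.9,                  # optimizer betas/epsilon above
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
}

# total_train_batch_size is derived rather than set directly:
# per-device batch size x gradient accumulation steps (x num devices, here 1).
total_train_batch_size = (
    training_args["per_device_train_batch_size"]
    * training_args["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 4, matching the value reported above
```

This shows why the card reports total_train_batch_size: 4 despite a per-device batch size of 2.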
### Framework versions

- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
Downloads last month: 3
## Model tree for surrey-nlp/Et-En_Mono-AG-Llama-2-13b

- Base model: meta-llama/Llama-2-13b-chat-hf
- Adapters of the base model: 145 (this model is one of them)
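Since this repository is a LoRA adapter for `meta-llama/Llama-2-13b-chat-hf`, it is loaded on top of the base model with PEFT. The sketch below is an assumption, not the card's documented usage: the prompt template in `build_prompt` is hypothetical (the card does not state the prompt format used in training), and the load is kept behind a `__main__` guard because it requires access to the gated base model and its full weights.

```python
def build_prompt(estonian_sentence: str) -> str:
    # Hypothetical prompt template -- the card does not document the
    # actual prompt format used during LLaMA-Factory fine-tuning.
    return (
        "Translate the following Estonian sentence to English:\n"
        f"{estonian_sentence}\n"
    )

if __name__ == "__main__":
    # Kept out of module import: downloading Llama-2-13b-chat-hf
    # requires accepting Meta's license and ~26 GB of weights.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-13b-chat-hf",
        torch_dtype=torch.float16,
        device_map="auto",
    )
    # Attach this repository's LoRA weights to the base model.
    model = PeftModel.from_pretrained(base, "surrey-nlp/Et-En_Mono-AG-Llama-2-13b")
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")

    inputs = tokenizer(build_prompt("Tere hommikust!"), return_tensors="pt")
    inputs = inputs.to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Alternatively, the adapter can be merged into the base weights with `model.merge_and_unload()` for adapter-free inference.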
## Collection

Included in the collection "AG-Prompt-Independent Lang-Pair Training with Llama2-13B Model" (8 items, updated Sep 4).