Model Card for Llama-3.2-1B-HuAMR
Model Details
Model Description
This model is a fine-tuned version of meta-llama/Llama-3.2-1B-Instruct for Hungarian Abstract Meaning Representation (AMR) parsing: given a Hungarian sentence, it generates the corresponding AMR graph. A usage sketch follows the details below.
- Model type: Abstract Meaning Representation parser
- Language(s) (NLP): Hungarian
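A minimal inference sketch with the Transformers library. The prompt wording is an assumption, since the exact instruction format used during fine-tuning is not documented in this card; the generation settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SZTAKI-HLT/Llama-3.2-1B-HuAMR"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical prompt: the instruction format used during fine-tuning
# is not documented in this card.
messages = [
    {"role": "user",
     "content": "Parse the following Hungarian sentence into AMR: A fiú iskolába megy."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```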
Model Sources
- Repository: GitHub Repo
Training Details
Training Procedure
Training Hyperparameters
- learning_rate: 5e-05
- train_batch_size: 1
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: linear
- max_grad_norm: 0.3
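The training script itself is not included in this card; the sketch below only shows how the hyperparameters listed above would map onto `transformers.TrainingArguments`. Everything else (output directory, epoch count, dataset handling) is a placeholder.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in this card.
# output_dir is a hypothetical path; all omitted arguments fall back
# to Trainer defaults.
training_args = TrainingArguments(
    output_dir="llama-3.2-1b-huamr",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective train batch size: 1 x 16 = 16
    optim="adamw_torch",
    lr_scheduler_type="linear",
    max_grad_norm=0.3,
)
```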
Metrics
[More Information Needed]
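No scores are reported yet. AMR parsers are conventionally evaluated with Smatch (precision, recall, and F1 over matched graph triples); below is a minimal sketch assuming the `smatch` PyPI package and its `get_amr_match` helper, which compares two single-line PENMAN graphs. The example AMRs are placeholders, not model output.

```python
import smatch  # pip install smatch

# Placeholder graphs: substitute a predicted and a gold AMR (in
# single-line PENMAN notation) for the same Hungarian sentence.
predicted = "(m / megy-01 :ARG0 (f / fiú) :ARG4 (i / iskola))"
gold = "(m / megy-01 :ARG0 (f / fiú) :ARG4 (i / iskola))"

# get_amr_match returns (matched, predicted_total, gold_total) triple counts.
matched, pred_total, gold_total = smatch.get_amr_match(predicted, gold)
precision = matched / pred_total
recall = matched / gold_total
f1 = 2 * precision * recall / (precision + recall)
print(f"Smatch P={precision:.3f} R={recall:.3f} F1={f1:.3f}")
```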
Citation
BibTeX:
[More Information Needed]
Framework versions
- Transformers 4.34.1
- PyTorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1