---
base_model:
- google-bert/bert-base-cased
- google-bert/bert-large-cased
- FacebookAI/roberta-base
- FacebookAI/roberta-large
---
|
This repository contains the models from a research project on Personality Trait Prediction using four models: BERT base, BERT large, RoBERTa base, and RoBERTa large. The objective is to investigate how well these models predict personality traits, and to identify correlations among the traits, based on user-generated text collected from Reddit (arXiv:2004.04460v3).
|
Unlike the multiple-models approach, the single-model approach has no separately stored models: the final RoBERTa base model was used directly for evaluation and comparison in that setting.
|
The full paper for this project is available at: https://www.mdpi.com/2078-2489/16/5/418
|
|
|
The models are organised into the following modules:

- `bertbase_best_model`: Best BERT base model saved during the fine-tuning process
- `bertbase_final_model`: Final BERT base model saved after the fine-tuning process
- `bertlarge_best_model`: Best BERT large model saved during the fine-tuning process
- `bertlarge_final_model`: Final BERT large model saved after the fine-tuning process
- `robertabase_final_model`: Final RoBERTa base model saved after the fine-tuning process
- `robertalarge_best_model`: Best RoBERTa large model saved during the fine-tuning process
- `robertalarge_final_model`: Final RoBERTa large model saved after the fine-tuning process
- `r_a_best_model`: Best RoBERTa base model for Agreeableness saved during the fine-tuning process (multiple-models approach)
- `r_a_final_model`: Final RoBERTa base model for Agreeableness saved after the fine-tuning process (multiple-models approach)
- `r_c_best_model`: Best RoBERTa base model for Conscientiousness saved during the fine-tuning process (multiple-models approach)
- `r_c_final_model`: Final RoBERTa base model for Conscientiousness saved after the fine-tuning process (multiple-models approach)
- `r_e_best_model`: Best RoBERTa base model for Extraversion saved during the fine-tuning process (multiple-models approach)
- `r_e_final_model`: Final RoBERTa base model for Extraversion saved after the fine-tuning process (multiple-models approach)
- `r_n_best_model`: Best RoBERTa base model for Neuroticism saved during the fine-tuning process (multiple-models approach)
- `r_n_final_model`: Final RoBERTa base model for Neuroticism saved after the fine-tuning process (multiple-models approach)
- `r_o_best_model`: Best RoBERTa base model for Openness saved during the fine-tuning process (multiple-models approach)
- `r_o_final_model`: Final RoBERTa base model for Openness saved after the fine-tuning process (multiple-models approach)
|
*Note:* The final model for RoBERTa base was also the best model; therefore, no separate best checkpoint was saved.
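The per-trait modules of the multiple-models approach follow a regular naming scheme (`r_<trait initial>_<best|final>_model`), so a checkpoint can be selected programmatically. The sketch below assumes the standard Hugging Face `transformers` API and its `subfolder` argument to `from_pretrained`; `REPO_ID` is a placeholder, not this repository's actual id:

```python
def trait_subfolder(trait: str, checkpoint: str = "final") -> str:
    """Build the module name for a per-trait RoBERTa base model in the
    multiple-models approach, e.g. 'r_o_final_model' for Openness."""
    initials = {
        "agreeableness": "a",
        "conscientiousness": "c",
        "extraversion": "e",
        "neuroticism": "n",
        "openness": "o",
    }
    return f"r_{initials[trait.lower()]}_{checkpoint}_model"


# Hypothetical usage (requires `transformers` and network access;
# REPO_ID is a placeholder for this repository's actual id):
#
# from transformers import AutoTokenizer, AutoModelForSequenceClassification
# REPO_ID = "user/repo"
# tok = AutoTokenizer.from_pretrained(REPO_ID, subfolder=trait_subfolder("openness"))
# model = AutoModelForSequenceClassification.from_pretrained(
#     REPO_ID, subfolder=trait_subfolder("openness")
# )
```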