CBDC Stance Classifier: A Domain-Specific BERT for CBDC-Related Stance Detection
The CBDC-BERT-Stance model classifies Central Bank Digital Currency (CBDC)-related text into three stance categories:
Pro-CBDC: supportive of CBDC adoption (e.g., highlighting benefits, efficiency, innovation).
Wait-and-See: neutral or cautious, expressing neither strong support nor strong opposition, often noting the need for further study.
Anti-CBDC: critical of CBDC adoption (e.g., highlighting risks, concerns, opposition).
Base Model: bilalzafar/CentralBank-BERT
CentralBank-BERT is a domain-adapted BERT-base (uncased) model, pretrained on 66M+ tokens across 2M+ sentences from central-bank speeches published via the Bank for International Settlements (1996–2024). It is optimized for masked-token prediction in the specialized domains of monetary policy, financial regulation, and macroeconomic communication, enabling better contextual understanding of central-bank discourse and financial narratives.
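Because the base model is a masked language model, its domain adaptation can be probed directly with a fill-mask pipeline. A minimal sketch follows; the example sentence is illustrative, not taken from the training corpus:

from transformers import pipeline

# Probe the domain-adapted base model with masked-token prediction
fill_mask = pipeline("fill-mask", model="bilalzafar/CentralBank-BERT")

# Illustrative central-bank-style sentence; [MASK] is BERT's mask token
for pred in fill_mask("The central bank raised the policy [MASK] to curb inflation."):
    print(pred["token_str"], round(pred["score"], 4))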
Training Data: The training dataset consists of 1,647 CBDC-related sentences from BIS speeches, manually annotated into three stance categories: Pro-CBDC (742 sentences), Wait-and-See (694 sentences), and Anti-CBDC (211 sentences).
Intended Uses: The model is designed for classifying stance in speeches, articles, or statements related to CBDCs, supporting research into CBDC discourse analysis, and monitoring stance trends in central banking communications.
Training Details
The model was trained starting from the bilalzafar/CentralBank-BERT checkpoint, using a BERT-base architecture with a new three-way softmax classification head and a maximum sequence length of 320 tokens. Training ran for up to 8 epochs, with early stopping triggered at epoch 6, a batch size of 16, a learning rate of 2e-5, weight decay of 0.01, and a warmup ratio of 0.06, optimized with AdamW. The loss function was focal loss (γ = 1.0, soft focal, no extra class weights), and a WeightedRandomSampler weighted by the square root of inverse class frequency was applied to handle class imbalance. FP16 precision was enabled for efficiency, and the best checkpoint was selected by macro-F1 score. The dataset was split into 80% training, 10% validation, and 10% test sets, stratified by label with class balance applied.
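For readers who want to reproduce the imbalance handling, below is a minimal PyTorch sketch of a soft focal loss with γ = 1.0 and a square-root inverse-frequency WeightedRandomSampler. This is an assumption about the implementation rather than the exact training code, and the label tensor is illustrative:

import torch
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler

def focal_loss(logits, targets, gamma=1.0):
    # Soft focal loss: scale per-example cross-entropy by (1 - p_t)^gamma, no class weights
    log_probs = F.log_softmax(logits, dim=-1)
    pt = log_probs.exp().gather(1, targets.unsqueeze(1)).squeeze(1)  # prob of true class
    ce = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)       # per-example cross-entropy
    return ((1.0 - pt) ** gamma * ce).mean()

# Square root of inverse class frequency as sampling weights (illustrative labels)
labels = torch.tensor([0, 0, 1, 1, 1, 2])
class_weights = (1.0 / torch.bincount(labels).float()).sqrt()
sampler = WeightedRandomSampler(
    weights=class_weights[labels],  # one weight per training example
    num_samples=len(labels),
    replacement=True,
)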
Performance and Metrics
On the test set, the model achieved an accuracy of 0.8485, a macro F1-score of 0.8519, and a weighted F1-score of 0.8484. Class-wise performance showed strong results across all categories: Anti-CBDC (Precision: 0.8261, Recall: 0.9048, F1: 0.8636), Pro-CBDC (Precision: 0.8421, Recall: 0.8533, F1: 0.8477), and Wait-and-See (Precision: 0.8636, Recall: 0.8261, F1: 0.8444). The best validation checkpoint recorded an accuracy of 0.8303, macro F1 of 0.7936, and weighted F1 of 0.8338, with a validation loss of 0.3883. On the final test evaluation, loss increased slightly to 0.4223, while all key metrics improved compared to the validation set.
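These aggregate scores can be recomputed from saved predictions with scikit-learn; a minimal sketch with placeholder label ids (the actual id-to-label order comes from the model config):

from sklearn.metrics import accuracy_score, classification_report, f1_score

# Placeholder gold labels and predictions; replace with the real test-set arrays
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 2, 2, 0, 2]

print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Macro F1:   ", f1_score(y_true, y_pred, average="macro"))
print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print(classification_report(y_true, y_pred))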
Files in Repository
config.json – model configuration
model.safetensors – trained model weights
tokenizer.json, tokenizer_config.json, vocab.txt – tokenizer files
special_tokens_map.json – tokenizer special tokens
label_mapping.json – label-to-id mapping (see the loading sketch below)
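A short sketch of fetching and reading label_mapping.json from the Hub; the exact JSON schema is an assumption (here: label name to integer id):

import json
from huggingface_hub import hf_hub_download

# Download the mapping file from the model repo
path = hf_hub_download(repo_id="bilalzafar/cbdc-stance", filename="label_mapping.json")

with open(path) as f:
    label2id = json.load(f)  # assumed schema: {"Pro-CBDC": 0, ...}
id2label = {v: k for k, v in label2id.items()}
print(id2label)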
Usage
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_name = "bilalzafar/cbdc-stance"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    truncation=True,
    padding=True,
    top_k=1,  # return only the top prediction
)
text = "CBDCs will reduce costs and improve payments."
print(classifier(text))
# Output: [{'label': 'Pro-CBDC', 'score': 0.9788}]
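The same pipeline also accepts a list of texts for batch scoring; a short illustrative extension of the snippet above:

texts = [
    "CBDCs will reduce costs and improve payments.",
    "Further study is needed before any launch decision.",
]
for text, result in zip(texts, classifier(texts)):
    print(text, "->", result)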
Model tree for bilalzafar/CBDC-Stance
google-bert/bert-base-uncased → bilalzafar/CentralBank-BERT → bilalzafar/CBDC-Stance