leippold committed on
Commit 2e8a30b · verified · 1 Parent(s): 6bddf4e

Update README.md

Files changed (1): README.md (+100 -3)
README.md CHANGED
@@ -1,3 +1,100 @@
- ---
- license: apache-2.0
- ---

---
language: en
license: mit
library_name: transformers
tags:
- economics
- finance
- bert
- language-model
- financial-nlp
- economic-analysis
datasets:
- custom_economic_corpus
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: fill-mask
---

# EconBERT

## Model Description

EconBERT is a BERT-based language model fine-tuned specifically for economic and financial text analysis. The model is designed to capture domain-specific language patterns, terminology, and contextual relationships in economic literature, research papers, financial reports, and related documents.

> **Note**: Complete details of the model architecture, training methodology, evaluation, and performance metrics are available in our paper; please refer to the citation section below.

## Intended Uses & Limitations

### Intended Uses

- **Economic Text Classification**: Categorizing economic documents, papers, or news articles (see the embedding sketch after this list)
- **Named Entity Recognition**: Identifying organizations, financial metrics, economic indicators, and similar entities
- **Sentiment Analysis**: Analyzing market sentiment in financial news and reports
- **Question Answering**: Supporting research queries on economic literature
- **Information Extraction**: Extracting structured data from unstructured economic texts
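
Several of these uses build on document-level representations from the encoder. The sketch below shows one common way to obtain them, mean-pooling the final hidden states into one vector per text; it assumes the placeholder repo id `YourUsername/EconBERT` used in the usage section below and is an illustration, not the paper's own pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder repo id, matching the usage section of this card.
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModel.from_pretrained("YourUsername/EconBERT")

def embed(texts):
    """Mean-pool the last hidden states into one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state       # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)        # zero out padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

docs = ["The ECB held rates steady.", "Oil futures fell on weak demand."]
vectors = embed(docs)  # ready for any downstream classifier or similarity index
```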

### Limitations

- The model is specialized for economic and financial domains and may not perform as well on general-domain text
- Performance may vary on highly technical economic sub-domains that are under-represented in the training data
- For a detailed discussion of limitations, please refer to our paper

## Training Data

EconBERT was trained on a large corpus of economic and financial texts. For comprehensive information about the training data, including sources, size, and preprocessing steps, please refer to our paper.

## Evaluation Results

We evaluated EconBERT on several economic NLP tasks and compared its performance with general-purpose and other domain-specific models. The detailed evaluation methodology and complete results are available in our paper.

Key findings include:
- Improved performance on economic-domain tasks compared to general BERT models
- State-of-the-art results on [specific tasks, if applicable]
- [Any other high-level results worth highlighting]

## How to Use

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and pretrained encoder
# ("YourUsername/EconBERT" is this card's placeholder repo id).
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModel.from_pretrained("YourUsername/EconBERT")

# Encode an example sentence; outputs.last_hidden_state holds one
# contextual vector per token.
text = "The Federal Reserve increased interest rates by 25 basis points."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```
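
The card's `pipeline_tag` is `fill-mask`, so the model can also be exercised through the pipeline API for masked-token prediction. A minimal sketch under the same placeholder repo id:

```python
from transformers import pipeline

# BERT-style models fill the [MASK] slot; a domain-adapted model
# should rank economically plausible tokens highly.
fill_mask = pipeline("fill-mask", model="YourUsername/EconBERT")

for pred in fill_mask("The central bank raised the policy [MASK] by 25 basis points."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```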

For task-specific fine-tuning and applications, please refer to our paper and the examples provided in our GitHub repository.
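
As one illustration of such fine-tuning, a classification head can be attached to the pretrained encoder, e.g. for the sentiment-analysis use case above. This is a sketch under stated assumptions: the repo id is the card's placeholder and the three-way label set is invented for illustration, not taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["negative", "neutral", "positive"]  # assumed label set
tokenizer = AutoTokenizer.from_pretrained("YourUsername/EconBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "YourUsername/EconBERT",
    num_labels=len(labels),  # adds a randomly initialized classification head
)

# The new head is untrained here: fine-tune on labeled data
# (e.g. with transformers.Trainer) before trusting the predictions.
inputs = tokenizer("Earnings beat expectations this quarter.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[logits.argmax(dim=-1).item()])
```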

## Citation

If you use EconBERT in your research, please cite our paper:

```bibtex
@article{zhang2025econbert,
  title={EconBERT: A Large Language Model for Economics},
  author={Zhang, Philip and Rojcek, Jakub and Leippold, Markus},
  journal={SSRN Working Paper},
  year={2025},
  publisher={University of Zurich}
}
```

## Additional Information

- **Model Type**: BERT
- **Language(s)**: English
- **License**: MIT

For more detailed information about the model architecture, training methodology, evaluation results, and applications, please refer to our paper.