idevede committed on
Commit 2d2f738
1 Parent(s): 30f1435

Add the practice

Files changed (1):
  1. README.md +21 -21
README.md CHANGED
@@ -19,9 +19,9 @@ base_model:
 ---
 # [TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting](https://arxiv.org/abs/2310.04948)
 
-[![preprint](https://img.shields.io/static/v1?label=arXiv&message=2310.04948&color=B31B1B&logo=arXiv)](https://arxiv.org/pdf/2310.04948) [![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-FFD21E)](https://huggingface.co/Melady/TEMPO) [![License: MIT](https://img.shields.io/badge/License-Apache--2.0-green.svg)](https://opensource.org/licenses/Apache-2.0)
 
-![TEMPO_logo](pics/TEMPO_logo.png)
 
 
 
@@ -43,27 +43,27 @@ We use the following Colab page to show the demo of building the customer dataset
 
 
 
-# Practice
 
-## Download the repo
 
 ```
 git clone [email protected]:DC-research/TEMPO.git
 ```
 
-## [Optional] Download the model and config file via commands
 ```
 huggingface-cli download Melady/TEMPO config.json --local-dir ./TEMPO/TEMPO_checkpoints
 ```
 ```
-huggingface-cli download Melady/TEMPO TEMPO-80M_v2.pth --local-dir ./TEMPO/TEMPO_checkpoints
 ```
 
 ```
-!huggingface-cli download Melady/TEMPO TEMPO-80M_v1.pth --local-dir ./TEMPO/TEMPO_checkpoints
 ```
 
-## Build the environment
 
 ```
 conda create -n tempo python=3.8
@@ -78,7 +78,7 @@ cd TEMPO
 pip install -r requirements.txt
 ```
 
-## Script Demo
 
 A streamlined example showing how to perform forecasting with TEMPO:
 
@@ -107,31 +107,31 @@ print(predicted_values)
 ```
 
 
-## Online demo:
 
 Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).
 
 ![TEMPO_demo.jpg](pics/TEMPO_demo.jpg)
 
 
-## Practice on your end
 
 We also updated our models on HuggingFace: [[Melady/TEMPO]](https://huggingface.co/Melady/TEMPO).
 
 
 
-### Get Data
 
 Download the data from [[Google Drive]](https://drive.google.com/drive/folders/13Cg1KYOlzM5C7K8gK8NfC-F3EYxkM3D2?usp=sharing) or [[Baidu Drive]](https://pan.baidu.com/s/1r3KhGd0Q9PJIUZdfEYoymg?pwd=i9iy), and place the downloaded data in the folder `./dataset`. You can also download the STL results from [[Google Drive]](https://drive.google.com/file/d/1gWliIGDDSi2itUAvYaRgACru18j753Kw/view?usp=sharing), and place the downloaded data in the folder `./stl`.
 
-### Run TEMPO
 
-### Pre-Training Stage
 ```
 bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh
 ```
 
-### Test/Inference Stage
 
 After training, we can test the TEMPO model in the zero-shot setting:
 
@@ -142,11 +142,11 @@ bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
 ![TEMPO-results](pics/results.jpg)
 
 
-## Pre-trained Models
 
 You can download the pre-trained model from [[Google Drive]](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link) and then run the test script for fun.
 
-## TETS dataset
 
 Here are the prompts used to generate the corresponding textual information for the time series via the [[OpenAI ChatGPT-3.5 API]](https://platform.openai.com/docs/guides/text-generation):
 
@@ -159,7 +159,7 @@ The time series data come from [[S&P 500]](https://www.spglobal.com/spdji/en
 
 Example of generated contextual information for the Company marked above:
 
-![Company1_ebitda_summary_words.jpg](pics/Company1_ebitda_summary_words.jpg.png)
 
 
 
@@ -167,7 +167,7 @@ Example of generated contextual information for the Company marked above:
 You can download the processed data with text embeddings from GPT-2 from: [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).
 
-## 🚀 News
 
 
 - **Oct 2024**: 🚀 We've streamlined our code structure, enabling users to download the pre-trained model and perform zero-shot inference with a single line of code! Check out our [demo](./run_TEMPO_demo.py) for more details. Our model's download count on HuggingFace is now trackable!
@@ -192,10 +192,10 @@ You can download the processed data with text embeddings from GPT-2 from: [[TETS]]
 - [ ] Multimodal pre-training script
 
 
-## Contact
 Feel free to contact [email protected] / [email protected] if you’re interested in applying TEMPO to your real-world application.
 
-## Cite our work
 ```
 @inproceedings{
 cao2024tempo,
 
@@ -19,9 +19,9 @@ base_model:
 ---
 # [TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting](https://arxiv.org/abs/2310.04948)
 
+[![preprint](https://img.shields.io/static/v1?label=arXiv&message=2310.04948&color=B31B1B&logo=arXiv)](https://arxiv.org/pdf/2310.04948)
 
+![TEMPO_logo|50%](pics/TEMPO_logo.png)
 
 
 
 
@@ -43,27 +43,27 @@ We use the following Colab page to show the demo of building the customer dataset
 
 
 
+# 🔧 Hands-on: Using the Foundation Model
 
+## 1. Download the repo
 
 ```
 git clone [email protected]:DC-research/TEMPO.git
 ```
 
+## 2. [Optional] Download the model and config file via commands
 ```
 huggingface-cli download Melady/TEMPO config.json --local-dir ./TEMPO/TEMPO_checkpoints
 ```
 ```
+huggingface-cli download Melady/TEMPO TEMPO-80M_v1.pth --local-dir ./TEMPO/TEMPO_checkpoints
 ```
 
 ```
+huggingface-cli download Melady/TEMPO TEMPO-80M_v2.pth --local-dir ./TEMPO/TEMPO_checkpoints
 ```
 
+## 3. Build the environment
 
 ```
 conda create -n tempo python=3.8
 
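For users who script their setup, the same checkpoint downloads can be driven from Python. A minimal sketch — the repo id and file names are taken from the `huggingface-cli` commands above; the helper itself is ours:

```python
# Build the same download commands as the huggingface-cli calls above, so they
# can be launched programmatically. Actually running them needs network access
# and the huggingface_hub CLI installed.
import subprocess

REPO_ID = "Melady/TEMPO"
FILES = ["config.json", "TEMPO-80M_v1.pth", "TEMPO-80M_v2.pth"]

def download_cmd(filename, local_dir="./TEMPO/TEMPO_checkpoints"):
    # Mirrors: huggingface-cli download Melady/TEMPO <file> --local-dir <dir>
    return ["huggingface-cli", "download", REPO_ID, filename, "--local-dir", local_dir]

# Uncomment to actually download:
# for f in FILES:
#     subprocess.run(download_cmd(f), check=True)
```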
@@ -78,7 +78,7 @@ cd TEMPO
 pip install -r requirements.txt
 ```
 
+## 4. Script Demo
 
 A streamlined example showing how to perform forecasting with TEMPO:
 
 
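The demo script body (README lines 85–106) is unchanged by this commit, so it does not appear in the diff. To illustrate only the forecasting input/output contract it satisfies (a history window in, `pred_len` future values out), here is a seasonal-naive placeholder baseline — this is not the TEMPO model, and every name below is ours:

```python
def seasonal_naive_forecast(history, pred_len, season=24):
    """Placeholder baseline: repeat the last observed season.

    history:  list of past values (len(history) >= season)
    pred_len: number of future steps to emit
    Returns a list of pred_len predicted values.
    """
    if len(history) < season:
        raise ValueError("need at least one full season of history")
    last_season = history[-season:]
    return [last_season[i % season] for i in range(pred_len)]

# Example: hourly data with a daily (24-step) cycle.
two_days = [t % 24 for t in range(48)]           # 0..23, 0..23
forecast = seasonal_naive_forecast(two_days, 6)  # -> [0, 1, 2, 3, 4, 5]
```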
@@ -107,31 +107,31 @@ print(predicted_values)
 ```
 
 
+## 5. Online demo
 
 Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).
 
 ![TEMPO_demo.jpg](pics/TEMPO_demo.jpg)
 
 
+# 🔨 Advanced Practice: Full Training Workflow!
 
 We also updated our models on HuggingFace: [[Melady/TEMPO]](https://huggingface.co/Melady/TEMPO).
 
 
 
+## 1. Get Data
 
 Download the data from [[Google Drive]](https://drive.google.com/drive/folders/13Cg1KYOlzM5C7K8gK8NfC-F3EYxkM3D2?usp=sharing) or [[Baidu Drive]](https://pan.baidu.com/s/1r3KhGd0Q9PJIUZdfEYoymg?pwd=i9iy), and place the downloaded data in the folder `./dataset`. You can also download the STL results from [[Google Drive]](https://drive.google.com/file/d/1gWliIGDDSi2itUAvYaRgACru18j753Kw/view?usp=sharing), and place the downloaded data in the folder `./stl`.
 
+## 2. Run Scripts
 
+### 2.1 Pre-Training Stage
 ```
 bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh
 ```
 
+### 2.2 Test/Inference Stage
 
 After training, we can test the TEMPO model in the zero-shot setting:
 
 
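The bracket notation `bash [ecl, etth1, ...].sh` above means "run whichever per-dataset script you need". To sweep all seven sequentially, a small driver works; a sketch, with script names taken from the command above (the skip-if-absent guard is our addition):

```python
# Launch each per-dataset script in turn. Pass stage_suffix="_test" to run the
# zero-shot test scripts instead of the pre-training ones.
import subprocess
from pathlib import Path

DATASETS = ["ecl", "etth1", "etth2", "ettm1", "ettm2", "traffic", "weather"]

def run_all(stage_suffix=""):
    ran = []
    for ds in DATASETS:
        script = Path(f"{ds}{stage_suffix}.sh")
        if script.exists():  # skip datasets whose script is not present
            subprocess.run(["bash", str(script)], check=True)
            ran.append(script.name)
    return ran
```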
@@ -142,11 +142,11 @@ bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
 ![TEMPO-results](pics/results.jpg)
 
 
+# Pre-trained Models
 
 You can download the pre-trained model from [[Google Drive]](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link) and then run the test script for fun.
 
+# TETS dataset
 
 Here are the prompts used to generate the corresponding textual information for the time series via the [[OpenAI ChatGPT-3.5 API]](https://platform.openai.com/docs/guides/text-generation):
 
 
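The repo's actual prompts live in the README figure, which is outside this diff. As an illustration of the general pattern only — the wording and field names below are ours, not the repo's prompt — a per-company summarization request might be assembled like this:

```python
def build_summary_prompt(company, metric, values):
    """Assemble a ChatGPT-style prompt asking for a textual summary of a series.

    Illustrative only; the repo's real prompt is shown in the README figure.
    """
    series = ", ".join(f"{v:.2f}" for v in values)
    return (
        f"Summarize the recent {metric} trend for {company}. "
        f"Quarterly values: {series}. "
        "Respond with two sentences of plain-language context."
    )

prompt = build_summary_prompt("Company1", "EBITDA", [1.0, 1.2, 0.9, 1.5])
```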
@@ -159,7 +159,7 @@ The time series data come from [[S&P 500]](https://www.spglobal.com/spdji/en
 
 Example of generated contextual information for the Company marked above:
 
+![Company1_ebitda_summary_words.jpg](pics/Company1_ebitda_summary_words.jpg)
 
 
 
 
@@ -167,7 +167,7 @@ Example of generated contextual information for the Company marked above:
 You can download the processed data with text embeddings from GPT-2 from: [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).
 
+# 🚀 News
 
 
 - **Oct 2024**: 🚀 We've streamlined our code structure, enabling users to download the pre-trained model and perform zero-shot inference with a single line of code! Check out our [demo](./run_TEMPO_demo.py) for more details. Our model's download count on HuggingFace is now trackable!
 
@@ -192,10 +192,10 @@ You can download the processed data with text embeddings from GPT-2 from: [[TETS]]
 - [ ] Multimodal pre-training script
 
 
+# Contact
 Feel free to contact [email protected] / [email protected] if you’re interested in applying TEMPO to your real-world application.
 
+# Cite our work
 ```
 @inproceedings{
 cao2024tempo,