Commit 9ea2757 (verified) · Yong99 committed · 1 Parent(s): efed8a6

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -29,11 +29,11 @@ tags:
 
 **Update** (2024.5) [Timer](https://arxiv.org/abs/2402.02368), a large-scale pre-trained time series Transformer, is accepted by ICML 2024.
 
-Large time-series model introduced in this [paper](https://arxiv.org/abs/2402.02368) and enhanced with our [further work](https://arxiv.org/abs/2410.04803).
+**Timer** is a large time-series model introduced in this [paper](https://arxiv.org/abs/2402.02368) and enhanced with [further work](https://arxiv.org/abs/2410.04803).
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64fbe24a2d20ced4e91de38a/nbzk91Z_yffYHmKau18qo.png)
 
-This version is univariate pre-trained on **260B** time points with **84M** parameters, a lightweight generative Transformer for zero-shot point forecasting.
+This version is univariate pre-trained on **260B** time points with **84M** parameters, a lightweight generative Transformer for **zero-shot** point forecasting.
 
 We evaluate the model on the following benchmark: [TSLib Dataset](https://cdn-uploads.huggingface.co/production/uploads/64fbe24a2d20ced4e91de38a/AXhZLVGR8Cnuxe8CVK4Fu.png).
 
@@ -68,12 +68,12 @@ A notebook example is also provided [here](https://github.com/thuml/Large-Time-S
 
 ## Specification
 
-* Architecture: Causal Transformer (Decoder-only)
-* Pre-training Scale: 260B time points
-* Context Length: up to 2880
-* Parameter Count: 84M
-* Patch Length: 96
-* Number of Layers: 8
+* **Architecture**: Causal Transformer (Decoder-only)
+* **Pre-training Scale**: 260B time points
+* **Context Length**: up to 2880
+* **Parameter Count**: 84M
+* **Patch Length**: 96
+* **Number of Layers**: 8
 
 ## Adaptation
 
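The updated description presents Timer as a generative, decoder-only model for zero-shot point forecasting. As a rough sketch of what that workflow looks like (not shown in this diff; the notebook linked in the README is authoritative), the snippet below assumes the 84M checkpoint is published under a repo id such as `thuml/timer-base-84m` and that its remote code accepts a raw lookback series in `generate`:

```python
# Hedged sketch of zero-shot point forecasting with the 84M Timer checkpoint.
# Assumptions (not confirmed by this diff): the repo id, and that the
# checkpoint's remote code lets generate() consume a raw univariate series.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "thuml/timer-base-84m",   # assumed repo id for the 84M checkpoint
    trust_remote_code=True,   # the model class ships with the checkpoint
)

lookback = torch.randn(1, 2880)                          # up to 2880 context points
forecast = model.generate(lookback, max_new_tokens=96)   # predict the next 96 points
print(forecast.shape)
```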
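For orientation, the numbers in the Specification list fit together: with a patch length of 96, the 2880-point maximum context corresponds to 2880 / 96 = 30 input tokens for the decoder-only backbone, and each autoregressive step emits one further 96-point patch. The snippet below is only a bookkeeping sketch of that patching, not the model's actual preprocessing (normalization and tokenization details are not shown in this diff):

```python
# Minimal sketch of the patch bookkeeping implied by the Specification list
# (patch length 96, context up to 2880); not the model's actual preprocessing.
import numpy as np

PATCH_LEN = 96      # one token = one 96-point patch
MAX_CONTEXT = 2880  # at most 2880 / 96 = 30 patches per forward pass

def to_patches(series: np.ndarray) -> np.ndarray:
    """Split a 1-D series into trailing non-overlapping 96-point patches."""
    series = series[-MAX_CONTEXT:]                    # keep the most recent context
    usable = (len(series) // PATCH_LEN) * PATCH_LEN   # drop a partial leading patch
    return series[-usable:].reshape(-1, PATCH_LEN)    # (num_tokens, 96)

tokens = to_patches(np.random.randn(3000))
print(tokens.shape)  # (30, 96) at full context
```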