Dataset Viewer
Auto-converted to Parquet
Columns:
  venue                    string, 9 distinct values
  original_openreview_id   string, length 8–17
  revision_openreview_id   string, length 8–11
  content                  string, length 2–620k
  time                     string (datetime), 2016-11-04 05:38:56 – 2025-05-23 04:52:50
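The rows below can also be consumed programmatically. The following is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and reading one row; it assumes the dataset is hosted on the Hub (the repository id used here is a placeholder) and that the `content` column stores a Python-literal list of edit records, as rendered in the rows below.

```python
# Minimal sketch, assuming a Hub-hosted dataset and a Python-literal 'content' column.
# "user/openreview-revisions" is a placeholder repository id -- substitute the real path.
import ast

from datasets import load_dataset

ds = load_dataset("user/openreview-revisions", split="train")  # hypothetical repo id

row = ds[0]
print(row["venue"], row["original_openreview_id"], "->",
      row["revision_openreview_id"], "at", row["time"])

# Revisions with no recorded section changes store the literal string "[]".
edits = ast.literal_eval(row["content"])
for edit in edits:
    print(edit["section"])
    print("  original:", (edit.get("original_lines") or "")[:80])
    print("  modified:", (edit.get("modified_lines") or "")[:80])
```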
venue: ICLR.cc/2025/Conference
original_openreview_id: NNlg3eUZ8N
revision_openreview_id: cX02yuzwWI
content:
[{'section': '2 RELATED WORK', 'after_section': '2 RELATED WORK', 'context_after': 'Theoretical Frameworks for GNNs. Despite the empirical success of Graph Neural Networks (GNNs), establishing theories to explain their behaviors is still an evolving field. Recent works have made significant progress in understanding over-smoothing (Li et al., 2018; Zhao & Akoglu, 2019; Oono & Suzuki, 2019; Rong et al., 2020), interpretability (Ying et al., 2019; Luo et al., 2020; Vu & Thai, 2020; Yuan et al., 2020; 2021), expressiveness (Xu et al., 2018; Chen et al., 2019; Maron et al., 2018; Dehmamy et al., 2019; Feng et al., 2022), and generalization (Scarselli et al., 2018; Du ', 'paragraph_idx': 8, 'before_section': '2 RELATED WORK', 'context_before': 'in Bayesian neural networks, providing a probabilistic framework for understanding its effects. Regularization in Graph Neural Networks. Graph Neural Networks (GNNs), while powerful, ', 'modified_lines': 'are prone to overfitting and over-smoothing (Li et al., 2018). Various regularization techniques (Yang et al., 2021; Rong et al., 2019; Fang et al., 2023; Feng et al., 2020) have been proposed to address these issues. DropEdge (Rong et al., 2019) randomly removes edges from the input graph dur- ing training, reducing over-smoothing and improving generalization. Graph diffusion-based meth- ods (Gasteiger et al., 2019) incorporate higher-order neighborhood information to enhance model robustness. Spectral-based approaches (Wu et al., 2019) leverage the graph spectrum to design effec- tive regularization strategies. Empirical studies have shown that traditional dropout can be effective in GNNs (Hamilton et al., 2017), but its interaction with graph structure remains poorly understood. Some works have proposed adaptive dropout strategies for GNNs (Gao & Ji, 2019), but these are primarily heuristic approaches without comprehensive theoretical grounding. 2 Under review as a conference paper at ICLR 2025 ', 'original_lines': 'are prone to overfitting and over-smoothing (Li et al., 2018). Various regularization techniques have been proposed to address these issues. DropEdge (Rong et al., 2020) randomly removes edges from the input graph during training, reducing over-smoothing and improving generalization. Graph diffusion-based methods (Gasteiger et al., 2019) incorporate higher-order neighborhood informa- tion to enhance model robustness. Spectral-based approaches (Wu et al., 2019) leverage the graph spectrum to design effective regularization strategies. Empirical studies have shown that traditional dropout can be effective in GNNs (Hamilton et al., 2017), but its interaction with graph structure re- mains poorly understood. Some works have proposed adaptive dropout strategies for GNNs (Gao & Ji, 2019), but these are primarily heuristic approaches without comprehensive theoretical grounding. 2 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 7}, {'section': '1 − p', 'after_section': None, 'context_after': '(cid:113) p This generalization bound reveals how dropout affects GCNs’ learning capabilities and presents several practical insights: First, network depth plays a crucial role. As signals propagate through ', 'paragraph_idx': 45, 'before_section': '1 − p', 'context_before': 'Ll capturing weight and graph effects, the magnitude of feature activations ∥σ( ˜AH (l−1)W (l))∥F , the dropout rate p through the term ', 'modified_lines': '1−p (The complete proof is in the Appendix.A.3). 
', 'original_lines': '1−p ', 'after_paragraph_idx': None, 'before_paragraph_idx': 45}, {'section': '1 − p', 'after_section': '1 − p', 'context_after': 'd , γ(l) ', 'paragraph_idx': 46, 'before_section': '1 − p', 'context_before': 'where l = 1, 2, ..., L indicates the layer, dmin is the minimum degree in the graph, |E| is the total number of edges, Φ is the standard normal CDF and β(l) d are the BN parameters for dimension ', 'modified_lines': 'd at layer l (The complete proof is in the Appendix.A.4). ', 'original_lines': 'd at layer l. ', 'after_paragraph_idx': 46, 'before_paragraph_idx': 46}]
time: 2024-11-26 14:31:47
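Each edit record in `content` pairs an `original_lines` span with its `modified_lines` replacement, anchored by `context_before` and `context_after`. As a hedged illustration (field names taken from the record above, standard-library `difflib` only), one record can be rendered as a unified diff like this:

```python
# Sketch: render a single edit record as a unified diff.
# Assumes `record` is one element of the parsed 'content' list shown above.
import difflib

def record_to_diff(record: dict) -> str:
    before = (record.get("original_lines") or "").splitlines(keepends=True)
    after = (record.get("modified_lines") or "").splitlines(keepends=True)
    return "".join(difflib.unified_diff(
        before, after,
        fromfile=record.get("section", "original"),
        tofile=record.get("section", "revised"),
    ))

# Usage, continuing the loading sketch above:
# for rec in ast.literal_eval(row["content"]):
#     print(record_to_diff(rec))
```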
venue: ICLR.cc/2025/Conference
original_openreview_id: cX02yuzwWI
revision_openreview_id: UMYtslJUhV
content:
[{'section': '3.1 NOTATIONS AND DEFINITIONS', 'after_section': None, 'context_after': '(C(l) (cid:88) i (t) = t )ij, ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'This matrix C (l) neously. ', 'modified_lines': 'Definition 9 (Effective Degree). The effective degree degeff degeff ', 'original_lines': 'Definition 9 (Effective Degree). The effective degree deff deff ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'after_section': None, 'context_after': '|E| ( ', 'paragraph_idx': 35, 'before_section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'context_before': 'E[E(H (l))] ≤ ', 'modified_lines': 'degmax ', 'original_lines': 'dmax ', 'after_paragraph_idx': None, 'before_paragraph_idx': 34}, {'section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'after_section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'context_after': 'to limitations in bounding certain terms. We will later show that when considering batch normaliza- tion, we can establish the existence of a lower bound, providing a more complete characterization of feature energy behavior. Additionally, we explored how dropout modulates the weight matrices in a ', 'paragraph_idx': 35, 'before_section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'context_before': 'proof is in the Appendix.A.2). The derived bound demonstrates how dropout affects feature energy through the interplay of net- ', 'modified_lines': 'work depth (l), graph structure (through degmax and ˜A), and weight properties (||W (i)||2). Note that this analysis only provides an upper bound; the absence of a lower bound in this derivation is due ', 'original_lines': 'work depth (l), graph structure (through dmax and ˜A), and weight properties (||W (i)||2). Note that this analysis only provides an upper bound; the absence of a lower bound in this derivation is due ', 'after_paragraph_idx': 35, 'before_paragraph_idx': 34}, {'section': '1 − p', 'after_section': '1 − p', 'context_after': 'number of edges, Φ is the standard normal CDF and β(l) d are the BN parameters for dimension d at layer l (The complete proof is in the Appendix.A.4). ', 'paragraph_idx': 46, 'before_section': '1 − p', 'context_before': 'd )2 ', 'modified_lines': 'where l = 1, 2, ..., L indicates the layer, degmin is the minimum degree in the graph, |E| is the total ', 'original_lines': 'where l = 1, 2, ..., L indicates the layer, dmin is the minimum degree in the graph, |E| is the total ', 'after_paragraph_idx': 46, 'before_paragraph_idx': 46}, {'section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'after_section': None, 'context_after': '|E| ∥Z∥2 ', 'paragraph_idx': 35, 'before_section': None, 'context_before': '2 = ', 'modified_lines': 'degmax ', 'original_lines': 'dmax ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'after_section': None, 'context_after': '|E| ( ', 'paragraph_idx': 35, 'before_section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'context_before': 'E[E(H (l))] ≤ ', 'modified_lines': 'degmax ', 'original_lines': 'dmax ', 'after_paragraph_idx': None, 'before_paragraph_idx': 34}, {'section': 'Abstract', 'after_section': None, 'context_after': '(cid:88) (i,j)∈E ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '≥ ', 'modified_lines': '', 'original_lines': 'Then with BN bound: ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
time: 2024-11-26 14:53:38
venue: ICLR.cc/2025/Conference
original_openreview_id: UMYtslJUhV
revision_openreview_id: U6AVndkUMf
content:
[{'section': '1 INTRODUCTION', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'have applied dropout to GNNs, often observing beneficial effects on generalization (Hamilton et al., 2017). ', 'modified_lines': 'While dropout in standard neural networks primarily prevents co-adaptation of features, its inter- action with graph structure creates unique phenomena that current theoretical frameworks fail to capture. These observations prompt a fundamental question: How does dropout uniquely interact ', 'original_lines': 'Despite the widespread adoption of dropout in Graph Convolutional Networks (GCNs), our prelim- inary investigations have revealed intriguing discrepancies between its behavior in GCNs and its well-understood effects in traditional neural networks. These observations prompt a fundamental ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '• Dropout in GCNs creates dimension-specific stochastic sub-graphs, leading to a unique ', 'paragraph_idx': 5, 'before_section': None, 'context_before': '106 107 ', 'modified_lines': 'with the graph structure in GCNs? In this paper, we present a comprehensive theoretical analysis of dropout in the context of GCNs. Our findings reveal that dropout in GCNs interacts with the underlying graph structure in ways that are fundamentally different from its operation in traditional neural networks. Specifically, we demonstrate that: ', 'original_lines': 'question: How does dropout uniquely interact with the graph structure in GCNs? In this paper, we present a comprehensive theoretical analysis of dropout in the context of GCNs. Our findings reveal that dropout in GCNs interacts with the underlying graph structure in ways that are fundamentally different from its operation in traditional neural networks. Specifically, we demonstrate that: ', 'after_paragraph_idx': 5, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '• The generalization bounds for GCNs with dropout exhibit a complex dependence on graph ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'based on their connectivity, resulting in adaptive regularization that considers the topolog- ical importance of nodes in the graph. ', 'modified_lines': '• Dropout plays a crucial role in mitigating the oversmoothing problem rather than co- adaption in GCNs, though its effects are more nuanced than previously thought. ', 'original_lines': '• Dropout plays a crucial role in mitigating the oversmoothing problem in GCNs, though its effects are more nuanced than previously thought. ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Theorem 15 (Dropout and Feature Energy). For a GCN with dropout probability p, the expected feature energy at layer l is bounded by: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Oversmoothing is a well-known issue in GCNs, where node representations become indistinguish- able as the number of layers increases. Our analysis reveals that dropout plays a crucial role in this context, though its effects are more nuanced than previously thought. 
', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'after_section': None, 'context_after': '6 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 ', 'paragraph_idx': 34, 'before_section': '3.4 ROLE OF DROPOUT IN OVERSMOOTHING', 'context_before': 'where E(X) is the energy of the input features and W (i) are the weight matrices (The complete proof is in the Appendix.A.2). ', 'modified_lines': 'The derived bound demonstrates how dropout affects feature energy through the interplay of network depth (l), graph structure (through degmax and ˜A), and weight properties (||W (i)||2 2). Note that this analysis only provides an upper bound; the absence of a lower bound in this derivation is due to limitations in bounding certain terms. We will later show that when considering batch normalization, we can establish the existence of a lower bound, providing a more complete characterization. 3.5 GENERALIZATION BOUNDS WITH GRAPH-SPECIFIC DROPOUT EFFECTS The unique properties of dropout in GCNs, such as the creation of stochastic sub-graphs and degree- dependent effects, influence how these models generalize to unseen data. Our analysis provides novel generalization bounds that explicitly account for these graph-specific dropout effects, offer- ing insights into how dropout interacts with graph structure to influence the model’s generalization capabilities. Theorem 16 (Generalization Bound for L-Layer GCN with Dropout). For an L-layer GCN F with dropout probability p, with probability at least 1 − δ over the training examples, the following generalization bound holds: ED[L(F (x))]−ES[L(F (x))] ≤ O (cid:32)(cid:114) log(1/δ) n L (cid:88) l=1 Lloss · Ll · (cid:114) p 1 − p ∥σ( ˜AH (l−1)W (l))∥F , (cid:33) (7) ', 'original_lines': 'The derived bound demonstrates how dropout affects feature energy through the interplay of net- work depth (l), graph structure (through degmax and ˜A), and weight properties (||W (i)||2). Note that this analysis only provides an upper bound; the absence of a lower bound in this derivation is due to limitations in bounding certain terms. We will later show that when considering batch normaliza- tion, we can establish the existence of a lower bound, providing a more complete characterization of feature energy behavior. Additionally, we explored how dropout modulates the weight matrices in a 2-layer GCN, with a particular focus on its effects on the spectral norm, as detailed in Appendix A.5. Building on this, we further analyze three key metrics to understand how dropout influences feature representations, as depicted in Figure 9. From the left side of Figure 9, the Frobenius norm of fea- tures remains relatively stable whether dropout is applied or not, suggesting that dropout’s effects are not simply uniformly scaling all features. The middle of Figure 9 shows that dropout consis- tently doubles the average pairwise distance between nodes, aiding in maintaining more distinctive node representations. Most notably, the right side of Figure 9 demonstrates that dropout significantly increases feature energy. 
The substantial rise in feature energy, compared to the moderate changes in Frobenius norm and pairwise distances, provides strong evidence that dropout enhances discrim- inative power between connected nodes, explaining its effectiveness in preventing oversmoothing. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 33}, {'section': '1 − p', 'after_section': '1 − p', 'context_after': 'where ED is the expectation over the data distribution. ES is the expectation over the training samples. L is the loss function with Lipschitz constant Lloss. Ll = (cid:81)L 7 ', 'paragraph_idx': 36, 'before_section': None, 'context_before': 'Figure 8: BN feature energy vs dropout rates. ', 'modified_lines': 'i=l+1(∥W (i)∥ · ∥ ˜A∥) is the Lipschitz constant from layer l to output. ∥W (i)∥ is the spectral norm (largest singular value) of the weight matrix at layer i. ∥ ˜A∥ is the spectral norm of the normalized adjacency matrix. n is the number of training samples. p is the dropout probability. (The complete proof is in the Appendix.A.3). This generalization bound reveals how the network’s stability depends on the Lipschitz constant of the loss function, the layer-wise Lipschitz constants capturing weight and graph effects, the magni- tude of feature activations, the dropout rate, and presents several practical insights: First, network depth plays a crucial role. As signals propagate through layers, the effects of weights and graph structure accumulate multiplicatively. This suggests that deeper GCNs might need more careful regularization, as small perturbations could amplify through the network. Second, the graph struc- ture naturally influences how information flows through the network. The way we normalize our adjacency matrix (∥ ˜A∥2 ≈ 1) provides a built-in stabilizing effect. However, graphs with differ- ent connectivity patterns might require different dropout strategies. Third, looking at each layer individually, we see that both network weights and feature magnitudes matter. Some layers might process more important features than others, suggesting that a one-size-fits-all dropout rate might not be optimal. Instead, adapting dropout rates based on layer-specific characteristics could be more effective. Finally, there’s an inherent trade-off in choosing dropout rates. Higher dropout rates provide stronger regularization but also introduce more noise in the training process. Our bound helps explain this balance mathematically, suggesting why moderate dropout rates often work best in practice. 3.6 INTERACTION OF DROPOUT AND BATCH NORMALIZATION IN GCNS While dropout provides a powerful regularization mechanism for GCNs, its degree-dependent nature can lead to uneven regularization across nodes. Batch Normalization (BN) offers a complementary approach that can potentially address this issue and enhance the benefits of dropout. Our analysis reveals how the combination of dropout and BN creates a synergistic regularization effect that is sensitive to both graph structure and feature distributions. Theorem 17 (Layer-wise Energy Lower Bound for GCN). 
For an L-layer Graph Convolutional Network with dropout rate p, batch normalization parameters {β(l) d=1 at each layer l, with probability at least (1 − δ)L, the expected feature energy at each layer l satisfies: d , γ(l) d }dl E(H (l)) ≥ pdegmin 2|E|(1 − p) dl(cid:88) d=1 Φ(β(l) d /γ(l) d ) · (β(l) d )2 where l = 1, 2, ..., L indicates the layer, degmin is the minimum degree in the graph, |E| is the total number of edges, Φ is the standard normal CDF and β(l) d are the BN parameters for dimension d at layer l (The complete proof is in the Appendix.A.4). d , γ(l) Our theoretical bound reveals the synergistic interaction between dropout and batch normalization in GCNs, establishing a refined form of regularization. The energy preservation term p 1−p from dropout ', 'original_lines': 'Figure 9: Effect of dropout on feature F-norm, average pair distance, and feature energy. 3.5 GENERALIZATION BOUNDS WITH GRAPH-SPECIFIC DROPOUT EFFECTS The unique properties of dropout in GCNs, such as the creation of stochastic sub-graphs and degree- dependent effects, influence how these models generalize to unseen data. Our analysis provides novel generalization bounds that explicitly account for these graph-specific dropout effects, offer- ing insights into how dropout interacts with graph structure to influence the model’s generalization capabilities. Theorem 16 (Generalization Bound for L-Layer GCN with Dropout). For an L-layer GCN F with dropout probability p, with probability at least 1 − δ over the training examples, the following generalization bound holds: ED[L(F (x))]−ES[L(F (x))] ≤ O (cid:32)(cid:114) log(1/δ) n L (cid:88) Lloss · Ll · l=1 (cid:114) p 1 − p ∥σ( ˜AH (l−1)W (l))∥F , (cid:33) (7) i=l+1(∥W (i)∥ · ∥ ˜A∥) is the Lipschitz constant from layer l to output. ∥W (i)∥ is the spectral norm (largest singular value) of the weight matrix at layer i. ∥ ˜A∥ is the spectral norm of the normalized adjacency matrix. n is the number of training samples. p is the dropout probability. The bound reflects how the network’s sta- bility depends on the Lipschitz constant of the loss function Lloss, the layer-wise Lipschitz constants Ll capturing weight and graph effects, the magnitude of feature activations ∥σ( ˜AH (l−1)W (l))∥F , the dropout rate p through the term 1−p (The complete proof is in the Appendix.A.3). (cid:113) p This generalization bound reveals how dropout affects GCNs’ learning capabilities and presents several practical insights: First, network depth plays a crucial role. As signals propagate through layers, the effects of weights and graph structure accumulate multiplicatively. This suggests that deeper GCNs might need more careful regularization, as small perturbations could amplify through the network. Second, the graph structure naturally influences how information flows through the network. The way we normalize our adjacency matrix (typically ensuring its norm is at most 1) provides a built-in stabilizing effect. However, graphs with different connectivity patterns might require different dropout strategies. Third, looking at each layer individually, we see that both net- work weights and feature magnitudes matter. Some layers might process more important features ', 'after_paragraph_idx': 36, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'We follow all evaluation protocols suggested by Dwivedi et al. (2023). Peptides-func involves clas- sifying graphs into 10 functional classes, while Peptides-struct regresses 11 structural properties. 
All evaluations followed the protocols in (Dwivedi et al., 2022). ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'For graph-level tasks, we used MNIST, CIFAR10 (Dwivedi et al., 2023), and two Peptides datasets (functional and structural) (Dwivedi et al., 2022). MNIST and CIFAR10 are graph versions of their image classification counterparts, constructed using 8-nearest neighbor graphs of SLIC superpixels. ', 'modified_lines': '', 'original_lines': ' 8 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 Under review as a conference paper at ICLR 2025 Table 1: Node classification results (%). The baseline results are taken from Deng et al. (2024); Wu et al. (2023). The top 1st, 2nd and 3rd results are highlighted. ”dp” denotes dropout. Cora CiteSeer PubMed Computer Photo CS Physics WikiCS ogbn-arxiv ogbn-products # nodes # edges Metric GCNII GPRGNN APPNP tGNN GraphGPS NAGphormer Exphormer GOAT NodeFormer SGFormer Polynormer GCN Dirichlet energy GCN w/o dp Dirichlet energy 2,708 5,278 2,449,029 61,859,140 Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ 169,343 1,166,243 13,752 245,861 7,650 119,081 34,493 247,962 11,701 216,123 19,717 44,324 18,333 81,894 3,327 4,732 85.19 ± 0.26 73.20 ± 0.83 80.32 ± 0.44 91.04 ± 0.41 94.30 ± 0.20 92.22 ± 0.14 95.97 ± 0.11 78.68 ± 0.55 72.74 ± 0.31 83.17 ± 0.78 71.86 ± 0.67 79.75 ± 0.38 89.32 ± 0.29 94.49 ± 0.14 95.13 ± 0.09 96.85 ± 0.08 78.12 ± 0.23 71.10 ± 0.12 83.32 ± 0.55 71.78 ± 0.46 80.14 ± 0.22 90.18 ± 0.17 94.32 ± 0.14 94.49 ± 0.07 96.54 ± 0.07 78.87 ± 0.11 72.34 ± 0.24 82.97 ± 0.68 71.74 ± 0.49 80.67 ± 0.34 83.40 ± 1.33 89.92 ± 0.72 92.85 ± 0.48 96.24 ± 0.24 71.49 ± 1.05 72.88 ± 0.26 82.84 ± 1.03 72.73 ± 1.23 79.94 ± 0.26 91.19 ± 0.54 95.06 ± 0.13 93.93 ± 0.12 97.12 ± 0.19 78.66 ± 0.49 70.97 ± 0.41 82.12 ± 1.18 71.47 ± 1.30 79.73 ± 0.28 91.22 ± 0.14 95.49 ± 0.11 95.75 ± 0.09 97.34 ± 0.03 77.16 ± 0.72 70.13 ± 0.55 82.77 ± 1.38 71.63 ± 1.19 79.46 ± 0.35 91.47 ± 0.17 95.35 ± 0.22 94.93 ± 0.01 96.89 ± 0.09 78.54 ± 0.49 72.44 ± 0.28 83.18 ± 1.27 71.99 ± 1.26 79.13 ± 0.38 90.96 ± 0.90 92.96 ± 1.48 94.21 ± 0.38 96.24 ± 0.24 77.00 ± 0.77 72.41 ± 0.40 82.20 ± 0.90 72.50 ± 1.10 79.90 ± 1.00 86.98 ± 0.62 93.46 ± 0.35 95.64 ± 0.22 96.45 ± 0.28 74.73 ± 0.94 59.90 ± 0.42 84.50 ± 0.80 72.60 ± 0.20 80.30 ± 0.60 92.42 ± 0.66 95.58 ± 0.36 95.71 ± 0.24 96.75 ± 0.26 80.05 ± 0.46 72.63 ± 0.13 83.25 ± 0.93 72.31 ± 0.78 79.24 ± 0.43 93.68 ± 0.21 96.46 ± 0.26 95.53 ± 0.16 97.27 ± 0.08 80.10 ± 0.67 73.46 ± 0.16 85.22 ± 0.66 73.24 ± 0.63 81.08 ± 1.16 93.15 ± 0.34 95.03 ± 0.24 94.41 ± 0.13 97.07 ± 0.04 80.14 ± 0.52 73.13 ± 0.27 3.765 735.876 20.241 7.403 0.437 0.452 8.020 8.966 8.021 83.18 ± 1.22 70.48 ± 0.45 79.40 ± 1.02 90.60 ± 0.84 94.10 ± 0.15 94.30 ± 0.22 96.92 ± 0.05 77.61 ± 1.34 72.05 ± 0.23 1.793 264.230 0.114 2.951 0.170 0.592 3.980 0.318 1.231 79.42 ± 0.36 79.76 ± 0.59 78.84 ± 0.09 81.79 ± 0.54 OOM 73.55 ± 0.21 OOM 82.00 ± 0.43 73.96 ± 0.30 81.54 ± 0.43 83.82 ± 0.11 81.87 ± 0.41 7.771 77.50 ± 0.37 1.745 GCN w/o BN 84.97 ± 0.73 72.97 ± 0.86 80.94 ± 0.87 92.39 ± 0.18 94.38 ± 0.13 93.46 ± 0.24 96.76 ± 0.06 79.00 ± 0.48 71.93 ± 0.18 79.37 ± 0.42 84.14 ± 0.63 71.62 ± 0.29 77.86 ± 0.79 92.65 ± 0.21 95.71 ± 0.20 95.90 ± 0.09 97.20 ± 0.10 80.29 ± 0.97 72.72 ± 0.13 SAGE 83.06 ± 0.80 69.68 ± 0.82 76.40 ± 1.48 90.17 ± 0.60 94.90 ± 0.17 95.80 ± 0.08 
97.06 ± 0.06 78.84 ± 1.17 71.37 ± 0.31 SAGE w/o dp SAGE w/o BN 83.89 ± 0.67 71.39 ± 0.75 77.26 ± 1.02 92.54 ± 0.24 95.51 ± 0.23 94.87 ± 0.15 97.03 ± 0.03 79.50 ± 0.93 71.52 ± 0.17 GAT GAT w/o dp GAT w/o BN 83.92 ± 1.29 72.00 ± 0.91 80.48 ± 0.99 93.47 ± 0.27 95.53 ± 0.16 94.49 ± 0.17 96.73 ± 0.10 80.21 ± 0.68 72.83 ± 0.19 82.58 ± 1.47 71.08 ± 0.42 79.28 ± 0.58 92.94 ± 0.30 93.88 ± 0.16 94.30 ± 0.14 96.42 ± 0.08 78.67 ± 0.40 71.52 ± 0.41 83.76 ± 1.32 71.82 ± 0.83 80.43 ± 1.03 92.16 ± 0.26 95.05 ± 0.49 93.33 ± 0.26 96.57 ± 0.20 79.49 ± 0.62 71.68 ± 0.36 82.69 ± 0.28 79.82 ± 0.22 80.91 ± 0.35 80.05 ± 0.34 77.87 ± 0.25 78.21 ± 0.32 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': None, 'context_after': '4.2 NODE-LEVEL CLASSIFICATION RESULTS The node-level classification results in Table 1 not only align with our theoretical predictions but also showcase the remarkable effectiveness of dropout. Notably, GCN with dropout and batch nor- malization outperforms state-of-the-art methods on several benchmarks, including Cora, CiteSeer, and PubMed. This superior performance underscores the practical significance of our theoretical insights. Consistently across all datasets, models employing dropout outperform their counterparts ', 'paragraph_idx': 45, 'before_section': None, 'context_before': 'Experimental Setup. We implemented all models using the PyTorch Geometric library (Fey & Lenssen, 2019). The experiments are conducted on a single workstation with 8 RTX 3090 GPUs. ', 'modified_lines': 'For node-level tasks, we adhered to the training protocols specified in (Deng et al., 2024; Luo et al., 2024b;a), employing BN and adjusting the dropout rate between 0.1 and 0.7. In graph-level tasks, we followed the experimental settings established by T¨onshoff et al. (2023), utilizing BN with a consistent dropout rate of 0.2. All experiments were run with 5 different random seeds, and we report the mean accuracy and standard deviation. To ensure generalizability, we used Dirichlet energy (Cai & Wang, 2020) as an oversmoothing metric, which is proportional to our feature energy. 8 Under review as a conference paper at ICLR 2025 Table 1: Node classification results (%). The baseline results are taken from Deng et al. (2024); Wu et al. (2023). The top 1st, 2nd and 3rd results are highlighted. ”dp” denotes dropout. 
Cora CiteSeer PubMed Computer Photo CS Physics WikiCS ogbn-arxiv ogbn-products # nodes # edges Metric GCNII GPRGNN APPNP tGNN GraphGPS NAGphormer Exphormer GOAT NodeFormer SGFormer Polynormer GCN Dirichlet energy GCN w/o dp Dirichlet energy 2,708 5,278 2,449,029 61,859,140 Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ Accuracy↑ 169,343 1,166,243 13,752 245,861 7,650 119,081 34,493 247,962 11,701 216,123 19,717 44,324 18,333 81,894 3,327 4,732 85.19 ± 0.26 73.20 ± 0.83 80.32 ± 0.44 91.04 ± 0.41 94.30 ± 0.20 92.22 ± 0.14 95.97 ± 0.11 78.68 ± 0.55 72.74 ± 0.31 83.17 ± 0.78 71.86 ± 0.67 79.75 ± 0.38 89.32 ± 0.29 94.49 ± 0.14 95.13 ± 0.09 96.85 ± 0.08 78.12 ± 0.23 71.10 ± 0.12 83.32 ± 0.55 71.78 ± 0.46 80.14 ± 0.22 90.18 ± 0.17 94.32 ± 0.14 94.49 ± 0.07 96.54 ± 0.07 78.87 ± 0.11 72.34 ± 0.24 82.97 ± 0.68 71.74 ± 0.49 80.67 ± 0.34 83.40 ± 1.33 89.92 ± 0.72 92.85 ± 0.48 96.24 ± 0.24 71.49 ± 1.05 72.88 ± 0.26 82.84 ± 1.03 72.73 ± 1.23 79.94 ± 0.26 91.19 ± 0.54 95.06 ± 0.13 93.93 ± 0.12 97.12 ± 0.19 78.66 ± 0.49 70.97 ± 0.41 82.12 ± 1.18 71.47 ± 1.30 79.73 ± 0.28 91.22 ± 0.14 95.49 ± 0.11 95.75 ± 0.09 97.34 ± 0.03 77.16 ± 0.72 70.13 ± 0.55 82.77 ± 1.38 71.63 ± 1.19 79.46 ± 0.35 91.47 ± 0.17 95.35 ± 0.22 94.93 ± 0.01 96.89 ± 0.09 78.54 ± 0.49 72.44 ± 0.28 83.18 ± 1.27 71.99 ± 1.26 79.13 ± 0.38 90.96 ± 0.90 92.96 ± 1.48 94.21 ± 0.38 96.24 ± 0.24 77.00 ± 0.77 72.41 ± 0.40 82.20 ± 0.90 72.50 ± 1.10 79.90 ± 1.00 86.98 ± 0.62 93.46 ± 0.35 95.64 ± 0.22 96.45 ± 0.28 74.73 ± 0.94 59.90 ± 0.42 84.50 ± 0.80 72.60 ± 0.20 80.30 ± 0.60 92.42 ± 0.66 95.58 ± 0.36 95.71 ± 0.24 96.75 ± 0.26 80.05 ± 0.46 72.63 ± 0.13 83.25 ± 0.93 72.31 ± 0.78 79.24 ± 0.43 93.68 ± 0.21 96.46 ± 0.26 95.53 ± 0.16 97.27 ± 0.08 80.10 ± 0.67 73.46 ± 0.16 85.22 ± 0.66 73.24 ± 0.63 81.08 ± 1.16 93.15 ± 0.34 95.03 ± 0.24 94.41 ± 0.13 97.07 ± 0.04 80.14 ± 0.52 73.13 ± 0.27 3.765 735.876 20.241 7.403 0.437 0.452 8.020 8.966 8.021 83.18 ± 1.22 70.48 ± 0.45 79.40 ± 1.02 90.60 ± 0.84 94.10 ± 0.15 94.30 ± 0.22 96.92 ± 0.05 77.61 ± 1.34 72.05 ± 0.23 1.793 264.230 0.114 2.951 0.170 0.592 3.980 0.318 1.231 79.42 ± 0.36 79.76 ± 0.59 78.84 ± 0.09 81.79 ± 0.54 OOM 73.55 ± 0.21 OOM 82.00 ± 0.43 73.96 ± 0.30 81.54 ± 0.43 83.82 ± 0.11 81.87 ± 0.41 7.771 77.50 ± 0.37 1.745 GCN w/o BN 84.97 ± 0.73 72.97 ± 0.86 80.94 ± 0.87 92.39 ± 0.18 94.38 ± 0.13 93.46 ± 0.24 96.76 ± 0.06 79.00 ± 0.48 71.93 ± 0.18 79.37 ± 0.42 84.14 ± 0.63 71.62 ± 0.29 77.86 ± 0.79 92.65 ± 0.21 95.71 ± 0.20 95.90 ± 0.09 97.20 ± 0.10 80.29 ± 0.97 72.72 ± 0.13 SAGE 83.06 ± 0.80 69.68 ± 0.82 76.40 ± 1.48 90.17 ± 0.60 94.90 ± 0.17 95.80 ± 0.08 97.06 ± 0.06 78.84 ± 1.17 71.37 ± 0.31 SAGE w/o dp SAGE w/o BN 83.89 ± 0.67 71.39 ± 0.75 77.26 ± 1.02 92.54 ± 0.24 95.51 ± 0.23 94.87 ± 0.15 97.03 ± 0.03 79.50 ± 0.93 71.52 ± 0.17 GAT GAT w/o dp GAT w/o BN 83.92 ± 1.29 72.00 ± 0.91 80.48 ± 0.99 93.47 ± 0.27 95.53 ± 0.16 94.49 ± 0.17 96.73 ± 0.10 80.21 ± 0.68 72.83 ± 0.19 82.58 ± 1.47 71.08 ± 0.42 79.28 ± 0.58 92.94 ± 0.30 93.88 ± 0.16 94.30 ± 0.14 96.42 ± 0.08 78.67 ± 0.40 71.52 ± 0.41 83.76 ± 1.32 71.82 ± 0.83 80.43 ± 1.03 92.16 ± 0.26 95.05 ± 0.49 93.33 ± 0.26 96.57 ± 0.20 79.49 ± 0.62 71.68 ± 0.36 82.69 ± 0.28 79.82 ± 0.22 80.91 ± 0.35 80.05 ± 0.34 77.87 ± 0.25 78.21 ± 0.32 ', 'original_lines': 'For node-level tasks, we adhered to the training protocols specified in (Deng et al., 2024), employing BN and adjusting the dropout rate between 0.1 and 0.7. In graph-level tasks, we followed the experimental settings established by T¨onshoff et al. 
(2023), utilizing BN with a consistent dropout rate of 0.2. All experiments were run with 5 different random seeds, and we report the mean accuracy and standard deviation. To ensure generalizability, we used Dirichlet energy (Cai & Wang, 2020) as an oversmoothing metric, which is proportional to our feature energy (see appendix). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 EXPERIMENTS', 'after_section': '4 EXPERIMENTS', 'context_after': '9 ', 'paragraph_idx': 48, 'before_section': '4 EXPERIMENTS', 'context_before': 'from its effects in standard neural networks. The varying levels of improvement observed across different datasets support our theory of degree-dependent dropout effects that adapt to the graph structure. Furthermore, the consistent increase in Dirichlet energy when using dropout provides em- ', 'modified_lines': 'pirical evidence for our theoretical insight into dropout’s crucial role in mitigating oversmoothing in GCNs, particularly evident in larger graphs. The complementary roles of dropout and batch normal- ization are demonstrated by the performance drop when either is removed, supporting our analysis of their synergistic interaction in GCNs. 4.3 GRAPH-LEVEL CLASSIFICATION RESULTS Our graph-level classification results, presented in Tables 2 and 3, further validate the broad applica- bility of our theoretical framework. First, compared to recent SOTA models, we observe that simply tuning dropout enables GNNs to achieve SOTA performance on three datasets and is competitive with the best single-model results on the remaining dataset. Second, the significant accuracy im- provements on graph-level tasks such as Peptides-func and CIFAR10 highlight that our insights ex- tend beyond node classification. The varying degrees of improvement across different graph datasets are consistent with our theory that dropout provides adaptive regularization tailored to graph proper- ties. Third, the consistent increase in Dirichlet energy when using dropout supports our theoretical analysis of dropout’s role in preserving feature diversity. These results robustly validate our theory, showing that dropout in GCNs produces dimension- specific stochastic sub-graphs, has degree-dependent effects, mitigates oversmoothing, and offers topology-aware regularization. Combined with batch normalization, dropout enhances GCN per- formance on graph-level tasks, affirming the relevance and utility of our framework and suggesting directions for improving GNN architectures. 4.4 MITIGATING OVERSMOOTHING OR CO-ADAPTION In traditional neural networks, dropout is known to prevent co-adaptation of neurons. However, could dropout serve a different primary purpose in GCNs? Our theoretical framework proposes that dropout primarily mitigates oversmoothing in GCNs rather than preventing co-adaptation. To validate this, we explored how dropout modulates the weight matrices in a 2-layer GCN, with a ', 'original_lines': '', 'after_paragraph_idx': 48, 'before_paragraph_idx': 48}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Table 2: Graph classification results on two pep- tide datasets from LRGB (Dwivedi et al., 2022). 
', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '13 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Repre- sentations, 2020. URL https://openreview.net/forum?id=Hkx1qkrKPr. ', 'modified_lines': '', 'original_lines': ' Franco Scarselli, Ah Chung Tsoi, and Markus Hagenbuchner. The vapnik–chervonenkis dimension of graph and recursive neural networks. Neural Networks, 108:248–259, 2018. Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93–93, 2008. Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan G¨unnemann. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868, 2018. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A APPENDIX A.1 PROOF OF THEOREM 11 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'arXiv:1909.12223, 2019. ', 'modified_lines': '', 'original_lines': '14 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where zi = σ((cid:80) Step 3: Since degi ≤ degmax for all i: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Tr(ZT AZ) ', 'modified_lines': '', 'original_lines': '15 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'where we used Jensen’s inequality in the last step. Step 5: Layer Aggregation. The total expected change in loss: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '∥σ( ˜AH (l−1)W (l))∥F ', 'modified_lines': '', 'original_lines': '16 (9) (10) (11) (12) (13) (14) (15) 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Step 6: Concentration Bound. Let f (S) = ED[L(F (x))] − ES[L(F (x))] where S is training set. 
When changing one example in S to S′, the maximum change is: ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '∥σ( ˜AH (l−1)W (l))∥F ', 'modified_lines': '', 'original_lines': '(16) (17) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'n ln(1/δ) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ϵ2 = ', 'modified_lines': '', 'original_lines': '(cid:32)(cid:114) ϵ = O ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '864 865 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '(23) (24) ', 'modified_lines': '', 'original_lines': ' Therefore, with probability at least 1 − δ: ED[L(F (x))] − ES[L(F (x))] ≤ O (cid:32)(cid:114) ln(1/δ) n (cid:33) L (cid:88) l=1 Lloss · Ll · (cid:114) p 1 − p ∥σ( ˜AH (l−1)W (l))∥F (25) A.4 PROOF OF THEOREM 17 Proof. Step 1: Start with feature energy and node representation: E(H (l)) = 1 2|E| (cid:88) (i,j)∈E ∥h(l) i − h(l) j ∥2 h(l) i = 1 1 − p M (l) i ⊙ z(l) i 17 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '918 919 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Under review as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': 'where z(l) i ∈ Rdl and z(l) ˜Aikh(l−1) Step 2: For the BN output before ReLU at layer l, for each feature dimension d ∈ {1, ..., dl}: i = σ(BN((cid:80) k W (l))) k (Y (l)):,d = BN(( ˜AH (l−1)W (l)):,d) = γ(l) d ( ˜AH (l−1)W (l)):,d − µ(l) d (cid:113) (σ(l) d )2 + ϵ + β(l) d Step 3: For ReLU activation z = max(0, y) at layer l, for each dimension d: E[(z(l) d )2] ≥ Φ(β(l) d /γ(l) d ) · (β(l) d )2 where Φ is the standard normal CDF. Step 4: Using the BN-induced bound: ∥z(l) i ∥2 = ≥ dl(cid:88) d=1 dl(cid:88) d=1 (z(l) i )2 d Φ(β(l) d /γ(l) d ) · (β(l) d )2 > 0 Step 5: For feature energy with merged terms: E(H (l)) = ≥ = = = ≥ (cid:88) (i,j)∈E (cid:88) (i,j)∈E (cid:88) 1 2|E| 1 2|E| 1 2|E| (i,j)∈E 1 2|E| p 1 − p [ [ 1 1 − p 1 1 − p ( 1 1 − p (∥z(l) i ∥2 + ∥z(l) j ∥2) − 2(z(l) i )T z(l) j ] (∥z(l) i ∥2 + ∥z(l) j ∥2) − (∥z(l) i ∥2 + ∥z(l) j ∥2)] − 1)(∥z(l) i ∥2 + ∥z(l) j ∥2) (cid:88) (∥z(l) i ∥2 + ∥z(l) j ∥2) (i,j)∈E p 1 − p 1 2|E| (cid:88) i degi∥z(l) i ∥2 pdegmin 1 − p 1 2|E| ∥Z(l)∥2 F Then with BN bound: E(H (l)) ≥ pdegmin 1 − p 1 2|E| dl(cid:88) d=1 Φ(β(l) d /γ(l) d ) · (β(l) d )2 A.5 EFFECT OF DROPOUT ON MAX SINGULAR VALUES OF THE WEIGHT MATRICES We analyze why dropout leads to larger weight matrices in terms of spectral norm ∥W ∥2. Consider the gradient update for weights W 2 between layers: ∂L ∂W 2 = ( ˜AH 1 drop)⊤ × ∂L ∂H 2 = ( ˜A(H 1 ⊙ M 1)/(1 − p))⊤ × ∂L ∂H 2 where p is the dropout rate and M 1 is the dropout mask. This leads to weight updates: ∆W 2 = −η( ˜AH 1 drop)⊤ × ∂L ∂H 2 = −η( ˜A(H 1 ⊙ M 1)/(1 − p))⊤ × ∂L ∂H 2 (26) (27) 18 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
time: 2024-11-27 13:01:39
venue: ICLR.cc/2025/Conference
original_openreview_id: U6AVndkUMf
revision_openreview_id: P6MvxEzOug
content: []
time: 2025-02-13 10:56:30
venue: ICLR.cc/2025/Conference
original_openreview_id: P6MvxEzOug
revision_openreview_id: hkDXGVP2rZ
content: []
time: 2025-03-01 07:02:13
venue: ICLR.cc/2025/Conference
original_openreview_id: hkDXGVP2rZ
revision_openreview_id: 9hv4yMVvK1
content:
[{'section': 'Abstract', 'after_section': '1 Introduction', 'context_after': 'The remarkable success of deep neural networks across various domains has been accompanied by the persistent challenge of overfitting, where models perform well on training data but fail to generalize to unseen examples. This issue has spurred the development of numerous regularization techniques, among which dropout has emerged as a particularly effective and widely adopted ap- extensive theoretical analysis, with various perspectives offered to explain its regularization effects. Some researchers have interpreted dropout as a form of model averaging (Baldi & Sadowski, 2013), ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'enhances overall regularization. Our theoretical findings are validated through ex- tensive experiments on both node-level and graph-level tasks across 14 datasets. Notably, GCN with dropout and batch normalization outperforms state-of-the-art ', 'modified_lines': 'methods on several benchmarks, demonstrating the practical impact of our theo- retical insights. 1 Introduction proach (LeCun et al., 2015). Introduced by Srivastava et al. (2014), dropout addresses overfitting by randomly ”dropping out” a proportion of neurons during training, effectively creating an ensem- ble of subnetworks. This technique has proven highly successful in improving generalization and has become a standard tool in the deep learning toolkit. The effectiveness of dropout has prompted ', 'original_lines': 'methods on several benchmarks. This work bridges a critical gap in the theoret- ical understanding of regularization in GCNs and provides practical insights for designing more effective graph learning algorithms. 1 INTRODUCTION proach LeCun et al. (2015). Introduced by Srivastava et al. (2014), dropout addresses overfitting by randomly ”dropping out” a proportion of neurons during training, effectively creating an ensemble of subnetworks. This technique has proven highly successful in improving generalization and has become a standard tool in the deep learning toolkit. The effectiveness of dropout has prompted ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 2}, {'section': '1 Introduction', 'after_section': '1 Introduction', 'context_after': '• Dropout in GCNs creates dimension-specific stochastic sub-graphs, leading to a unique ', 'paragraph_idx': 5, 'before_section': '1 Introduction', 'context_before': 'have applied dropout to GNNs, often observing beneficial effects on generalization (Hamilton et al., 2017). ', 'modified_lines': '∗The corresponding author. 1 Published as a conference paper at ICLR 2025 While dropout was originally designed to prevent co-adaptation of features in standard neural net- works, our analysis reveals that its primary mechanism in GCNs is fundamentally different. We demonstrate that dropout’s main contribution in GCNs is mitigating oversmoothing by maintaining feature diversity across nodes, rather than preventing co-adaptation as in standard neural networks. This finding represents a significant shift in our understanding of how regularization operates in graph neural networks. Specifically, we demonstrate that: ', 'original_lines': 'While dropout in standard neural networks primarily prevents co-adaptation of features, its inter- action with graph structure creates unique phenomena that current theoretical frameworks fail to capture. 
These observations prompt a fundamental question: How does dropout uniquely interact 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 with the graph structure in GCNs? In this paper, we present a comprehensive theoretical analysis of dropout in the context of GCNs. Our findings reveal that dropout in GCNs interacts with the underlying graph structure in ways that are fundamentally different from its operation in traditional neural networks. Specifically, we demonstrate that: ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 4}, {'section': '1 Introduction', 'after_section': '1 Introduction', 'context_after': 'ing training, reducing over-smoothing and improving generalization. Graph diffusion-based meth- ods (Gasteiger et al., 2019) incorporate higher-order neighborhood information to enhance model robustness. Spectral-based approaches (Wu et al., 2019) leverage the graph spectrum to design effec- ', 'paragraph_idx': 3, 'before_section': None, 'context_before': 'Regularization in Graph Neural Networks. Graph Neural Networks (GNNs), while powerful, are prone to overfitting and over-smoothing (Li et al., 2018). Various regularization techniques (Yang ', 'modified_lines': 'et al., 2021; Rong et al., 2020; Fang et al., 2023; Feng et al., 2020) have been proposed to address these issues. DropEdge (Rong et al., 2020) randomly removes edges from the input graph dur- ', 'original_lines': 'et al., 2021; Rong et al., 2019; Fang et al., 2023; Feng et al., 2020) have been proposed to address these issues. DropEdge (Rong et al., 2019) randomly removes edges from the input graph dur- ', 'after_paragraph_idx': 3, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'd )2 where l = 1, 2, ..., L indicates the layer, degmin is the minimum degree in the graph, |E| is the total ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'd /γ(l) d ) · (β(l) ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 Experiments', 'after_section': '4 Experiments', 'context_after': 'bility of our theoretical framework. First, compared to recent SOTA models, we observe that simply tuning dropout enables GNNs to achieve SOTA performance on three datasets and is competitive with the best single-model results on the remaining dataset. Second, the significant accuracy im- ', 'paragraph_idx': 29, 'before_section': '4 Experiments', 'context_before': 'ization are demonstrated by the performance drop when either is removed, supporting our analysis of their synergistic interaction in GCNs. 
', 'modified_lines': '4.3 Graph-level Classification Results Our graph-level classification results, presented in Tables 3 and 4, further validate the broad applica- ', 'original_lines': '4.3 GRAPH-LEVEL CLASSIFICATION RESULTS Our graph-level classification results, presented in Tables 2 and 3, further validate the broad applica- ', 'after_paragraph_idx': 29, 'before_paragraph_idx': 29}, {'section': '4 Experiments', 'after_section': None, 'context_after': '9 tide datasets from LRGB (Dwivedi et al., 2022). age datasets from (Dwivedi et al., 2023). Model ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'formance on graph-level tasks, affirming the relevance and utility of our framework and suggesting directions for improving GNN architectures. ', 'modified_lines': '4.4 Mitigating Oversmoothing Rather Than Co-adaptation In traditional neural networks, dropout primarily prevents co-adaptation of neurons. However, our theoretical framework suggests that dropout in GCNs serves a fundamentally different purpose: mit- igating oversmoothing rather than preventing co-adaptation. To validate this hypothesis, we exam- ined how dropout affects weight matrices in a 2-layer GCN, focusing specifically on spectral norm changes (see Appendix A.5). We further analyzed three key metrics to quantify dropout’s influence on feature representations, as shown in Figure 4. The left panel of Figure 4 demonstrates that the Published as a conference paper at ICLR 2025 Table 3: Graph classification results on two pep- Table 4: Graph classification results on two im- ', 'original_lines': '4.4 MITIGATING OVERSMOOTHING OR CO-ADAPTION In traditional neural networks, dropout is known to prevent co-adaptation of neurons. However, could dropout serve a different primary purpose in GCNs? Our theoretical framework proposes that dropout primarily mitigates oversmoothing in GCNs rather than preventing co-adaptation. To validate this, we explored how dropout modulates the weight matrices in a 2-layer GCN, with a 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 Under review as a conference paper at ICLR 2025 Table 2: Graph classification results on two pep- Table 3: Graph classification results on two im- ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'GCN w/o dp Dirichlet energy ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'GCN Dirichlet energy ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Pascal Esser, Leena Chennuru Vankadara, and Debarghya Ghoshdastidar. Learning theory can (sometimes) explain generalisation in graph neural networks. Advances in Neural Information Processing Systems, 34:27043–27056, 2021. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Xavier Bresson. Benchmarking graph neural networks. Journal of Machine Learning Research, 24(43):1–48, 2023. 
', 'modified_lines': '', 'original_lines': '11 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 Conclusions', 'after_section': '5 Conclusions', 'context_after': 'In Andreas Krause, Emma Brun- skill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), Pro- ceedings of the 40th International Conference on Machine Learning, volume 202 of Proceed- ings of Machine Learning Research, pp. 17375–17390. PMLR, 23–29 Jul 2023. URL https: //proceedings.mlr.press/v202/kong23a.html. Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent L´etourneau, and Prudencio Tossou. Re- thinking graph transformers with spectral attention. Advances in Neural Information Processing Systems, 34:21618–21629, 2021. ', 'paragraph_idx': 80, 'before_section': '5 Conclusions', 'context_before': '//openreview.net/forum?id=SJU4ayYgl. Kezhi Kong, Jiuhai Chen, John Kirchenbauer, Renkun Ni, C. Bayan Bruss, and Tom Gold- ', 'modified_lines': 'stein. GOAT: A global transformer on large-scale graphs. ', 'original_lines': 'stein. GOAT: A global transformer on large-scale graphs. 12 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 Under review as a conference paper at ICLR 2025 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 ', 'after_paragraph_idx': 80, 'before_paragraph_idx': 79}, {'section': '5 Conclusions', 'after_section': None, 'context_after': 'xkljKdGe4E. Shaogao Lv. Generalization bounds for graph convolutional neural networks via rademacher com- ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'for learning on graphs. arXiv preprint arXiv:2405.16435, 2024a. Yuankai Luo, Lei Shi, and Xiao-Ming Wu. Classic GNNs are strong baselines: Reassessing GNNs ', 'modified_lines': 'for node classification. In The Thirty-eight Conference on Neural Information Processing Sys- tems Datasets and Benchmarks Track, 2024b. URL https://openreview.net/forum?id= ', 'original_lines': 'for node classification. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024b. URL https://openreview.net/forum?id= ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 Conclusions', 'after_section': None, 'context_after': 'Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, and Shuiwang Ji. On explainability of graph neural networks via subgraph explorations. In International conference on machine learning, pp. 12241– 12252. PMLR, 2021. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. Fast learning of graph neural networks with guaranteed generalizability: one-hidden-layer case. In International Conference on Machine Learning, pp. 11268–11277. PMLR, 2020. ', 'paragraph_idx': 118, 'before_section': '5 Conclusions', 'context_before': 'graph neural networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pp. 430–438, 2020. ', 'modified_lines': 'Rui-Ray Zhang and Massih-Reza Amini. Generalization bounds for learning under graph- ISSN 0885-6125. doi: dependence: a survey. Mach. Learn., 113(7):3929–3959, April 2024. 10.1007/s10994-024-06536-9. 
URL https://doi.org/10.1007/s10994-024-06536-9. ', 'original_lines': '14 Under review as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 117}, {'section': '5 Conclusions', 'after_section': '5 Conclusions', 'context_after': 'Proof. Let’s approach this proof: t t Step 3: For a sub-graph to be identical to the original graph, all edges must be present. The proba- bility of this is: ((1 − p)2)|E| = (1 − p)2|E|. 1 − (1 − p)2|E|. t (cid:40) 1 0 t otherwise . Step 6: We have: Step 8: The total number of unique sub-graphs is (cid:80)dl t | j = 1, . . . , dl|] = E[ ', 'paragraph_idx': 120, 'before_section': None, 'context_before': 'arXiv:1909.12223, 2019. ', 'modified_lines': 'A Appendix A.1 Proof of Theorem 1 Step 1: For a single feature j, the probability that an edge is present in the sub-graph E(l, j) as both endpoints need to retain this feature. Step 2: The probability that an edge is not present in E(l, j) is 1 − (1 − p)2 = p(2 − p). is (1− p)2, Step 4: Therefore, the probability that E(l, j) is different from the original graph (i.e., unique) is Step 5: Define an indicator random variable X j for each feature j: X j = is unique if E(l, j) P(X j = 1) = 1 − (1 − p)2|E|][P(X j = 0) = (1 − p)2|E|. Step 7: The expected value of X j is: E[X j] = 1 · P(X j = 1) + 0 · P(X j = 0) = 1 − (1 − p)2|E|. j=1 X j. By the linearity of expectation: E[|E(l, j) ', 'original_lines': 'A APPENDIX A.1 PROOF OF THEOREM 11 Step 1: For a single feature j, the probability that an edge is present in the sub-graph G(l,j) (1 − p)2, as both endpoints need to retain this feature. is Step 2: The probability that an edge is not present in G(l,j) is 1 − (1 − p)2 = p(2 − p). Step 4: Therefore, the probability that G(l,j) is different from the original graph (i.e., unique) is Step 5: Define an indicator random variable Xj for each feature j: Xj = if G(l,j) is unique P (Xj = 1) = 1 − (1 − p)2|E|][P (Xj = 0) = (1 − p)2|E|. Step 7: The expected value of Xj is: E[Xj] = 1 · P (Xj = 1) + 0 · P (Xj = 0) = 1 − (1 − p)2|E|. j=1 Xj. By the linearity of expectation: E[|G(l,j) ', 'after_paragraph_idx': 120, 'before_paragraph_idx': None}]
time: 2025-03-01 14:36:20
venue: ICLR.cc/2025/Conference
original_openreview_id: 9hv4yMVvK1
revision_openreview_id: gXXJcVisjV
content:
[{'section': 'Abstract', 'after_section': None, 'context_after': '2 Related Work ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ical understanding of regularization in GCNs and paves the way for more principled approaches to leveraging dropout in graph representation learning. Furthermore, we validate our theoretical find- ings through extensive experiments, demonstrating that GCNs incorporating our insights on dropout ', 'modified_lines': 'and batch normalization outperform several state-of-the-art methods on benchmark datasets. This practical success underscores the importance of our theoretical contributions and their potential to advance the field of graph representation learning. ', 'original_lines': 'and batch normalization outperform several state-of-the-art methods on benchmark datasets, in- cluding Cora, CiteSeer, and PubMed. This practical success underscores the importance of our theoretical contributions and their potential to advance the field of graph representation learning. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '4 Experiments', 'after_section': None, 'context_after': '4.2 Node-level Classification Results ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'Figure 4: Effect of dropout on feature F-norm, average pair distance, and Dirichlet energy. 2024b;a), employing BN and adjusting the dropout rate between 0.1 and 0.7. In graph-level tasks, ', 'modified_lines': 'we adopted the settings from (T¨onshoff et al., 2023; Luo et al., 2025), utilizing BN with a consistent dropout rate of 0.2. All experiments were run with 5 different random seeds, and we report the mean accuracy and standard deviation. To ensure generalizability, we used Dirichlet energy (Cai & Wang, 2020) as an oversmoothing metric, which is proportional to our feature energy. ', 'original_lines': 'we followed the experimental settings established by T¨onshoff et al. (2023), utilizing BN with a consistent dropout rate of 0.2. All experiments were run with 5 different random seeds, and we report the mean accuracy and standard deviation. To ensure generalizability, we used Dirichlet energy (Cai & Wang, 2020) as an oversmoothing metric, which is proportional to our feature energy. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-02 07:51:13
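The revision above reports using Dirichlet energy as an oversmoothing metric. As a hedged illustration (not the paper's code), the sketch below computes a common unnormalized variant, trace(H^T L H) with L the combinatorial graph Laplacian; the degree-normalized definition of Cai & Wang (2020) adds per-node scaling, so treat this as a stand-in rather than the exact quantity used in that row.

import numpy as np

def dirichlet_energy(H, edges):
    # Sum over undirected edges of ||h_i - h_j||^2, which equals trace(H^T L H)
    # for the combinatorial Laplacian L when each edge is listed once.
    edges = np.asarray(edges)
    diff = H[edges[:, 0]] - H[edges[:, 1]]
    return float((diff ** 2).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 8))               # toy node features
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle
    print("energy of random features  :", dirichlet_energy(H, edges))
    print("energy of constant features:", dirichlet_energy(np.ones((4, 8)), edges))  # oversmoothed limit: 0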
ICLR.cc/2025/Conference
gXXJcVisjV
2iIUWdNUNh
[{'section': '1 Introduction', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 5, 'before_section': '1 Introduction', 'context_before': 'have applied dropout to GNNs, often observing beneficial effects on generalization (Hamilton et al., 2017). ', 'modified_lines': '∗Hao Zhu is the corresponding author and led the writing of the paper. ', 'original_lines': '∗The corresponding author. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 4}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Hao Zhu was supported by the Science Digital Program in Commonwealth Scientific and Indus- trial Research Organization (CSIRO). Yuankai Luo received support from National Key R&D Pro- gram of China (2021YFB3500700), NSFC Grant 62172026, National Social Science Fund of China ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Acknowledgments ', 'modified_lines': '', 'original_lines': 'Hao Zhu led the writing of the paper and served as the corresponding author. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Complex & Critical Software Environment (SKLCCSE), and the HK PolyU Grant P0051029. References ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 Conclusions', 'after_section': None, 'context_after': '11 Published as a conference paper at ICLR 2025 Vijay Prakash Dwivedi, Ladislav Ramp´aˇsek, Mikhail Galkin, Ali Parviz, Guy Wolf, Anh Tuan Luu, and Dominique Beaini. Long range graph benchmark. arXiv preprint arXiv:2206.08164, 2022. ', 'paragraph_idx': 56, 'before_section': '5 Conclusions', 'context_before': 'Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. Advances in neural information processing systems, 32, 2019. ', 'modified_lines': 'Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020. ', 'original_lines': ' Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. arXiv preprint arXiv:2012.09699, 2020. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 55}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Johannes Gasteiger, Stefan Weißenberger, and Stephan G¨unnemann. Diffusion improves graph learning. Advances in neural information processing systems, 32, 2019. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Johannes Gasteiger, Aleksandar Bojchevski, and Stephan G¨unnemann. Predict then propagate: Graph neural networks meet personalized pagerank. arXiv preprint arXiv:1810.05997, 2018. ', 'modified_lines': '', 'original_lines': ' ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems, 33:22118–22133, 2020. 
', 'modified_lines': '', 'original_lines': '12 Published as a conference paper at ICLR 2025 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 Conclusions', 'after_section': None, 'context_after': '13 Published as a conference paper at ICLR 2025 Ladislav Ramp´aˇsek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Do- minique Beaini. Recipe for a general, powerful, scalable graph transformer. arXiv preprint ', 'paragraph_idx': 93, 'before_section': '5 Conclusions', 'context_before': 'positional encoding for learning long-range and hierarchical structures. The Journal of Chemical Physics, 159(3), 2023. ', 'modified_lines': 'Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. arXiv preprint arXiv:1905.10947, 2019. ', 'original_lines': ' Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. arXiv preprint arXiv:1905.10947, 2019. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 92}]
2025-03-14 04:30:46
ICLR.cc/2025/Conference
2iIUWdNUNh
oVG40icCJj
[]
2025-03-30 14:03:15
ICLR.cc/2025/Conference
ItPv3YFVp6
oFznv6Hqvv
[{'section': 'Abstract', 'after_section': None, 'context_after': '1 ', 'paragraph_idx': 2, 'before_section': 'Abstract', 'context_before': 'show that SSL can perform system identification in latent space. We propose DYNCL, a framework to uncover linear, switching linear and non-linear dynamics under a non-linear observation model, give theoretical guarantees and validate ', 'modified_lines': 'them empirically. Code: github.com/dynamical-inference/dyncl ', 'original_lines': 'them empirically. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 2}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'yt = g(xt) + νt. (1) ', 'paragraph_idx': 3, 'before_section': '1 INTRODUCTION', 'context_before': 'The identification and modeling of dynamics from observational data is a long-standing problem in machine learning, engineering and science. A discrete-time dynamical system with latent variables x, ', 'modified_lines': 'observable variables y, control signal u, its control matrix B , and noise ε, ν can take the form xt+1 = f (xt) + But + εt ', 'original_lines': 'observable variables y, control signal u, its control matrix B, and noise ε, ν can take the form xt+1 = f (xt) + But + εt 4X1L-8 4X1L-8 ', 'after_paragraph_idx': 3, 'before_paragraph_idx': 3}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'identifiability results (Hyvarinen & Morioka, 2016; 2017; Hyvarinen et al., 2019; Zimmermann et al., 2021; Roeder et al., 2021) for CL towards dynamical systems. While our theory makes several predictions about capabilities of standard CL, it also highlights shortcomings. To overcome these and enable interpretable dynamics inference across a range of data generating processes, we propose a general framework for linear and non-linear system identification with CL (Figure 1). Background. An influential motivation of our work is Contrastive Predictive Coding [CPC; Oord et al., 2018]. CPC can be recovered as a special case of our framework when using an RNN dynamics model. Related works have emerged across different modalities: wav2vec (Schneider et al., 2019), TCN (Sermanet et al., 2018) and CPCv2 (Henaff, 2020). In the field of system identification, notable approaches include the Extended Kalman Filter (EKF) (McGee & Schmidt, 1985) and NARMAX (Chen & Billings, 1989). Additionally, several works have also explored generative models for general dynamics (Duncker et al., 2019) and switching dynamics, e.g. rSLDS (Linderman et al., 2017). In the Nonlinear ICA literature, identifiable algorithms for time-series data, such as Time Contrastive Learn- ', 'paragraph_idx': 5, 'before_section': '1 INTRODUCTION', 'context_before': 'In this work, we revisit and extend contrastive learning in the context of system identification. We uncover several surprising facts about its out-of-the-box effectiveness in identifying dynamics and ', 'modified_lines': 'unveil common design choices in SSL systems used in practice. Our theoretical study extends ∗Equal contribution. †Correspondence: [email protected] 1 Published as a conference paper at ICLR 2025 ', 'original_lines': 'unveil common design choices in SSL systems used in practice. 
Our theoretical study extends all-0 1 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 Under review as a conference paper at ICLR 2025 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'ability result for both the latent space and the dynamics model in section 3. These theoretical results are later em- pirically validated. We then propose a practical way to ', 'paragraph_idx': 7, 'before_section': '1 INTRODUCTION', 'context_before': 'Contributions. We extend the existing theory on con- trastive learning for time series learning and make adap- tations to common inference frameworks. We introduce ', 'modified_lines': 'our CL variant (Fig. 1) in section 2, and give an identifi- ', 'original_lines': 'our CL variant (Fig. 1) in section 2, and give an identifi- ', 'after_paragraph_idx': 7, 'before_paragraph_idx': 7}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_after': 'h is shared across the reference yt, positive yt+1, and negative samples y− i . A dynam- ics model ˆf forward predicts the reference. A (possibly latent) variable z can parameter- ize the dynamics (cf. § 4) or external control 2 CONTRASTIVE LEARNING FOR TIME-SERIES ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'y− i ', 'modified_lines': 'Figure 1: DynCL framework: The encoder (cf. § I). The model fits the InfoNCE loss (L). ', 'original_lines': 'KF3j-1 KF3j-1 Figure 1: DynCL framework: The encoder (cf. § J). The model fits the InfoNCE loss (L). ', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_after': 'of an encoder, a dynamics model, and a similarity function and will be defined further below. We fit the model by minimizing the negative log-likelihood on the time series, min ψ ', 'paragraph_idx': 9, 'before_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_before': 'y−∈N ∪{y+} where y is often called the reference or anchor sample, y+ is a positive sample, y− ∈ N are negative ', 'modified_lines': 'examples, and N is the set of negative samples. The model ψ itself is parameterized as a composition ', 'original_lines': 'and N is the set of negative samples. The model ψ itself is parameterized as a composition examples, 3gxR-1 qeKY-1 ', 'after_paragraph_idx': 9, 'before_paragraph_idx': 9}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_after': 'To attain favourable properties for identifying the latent dynamics, we carefully design the hypothesis class for ψ. The motivation for this particular design will become clear later. To define the full model, a ', 'paragraph_idx': 9, 'before_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_before': '(3) where positive examples are just adjacent points in the time-series, and M negative examples are ', 'modified_lines': 'sampled uniformly across the dataset. 
U (1, T ) denotes a uniform distribution across the discrete time steps. ', 'original_lines': 'sampled uniformly across the dataset. U (1, T ) denotes a uniform distribution across the discrete timesteps. ', 'after_paragraph_idx': 10, 'before_paragraph_idx': 9}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': None, 'context_after': '(4) and call the resulting algorithm DYNCL. Intuitively, we obtain two observed samples (y, y′) which 1Note that we can equivalently write ϕ(˜h(x)), ˜h′(x′)) using two asymmetric encoder functions, see addi- ', 'paragraph_idx': 12, 'before_section': None, 'context_before': 'correction term α : Rd (cid:55)→ R. We define their composition as1 ψ(y, y′) := ϕ( ˆf (h(y)), h(y′)) − α(y′), ', 'modified_lines': 'are first mapped to the latent space, (h(y), h(y′)). Then, the dynamics model is applied to h(y) , ', 'original_lines': ' are first mapped to the latent space, (h(y), h(y′)). Then, the dynamics model is applied to h(y), and the resulting points are compared through the similarity function ϕ. The similarity function ϕ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_after': 'will be informed by the form of (possibly induced) system noise εt. In the simplest form, the noise can be chosen as isotropic Gaussian noise, which results in a negative squared Euclidean norm for ϕ. Note, the additional term α(y′) is a correction applied to account for non-uniform marginal distri- butions. It can be parameterized as a kernel density estimate (KDE) with log ˆq(h(y′)) ≈ log q(x′) around the datapoints. In very special cases, the KDE makes a difference in empirical performance 3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS We now study the aforementioned model theoretically. The key components of our theory along with Figure 2. We are interested in two properties. First, linear identifiability of the latent space: The composition of mixing function g and model encoder h should recover the ground-truth latents up to a linear transform. Second, identifiability of the (non-linear) dynamics model: We would like to relate the estimated dynamics ˆf to the underlying ground-truth dynamics f . This property is also with the following properties: Data-generating process. We consider a discrete-time dynamical system defined as (5) where xt ∈ Rd are latent variables, f : Rd (cid:55)→ Rd is a bijective dynamics model, εt ∈ Rd the system noise, and g : Rd (cid:55)→ RD is a non-linear injective mapping from latents to observables yt ∈ RD, We proceed by stating our main result: Theorem 1 (Contrastive estimation of non-linear dynamics). Assume that t=1 is generated according to the ground-truth dynamical system in Eq. 5 with a bijective dynamics model f and an injective mixing function g. • (A3) The model ψ is composed of an encoder h, a dynamics model ˆf , a correction term α, and the similarity metric ϕ(u, v) = −∥u − v∥2 and attains the global minimizer of Eq. 3. Then, in the limit of T → ∞ for any point x in the support of the data marginal distribution: (a) The composition of mixing and de-mixing h(g(x)) = Lx + b is a bijective affine transform, ', 'paragraph_idx': 14, 'before_section': None, 'context_before': 'space. By observing variations introduced by the system noise ε, our model is able to infer the ground-truth dynamics up to an affine transform. 
', 'modified_lines': 'and the resulting points are compared through the similarity function ϕ. The similarity function ϕ (App. B, Fig. 9 ) and is required for our theory. Yet, we found that on the time-series datasets considered, it was possible to drop this term without loss in performance (i.e., α(y′) = 0) . our notion of linear identifiability (Roeder et al., 2021; Khemakhem et al., 2020) are visualized in called structural identifiability (Bellman & ˚Astr¨om, 1970). Our model operates on a subclass of Eq. 1 d ≤ D. We sample a total number of T time steps. xt+1 = f (xt) + εt, yt = g(xt), • (A1) A time-series dataset {yt}T • (A2) The system noise follows an iid normal distribution, p(εt) = N (εt|0, Σε) . ', 'original_lines': '( App. B, Fig. 9) and is required for our theory. Yet, we found that on the time-series datasets considered, it was possible to drop this term without loss in performance (i.e., α(y′) = 0). our notion of linear identifiability (Roeder et al., 2021; Khemakhem et al., 2020) are visualized in called structural identifiability (Bellman & ˚Astr¨om, 1970). Our model operates on a subclass of Eq. 1 xt+1 = f (xt) + εt, yt = g(xt), d ≤ D. We sample a total number of T timesteps. • (A1) A time-series dataset {yt}T The system noise follows an iid normal distribution, • (A2) p(εt) = N (εt|0, Σε). all-14 all-11 all-13 all-13 all-13 all ', 'after_paragraph_idx': 14, 'before_paragraph_idx': None}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': None, 'context_after': '4 ∇-SLDS: TOWARDS NON-LINEAR DYNAMICS ESTIMATION Piecewise linear approximation of dynamics. Our theoretical linear bijective dynamics. This is a compelling result, but in practice it requires the use of a powerful, yet easy to parameterize dynamics model. One option is to use an RNN (Elman, 1990; Oord et al., ', 'paragraph_idx': 22, 'before_section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'context_before': 'Note on Assumptions. The required assumptions are rather practical: (A1) allows for a very broad class of dynamical systems as long as bijectivity of the dynamics model holds, which is the case of many systems used in the natural sciences. We consider dynamical systems with control signal ut in ', 'modified_lines': 'Appendix I. While (A2) is a very common one in dynamical systems modeling, it can be seen more strict: We either need knowledge about the form of system noise, or inject such noise. We should note that analogous to the discussion of Zimmermann et al. (2021), it is most certainly possible to extend our results towards other classes of noise distributions by matching the log-density of ε with ϕ. Given the common use of Normally distributed noise, however, we limited the scope of the current theory to the Normal distribution, but show vMF noise in Appendix D. (A3) mainly concerns the model setup. An apparent limitation of Def. 5 is the injectivity assumption imposed on the mixing function g. In practice, a partially observable setting often applies, where g(x) = Cx maps latents into lower dimensional observations or has a lower rank than there are latent dimensions. For these systems, we can ensure injectivity through a time-lag embedding. See Appendix H for empirical validation. results suggest that contrastive learning allows the fitting of non- ', 'original_lines': 'Appendix J. 
While (A2) is a very common one in dynamical systems modeling, it can be seen more strict: We either need knowledge about the form of system noise, or inject such noise. We should note that analogous to the discussion of Zimmermann et al. (2021), it is most certainly possible to extend our results towards other classes of noise distributions by matching the log-density of ε with ϕ. Given the common use of Normally distributed noise, however, we limited the scope of the current theory to the Normal distribution, but show vMF noise in Appendix D. (A3) mainly concerns the model setup. An apparent limitation of Def. 5 is the injectivity assumption imposed on the mixing function g. In practice, a partially observable setting often applies, where g(x) = Cx maps latents into lower dimensional observations or has a lower rank than there are latent dimensions. For these systems, we can ensure injectivity through a time-lag embedding. See Appendix I for empirical validation. results suggest that contrastive learning allows the fitting of non- ', 'after_paragraph_idx': None, 'before_paragraph_idx': 22}, {'section': 'Abstract', 'after_section': None, 'context_after': 'Figure 3. This model allows fast estimation of switching dynamics namics model has a trainable bank W = [W1, . . . , WK] of possible dynamics matrices. K is a hyperparameter. The dynamics depend on a latent variable kt and are defined as Figure 3: The core components of the ∇-SLDS model is parameter- free, differentiable parameteriza- tion of the switching process. the Gumbel-Softmax trick (Jang et al., 2016) without hard sampling: ˆf (xt; W, zt) = ( ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'across timescales. An alternative option is to linearize the system, which we propose in the following. ', 'modified_lines': 'We propose a new forward model for differentiable switching linear dynamics (∇-SLDS) in latent space . The estimation is outlined in and can be easily integrated into the DYNCL algorithm . The dy- ˆf (xt; W, kt) = Wktxt, kt = argmink∥Wkxt − xt+1∥2. (6) Intuitively, the predictive performance of every available linear dynamical system is used to select the right dynamics with index kt from the bank W . During training, we approximate the argmin using ', 'original_lines': 'differentiable switching linear We propose a new forward model for dynamics (∇-SLDS) in latent space. The estimation is outlined in and can be easily integrated into the DYNCL algorithm. The dy- ˆf (xt; W, kt) = Wktxt, Intuitively, the predictive performance of every available linear dynamical system is used to select the right dynamics with index kt from the bank W. During training, we approximate the argmin using kt = argmink∥Wkxt − xt+1∥2. (6) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': None, 'context_after': '4 Backprop. via Gumbel Softmax bution (Jang et al., 2016) and we use a temperature τ to control the smoothness of the resulting probabilities. During pilot experiments, we found that the reciprocal parameterization of the logits outperforms other choices for computing an argmin, like flipping the sign. From linear switching to non-linear dynamics. Non-linear system dynamics of the general form in Eq. 5 can be approximated using our switching model. We can approximate a continuous-time non-linear dynamical system with latent dynamics ˙x = f (x) around reference points { ˜xk}K k=1 using xt+1 = (Aktxt + bkt) + εt =: ˆf (xt; kt) + εt. 
While a theoretical guarantee for this general case is beyond the scope of this work, we give an empirical evaluation on Lorenz attractor dynamics below. Note, as the number of “basis points” capability of the latents as we store the exact value of f at every point. However, this comes at the expense of having less points to estimate each individual dynamics matrix. Empirically, we used (8) 5 EXPERIMENTS To verify our theory, we implement a benchmark dataset for studying the effects of various model trials. Our experiments rigorously evaluate different variants of contrastive learning algorithms. Data generation. Data is generated by simulating latent variables x that evolve according to a ', 'paragraph_idx': 13, 'before_section': None, 'context_before': '(7) ', 'modified_lines': 'Note that the dynamics model ˆf (xt; W, zt) depends on an additional latent variable zt = [zt,1, . . . , zt,K]⊤ which contains probabilities to parametrize the dynamics . During inference, Published as a conference paper at ICLR 2025 we can obtain the index kt = arg maxk zt,k. The variables gk are samples from the Gumbel distri- a first-order Taylor expansion, f (x) ≈ ˜f (x) = f ( ˜xk) + Jf ( ˜xk)(x − ˜xk), where we denote the Jacobian matrix of f with Jf . We evaluate the equation at each point t using the best reference point ˜xk. We obtain system matrices Ak = Jf ( ˜xk) and bias term bk = f ( ˜xk) − Jf ( ˜xk) ˜xk which can be modeled with the ∇-SLDS model ˆf (xt; kt): of ∇-SLDS approaches the number of time steps, we could trivially approach perfect estimation 100–200 matrices for datasets of 1M samples. choices. We generate time-series with 1M samples , either as a single sequence or across multiple ', 'original_lines': 'Note that the dynamics model ˆf (xt; W, zt) depends on an additional latent variable zt = [zt,1, . . . , zt,K]⊤ which contains probabilities to parametrize the dynamics. During inference, we qeKY-6 4X1L-1 GCbH-3 GCbH-4 all-1 4X1L-2 qeKY-4 KF3j-2 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 Under review as a conference paper at ICLR 2025 can obtain the index kt = arg maxk zt,k. The variables gk are samples from the Gumbel distri- 4X1L-3 Taylor expansion, f (x) ≈ ˜f (x) = f ( ˜xk) + Jf ( ˜xk)(x − ˜xk), where we denote the a first-order Jacobian matrix of f with Jf . We evaluate the equation at each point t using the best reference point ˜xk. We obtain system matrices Ak = Jf ( ˜xk) and bias term bk = f ( ˜xk) − Jf ( ˜xk) ˜xk which can be modeled with the ∇-SLDS model ˆf (xt; kt): all-12 of ∇-SLDS approaches the number of timesteps, we could trivially approach perfect estimation 100–200 matrices for datasets of 1M samples. 4X1L-4 choices. We generate time-series with 1M samples, either as a single sequence or across multiple ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': '5 EXPERIMENTS', 'context_after': 'distinct modes following a mode sequence it. The mode sequence it follows a Markov chain with a symmetric transition matrix and uniform prior: i0 ∼ Cat(π), where πj = 1 K for all j. At each time ', 'paragraph_idx': 32, 'before_section': '5 EXPERIMENTS', 'context_before': 'eigenvalues equal to 1. We do so by taking the product of multiple rotation matrices, one for each possible plane to rotate around with rotation angles being randomly chosen to be -5° or 5°. 
', 'modified_lines': 'SLDS. We simulate switching linear dynamical systems with f (xt; kt) = Aktxt and system noise standard deviation σϵ = 0.0001 . We choose Ak to be an orthogonal matrix ensuring that all eigenvalues are 1, which guarantees system stability. Specifically, we set Ak to be a rotation matrix with varying rotation angles (5°, 10°, 20°). The latent dimensionality is 6. The number of samples is 1M. We use 1000 trials, and each trial consists of 1000 samples. We use k = 0, 1, . . . , K ', 'original_lines': 'SLDS. We simulate switching linear dynamical systems with f (xt; kt) = Aktxt and system noise standard deviation σϵ = 0.0001. We choose Ak to be an orthogonal matrix ensuring that all eigenvalues are 1, which guarantees system stability. Specifically, we set Ak to be a rotation matrix with varying rotation angles (5°, 10°, 20°). The latent dimensionality is 6. The number of samples is 1M. We use 1000 trials, and each trial consists of 1000 samples. We use k = 0, 1, . . . , K ', 'after_paragraph_idx': 32, 'before_paragraph_idx': 32}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_after': 'the recovered latents minimizing the predictive mean squared error via gradient descent. Evaluation metrics. Our metrics are informed by the result in Theorem 1 and measure empirical identifiability up to affine transformation of the latent space and its underlying linear or non-linear ', 'paragraph_idx': 34, 'before_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_before': 'data, and 10−4 for Lorenz system data. Our baseline model is standard self-supervised contrastive learning with the InfoNCE loss, which corresponds to the CEBRA-time model (with symmetric encoders, i.e., without a dynamics model; cf. Schneider et al., 2023). For DYNCL, we add an LDS or ', 'modified_lines': '∇-SLDS dynamics model for fitting. For our baseline, we post-hoc fit the corresponding model on ', 'original_lines': '∇-SLDS dynamics model for fitting. For our baseline, we post-hoc fit the corresponding model on qeKY-12 ', 'after_paragraph_idx': 34, 'before_paragraph_idx': 34}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_after': 'We also propose two metrics as direct measures of identifiability for the recovered dynamics ˆf . First, the LDS error, which is suitable only for linear dynamics models, denotes the norm of the difference between the true dynamics matrix A and the estimated dynamics matrix ˆA by accounting for the linear transformation between the true and recovered latent spaces. The LDS error (related to the metric for Dynamical Similarity Analysis; Ostrow et al., 2023) is then computed as (cf. Corollary 2): 1 Not explicitly shown, but the argument in Corollary 2 applies to each piecewise linear section of the SLDS. 2 ∇-SLDS is only an approximation of the functional form of the underlying system. 6 Figure 4: Switching linear dynamics: (a) example ground-truth dynamics in latent space for four matrices Ak. (b) R2 metric for different noise levels as we increase the angles used for data generation. 
We compare a baseline ', 'paragraph_idx': 37, 'before_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_before': 't=1 ', 'modified_lines': 'To evaluate the identifiability of the representation, we measure the R2 between the true latents xt and the optimally aligned recovered latents L2 ˆxt + b2 across time steps t = 1 . . . T in the time-series. LDS(A, ˆA) = ∥A − L1 ˆAL2∥F ≈ ∥A − L−1 ˆAL∥F . (11) Published as a conference paper at ICLR 2025 ', 'original_lines': 'To evaluate the identifiability of the representation, we measure the R2 between the true latents xt and the optimally aligned recovered latents L2 ˆxt + b2 across time-steps t = 1 . . . T in the time-series. LDS(A, ˆA)= ∥A − L1 ˆAL2∥F ≈ ∥A − L−1 ˆAL∥F . (11) 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 Under review as a conference paper at ICLR 2025 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 ', 'after_paragraph_idx': 37, 'before_paragraph_idx': 37}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': 'Finally, when evaluating switching linear dynamics, we compute the accuracy for assigning the correct mode at any point in time. To compute the cluster accuracy in the case of SLDS ground truth ', 'paragraph_idx': 37, 'before_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_before': '(12) ', 'modified_lines': 'along all time steps. Additional variants of the dynR2 metric are discussed in Appendix G. ', 'original_lines': 'along all time-steps. Additional variants of the dynR2 metric are discussed in Appendix G. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 37}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_after': 'benchmark suite for identifiable dynamics learning upon publication of the paper. 6 RESULTS 6.1 VERIFICATION OF THE THEORY FOR LINEAR DYNAMICS of models, we show in Table 1 that DYNCL effectively identifies the correct dynamics. For linear dynamics (LDS), DYNCL reaches an R2 of 99.0%, close to the oracle performance (99.5%). Most importantly, the LDS error of our method (0.38) is substantially closer to the oracle (0.17) compared ', 'paragraph_idx': 41, 'before_section': None, 'context_before': 'mode switches to the ground truth modes, and then proceed to compute the accuracy. Implementation. Experiments were carried out on a compute cluster with A100 cards. On each ', 'modified_lines': 'card, we ran ∼3 experiments simultaneously. Depending on the exact configuration, training time varied from 5–20min per model . The combined experiments ran for this paper comprised about 120 days of A100 compute time and we provide a breakdown in Appendix K . We will open source our Suitable dynamics models enable identification of latents and dynamics. For all considered classes ', 'original_lines': 'card, we ran ∼3 experiments simultaneously. Depending on the exact configuration, training time varied from 5–20min per model. The combined experiments ran for this paper comprised about 120 days of A100 compute time and we provide a breakdown in Appendix K. 
We will open source our 3gxR-11 all-2 Suitable dynamics models enable identification of latents and dynamics. For all considered classes ', 'after_paragraph_idx': 41, 'before_paragraph_idx': None}, {'section': '6 RESULTS', 'after_section': None, 'context_after': '7 babcσ=0.001identity (B)noisedynamics∇-SLDSGT SLDSσ=0.0001 Figure 5: Contrastive learning of 3D non-linear dynamics following a Lorenz attractor model. (a), left to right: ground truth dynamics for 10k samples with dt = 0.0005 and σ = 0.1, estimation results for baseline (identity ', 'paragraph_idx': 43, 'before_section': '6 RESULTS', 'context_before': 'the dynamical system is then negligible compared to the noise. In Table 1 (“large σ”), we show that recovery is possible for cases with small angles, both in the linear and non-linear case. While in some cases, this learning setup might be applicable in practice, it seems generally unrealistic to be able to ', 'modified_lines': 'perturb the system beyond the actual dynamics. As we scale the dynamics to larger values (Figure 4, Published as a conference paper at ICLR 2025 ', 'original_lines': 'As we scale the dynamics to larger values (Figure 4, perturb the system beyond the actual dynamics. qeKY-8 qeKY-9 Under review as a conference paper at ICLR 2025 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 ', 'after_paragraph_idx': None, 'before_paragraph_idx': 43}, {'section': '6 RESULTS', 'after_section': '6 RESULTS', 'context_after': 'truth dynamics (Table 1) as predicted by Corollary 1 (rows marked with ✗). For identity dynamics, the baseline is able to identify the latents (R2=99.56%) but breaks as soon as linear dynamics are introduced (R2=73.56%). 6.2 APPROXIMATION OF NON-LINEAR DYNAMICS ', 'paragraph_idx': 44, 'before_section': '6 RESULTS', 'context_before': 'Symmetric encoders cannot identify non-trivial dynamics. In the more general case where the dynamics dominates the system behavior, the baseline cannot identify linear dynamics (or more ', 'modified_lines': 'complicated systems). In the general LDS and SLDS cases, the baseline fails to identify the ground ', 'original_lines': 'complicated systems). In the general LDS and SLDS cases, the baseline fails to identify the ground qeKY-10 ', 'after_paragraph_idx': 44, 'before_paragraph_idx': 44}, {'section': '6 RESULTS', 'after_section': '6 RESULTS', 'context_after': 'that in this non-linear case, we are primarily succeeding at estimating the latent space, the estimated dynamics model did not meaningfully outperform an identity model (Appendix G). Extensions to other distributions pε. While Euclidean geometry is most relevant for dynamical systems in practice, and hence the focus of our theoretical and empirical investigation, contrastive learning commonly operates on the hypersphere in other contexts. We provide additional results Appendix D. 6.3 ABLATION STUDIES ', 'paragraph_idx': 47, 'before_section': '6 RESULTS', 'context_before': 'between baseline and our model increases substantially. Non-linear dynamics. Figure 5 depicts the Lorenz system as an example of a non-linear dynamical ', 'modified_lines': 'system for different choices of algorithms. The ground truth dynamics vary in the ratio between dt/σ and we show the full range in panels b/c. 
When the noise dominates the dynamics (panel a), the baseline is able to estimate also the nonlinear dynamics accurately, with 99.7%. However, as we move to lower noise cases (panel b), performance reduces to 41.0%. Our switching dynamics model is able to estimate the system with high R2 in both cases (94.14% and 94.08%). However, note for the case of a von Mises-Fisher (vMF) distribution for pε and dot-product similarity for ϕ in ', 'original_lines': 'system for different choices of algorithms. The ground truth dynamics vary in the ratio between dt/σ and we show the full range in panels b/c. When the noise dominates the dynamics (panel a), the baseline is able to estimate also the nonlinear dynamics accurately, with 99.7%. However, as we move to lower noise cases (panel b), performance reduces to 41.0%. Our switching dynamics model is able to estimate the system with high R2 in both cases (94.14% and 94.08%). However, note von Mises-Fisher (vMF) distribution for pε and dot-product similarity for ϕ in for the case of a qeKY-7 ', 'after_paragraph_idx': 47, 'before_paragraph_idx': 46}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': 'tem noise levels σ, averaged over all dt. ', 'paragraph_idx': 33, 'before_section': None, 'context_before': 'Figure 7: Impact of modes for non-linear dynamics in the ', 'modified_lines': 'Lorenz system for different sys- ', 'original_lines': 'Lorenz system for different sys- ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '7 DISCUSSION', 'after_section': '7 DISCUSSION', 'context_after': 'about their behavior in practice. 2018] or wav2vec (Schneider et al., 2019), DYNCL generalizes the concept of training contrastive learning models with (explicit) dynamics models. CPC uses an RNN encoder followed by linear projection, while wav2vec leverages CNNs dynamics models and affine projections. Theorem 1 applies to both these models, and offers an explanation for their successful empirical performance. Nonlinear ICA methods, such as TCL (Hyvarinen & Morioka, 2016) and PCL (Hyvarinen & Morioka, 2017) provide identifiability of the latent variables leveraging temporal structure of the data. Com- pared to DynCL, they do not explicitly model dynamics and assume either stationarity or non- stationarity of the time series (Hyv¨arinen et al., 2023), whereas DynCL assumes bijective latent dynamics, and focuses on explicit dynamics modeling beyond solving the demixing problem. For applications in scientific data analysis, CEBRA (Schneider et al., 2023) uses supervised or self-supervised contrastive learning, either with symmetric encoders or asymmetric encoder functions. ', 'paragraph_idx': 58, 'before_section': '7 DISCUSSION', 'context_before': 'The DYNCL framework is versatile and allows to study the performance of contrastive learning in conjunction with different dynamics models. By exploring various special cases (identity, linear, switching linear), our study categorizes different forms of contrastive learning and makes predictions ', 'modified_lines': 'In comparison to contrastive predictive coding [CPC; Oord et al., ', 'original_lines': 'In comparison to contrastive predictive coding [CPC; Oord et al., all-3 all-4 GCbH-2 ', 'after_paragraph_idx': 58, 'before_paragraph_idx': 58}, {'section': 'Abstract', 'after_section': None, 'context_after': 'A limitation of the present study is its main focus on simulated data which clearly corroborates our theory but does not yet demonstrate real-world applicability. 
However, our simulated data bears the signatures of real-world datasets (multi-trial structures, varying degrees of dimensionality, number of modes, and different forms of dynamics). A challenge is the availability of real-world benchmark datasets for dynamics identification. We believe that rigorous evaluation of different estimation methods on such datasets will continue to show the promise of contrastive learning for dynamics from Chen et al. (2021) with realistic mixing functions (g) offers a promising direction for evaluating latent dynamics models. As a demonstration of real-world applicability, we compared DynCL to 8 CONCLUSION ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'a contrastive loss for avoiding collapse and, more importantly, serves as the foundation for our theoretical result. ', 'modified_lines': 'identification. Integrating recent benchmarks like DynaDojo (Bhamidipaty et al., 2023) or datasets CEBRA-Time (Schneider et al., 2023) on a neural recordings dataset in Appendix J. ', 'original_lines': 'qeKY-11 identification. Integrating recent benchmarks like DynaDojo (Bhamidipaty et al., 2023) or datasets CEBRA-Time (Schneider et al., 2023) on a neural recordings dataset in Appendix H. qeKY-13 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '8 CONCLUSION', 'after_section': None, 'context_after': 'REFERENCES Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. 11 Jeffrey L Elman. Finding structure in time. Cognitive science, 14(2):179–211, 1990. 12 Geoffrey Roeder, Luke Metz, and Durk Kingma. On linear identifiability of learned representations. Peter Sorrenson, Carsten Rother, and Ullrich K¨othe. Disentanglement by nonlinear ica with general ', 'paragraph_idx': 68, 'before_section': '8 CONCLUSION', 'context_before': 'we used around 120 days of GPU compute on a A100 to produce the results presented in the paper. We provide a more detailed breakdown in Appendix K. ', 'modified_lines': 'AUTHOR CONTRIBUTIONS RGL and TS: Methodology, Software, Investigation, Writing–Editing. StS: Conceptualization, Methodology, Formal Analysis, Writing–Original Draft and Writing–Editing. ACKNOWLEDGMENTS We thank Luisa Eck and Stephen Jiang for discussions on the theory, and Lilly May for input on paper figures. We thank the five anonymous reviewers at ICLR for their valuable and constructive comments on our manuscript. This work was supported by the Helmholtz Association’s Initiative and Networking Fund on the HAICORE@KIT and HAICORE@FZJ partitions. Guy Ackerson and K Fu. On state estimation in switching environments. IEEE transactions on automatic control, 15(1):10–17, 1970. Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15619–15629, 2023. Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. Data2vec: A general framework for self-supervised learning in speech, vision and language. In International Conference on Machine Learning, pp. 1298–1312. PMLR, 2022. Carles Balsells-Rodas, Yixin Wang, and Yingzhen Li. On the identifiability of switching dynamical systems. arXiv preprint arXiv:2305.15925, 2023. Ror Bellman and Karl Johan ˚Astr¨om. On structural identifiability. Mathematical biosciences, 7(3-4): 329–339, 1970. 
Logan Mondal Bhamidipaty, Tommy Bruzzese, Caryn Tran, Rami Ratl Mrad, and Max Kanwal. Dynadojo: an extensible benchmarking platform for scalable dynamical system identification. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportuni- ties and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the national academy of sciences, 113(15):3932–3937, 2016. Published as a conference paper at ICLR 2025 Chaw-Bing Chang and Michael Athans. State estimation for discrete systems with switching parameters. IEEE Transactions on Aerospace and Electronic Systems, (3):418–425, 1978. Boyuan Chen, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, and Hod Lipson. Discovering State Variables Hidden in Experimental Data, December 2021. URL http:// arxiv.org/abs/2112.10755. Ricky TQ Chen, Brandon Amos, and Maximilian Nickel. Learning neural event functions for ordinary differential equations. arXiv preprint arXiv:2011.03902, 2020a. Sheng Chen and Steve A Billings. Representations of non-linear systems: the narmax model. International journal of control, 49(3):1013–1032, 1989. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597–1607. PMLR, 2020b. Silvia Chiappa et al. Explicit-duration markov switching models. Foundations and Trends® in Machine Learning, 7(6):803–886, 2014. Sy-Miin Chow and Guangjian Zhang. Nonlinear regime-switching state-space (rsss) models. Psy- chometrika, 78:740–768, 2013. Hanjun Dai, Bo Dai, Yan-Ming Zhang, Shuang Li, and Le Song. Recurrent hidden semi-markov model. In International Conference on Learning Representations, 2022. St´ephane d’Ascoli, S¨oren Becker, Alexander Mathis, Philippe Schwaller, and Niki Kilbertus. Odeformer: Symbolic regression of dynamical systems with transformers. arXiv preprint arXiv:2310.05573, 2023. Saskia EJ de Vries, Jerome A Lecoq, Michael A Buice, Peter A Groblewski, Gabriel K Ocker, Michael Oliver, David Feng, Nicholas Cain, Peter Ledochowitsch, Daniel Millman, et al. A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nature neuroscience, 23(1):138–151, 2020. Zhe Dong, Bryan Seybold, Kevin Murphy, and Hung Bui. Collapsed amortized variational inference for switching nonlinear dynamical systems. In International Conference on Machine Learning, pp. 2638–2647. PMLR, 2020. Lea Duncker, Gergo Bohner, Julien Boussard, and Maneesh Sahani. Learning interpretable In International conference continuous-time models of latent stochastic dynamical systems. on machine learning, pp. 1726–1734. PMLR, 2019. Yuanjun Gao, Evan W Archer, Liam Paninski, and John P Cunningham. Linear dynamical neural population models through nonlinear embeddings. Advances in neural information processing systems, 29, 2016. Quentin Garrido, Mahmoud Assran, Nicolas Ballas, Adrien Bardes, Laurent Najman, and Yann LeCun. Learning and leveraging world models in visual representation learning. arXiv preprint arXiv:2403.00504, 2024. Zoubin Ghahramani and Geoffrey E Hinton. 
Variational learning for switching state-space models. Neural computation, 12(4):831–864, 2000. Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. Albert Gu, Karan Goel, and Christopher R´e. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021. Hermanni H¨alv¨a, Sylvain Le Corff, Luc Leh´ericy, Jonathan So, Yongjie Zhu, Elisabeth Gassiat, and Aapo Hyvarinen. Disentangling identifiable features from noisy data with structured nonlinear ica. Advances in Neural Information Processing Systems, 34:1624–1633, 2021. Published as a conference paper at ICLR 2025 Olivier Henaff. Data-efficient image recognition with contrastive predictive coding. In International conference on machine learning, pp. 4182–4192. PMLR, 2020. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. Cole Hurwitz, Nina Kudryashova, Arno Onken, and Matthias H Hennig. Building population models for large-scale neural recordings: Opportunities and pitfalls. Current opinion in neurobiology, 70: 64–73, 2021. Aapo Hyvarinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. Advances in neural information processing systems, 29, 2016. Aapo Hyvarinen and Hiroshi Morioka. Nonlinear ica of temporally dependent stationary sources. In Artificial Intelligence and Statistics, pp. 460–469. PMLR, 2017. Aapo Hyvarinen, Hiroaki Sasaki, and Richard Turner. Nonlinear ica using auxiliary variables and generalized contrastive learning. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 859–868. PMLR, 2019. Aapo Hyv¨arinen, Ilyes Khemakhem, and Hiroshi Morioka. Nonlinear independent compo- nent analysis for principled disentanglement in unsupervised deep learning. Patterns, 4(10): ISSN 26663899. doi: 10.1016/j.patter.2023.100844. URL https: 100844, October 2023. //linkinghub.elsevier.com/retrieve/pii/S2666389923002234. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016. Pierre-Alexandre Kamienny, St´ephane d’Ascoli, Guillaume Lample, and Franc¸ois Charton. End-to- end symbolic regression with transformers. Advances in Neural Information Processing Systems, 35:10269–10281, 2022. Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In International conference on artificial intelligence and statistics, pp. 2207–2217. PMLR, 2020. Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Yann LeCun. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. Open Review, 62(1):1–62, 2022. Scott Linderman, Matthew Johnson, Andrew Miller, Ryan Adams, David Blei, and Liam Paninski. Bayesian learning and inference in recurrent switching linear dynamical systems. In Artificial intelligence and statistics, pp. 914–922. PMLR, 2017. Phillip Lippe, Sara Magliacane, Sindy L¨owe, Yuki M Asano, Taco Cohen, and Stratis Gavves. Citris: Causal identifiability from temporal intervened sequences. In International Conference on Machine Learning, pp. 13557–13603. PMLR, 2022. Stefan Matthes, Zhiwei Han, and Hao Shen. Towards a unified framework of contrastive learning for disentangled representations. Advances in Neural Information Processing Systems, 36:67459– 67470, 2023. 
Leonard A McGee and Stanley F Schmidt. Discovery of the kalman filter as a practical tool for aerospace and industry. Technical report, 1985. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Mitchell Ostrow, Adam Eisen, Leo Kozachkov, and Ila Fiete. Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamical similarity analysis, 2023. URL https://arxiv.org/abs/2306.10168. 13 Published as a conference paper at ICLR 2025 Chethan Pandarinath, Daniel J O’Shea, Jasmine Collins, Rafal Jozefowicz, Sergey D Stavisky, Jonathan C Kao, Eric M Trautmann, Matthew T Kaufman, Stephen I Ryu, Leigh R Hochberg, et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nature methods, 15(10):805–815, 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. In International Conference on Machine Learning, pp. 9030–9039. PMLR, 2021. Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862, 2019. Steffen Schneider, Jin Hwa Lee, and Mackenzie Weygandt Mathis. Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617(7960):360–368, 2023. Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1134–1141. IEEE, 2018. Ruian Shi and Quaid Morris. Segmenting hybrid trajectories using latent odes. In International Conference on Machine Learning, pp. 9569–9579. PMLR, 2021. Jimmy Smith, Scott Linderman, and David Sussillo. Reverse engineering recurrent neural networks with jacobian switching linear dynamical systems. Advances in Neural Information Processing Systems, 34:16700–16713, 2021. ', 'original_lines': 'EDITING/REBUTTAL LEGEND General writing improvements and additions Minor clarity improvements or corrections Changes request by multiple Reviewers Changes request by Reviewer KF3j Changes request by Reviewer 3gxR Changes request by Reviewer GCbH Changes request by Reviewer qeKY Changes request by Reviewer 4X1L all KF3j 3gxR GCbH qeKY 4X1L Guy Ackerson and K Fu. On state estimation in switching environments. IEEE transactions on automatic control, 15(1):10–17, 1970. Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15619–15629, 2023. Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. Data2vec: A general framework for self-supervised learning in speech, vision and language. In International Conference on Machine Learning, pp. 1298–1312. PMLR, 2022. Carles Balsells-Rodas, Yixin Wang, and Yingzhen Li. On the identifiability of switching dynamical systems. arXiv preprint arXiv:2305.15925, 2023. Ror Bellman and Karl Johan ˚Astr¨om. On structural identifiability. Mathematical biosciences, 7(3-4):329–339, 1970. Logan Mondal Bhamidipaty, Tommy Bruzzese, Caryn Tran, Rami Ratl Mrad, and Max Kanwal. 
Dynadojo: an extensible benchmarking platform for scalable dynamical system identification. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021. Steven L Brunton, Joshua L Proctor, and J Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the national academy of sciences, 113(15): 3932–3937, 2016. Chaw-Bing Chang and Michael Athans. State estimation for discrete systems with switching parameters. IEEE Transactions on Aerospace and Electronic Systems, (3):418–425, 1978. 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 Under review as a conference paper at ICLR 2025 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 Boyuan Chen, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, and Hod Lipson. Discovering State Variables Hidden in Experimental Data, December 2021. URL http://arxiv.org/abs/2112. 10755. Ricky TQ Chen, Brandon Amos, and Maximilian Nickel. Learning neural event functions for ordinary differential equations. arXiv preprint arXiv:2011.03902, 2020a. Sheng Chen and Steve A Billings. Representations of non-linear systems: the narmax model. International journal of control, 49(3):1013–1032, 1989. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pp. 1597–1607. PMLR, 2020b. Silvia Chiappa et al. Explicit-duration markov switching models. Foundations and Trends® in Machine Learning, 7(6):803–886, 2014. Sy-Miin Chow and Guangjian Zhang. Nonlinear regime-switching state-space (rsss) models. Psychometrika, 78: 740–768, 2013. Hanjun Dai, Bo Dai, Yan-Ming Zhang, Shuang Li, and Le Song. Recurrent hidden semi-markov model. In International Conference on Learning Representations, 2022. St´ephane d’Ascoli, S¨oren Becker, Alexander Mathis, Philippe Schwaller, and Niki Kilbertus. Odeformer: Symbolic regression of dynamical systems with transformers. arXiv preprint arXiv:2310.05573, 2023. Zhe Dong, Bryan Seybold, Kevin Murphy, and Hung Bui. Collapsed amortized variational inference for switching nonlinear dynamical systems. In International Conference on Machine Learning, pp. 2638–2647. PMLR, 2020. Lea Duncker, Gergo Bohner, Julien Boussard, and Maneesh Sahani. Learning interpretable continuous-time models of latent stochastic dynamical systems. In International conference on machine learning, pp. 1726– 1734. PMLR, 2019. Yuanjun Gao, Evan W Archer, Liam Paninski, and John P Cunningham. Linear dynamical neural population models through nonlinear embeddings. Advances in neural information processing systems, 29, 2016. Quentin Garrido, Mahmoud Assran, Nicolas Ballas, Adrien Bardes, Laurent Najman, and Yann LeCun. Learning and leveraging world models in visual representation learning. arXiv preprint arXiv:2403.00504, 2024. 
Zoubin Ghahramani and Geoffrey E Hinton. Variational learning for switching state-space models. Neural computation, 12(4):831–864, 2000. Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023. Albert Gu, Karan Goel, and Christopher R´e. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021. Hermanni H¨alv¨a, Sylvain Le Corff, Luc Leh´ericy, Jonathan So, Yongjie Zhu, Elisabeth Gassiat, and Aapo Hyvarinen. Disentangling identifiable features from noisy data with structured nonlinear ica. Advances in Neural Information Processing Systems, 34:1624–1633, 2021. Olivier Henaff. Data-efficient image recognition with contrastive predictive coding. In International conference on machine learning, pp. 4182–4192. PMLR, 2020. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. Cole Hurwitz, Nina Kudryashova, Arno Onken, and Matthias H Hennig. Building population models for large-scale neural recordings: Opportunities and pitfalls. Current opinion in neurobiology, 70:64–73, 2021. Aapo Hyvarinen and Hiroshi Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. Advances in neural information processing systems, 29, 2016. Aapo Hyvarinen and Hiroshi Morioka. Nonlinear ica of temporally dependent stationary sources. In Artificial Intelligence and Statistics, pp. 460–469. PMLR, 2017. Under review as a conference paper at ICLR 2025 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 Aapo Hyvarinen, Hiroaki Sasaki, and Richard Turner. Nonlinear ica using auxiliary variables and generalized contrastive learning. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 859–868. PMLR, 2019. Aapo Hyv¨arinen, Ilyes Khemakhem, and Hiroshi Morioka. Nonlinear independent component analysis for princi- pled disentanglement in unsupervised deep learning. Patterns, 4(10):100844, October 2023. ISSN 26663899. doi: 10.1016/j.patter.2023.100844. URL https://linkinghub.elsevier.com/retrieve/pii/ S2666389923002234. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016. Pierre-Alexandre Kamienny, St´ephane d’Ascoli, Guillaume Lample, and Franc¸ois Charton. End-to-end symbolic regression with transformers. Advances in Neural Information Processing Systems, 35:10269–10281, 2022. Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In International conference on artificial intelligence and statistics, pp. 2207–2217. PMLR, 2020. Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Yann LeCun. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. Open Review, 62(1): 1–62, 2022. Scott Linderman, Matthew Johnson, Andrew Miller, Ryan Adams, David Blei, and Liam Paninski. Bayesian learning and inference in recurrent switching linear dynamical systems. In Artificial intelligence and statistics, pp. 914–922. PMLR, 2017. Phillip Lippe, Sara Magliacane, Sindy L¨owe, Yuki M Asano, Taco Cohen, and Stratis Gavves. Citris: Causal identifiability from temporal intervened sequences. 
In International Conference on Machine Learning, pp. 13557–13603. PMLR, 2022. Stefan Matthes, Zhiwei Han, and Hao Shen. Towards a unified framework of contrastive learning for disentangled representations. Advances in Neural Information Processing Systems, 36:67459–67470, 2023. Leonard A McGee and Stanley F Schmidt. Discovery of the kalman filter as a practical tool for aerospace and industry. Technical report, 1985. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Mitchell Ostrow, Adam Eisen, Leo Kozachkov, and Ila Fiete. Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamical similarity analysis, 2023. URL https://arxiv. org/abs/2306.10168. Chethan Pandarinath, Daniel J O’Shea, Jasmine Collins, Rafal Jozefowicz, Sergey D Stavisky, Jonathan C Kao, Eric M Trautmann, Matthew T Kaufman, Stephen I Ryu, Leigh R Hochberg, et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nature methods, 15(10):805–815, 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. In International Conference on Machine Learning, pp. 9030–9039. PMLR, 2021. Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. wav2vec: Unsupervised pre-training for speech recognition. arXiv preprint arXiv:1904.05862, 2019. Steffen Schneider, Jin Hwa Lee, and Mackenzie Weygandt Mathis. Learnable latent embeddings for joint behavioural and neural analysis. Nature, 617(7960):360–368, 2023. Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In 2018 IEEE international conference on robotics and automation (ICRA), pp. 1134–1141. IEEE, 2018. Ruian Shi and Quaid Morris. Segmenting hybrid trajectories using latent odes. In International Conference on Machine Learning, pp. 9569–9579. PMLR, 2021. 13 Under review as a conference paper at ICLR 2025 Joshua H Siegle, Xiaoxuan Jia, S´everine Durand, Sam Gale, Corbett Bennett, Nile Graddis, Greggory Heller, Tamina K Ramirez, Hannah Choi, Jennifer A Luviano, et al. Survey of spiking in the mouse visual system reveals functional hierarchy. Nature, 592(7852):86–92, 2021. Jimmy Smith, Scott Linderman, and David Sussillo. Reverse engineering recurrent neural networks with jacobian switching linear dynamical systems. Advances in Neural Information Processing Systems, 34:16700–16713, 2021. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 67}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'context_after': 't=1 is generated according to the ground-truth dynamical system in Eq. 5 with a bijective dynamics model f and an injective mixing function g. • (A3) The model ψ is composed of an encoder h, a dynamics model ˆf , a correction term α, and the similarity metric ϕ(u, v) = −∥u − v∥2 and attains the global minimizer of Eq. 3. 
Then, in the limit of T → ∞ for any point x in the support of the data marginal distribution: (a) The composition of mixing and de-mixing h(g(x)) = Lx + b is a bijective affine transform, ', 'paragraph_idx': 17, 'before_section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'context_before': 'Theorem 1 (Contrastive estimation of non-linear dynamics). Assume that ', 'modified_lines': '• (A1) A time-series dataset {yt}T • (A2) The system noise follows an iid normal distribution, p(εt) = N (εt|0, Σε) . ', 'original_lines': '3gxR-6 all-13 all-13 all 4X1L-6 3gxR-8 • (A1) A time-series dataset {yt}T The system noise follows an iid normal distribution, • (A2) p(εt) = N (εt|0, Σε). ', 'after_paragraph_idx': 17, 'before_paragraph_idx': 17}, {'section': 'Abstract', 'after_section': None, 'context_after': 'q(y′) exp[ψ(y, y′)]dy′ (cid:21) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'p(y′|y)ψ(y, y′)dy′ + log ', 'modified_lines': '', 'original_lines': '3gxR-9 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'ˆq(x) = 1 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'For the Euclidean case, we use the KDE based on the squared Euclidean norm, ', 'modified_lines': '', 'original_lines': '4X1L-5 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'The grey curve shows the decline in empirical identifiability (R2) as the uniformity assumption is violated by an increasing concentration κ (x-axis). Applying a KDE correction to the data resulted in substantially improved performance (red lines). ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'mixing function with a final projection layer to 50D observed data. The reference, positive and negative distributions are all vMFs parameterized according to κ (x-axis) in the case of the reference and negative distribution and κp for the positive distribution. ', 'modified_lines': ' ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 INTRODUCTION', 'after_section': None, 'context_after': 'latent variables, usually employing auxiliary variables such as class labels or time information (Hyvarinen & Morioka, 2016; 2017; Hyvarinen et al., 2019; Khemakhem et al., 2020; Sorrenson et al., 2020). In the case of time series data, Time Contrastive Learning (TCL) (Hyvarinen & Morioka, ', 'paragraph_idx': 6, 'before_section': None, 'context_before': 'RNNs [LFADS; Pandarinath et al., 2018]. Hurwitz et al. (2021) provide a detailed summary of additional algorithms. ', 'modified_lines': 'Nonlinear ICA. The field of Nonlinear ICA has recently provided identifiability results for identifying ', 'original_lines': 'Nonlinear ICA The field of Nonlinear ICA has recently provided identifiability results for identifying ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': None, 'context_after': 'more steps are performed, the performance of the ∇-SLDS model drops to about 95.5% vs. chance level for the control metric, again highlighting the high performance of our model, but also the room for improvement, as the oracle model stays at above 99% as expected. ', 'paragraph_idx': 15, 'before_section': None, 'context_before': 'of around 85% for single step prediction, both for the original and control metric. 
Our ∇-SLDS model and the ground truth dynamical model obtain over 99.9% well above the level of the control metric which remains at around 95%. The high value of the control metric is due to the small change ', 'modified_lines': 'introduced by a single time step, and should be considered when using and interpreting the metric. If ', 'original_lines': 'introduced by a single timestep, and should be considered when using and interpreting the metric. If ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': ' ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'O xt−τ + ', 'modified_lines': '', 'original_lines': ' where ντ t := C τ −1 (cid:88) i=0 Aiεt−i. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': None, 'context_after': 'the time resolution of the system is very high). A practical way to avoid feeding increasingly large inputs, is to not feed in all time-lags 0 . . . τ into the construction of O, but to subselect k time lags τ1, . . . , τk, with τ1 = 0 and τk = τ , and instead consider the system ', 'paragraph_idx': 22, 'before_section': None, 'context_before': 'would make ˜g injective and our theoretical guarantees from Theorem 1 would hold, up to the offset introduced by the noise ν. ', 'modified_lines': 'In practice, the change in latent space between different time steps might be small (especially when ', 'original_lines': 'In practice, the change in latent space between different time-steps might be small (especially when ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': None, 'context_after': '• Linear Dynamical System: ut+1 = Auut + εt, similar to the LDS system used before for latent dynamics. We generate three datasets with linear dynamics using a) no control, b) control following another LDS, and c) control following a step function. Each dataset consists of 1000 trials, each trial is ', 'paragraph_idx': 16, 'before_section': None, 'context_before': '• Step function: A composition of a negative and positive step function, starting at random ', 'modified_lines': 'time steps and random magnitudes. I.2 EXPERIMENT DETAILS ', 'original_lines': 'time-steps and random magnitudes. J.2 EXPERIMENT DETAILS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': 'This is conceptually similar to the conditional independent assumption in Hyvarinen et al. (2019) with auxiliary variable f (x), but with the distinction that at training time, we do not have u available, only x which requires the use of a dynamics model. ', 'paragraph_idx': 35, 'before_section': None, 'context_before': 'as a measure of component-wise identifiability. In non-linear ICA, it is typically assumed that a set of independent sources s1(t), . . . , sn(t) is passed through a mixing function to arrived at the observable signal (cf. Hyvarinen & Morioka, 2017). In contrast, in our work the sources are not ', 'modified_lines': 'independent, but are conditioned on the previous time step and the passed through a dynamics model. ', 'original_lines': 'independent, but are conditioned on the previous time-step and the passed through a dynamics model. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '5 EXPERIMENTS', 'after_section': None, 'context_after': 'different random seeds and on each dataset we train 3 models with different seeds, resulting in 9 models for each baseline and for each of the two settings. We train every baseline model for 50 epochs or until the training time reaches 8 hours. ', 'paragraph_idx': 31, 'before_section': None, 'context_before': 'σε = 0.0001 and rotation angles max(θi) = 10. Additionally, since our main baseline (CEBRA-time) performed best on datasets with lower ∆t where the noise dominates over the dynamics, we also compare against an SLDS dataset generated with larger dynamics noise σε = 0.001 and smaller ', 'modified_lines': 'rotation angles max(θi) = 5 (see Figure 4b). We generate 3 different versions of each dataset using ', 'original_lines': 'rotation angles max(θi) = 5 (see Figure 4b). We generate4 3 different versions of each dataset using ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-02-28 23:10:15
ICLR.cc/2025/Conference
oFznv6Hqvv
3EC25XMVtD
[{'section': '1 (7)', 'after_section': None, 'context_after': '4 ', 'paragraph_idx': 25, 'before_section': None, 'context_before': '(7) Note that the dynamics model ˆf (xt; W, zt) depends on an additional latent variable zt = ', 'modified_lines': '[zt,1, . . . , zt,K]⊤ which contains probabilities to parametrize the dynamics. During inference, we ', 'original_lines': '[zt,1, . . . , zt,K]⊤ which contains probabilities to parametrize the dynamics . During inference, ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 (7)', 'after_section': '1 (7)', 'context_after': 'bution (Jang et al., 2016) and we use a temperature τ to control the smoothness of the resulting probabilities. During pilot experiments, we found that the reciprocal parameterization of the logits outperforms other choices for computing an argmin, like flipping the sign. ', 'paragraph_idx': 25, 'before_section': None, 'context_before': 'Gumbel Softmax Published as a conference paper at ICLR 2025 ', 'modified_lines': 'can obtain the index kt = arg maxk zt,k. The variables gk are samples from the Gumbel distri- ', 'original_lines': 'we can obtain the index kt = arg maxk zt,k. The variables gk are samples from the Gumbel distri- ', 'after_paragraph_idx': 25, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': '5 ', 'paragraph_idx': 34, 'before_section': None, 'context_before': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001. ', 'modified_lines': 'Model estimation. For the feature encoder h, baseline and our model use an MLP with three layers followed by GELU activations (Hendrycks & Gimpel, 2016). Model capacity scales with the embedding dimensionality d. The last hidden layer has 10d units and all previous layers have 30d units. For the SLDS and LDS datasets, we train on batches with 2048 samples each (reference ', 'original_lines': 'Model estimation. For the feature encoder h, baseline and our model use an MLP with three layers followed by GELU activations (Hendrycks & Gimpel, 2016). Each layer has 180 units. We train on batches with 2048 samples each (reference and positive) and use 215=32,768 negative samples. We use the Adam optimizer (Kingma, 2014) with learning rates 3 × 10−4 for LDS data, 10−3 for SLDS ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_after': 'Evaluation metrics. Our metrics are informed by the result in Theorem 1 and measure empirical identifiability up to affine transformation of the latent space and its underlying linear or non-linear ', 'paragraph_idx': 34, 'before_section': None, 'context_before': '80.30 ± 14.13 93.91 ± 5.32 ', 'modified_lines': 'and positive). We use 216=65536 negative samples for SLDS and 20k negative samples for LDS data. For the Lorenz data, we use a batch size of 1024 and 20k negative samples. We use the Adam optimizer (Kingma, 2014) with learning rates 3 × 10−4 for LDS data, 10−3 for SLDS data, and 10−4 for Lorenz system data. For the SLDS data, we use a different learning rate of 10−2 for the parameters of the dynamics model. We train for 50k steps on SLDS data and for 30k steps for LDS and Lorenz system data. 
Our baseline model is standard self-supervised contrastive learning with the InfoNCE loss, which is similar to the CEBRA-time model (with symmetric encoders, i.e., without a dynamics model; cf. Schneider et al., 2023). For DYNCL, we add an LDS or ∇-SLDS dynamics model for fitting. For our baseline, we post-hoc fit the corresponding model on the recovered latents minimizing the predictive mean squared error via gradient descent. ', 'original_lines': 'data, and 10−4 for Lorenz system data. Our baseline model is standard self-supervised contrastive learning with the InfoNCE loss, which corresponds to the CEBRA-time model (with symmetric encoders, i.e., without a dynamics model; cf. Schneider et al., 2023). For DYNCL, we add an LDS or ∇-SLDS dynamics model for fitting. For our baseline, we post-hoc fit the corresponding model on the recovered latents minimizing the predictive mean squared error via gradient descent. ', 'after_paragraph_idx': 35, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'panel b and c), the estimation scheme breaks again. However, this property offers an explanation for the success of existing contrastive estimation algorithms like CEBRA-time (Schneider et al., 2023) which successfully estimate dynamics in absence of a dynamics model. ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'recovery is possible for cases with small angles, both in the linear and non-linear case. While in some cases, this learning setup might be applicable in practice, it seems generally unrealistic to be able to perturb the system beyond the actual dynamics. As we scale the dynamics to larger values (Figure 4, ', 'modified_lines': '', 'original_lines': ' 7 babcσ=0.001identity (B)noisedynamics∇-SLDSGT SLDSσ=0.0001 Published as a conference paper at ICLR 2025 Figure 5: Contrastive learning of 3D non-linear dynamics following a Lorenz attractor model. (a), left to right: ground truth dynamics for 10k samples with dt = 0.0005 and σ = 0.1, estimation results for baseline (identity dynamics), DynCL with ∇-SLDS, estimated mode sequence. (b), empirical identifiability (R2) between baseline (BAS) and ∇-SLDS for varying numbers of discrete states K. (c, d), same layout but for dt = 0.01 and σ = 0.001. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '8 050,000samplesground truthbaseline (w/o dynamics)ours (switching dynamics)index050,000samplestimetimeK=1K=200BASK=100K=10abcdσ = 0.1σ = 0.001Δt = 0.01Δt = 0.0005 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'for the case of a von Mises-Fisher (vMF) distribution for pε and dot-product similarity for ϕ in Appendix D. ', 'modified_lines': '', 'original_lines': '6.3 ABLATION STUDIES For practitioners leveraging contrastive learning for statistical analysis, it is important to know the trade-offs in empirical performance in relation to various parameters. In real-world experiments, the most important factors are the size of the dataset, the trial-structure of the dataset, the latent ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 RESULTS', 'after_section': '6 RESULTS', 'context_after': 'dimensionality we can expect to recover, and the degree of non-linearity between latents and observ- ables. 
We consider these factors of influence: As a reference, we use the SLDS system with a 6D latent space, 1M samples (1k trials × 1k samples), L = 4 mixing layers, 10 degrees for the rotation ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'impact of complexity of the latent dynamics in terms of d) latent dimensionality, e) number of modes to switch in between and f) the switching frequency paramameterized via the switching probability. ', 'modified_lines': '6.3 ABLATION STUDIES For practitioners leveraging contrastive learning for statistical analysis, it is important to know the trade-offs in empirical performance in relation to various parameters. In real-world experiments, the most important factors are the size of the dataset, the trial-structure of the dataset, the latent ', 'original_lines': '', 'after_paragraph_idx': 48, 'before_paragraph_idx': None}, {'section': '7 DISCUSSION', 'after_section': '7 DISCUSSION', 'context_after': 'In comparison to contrastive predictive coding [CPC; Oord et al., 2018] or wav2vec (Schneider et al., 2019), DYNCL generalizes the concept of training contrastive learning models with (explicit) dynamics models. CPC uses an RNN encoder followed by linear projection, while wav2vec leverages CNNs dynamics models and affine projections. Theorem 1 ', 'paragraph_idx': 59, 'before_section': '7 DISCUSSION', 'context_before': 'The DYNCL framework is versatile and allows to study the performance of contrastive learning in conjunction with different dynamics models. By exploring various special cases (identity, linear, switching linear), our study categorizes different forms of contrastive learning and makes predictions ', 'modified_lines': 'about their behavior in practice. ', 'original_lines': 'about their behavior in practice. ', 'after_paragraph_idx': 59, 'before_paragraph_idx': 59}, {'section': '7 DISCUSSION', 'after_section': '7 DISCUSSION', 'context_after': 'A limitation of the present study is its main focus on simulated data which clearly corroborates our theory but does not yet demonstrate real-world applicability. However, our simulated data bears the ', 'paragraph_idx': 62, 'before_section': '7 DISCUSSION', 'context_before': 'Finally, there is a connection to the joint embedding predictive architecture (JEPA; LeCun, 2022; Assran et al., 2023). The architecture setup of DYNCL can be regarded as a special case of JEPA, ', 'modified_lines': 'but with symmetric encoders to leverage distillation of the system dynamics into the predictor (the dynamics model). In contrast to JEPA, the use of symmetric encoders requires a contrastive loss for avoiding collapse and, more importantly, serves as the foundation for our theoretical result. ', 'original_lines': 'but with symmetric encoders to leverage distillation of the system dynamics into the predictor (the dynamics model). In contrast to JEPA, the use of symmetric encoders again requires use of a contrastive loss for avoiding collapse and, more importantly, serves as the foundation for our theoretical result. ', 'after_paragraph_idx': 63, 'before_paragraph_idx': 62}, {'section': '8 CONCLUSION', 'after_section': '8 CONCLUSION', 'context_after': 'Datasets. We evaluate our experiments on a variety of synthetic datasets. The datasets comprise different dynamical systems, from linear to nonlinear. ', 'paragraph_idx': 66, 'before_section': '8 CONCLUSION', 'context_before': '5 and for each experiment of the Appendix within the respective chapter. Theory. 
Our theoretical claims are backed by a complete proof attached in Appendix A. Assumptions ', 'modified_lines': 'are outlined in the main text (Section 3) and again in more detail in Appendix A. ', 'original_lines': 'are outlined in the main text (Section 1) and again in more detail in Appendix A. ', 'after_paragraph_idx': 67, 'before_paragraph_idx': 65}, {'section': 'Abstract', 'after_section': None, 'context_after': 'H Non-Injective Mixing Functions . H.1 Experimental Validation . ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '. . ', 'modified_lines': '', 'original_lines': '. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'It was shown [Proposition 1, Schneider et al., 2023 ] that this loss function is convex in ψ with the unique minimizer ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'q(y′) exp[ψ(y, y′)]dy′ ', 'modified_lines': '. ', 'original_lines': '(cid:21) . (16) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': 'We insert this into Eq. 22 and obtain ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'Published as a conference paper at ICLR 2025 ', 'modified_lines': '', 'original_lines': ' and computing the derivative with respect to x yields Jf (x) = A1J ˆf (Lx + b)L + Jv(x). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '(31) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ˆf (Lx + b) ', 'modified_lines': '', 'original_lines': ' (30) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '8 CONCLUSION', 'after_section': None, 'context_after': 'Published as a conference paper at ICLR 2025 C ADDITIONAL RELATED WORK Contrastive learning. An influential and conceptual motivation for our work is Contrastive Predictive Coding (CPC) (Oord et al., 2018) which uses the InfoNCE loss with an additional non-linear projection head implemented as an RNN to aggregate information from multiple time steps. Then, an ', 'paragraph_idx': 68, 'before_section': None, 'context_before': 'drop this computationally expensive term when applying the method on real-world datasets that are approximately uniform. ', 'modified_lines': '20 ', 'original_lines': '21 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': 'σ = 0.0001σ = 0.01 Published as a conference paper at ICLR 2025 ', 'paragraph_idx': 34, 'before_section': None, 'context_before': 'Figure 10: Visualizations of 6D linear dynamical systems at σ = 0.0001 (left) and σ = 0.01 for 10 degree rotations. These systems are used in our SLDS experiments. ', 'modified_lines': '24 ', 'original_lines': '25 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': ' ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'O xt−τ + ', 'modified_lines': '', 'original_lines': ' where ντ t := C τ −1 (cid:88) i=0 Aiεt−i. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': 'Published as a conference paper at ICLR 2025 ', 'paragraph_idx': 34, 'before_section': None, 'context_before': 'Table 6: SLDS and Lorenz dataset from Table 1 with the addition of the MCC metric. ', 'modified_lines': '36 ', 'original_lines': '37 ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-01 23:06:00
ICLR.cc/2025/Conference
3EC25XMVtD
zVWFIErqTQ
[]
2025-03-01 23:08:16
ICLR.cc/2025/Conference
zVWFIErqTQ
22A0PSumim
[{'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'In this work, we revisit and extend contrastive learning in the context of system identification. We uncover several surprising facts about its out-of-the-box effectiveness in identifying dynamics and ', 'paragraph_idx': 4, 'before_section': '1 INTRODUCTION', 'context_before': 'modern machine learning systems for learning from sequential data, proving highly effective for building meaningful latent representations (Baevski et al., 2022; Bommasani et al., 2021; Brown, 2020; Oord et al., 2018; LeCun, 2022; Sermanet et al., 2018; Radford et al., 2019). An emerging ', 'modified_lines': 'view is a connection between these algorithms and learning of world models (Ha & Schmidhuber, 2018; Assran et al., 2023; Garrido et al., 2024). However, the theoretical understanding of non-linear system identification by these sequence-learning algorithms remains limited. ', 'original_lines': 'view is a connection between these algorithms and learning of “world models” (Assran et al., 2023; Garrido et al., 2024). Yet, non-linear system identification in such sequence-learning algorithms is poorly theoretically studied. ', 'after_paragraph_idx': 5, 'before_paragraph_idx': 4}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': '∗Equal contribution. †Correspondence: [email protected] ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': 'enable interpretable dynamics inference across a range of data generating processes, we propose a general framework for linear and non-linear system identification with CL (Figure 1). ', 'modified_lines': 'Background. An influential motivation of our work is Contrastive Predictive Coding (CPC; Oord et al., 2018). CPC can be recovered as a special case of our framework when using an RNN dynamics ', 'original_lines': 'Background. An influential motivation of our work is Contrastive Predictive Coding [CPC; Oord et al., 2018]. CPC can be recovered as a special case of our framework when using an RNN dynamics ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 5}, {'section': '1 INTRODUCTION', 'after_section': '1 INTRODUCTION', 'context_after': 'advances like SNICA (H¨alv¨a et al., 2021) for more generally structured data-generating processes. In contrast to previous work, we focus on bridging time- series representation learning through contrastive learning ', 'paragraph_idx': 6, 'before_section': '1 INTRODUCTION', 'context_before': '(Chen & Billings, 1989). Additionally, several works have also explored generative models for general dynamics (Duncker et al., 2019) and switching dynamics, e.g. rSLDS (Linderman et al., 2017). In the Nonlinear ICA literature, identifiable algorithms for time-series data, such as Time Contrastive Learn- ', 'modified_lines': 'ing (TCL; Hyvarinen & Morioka, 2016) for non-stationary processes and Permutation Contrastive Learning (PCL; Hyvarinen & Morioka, 2017) for stationary data have been proposed, with recent ', 'original_lines': 'ing [TCL; Hyvarinen & Morioka, 2016] for non-stationary processes and Permutation Contrastive Learning [PCL; Hyvarinen & Morioka, 2017] for stationary data have been proposed, with recent ', 'after_paragraph_idx': 6, 'before_paragraph_idx': 6}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_after': 'h is shared across the reference yt, positive yt+1, and negative samples y− i . 
A dynam- ', 'paragraph_idx': 11, 'before_section': None, 'context_before': 'y− i ', 'modified_lines': 'Figure 1: DYNCL framework: The encoder ', 'original_lines': 'Figure 1: DynCL framework: The encoder ', 'after_paragraph_idx': 11, 'before_paragraph_idx': None}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': None, 'context_after': '2 Published as a conference paper at ICLR 2025 and the resulting points are compared through the similarity function ϕ. The similarity function ϕ will be informed by the form of (possibly induced) system noise εt. In the simplest form, the noise ', 'paragraph_idx': 12, 'before_section': None, 'context_before': '1Note that we can equivalently write ϕ(˜h(x)), ˜h′(x′)) using two asymmetric encoder functions, see addi- ', 'modified_lines': 'tional results in Appendix D. Figure 2: Graphical intuition behind Theorem 1. (a), the ground truth latent space is mapped to observables through the injective mixing function g. Our model maps back into the latent space. The composition of mixing and de-mixing by the model is an affine transform. (b), dynamics in the ground-truth space are mapped to the latent space. By observing variations introduced by the system noise ε, our model is able to infer the ground-truth dynamics up to an affine transform. ', 'original_lines': 'tional results the potent in Appendix D. Figure 2: Graphical intuition behind Theorem 1. a, the ground truth latent space is mapped to observables through the injective mixing function g. Our model maps back into the latent space. The composition of mixing and de-mixing by the model is an affine transform. b dynamics in the ground-truth space are mapped to the latent space. By observing variations introduced by the system noise ε, our model is able to infer the ground-truth dynamics up to an affine transform. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': None, 'context_after': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS ', 'paragraph_idx': 14, 'before_section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'context_before': 'Note, the additional term α(y′) is a correction applied to account for non-uniform marginal distri- butions. It can be parameterized as a kernel density estimate (KDE) with log ˆq(h(y′)) ≈ log q(x′) around the datapoints. In very special cases, the KDE makes a difference in empirical performance ', 'modified_lines': '(App. B, Fig. 9) and is required for our theory. Yet, we found that on the time-series datasets considered, it was possible to drop this term without loss in performance (i.e., α(y′) = 0). ', 'original_lines': '(App. B, Fig. 9 ) and is required for our theory. Yet, we found that on the time-series datasets considered, it was possible to drop this term without loss in performance (i.e., α(y′) = 0) . ', 'after_paragraph_idx': None, 'before_paragraph_idx': 14}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': None, 'context_after': 'This means that simultaneously fitting the system dynamics and encoding model allows us to recover the system matrix up to an indeterminacy. ', 'paragraph_idx': 19, 'before_section': None, 'context_before': 'model fitting, ˆf (x) = x = LAL−1x + b, which is impossible (Theorem 1b; also see App. Eq. 22). 
We can fix this case by either decoupling the two encoders (Appendix D) , or taking a more structured approach and parameterizing a dynamics model with a dynamics matrix: ', 'modified_lines': 'Corollary 2. For a ground-truth linear dynamical system f (x) = Ax and dynamics model ˆf (x) = ˆAx, we identify the latents up to h(g(x)) = Lx + b and dynamics with ˆA = LAL−1. ', 'original_lines': 'Corollary 2. For a ground-truth linear dynamical system f (x) = Ax and dynamics model ˆf (x) = ˆAx, we identify the latents up to h(g(x)) = Lx + b and dynamics with ˆA = LAL−1. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'context_after': 'data. For the Lorenz data, we use a batch size of 1024 and 20k negative samples. We use the Adam optimizer (Kingma, 2014) with learning rates 3 × 10−4 for LDS data, 10−3 for SLDS data, and 10−4 for Lorenz system data. For the SLDS data, we use a different learning rate of 10−2 for the ', 'paragraph_idx': 34, 'before_section': None, 'context_before': '80.30 ± 14.13 93.91 ± 5.32 ', 'modified_lines': 'and positive). We use 216 = 65536 negative samples for SLDS and 20k negative samples for LDS ', 'original_lines': 'and positive). We use 216=65536 negative samples for SLDS and 20k negative samples for LDS ', 'after_paragraph_idx': 34, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': 'min T (cid:88) t=1 2 and min T (cid:88) t=1 2. (10) To evaluate the identifiability of the representation, we measure the R2 between the true latents xt 1 Not explicitly shown, but the argument in Corollary 2 applies to each piecewise linear section of the SLDS. 2 ∇-SLDS is only an approximation of the functional form of the underlying system. ', 'paragraph_idx': 37, 'before_section': None, 'context_before': 'dynamics. All metrics are estimated on the dataset the model is fit on. See Appendix F for additional discussion on estimating metrics on independently sampled dynamics. ', 'modified_lines': 'To account for the affine indeterminacy, we estimate L, b for ˆx = Lx + b which allows us to map ground truth latents x to recovered latents ˆx (cf. Theorem 1a). In cases where the inverse transform x = L−1( ˆx − b) is required, we can either compute L−1 directly, or for the purpose of numerical stability estimate it from data, which we denote as L′. The values of L,b and L′,b′ are computed via a linear regression: L,b ∥ ˆxt − (Lxt + b)∥2 L′,b′ ∥xt − (L′ ˆxt + b′)∥2 and the optimally aligned recovered latents L′ ˆxt + b′ across time steps t = 1 . . . T in the time-series. ', 'original_lines': 'To account for the affine indeterminacy, we explicitly estimate L, b for x = L ˆx + b which allows us to transform recovered latents ˆx into the space of ground truth latents x. In those cases, where the inverse transform ˆx = L−1(x − b) is required, for the purpose of numerical stability we estimate it from data rather than computing an explicit inverse of L. This results in estimates for L1,b1 and L2,b2, which we fit via linear regression: L1,b1 ∥ ˆxt − (L1xt + b1)∥2 L2,b2 ∥xt − (L2 ˆxt + b2)∥2 and the optimally aligned recovered latents L2 ˆxt + b2 across time steps t = 1 . . . T in the time-series. 
', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 Not explicitly shown, but the argument in Corollary 2 applies to each piecewise linear section of the SLDS.2 ∇-SLDS is only an approximation of the functional form of the underlying system.', 'after_section': '1 Not explicitly shown, but the argument in Corollary 2 applies to each piecewise linear section of the SLDS.2 ∇-SLDS is only an approximation of the functional form of the underlying system.', 'context_after': 'between the true dynamics matrix A and the estimated dynamics matrix ˆA by accounting for the linear transformation between the true and recovered latent spaces. The LDS error (related to the (11) As a second, more general identifiability metric for the recovered dynamics ˆf , we introduce dynR2, computes the R2 between the predicted dynamics ˆf and the true dynamics f , corrected for the linear transformation between the two latent spaces. Specifically, motivated by Theorem 1(b), we compute (12) ', 'paragraph_idx': 39, 'before_section': None, 'context_before': '(no dynamics) to ∇-SLDS and a model fitted with ground-truth dynamics. (c) cluster accuracies for models shown in (b). ', 'modified_lines': 'We also propose two metrics as direct measures of identifiability for the recovered dynamics ˆf . For linear dynamics models, we introduce the LDS error. It denotes the norm of the difference metric for Dynamical Similarity Analysis; Ostrow et al., 2023) is computed as (cf. Corollary 2): LDS(A, ˆA) = ∥A − L−1 ˆAL∥F . which builds on Theorem 1b to evaluate the identifiability of non-linear dynamics. This metric dynR2(f , ˆf ) = r2 score( ˆf ( ˆx), Lf (L′ ˆx + b′) + b) ', 'original_lines': 'We also propose two metrics as direct measures of identifiability for the recovered dynamics ˆf . First, the LDS error, which is suitable only for linear dynamics models, denotes the norm of the difference metric for Dynamical Similarity Analysis; Ostrow et al., 2023) is then computed as (cf. Corollary 2): LDS(A, ˆA) = ∥A − L1 ˆAL2∥F ≈ ∥A − L−1 ˆAL∥F . which builds on Theorem 1 to evaluate the identifiability of non-linear dynamics. This metric dynR2(f , ˆf ) = r2 score( ˆf ( ˆx), L1f (L2 ˆx + b2) + b1) ', 'after_paragraph_idx': 39, 'before_paragraph_idx': None}, {'section': '6 RESULTS', 'after_section': '6 RESULTS', 'context_after': 'strong performance, both in terms of latent R2 (99.5%) and dynamics R2 (99.9%) outperforming the respective baselines (76.8% R2 and 85.5% dynamics R2). For non-linear dynamics, the baseline model fails entirely (41.0%/27.0%), while ∇-SLDS dynamics can be fitted with 94.1% R2 for latents ', 'paragraph_idx': 43, 'before_section': '6 RESULTS', 'context_before': '6.1 VERIFICATION OF THE THEORY FOR LINEAR DYNAMICS ', 'modified_lines': 'Suitable dynamics models enable identification of latents and dynamics. For all considered classes of models, we show in Table 1 that DYNCL with a suitable dynamics model effectively identifies the correct dynamics. For linear dynamics (LDS), DYNCL reaches an R2 of 99.0%, close to the oracle performance (99.5%). Most importantly, the average LDS error of our method (7.7×10−3) is very close to the oracle (4.4×10−3), in contrast to the baseline model (2.1×10−1) which has a substantially larger LDS error. In the case of switching linear dynamics (SLDS), DYNCL also shows ', 'original_lines': 'Suitable dynamics models enable identification of latents and dynamics. 
For all considered classes of models, we show in Table 1 that DYNCL effectively identifies the correct dynamics. For linear dynamics (LDS), DYNCL reaches an R2 of 99.0%, close to the oracle performance (99.5%). Most importantly, the LDS error of our method (0.38) is substantially closer to the oracle (0.17) compared to the baseline model (21.24). In the case of switching linear dynamics (SLDS), DYNCL also shows ', 'after_paragraph_idx': 43, 'before_paragraph_idx': 43}, {'section': '6 RESULTS', 'after_section': None, 'context_after': 'Learning noisy dynamics does not require a dynamics model. If the variance of the distribution for εt dominates the changes actually introduced by the dynamics, we find that the baseline model ', 'paragraph_idx': 45, 'before_section': '6 RESULTS', 'context_before': 'Figure 5: Contrastive learning of 3D non-linear dynamics following a Lorenz attractor model. (a), left to right: ground truth dynamics for 10k samples with dt = 0.0005 and σ = 0.1, estimation results for baseline (identity ', 'modified_lines': 'dynamics), DYNCL with ∇-SLDS, estimated mode sequence. (b), empirical identifiability (R2) between baseline (BAS) and ∇-SLDS for varying numbers of discrete states K. (c, d), same layout but for dt = 0.01 and σ = 0.001. ', 'original_lines': 'dynamics), DynCL with ∇-SLDS, estimated mode sequence. (b), empirical identifiability (R2) between baseline (BAS) and ∇-SLDS for varying numbers of discrete states K. (c, d), same layout but for dt = 0.01 and σ = 0.001. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 45}, {'section': '6 RESULTS', 'after_section': None, 'context_after': '6.3 ABLATION STUDIES ', 'paragraph_idx': 51, 'before_section': '6 RESULTS', 'context_before': 'Figure 6: Variations and ablations for the SLDS. We compare the ∇-SLDS model to the ground-truth switching dynamics (oracle) and a standard CL model without dynamics (baseline). All variations are with respect to the setting with 1M time steps (1k trials × 1k samples), L = 4 mixing layers, d = 6 latent dimensionality, 5 modes, ', 'modified_lines': 'and p = 0.0001 switching probability. We study the impact of the dataset size in terms of (a) samples per trial, (b) the number of trials, the impact of nonlinearity of the observations in terms of (c) number of mixing layers, the impact of complexity of the latent dynamics in terms of (d) latent dimensionality, (e) number of modes to switch in between and (f) the switching frequency paramameterized via the switching probability. ', 'original_lines': 'and p = 0.0001 switching probability. We study the impact of the dataset size in terms of a) samples per trial, b) the number of trials, the impact of nonlinearity of the observations in terms of c) number of mixing layers, the impact of complexity of the latent dynamics in terms of d) latent dimensionality, e) number of modes to switch in between and f) the switching frequency paramameterized via the switching probability. ', 'after_paragraph_idx': None, 'before_paragraph_idx': 51}, {'section': '6 RESULTS', 'after_section': '6 RESULTS', 'context_after': 'increasing modes to 200 improves performance, but eventually converges to a stable maximum for all noise levels. ', 'paragraph_idx': 58, 'before_section': '6 RESULTS', 'context_before': 'Number of modes for non-linear dynamics fitting (Fig. 7). We study the effect of increasing the number of matrices in the parameter bank W in the ∇-SLDS model. 
The figure depicts the impact ', 'modified_lines': 'of increasing the number of modes for DYNCL on the non-linear Lorenz dataset. We observe that ', 'original_lines': 'of increasing the number of modes for DynCL on the non-linear Lorenz dataset. We observe that ', 'after_paragraph_idx': 58, 'before_paragraph_idx': 58}, {'section': '7 DISCUSSION', 'after_section': '7 DISCUSSION', 'context_after': 'learning models with (explicit) dynamics models. CPC uses an RNN encoder followed by linear projection, while wav2vec leverages CNNs dynamics models and affine projections. Theorem 1 applies to both these models, and offers an explanation for their successful empirical performance. Nonlinear ICA methods, such as TCL (Hyvarinen & Morioka, 2016) and PCL (Hyvarinen & Morioka, 2017) provide identifiability of the latent variables leveraging temporal structure of the data. Com- dynamics, and focuses on explicit dynamics modeling beyond solving the demixing problem. For applications in scientific data analysis, CEBRA (Schneider et al., 2023) uses supervised or self-supervised contrastive learning, either with symmetric encoders or asymmetric encoder functions. While our results show that such an algorithm is able to identify dynamics for a sufficient amount of system noise, adding dynamics models is required as the system dynamics dominate. Hence, the and makes it applicable for a broader class of problems. Finally, there is a connection to the joint embedding predictive architecture (JEPA; LeCun, 2022; ', 'paragraph_idx': 59, 'before_section': '7 DISCUSSION', 'context_before': 'The DYNCL framework is versatile and allows to study the performance of contrastive learning in conjunction with different dynamics models. By exploring various special cases (identity, linear, switching linear), our study categorizes different forms of contrastive learning and makes predictions ', 'modified_lines': 'about their behavior in practice. In comparison to contrastive predictive coding (CPC; Oord et al., 2018) or wav2vec (Schneider et al., 2019), DYNCL generalizes the concept of training contrastive pared to DYNCL, they do not explicitly model dynamics and assume either stationarity or non- stationarity of the time series (Hyv¨arinen et al., 2023), whereas DYNCL assumes bijective latent DYNCL approach with LDS or ∇-SLDS dynamics generalises the self-supervised mode of CEBRA ', 'original_lines': 'about their behavior in practice. In comparison to contrastive predictive coding [CPC; Oord et al., 2018] or wav2vec (Schneider et al., 2019), DYNCL generalizes the concept of training contrastive pared to DynCL, they do not explicitly model dynamics and assume either stationarity or non- stationarity of the time series (Hyv¨arinen et al., 2023), whereas DynCL assumes bijective latent DynCL approach with LDS or ∇-SLDS dynamics generalises the self-supervised mode of CEBRA ', 'after_paragraph_idx': 59, 'before_paragraph_idx': 59}, {'section': '2 CONTRASTIVE LEARNING FOR TIME-SERIES', 'after_section': None, 'context_after': '8 CONCLUSION ', 'paragraph_idx': 10, 'before_section': None, 'context_before': 'methods on such datasets will continue to show the promise of contrastive learning for dynamics identification. Integrating recent benchmarks like DynaDojo (Bhamidipaty et al., 2023) or datasets from Chen et al. (2021) with realistic mixing functions (g) offers a promising direction for evaluating ', 'modified_lines': 'latent dynamics models. 
As a demonstration of real-world applicability, we benchmarked DYNCL on a neural recordings dataset in Appendix J. ', 'original_lines': 'latent dynamics models. As a demonstration of real-world applicability, we compared DynCL to CEBRA-Time (Schneider et al., 2023) on a neural recordings dataset in Appendix J. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': None, 'context_after': 'additional algorithms. Nonlinear ICA. The field of Nonlinear ICA has recently provided identifiability results for identifying ', 'paragraph_idx': 22, 'before_section': None, 'context_before': 'g and f using a first-order Taylor-series approximation and then apply the Kalman Filter (KF) to the linearized functions. NARMAX, on the other hand, typically employs a power-form polynomial representation to model the non-linearities. In neuroscience, practical (generative algorithms) include ', 'modified_lines': 'systems modeling linear dynamics (fLDS; Gao et al., 2016) or non-linear dynamics modelled by RNNs (LFADS; Pandarinath et al., 2018). Hurwitz et al. (2021) provide a detailed summary of ', 'original_lines': 'systems modeling linear dynamics [fLDS; Gao et al., 2016] or non-linear dynamics modelled by RNNs [LFADS; Pandarinath et al., 2018]. Hurwitz et al. (2021) provide a detailed summary of ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 Not explicitly shown, but the argument in Corollary 2 applies to each piecewise linear section of the SLDS.2 ∇-SLDS is only an approximation of the functional form of the underlying system.', 'after_section': None, 'context_after': '(63) ', 'paragraph_idx': 39, 'before_section': None, 'context_before': 'and respectively for ˆf n in relation to ˆf . We then consider two variants of Eq. 12. Firstly, we perform multiple forward predictions (n > 1) and compare the resulting embeddings: ', 'modified_lines': 'r2 score( ˆf n( ˆx), Lf n(L′ ˆx + b′) + b). ', 'original_lines': 'r2 score( ˆf n( ˆx), L1f n(L2 ˆx + b2) + b1). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '1 Not explicitly shown, but the argument in Corollary 2 applies to each piecewise linear section of the SLDS.2 ∇-SLDS is only an approximation of the functional form of the underlying system.', 'after_section': None, 'context_after': '(64) ', 'paragraph_idx': 39, 'before_section': None, 'context_before': 'number of time steps, and errors accumulate faster. Secondly, as an additional control, we replace ˆf with the identity, and compute ', 'modified_lines': 'r2 score( ˆx, Lf n(L′ ˆx + b′) + b). ', 'original_lines': 'r2 score( ˆx, L1f n(L2 ˆx + b2) + b1). ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': ' ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'ντ t ', 'modified_lines': '', 'original_lines': 'where ντ t := C τ −1 (cid:88) i=0 Aiεt−i. ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 STRUCTURAL IDENTIFIABILITY OF NON-LINEAR LATENT DYNAMICS', 'after_section': None, 'context_after': 'and c non-linear mixing function. d, to achieve injective mixing functions through time-lag embeddings, we here include the full 100-step length window, but only pass k equidistantly spaced points within this window of number of points in the context for a fixed τ = 100-step window and f for nonlinear mixing. 
where C1 ∈ Rm×r, C2 ∈ Rr×n are randomly sampled, g : Rr → Rr is a random injective and ', 'paragraph_idx': 16, 'before_section': None, 'context_before': 'Figure 14: Non-injective mixing functions can be successfully handled by a time-lag embedding. a, in the first setting, we pass observations from τ consecutive time steps into our feature encoder. b, empirical identifiability ', 'modified_lines': 'of the latent space (R2) for baseline (no dynamics) vs. DYNCL (linear dynamics) as we increase n for a linear length τ . e, empirical identifiability for baseline (no dynamics) vs. DYNCL (linear dynamics) as we increase the ', 'original_lines': 'of the latent space (R2) for baseline (no dynamics) vs. DynCL (linear dynamics) as we increase n for a linear length τ . e, empirical identifiability for baseline (no dynamics) vs. DynCL (linear dynamics) as we increase the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '87.27 ± 10.6 99.16 ± 1.00 99.53 ± 0.27 ', 'paragraph_idx': 2, 'before_section': None, 'context_before': '98.26 ± 0.17 98.10 ± 0.35 ', 'modified_lines': '', 'original_lines': 'Results %dynR2 ↑ ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 RESULTS', 'after_section': None, 'context_after': 'J.3 DISCUSSION ', 'paragraph_idx': 48, 'before_section': None, 'context_before': 'each trial follows roughly the same circular motion as the other trials. When removing temporal structure by shuffling (Fig. 17), neither embedding shows non-trivial ', 'modified_lines': 'structure and the consistency metric is low on both the train (panel a) and validation set (panel b). ', 'original_lines': 'structure and the consistency metric is low on both the train (panel a) and validation set (panel b) ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': 'Abstract', 'after_section': None, 'context_after': '3Code: https://github.com/weirayao/tdrl (MIT License) ', 'paragraph_idx': 2, 'before_section': None, 'context_before': 'encoder architecture is equal to the baseline architecture, we introduce an additional variant “PCL-L” (L=Large) to match the number of parameters as close as possible. We do so by increasing the hidden dimension of the PCL encoder model from 50 to 160 and reduce the number of layers from 4 to 3, ', 'modified_lines': '', 'original_lines': 'effectively increasing the number of parameters by factor 5. Because TDRL can be considered the ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '3 , ρ = 28 and system noise standard deviation σϵ = 0.001.', 'after_section': None, 'context_after': 'DynCL+SLDS 59.46 ± 5.84 ', 'paragraph_idx': 34, 'before_section': None, 'context_before': '80.90 ± 7.82 81.40 ± 6.42 ', 'modified_lines': 'CEBRA-time ', 'original_lines': 'CEBRA-Time ', 'after_paragraph_idx': None, 'before_paragraph_idx': None}, {'section': '6 RESULTS', 'after_section': None, 'context_after': 'most promising baseline candidate (beside CEBRA-time) based on the results from table 7, we also double its encoder size from using hidden dimension 128 to 256, resulting in the ”TDRL-L” baseline model. ', 'paragraph_idx': 43, 'before_section': None, 'context_before': 'noise setting is equivalent to Table 1. For the high noise setting (low ∆t), we use larger noise and lower rotation angles, setting σε = 0.001, max(θi) = 5. ', 'modified_lines': 'effectively increasing the number of parameters by factor 5. 
Because TDRL can be considered the ', 'original_lines': '', 'after_paragraph_idx': None, 'before_paragraph_idx': None}]
2025-03-15 10:04:11
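Each `content` cell in the preview above appears to be a Python-literal list of edit records, with keys such as `section`, `context_before`, `context_after`, `original_lines`, and `modified_lines` (an empty revision shows up as `[]`). Below is a minimal sketch of how such a cell could be decoded; `row_content` is a shortened, hypothetical stand-in for a real cell, not an actual row from the dataset.

```python
import ast

# Shortened stand-in for one `content` cell; real cells are much longer but use
# the same Python-literal structure visible in the preview above.
row_content = (
    "[{'section': '2 RELATED WORK', "
    "'original_lines': 'old text line 1\\nold text line 2', "
    "'modified_lines': 'new text line 1'}]"
)

# literal_eval parses the Python literal without executing any code.
edits = ast.literal_eval(row_content)

for edit in edits:
    removed = edit.get("original_lines", "").splitlines()
    added = edit.get("modified_lines", "").splitlines()
    print(f"{edit.get('section', '<no section>')}: -{len(removed)} / +{len(added)} lines")
```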
README.md exists but content is empty.
Downloads last month: 21

Collection including ulab-ai/ResearchArcade-openreview-revisions
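Since the dataset card above is empty, the following is a minimal, hedged sketch of pulling the data with the Hugging Face `datasets` library. The repository id is taken from the collection link above; the `split="train"` argument and the exact column layout are assumptions based on the preview, not confirmed by any documentation.

```python
from datasets import load_dataset  # pip install datasets

# Repository id from the collection link on this page; the split name is an assumption.
ds = load_dataset("ulab-ai/ResearchArcade-openreview-revisions", split="train")

print(ds)     # reported column names and row count
print(ds[0])  # first record as a dict (the raw `content` field is a string, as in the preview)
```

From there, each record's `content` string can be decoded with the `ast.literal_eval` sketch shown after the preview rows above.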