---
tags:
- synthetic-data
- physics
- symbolic-regression
- scientific-discovery
- physical-theories
license: mit
---

# SynPAT: Generating Synthetic Physical Theories with Data

This is the Hugging Face dataset entry for **SynPAT**, a synthetic theory and data generation system developed for the paper:  
**SynPAT: Generating Synthetic Physical Theories with Data**  
GitHub: [https://github.com/jlenchner/theorizer](https://github.com/jlenchner/theorizer)

SynPAT generates symbolic physical systems and corresponding synthetic data to benchmark symbolic regression and scientific discovery algorithms. Each synthetic system includes symbolic equations, dimensionally consistent variables/constants, algebraic consequences, and both clean and noisy datasets.

---

## Dataset Structure

The dataset is organized hierarchically by configuration and system index:

```
dataset/vars_{v}_derivs_{d}_eqns_{e}/System_{n}/
```


Where:
- `{v}` ∈ {6, 7, 8, 9} is the number of symbolic variables
- `{d}` ∈ {2, 3, 4} is the number of derivatives (among the symbolic variables)
- `{e}` ∈ {4, 5, 6} is the number of equations in the system
- `{n}` ∈ {1, 2, 3} indexes the individual system instances per configuration

Each `System_{n}` directory contains exactly 17 files:

### Symbolic System and Consequence:
- `system.txt` — base symbolic system with metadata including variable types, units, constants, and equations
- `consequence.txt` — derived consequence equation (single polynomial) with associated metadata

### Noiseless Data:
- `system.dat` — clean numerical dataset corresponding to `system.txt`
- `consequence.dat` — clean data for the consequence polynomial

### Noisy Data:
For each noise level ε ∈ {0.001, 0.01, 0.05, 0.1}, the following files are included:
- `system_ε.dat` — noisy system data with Gaussian noise of standard deviation ε
- `consequence_ε.dat` — noisy consequence data with Gaussian noise of standard deviation ε

This results in:
- 4 `system_ε.dat` files
- 4 `consequence_ε.dat` files

### Replacement Systems:
- `replacement_1.txt` through `replacement_5.txt` — versions of `system.txt` in which one axiom has been replaced, used to benchmark consequence prediction under axiom ablation.

### Summary of File Count per System:
- 2 base files: `system.txt`, `consequence.txt`
- 2 clean data files: `system.dat`, `consequence.dat`
- 8 noisy data files: `system_ε.dat`, `consequence_ε.dat` for 4 ε values
- 5 replacement files: `replacement_1.txt` through `replacement_5.txt`

**Total per system:** 2 + 2 + 8 + 5 = **17 files**

Each configuration folder (`vars_{v}_derivs_{d}_eqns_{e}`) contains exactly three systems: `System_1`, `System_2`, `System_3`.

The full dataset spans all 4 × 3 × 3 = **36 configurations**, totaling **108 systems** and **1836 files**.
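As a quick orientation, the sketch below enumerates every configuration folder and checks the per-system file count. It assumes the dataset has been downloaded to a local `dataset/` directory and that folder names follow the scheme above; adjust the root path as needed.

```python
from pathlib import Path
from itertools import product

# Assumed local root; change to wherever the dataset was downloaded.
root = Path("dataset")

for v, d, e in product([6, 7, 8, 9], [2, 3, 4], [4, 5, 6]):
    config_dir = root / f"vars_{v}_derivs_{d}_eqns_{e}"
    for n in (1, 2, 3):
        system_dir = config_dir / f"System_{n}"
        if system_dir.is_dir():
            n_files = len(list(system_dir.iterdir()))
            print(f"{system_dir}: {n_files} files")  # expected: 17
```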


## File Format Examples

### system.txt 

`system.txt` contains the symbolic system, including variables, constants, derivatives, equations, and units:

```text
Variables: ['Fc', 'Fg', 'W', 'd1', 'd2', 'm1']  
Constants: ['G', 'c']  
Derivatives: ['dx1dt', 'd2x1dt2', 'dx2dt']  
Equations:  
dx1dt*dx2dt*d2*m1 - W*d1  
d2x1dt2*W*m1 - Fc*Fg*d1  
2*c*Fg + dx2dt*Fc - dx2dt*Fg  
G*d2x1dt2*d2*m1 + G*Fc*d2 - G*W + 4*d2x1dt2*dx1dt^2*d1^2  
-d2x1dt2*Fc*m1 + d2x1dt2*Fg*m1 + Fc*Fg  
Units of Measure of Variables: ['s^(-2)*kg*m', 's^(-2)*kg*m', 'm^2*s^(-2)*kg', 'm', 'm', 'kg']  
Units of Measure of Constants: ['1/kg*m^3*s^(-2)', '1/s*m']  
Units of Measure of Derivatives: ['1/s*m', 's^(-2)*m', '1/s*m']  
Units of Measure of Equations:  
m^3*s^(-2)*kg  
kg^2*m^3*s^(-4)  
m^2*s^(-3)*kg  
m^5*s^(-4)  
kg^2*m^2*s^(-4)
```
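The `system.txt` and `consequence.txt` files share this key/value layout, so a small parser covers both. The following is a minimal sketch, assuming the exact layout shown above (bracketed values are Python-style lists; bare headers such as `Equations:` introduce multi-line blocks); `parse_system` is a hypothetical helper, not part of the SynPAT code.

```python
import ast
from pathlib import Path

FIELDS = (
    "Variables", "Constants", "Derivatives", "Equations",
    "Units of Measure of Variables", "Units of Measure of Constants",
    "Units of Measure of Derivatives", "Units of Measure of Equations",
    "Measured Variables", "Observed Constants", "Measured Derivatives",
    "Target Polynomial",
)

def parse_system(path):
    """Minimal parser for the key/value layout shown in the example above."""
    blocks, current = {}, None
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if not line:
            continue
        key, sep, value = line.partition(":")
        if sep and key in FIELDS:
            value = value.strip()
            # Bracketed values are Python-style lists; bare headers start a
            # multi-line block (equations, units, target polynomial).
            blocks[key] = ast.literal_eval(value) if value.startswith("[") else []
            current = None if value else key
        elif current is not None:
            blocks[current].append(line)
    return blocks

# Illustrative path; any System_{n} directory works the same way.
system = parse_system("dataset/vars_6_derivs_3_eqns_4/System_1/system.txt")
print(system["Variables"])
print(len(system["Equations"]))
```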

### consequence.txt 

`consequence.txt` repeats the symbolic system and adds **additional metadata** used for data generation and symbolic regression tasks:

```text
Variables: ['Fc', 'Fg', 'W', 'd1', 'd2', 'm1']  
Constants: ['G', 'c']  
Derivatives: ['dx1dt', 'd2x1dt2', 'dx2dt']  
Equations:  
dx1dt*dx2dt*d2*m1 - W*d1  
d2x1dt2*W*m1 - Fc*Fg*d1  
2*c*Fg + dx2dt*Fc - dx2dt*Fg  
G*d2x1dt2*d2*m1 + G*Fc*d2 - G*W + 4*d2x1dt2*dx1dt^2*d1^2  
-d2x1dt2*Fc*m1 + d2x1dt2*Fg*m1 + Fc*Fg  
Units of Measure of Variables: ['s^(-2)*kg*m', 's^(-2)*kg*m', 'm^2*s^(-2)*kg', 'm', 'm', 'kg']  
Units of Measure of Constants: ['1/kg*m^3*s^(-2)', '1/s*m']  
Units of Measure of Derivatives: ['1/s*m', 's^(-2)*m', '1/s*m']  
Units of Measure of Equations:  
m^3*s^(-2)*kg  
kg^2*m^3*s^(-4)  
m^2*s^(-3)*kg  
m^5*s^(-4)  
kg^2*m^2*s^(-4)

Measured Variables: ['d1', 'Fg', 'm1', 'W']  
Observed Constants: []  
Measured Derivatives: ['d2x1dt2']  

Target Polynomial:  
d1*m1*d2x1dt2*Fg^2 - m1^2*d2x1dt2^2*W + m1*d2x1dt2*W*Fg
```
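The target polynomial is a plain string over the measured variables and derivatives, so it can be turned into a callable for checking candidate rediscoveries. Below is a minimal sketch using SymPy; the names and polynomial are copied from the example above, and the argument ordering is illustrative.

```python
import sympy as sp

# Measured quantities and target polynomial from the example above
# (the observed-constants list is empty for this system).
measured = ["d2x1dt2", "d1", "Fg", "m1", "W"]
target = "d1*m1*d2x1dt2*Fg^2 - m1^2*d2x1dt2^2*W + m1*d2x1dt2*W*Fg"

syms = sp.symbols(measured)
expr = sp.sympify(target.replace("^", "**"))  # the files use ^ for exponentiation
f = sp.lambdify(syms, expr, "numpy")          # callable f(d2x1dt2, d1, Fg, m1, W)

print(sp.expand(expr))
```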

### replacement_i.txt

`replacement_i.txt` has the same format as `system.txt`, but with one axiom replaced by a different, dimensionally consistent axiom:
```text
Variables: ['Fc', 'Fg', 'W', 'd1', 'd2', 'm1']
Constants: ['G', 'c']
Derivatives: ['dx1dt', 'd2x1dt2', 'dx2dt']
Equations:
dx1dt*dx2dt*d2*m1 - W*d1
>>> c*Fc*d2 + c*Fg*d1 - dx1dt*Fg*d1 + dx2dt*Fg*d2 + dx2dt*W   <<< 
2*c*Fg + dx2dt*Fc - dx2dt*Fg
G*d2x1dt2*d2*m1 + G*Fc*d2 - G*W + 4*d2x1dt2*dx1dt^2*d1^2
-d2x1dt2*Fc*m1 + d2x1dt2*Fg*m1 + Fc*Fg
Units of Measure of Variables: ['s^(-2)*kg*m', 's^(-2)*kg*m', 'm^2*s^(-2)*kg', 'm', 'm', 'kg']
Units of Measure of Constants: ['1/kg*m^3*s^(-2)', '1/s*m']
Units of Measure of Derivatives: ['1/s*m', 's^(-2)*m', '1/s*m']
Units of Measure of Equations:
m^3*s^(-2)*kg
m^3*s^(-3)*kg
m^2*s^(-3)*kg
m^5*s^(-4)
kg^2*m^2*s^(-4)
```
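A common task with these files is identifying which axiom was swapped out. The sketch below reuses the hypothetical `parse_system` helper from the earlier example and assumes the replacement occupies the same position in the equation list; the `>>>`/`<<<` markers shown above, if present in the file, are stripped before comparison.

```python
def replaced_axiom(system_path, replacement_path):
    """Return (index, original_equation, replacement_equation) for the swapped axiom."""
    base = parse_system(system_path)["Equations"]
    repl = [eq.strip("<> ").strip() for eq in parse_system(replacement_path)["Equations"]]
    for i, (old, new) in enumerate(zip(base, repl)):
        if old != new:
            return i, old, new
    return None

# Illustrative paths.
print(replaced_axiom(
    "dataset/vars_6_derivs_3_eqns_4/System_1/system.txt",
    "dataset/vars_6_derivs_3_eqns_4/System_1/replacement_1.txt",
))
```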

### Data Files

Each `.dat` file contains rows of numerical data corresponding to samples used for learning or evaluation. Each row represents one data point.

- For **`consequence.dat`** and its noisy versions (e.g., `consequence_0.1.dat`), **the columns correspond to the observed constants, followed by the measured derivatives, and then the measured variables**, in the exact order specified in `consequence.txt`.

Example row from `consequence_0.1.dat`:

```text
1.925664644193098241e+00 2.872700594236812677e+00 ...
```
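Loading these files is straightforward. A minimal sketch, assuming whitespace-separated numeric rows with no header line (the path is illustrative):

```python
import numpy as np

# Columns: observed constants, then measured derivatives, then measured
# variables, as described above (here: none, ['d2x1dt2'], ['d1', 'Fg', 'm1', 'W']).
data = np.loadtxt("dataset/vars_6_derivs_3_eqns_4/System_1/consequence_0.1.dat")
print(data.shape)
```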

- For **`system.dat`** and its noisy versions (e.g., `system_0.1.dat`), **the first line is a header specifying the variable ordering**, followed by rows of numerical data where each column matches the respective variable in the header.

Example header and data snippet from `system_0.1.dat`:

```text
G c dx1dt d2x1dt2 dx2dt Fc W d1 Fg m1 d2
1.000000 1.0000000 ......
```
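A minimal sketch for reading such a file, assuming a single whitespace-separated header line followed by numeric rows (the path is illustrative):

```python
import numpy as np

path = "dataset/vars_6_derivs_3_eqns_4/System_1/system_0.1.dat"
with open(path) as f:
    header = f.readline().split()          # e.g. ['G', 'c', 'dx1dt', ...]
data = np.loadtxt(path, skiprows=1)

# Column-by-name access based on the header ordering.
columns = {name: data[:, i] for i, name in enumerate(header)}
print(columns["m1"][:5])
```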

---

## Authors

- **Karan Srivastava** — University of Wisconsin-Madison
- **Jonathan Lenchner** — IBM T.J. Watson Research Center
- **Joao Goncalves** — IBM T.J. Watson Research Center
- **Lior Horesh** — IBM T.J. Watson Research Center

---

## License

MIT License

---

For issues, questions, or contributions, please visit:  
[https://github.com/jlenchner/theorizer](https://github.com/jlenchner/theorizer)