Dataset Viewer (auto-converted to Parquet)

Columns:
- id: string (length 14 to 2.21k)
- text: string (length 0 to 8.25M)
- added: string (length 24 to 32)
- created: string (length 20 to 24)
- source: string (4 classes)
- original_shard_dir: string (127 classes)
- original_shard_idx: int64 (0 to 352k)
- num_tokens: int64 (1 to 1.44M)
proofpile-arXiv_065-129
\section{Introduction} \begin{figure*}[t!] \centering \includegraphics[width=0.75\textwidth]{introduction_modified.pdf} \caption{(a) Each angle of illumination, here labelled as angular axis, corresponds to a time step in an analogous temporal axis. (b) The raw intensity diffraction pattern $\mathbf{g}_n,\: n\!=\!1,\ldots,N\!\!=\!\!42$\ \ at the $n$-th angular sequence step is followed by gradient descent and moving average operations to construct a shorter Approximant sequence $\mathbf{\tilde{f}}_m{}^{[1]},\: m\!=\!1,\ldots,M\!\!=\!\!12$. The Approximants $\mathbf{\tilde{f}}_m{}^{[1]}$ are encoded to $\xi_m$ and fed to the recurrent dynamical operation whose output sequence $\mathbf{h}_m, m\!=\!1,\ldots,12$\ \ the angular attention scheme merges into a single representation $a$, which is finally decoded to produce the 3D reconstruction $\mathbf{\hat{f}}$. Training adapts the weights of the learned operators in this architecture to minimize the training loss function $\mathcal{E}(\mathbf{f},\hat{\mathbf{f}})$ between $\mathbf{\hat{f}}$ and the ground truth object $\mathbf{f}$.} \label{fig:introduction} \end{figure*} Optical tomography reconstructs the three-dimensional (3D) internal refractive index profile by illuminating the sample at several angles and processing the respective raw intensity images. The reconstruction scheme depends on the scattering model that is appropriate for a given situation. If the rays through the sample can be well approximated as straight lines, then accumulation of absorption and phase delay along the rays is an adequate forward model, {\it i.e.} the projection or Radon transform approximation applies. This is often the case with hard x-rays through most materials including biological tissue; for that reason, Radon transform inversion has been widely studied\ \cite{radon1986determination,radon1917determination,bracewell1967inversion,feldkamp1984practical,dreike1976convolution,wang1993general,kudo1991helical,grangeat1991mathematical,katsevich2002analysis,choi2007tomographic}. The next level of complexity arises when diffraction and multiple scattering must be taken into account in the forward model; then, the Born or Rytov expansions and the Lippmann-Schwinger integral equation \cite{ishimaru2017electromagnetic,tatarski2016wave,wolf1969three,devaney1981inverse,pham2020three} are more appropriate. These follow from the scalar Helmholtz equation using different forms of expansion for the scattered field \cite{marks2006family}. In all these approaches, the weak scattering solution is obtained from the first order in the series expansion. Holographic approaches to volumetric reconstruction generally rely on this first expansion term\ \cite{milgram2002computational,tian2010quantitative,hahn2008wide,park2009recent,nehmetallah2012applications,williams2013digital,brady2009compressive,choi2010compressive,rivenson2018phase,wu2019bright,rivenson2019deep,zhang2018twin}. Often, solving the Lippmann-Schwinger equation is the most robust approach to account for multiple scattering, but even then the solution is iterative and requires an excessive amount of computation, especially for complex 3D geometries. The inversion of these forward models to obtain the refractive index in 3D is referred to as inverse scattering, also a well-studied topic \cite{kamilov2016recursive,kamilov2016optical,giorgi2013application,chew1990reconstruction,sun2018efficient,lu1985multidimensional,lu1986jkm,tsihrintzis2000higher}. 
An alternative to the integral methods is the beam propagation method (BPM), which sections the sample along the propagation distance $z$ into slices, each slice scattering according to the thin transparency model, and propagates the field from one slice to the next through the object\ \cite{feit1980computation}. Despite some compromise in accuracy, BPM offers comparatively light load of computation and has been used as forward model for 3D reconstructions\ \cite{pham2020three}. The analogy of the BPM computational structure with a neural network was exploited, in conjunction with gradient descent optimization, to obtain the 3D refractive index as the ``weights'' of the analogous neural network in the learning tomography approach \cite{kamilov2015learning,shoreh2017optical,lim2018learning}. BPM has also been used with more traditional sparsity-based inverse methods\ \cite{kamilov2016optical,chowdhury2019high}. Later, a machine learning approach with a convolutional neural network (CNN) replacing the iterative gradient descent algorithm exhibited even better robustness to strong scattering for layered objects, which match well with the BPM assumptions \cite{goy2019high}. Despite great progress reported by these prior works, the problem of reconstruction through multiple scattering remains difficult due to the extreme ill-posedness and uncertainty in the forward operator; residual distortion and artifacts are not uncommon in experimental reconstructions. Inverse scattering, as inverse problems in general, may be approached in a number of different ways to regularize the ill-posedness and thus provide some immunity to noise \cite{bertero1998introduction,candes2006robust}. Recently, thanks to a ground-breaking observation from 2010 that sparsity can be learnt by a deep neural network \cite{gregor2010learning}, the idea of using machine learning to approximate solutions to inverse problems also caught on \cite{barbastathis2019use}. In the context of tomography, in particular, deep neural networks have been used to invert the Radon transform \cite{jin2017deep} and recursive Born model \cite{kamilov2016recursive}, and were also the basis of some of the papers we cited earlier on holographic 3D reconstruction\ \cite{wu2019bright,rivenson2018phase,rivenson2019deep}, learning tomography\ \cite{kamilov2015learning,shoreh2017optical,lim2018learning}, and multi-layered strongly scattering objects\ \cite{goy2019high}. In prior work on tomography using machine learning, generally, the intensity projections are all fed as inputs to a computational architecture that includes a neural network, and the output is the 3D reconstruction of the refractive index. The role of the neural network is to learn the priors that apply to the particular class of objects being considered and the relationship of these priors to the forward operator (Born, BPM, etc.) so as to produce a reasonable estimate of the inverse. Here we propose a rather distinct approach to exploit machine learning for 3D refractive index reconstruction under strong scattering conditions. Our motivation is that, as the angle of illumination is changed, the light goes through {\em the same scattering volume,} but the scattering events follow a different sequence. At the same time, the intensity diffraction pattern obtained from a new angle of illumination adds information to the tomographic problem, but that information is constrained by ({\it i.e.}, is not orthogonal to) the previously obtained patterns. 
We interpret this as similar to a dynamical system, where as time evolves and new inputs arrive, the output is constrained by the history of earlier inputs. (The convolution integral is the simplest and best known expression of this relationship between the output of a system and the history of the system's input.) The analogy between strong scattering tomography and a dynamical system suggests the recurrent neural network (RNN) architecture as a strong candidate to process intensity diffraction patterns in sequence, as they are obtained one after the other; and process them recurrently so that each intensity diffraction pattern from a new angle improves over the reconstructions obtained from the previous angles. Thus, we treat multiple diffraction patterns under different illumination angles as a temporal sequence, as shown in Figure~\ref{fig:introduction}. The angle index $\theta$ replaces what in a dynamical system would have been the time $t$. This idea is intuitively appealing; it also leads to considerable improvement in the reconstructions, removing certain artifacts that were visible in \cite{goy2019high}, as we will show in section~\ref{sec:results}. The way we propose to use RNNs in this problem is quite distinct from the recurrent architecture proposed first in \cite{gregor2010learning} and subsequently implemented, replacing the recurrence by a cascade of distinct neural networks, in \cite{jin2017deep,inv:mardani2017a,inv:mardani2017b}, among others. In these prior works, the input to the recurrence can be thought of as clamped to the raw measurement, as in the proximal gradient \cite{inv:daubechies04} and related methods; whereas, in our case, the input to the recurrence is itself dynamic, with the raw intensity diffraction patterns from different angles forming the input sequence. Moreover, by utilizing a modified gated recurrent unit (more on this below) rather than a standard neural network, we do not need to break the recurrence up into a cascade. Typical applications of RNNs \cite{williams1989learning,hochreiter1997long} are in temporal sequence learning and identification. In imaging and computer vision, RNN is applied in 2D and 3D: video frame prediction \cite{xingjian2015convolutional,wang2018eidetic,wang2017predrnn,wang2018predrnn++}, depth map prediction \cite{cs2018depthnet}, shape inpainting \cite{wang2017shape}; and stereo reconstruction \cite{liu2020novel,choy20163d} or segmentation \cite{le2017multi,stollenga2015parallel} from multi-view images, respectively. Stereo, in particular, bears certain similarities to our tomographic problem here, as sequential multiple views can be treated as a temporal sequence. To establish the surface shape, the RNNs in these prior works learn to enforce consistency in the raw 2D images from each view and resolve the redundancy between adjacent views in recursive fashion through the time sequence ({\it i.e.}, the sequence of view angles). Non-RNN learning approaches have also been used in stereo, e.g. Gaussian mixture models\ \cite{hou2019multi}. In this work, we replaced the standard long-short term memory (LSTM)\ \cite{hochreiter1997long} implementation of RNNs with a modified version of the newer gated recurrent unit (GRU) \cite{cho2014learning}. The GRU has the advantage of fewer parameters but generalizes comparably with the LSTM. 
Our GRU employs a split convolutional scheme to explicitly account for the asymmetry between the lateral and axial axes of propagation, and an angular attention mechanism that learns how to reward specific angles in proportion to their contribution to reconstruction quality. For isotropic (in the ensemble sense) samples as we consider here, it turns out that the attention mechanism treats all angles equally, yet we found that its presence still improves the quality of the training algorithm. For more general sample classes with spatially anisotropic structure, angular attention may be expected to treat different angles of illumination with more disparity. Details in experiments are delineated in Section~\ref{sec:experiment}. The computational elements are all described in Section~\ref{sec:comput_arch}, while training and testing procedures are illustrated in Section~\ref{sec:train_and_test}. The results of our experimental study are in Section~\ref{sec:results}, showing significant improvement over static neural network-based reconstructions of the same data both visually and in terms of several quantitative metrics. We also include results from an ablation study that indicates the relative significance of the new components we introduced to the quality of the reconstructions. \iffalse Performance of our RNN architecture is qualitatively and quantitatively compared with the baseline from earlier works in Section~\ref{sec:results}.\ref{subsec:comparison_baseline}. An ablation study to quantify contribution of each element to the overall performance is given in Section~\ref{sec:results}.\ref{subsec:ablation}. Section~\ref{sec:results}.\ref{subsec:number_of_patterns} shows how quality of reconstructions is incrementally enhanced as the number of patterns that enters the trained network for testing increases. Finally, in Section~\ref{sec:conclusion} we share some concluding thoughts and suggestions for future work. \fi \section{Experiment} \label{sec:experiment} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{optical_apparatus.pdf} \caption{Optical apparatus used for experimental data acquisition\ \protect{\cite{goy2019high}}. L$1-4$: lenses, F$1$: pinhole, A$1$: aperture, EM-CCD: electron-multiplying charge coupled device. $f_{\text{L}_3}:f_{\text{L}_4} = 2:1$. The object is rotated along both $x$ and $y$ axes. The defocus distance between the conjugate plane to the exit object surface and the EM-CCD is $\Delta z = 58.2\:\text{mm}$.} \label{fig:optical_apparatus} \end{figure} The experimental data are the same as in \cite{goy2019high}, whose experimental apparatus is summarized in Figure~\ref{fig:optical_apparatus}. We repeat the description here for the readers' convenience. The He-Ne laser (Thorlabs HNL210L, power: $20\:\text{mW}$, $\lambda = 632.8\:\text{nm}$) illuminated the sample after spatial filtering and beam expansion. The illumination beam was then de-magnified by the telescope ($f_{\text{L}_3} : f_{\text{L}_4} = 2:1$), and the EM-CCD (Rolera EM-C$2$, pixel pitch: $8\:\mu\text{m}$, acquisition window dimension: $1002\:\times\:1004$) captured the experimental intensity diffraction patterns. The integration time for each frame was $2\:\text{ms}$, and the EM gain was set to $\times 1$. The optical power of the laser was strong enough for the captured intensities to be comfortably outside the shot-noise limited regime. Each layer of the sample was made of fused silica slabs ($n=1.457$ at $632.8$ nm and at $20\:^\circ$C). 
The slab thickness was $0.5\:\text{mm}$, and patterns were carefully etched to a depth of $575\pm 5$ nm on the top surface of each of the four slabs. To reduce the difference between refractive indices, gaps between adjacent layers were filled with oil ($n = 1.4005\pm 0.0002$ at $632.8$ nm and at $20^\circ$C), yielding a binary phase depth of $-0.323\pm 0.006\:\text{rad}$. The diffraction patterns used for training were prepared with simulation precisely matched to the apparatus of Figure~\ref{fig:optical_apparatus}. For testing, we used a set of diffraction patterns that was acquired experimentally. Objects used for both simulation and experiment are dense-layered, transparent, \textit{i.e.} of negligible amplitude modulation, and of binary refractive index. They were drawn from a database of IC layout segments\ \cite{goy2019high}. The feature depth of $575\pm 5\:\text{nm}$ and refractive index contrast $0.0565\pm0.0002$ at $632.8$ nm and at $20\:^\circ$C were such that weak scattering assumptions are invalid and strong scattering necessarily has to be taken into account. The Fresnel number ranged from $0.7$ to $5.5$ for the given defocus amount $\Delta z=58.2\:\text{mm}$ and the range of object feature sizes. To implement the raw image acquisition scheme, the sample was rotated from $-10$ degrees to $10$ degrees with a $1$-degree increment along both the $x$ and $y$ axes, while the illumination beam and detector remained still. This resulted in $N=42$ angles and intensity diffraction patterns in total (see Section~\ref{sec:comput_arch}.\ref{subsec:conv_enc_dec}). Note that \cite{goy2019high} only utilized $22$ patterns out of the $42$, with a $2$-degree increment along both the $x$ and $y$ axes. The comparisons we show later are still fair because we retrained all the algorithms of \cite{goy2019high} for the $42$ angles and $1^\circ$ increment. \section{Computational architecture}\label{sec:comput_arch} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{architecture.pdf} \caption{Details on implementing the dynamical scheme of Figure~\protect{\ref{fig:introduction}}. (a) Overall network architecture; (b) tensorial dimensions of each layer; (c) down-residual block (DRB); (d) up-residual block (URB); and (e) residual block (RB). $K$ and $S$ indicate the sizes of kernel and stride, respectively, and the values shown apply only to the row and column axes. For the layer axis, $K=4$ and $S=1$ always. These disparities implement the split convolution scheme; please see Section~\protect{\ref{sec:comput_arch}}.\protect{\ref{subsec:sc_gru}} and Figure~\protect{\ref{fig:split_convolution}}.} \label{fig:architecture} \end{figure*} The proposed RNN architecture is shown in detail in Figure~\ref{fig:architecture}. The forward model and gradient descent Approximant (pre-processing) algorithm are described in Section~\ref{subsec:approximants}. The split-convolutional GRU, convolutional encoder and decoder, and the angular attention mechanism are described in Sections~\ref{subsec:sc_gru}, \ref{subsec:conv_enc_dec}, and \ref{subsec:angular_att}, respectively. The total number of parameters in this computational architecture is $\sim 21\text{M}$ (more on this topic in section~\ref{sec:train_and_test}.\ref{subsec:training_rnn}). \subsection{Approximant computations}\label{subsec:approximants} The dense-layered, binary-phase object is illuminated at a sequence of angles, and the corresponding diffraction intensity patterns are captured by a detector. 
At the $n$-th step of the sequence, the object is illuminated by a plane wave at angles $\left(\theta_{nx},\theta_{ny}\right)$ with respect to the propagation axis $z$ on the $xz$ and $yz$ planes, respectively. Beyond the object, the scattered field propagates in free space by a distance $\Delta z$ to the digital camera (the numerical value is $\Delta z=58.2\:\text{mm}$, as we saw in section~\ref{sec:experiment}). Let the forward model under the $n$-th illumination angle be denoted as $H_n$, $n=1,2,\ldots, N$; that is, the $n$-th intensity diffraction pattern at the detector plane produced by the phase object $\mathbf{f}$ is $\mathbf{g}_n\equiv H_n(\mathbf{f})$. In the simulations, the forward operators $H_n$ are obtained from the non-paraxial beam propagation method (BPM) \cite{feit1980computation,goy2019high,kamilov2016optical}. Let the $j$-th cross-section of the computational window perpendicular to the $z$ axis be $f^{[j]} = \exp\left(i\varphi^{[j]}\right),\: j=1,\ldots,J$, where $J$ is the number of slices into which we divide the object, each of axial extent $\delta z$. At the $n$-th illumination angle, the BPM is initialized as $f_n^{[0]}=\text{exp}\left[ik\left(x\sin\theta_{nx}+y\sin\theta_{ny}\right)\right]$, where $k$ is the wavenumber. The optical field at the $(j+1)$-th slice is \begin{equation}\label{eq:BPM-iteration} \begin{split} \psi_n^{[j+1]} = \mathcal{F}^{-1}&\bigg[\mathcal{F}\left[\psi_n^{[j]}\circ f_n^{[j]}\right](k_x,k_y)\\ &\cdot\exp\left(-i\left(k-\sqrt{k^2-k_x^2-k_y^2}\right)\delta z\right)\bigg], \end{split} \end{equation} where $\delta z$ is equal to the slab thickness, \textit{i.e.} $0.5\:\text{mm}$; ${\cal F}$ and ${\cal F}^{-1}$ are the Fourier and inverse Fourier transforms, respectively; and $\chi_1\circ\chi_2$ denotes the Hadamard (element-wise) product of the functions $\chi_1$, $\chi_2$. The Hadamard product is the numerical implementation of the thin transparency approximation, which is inherent in the BPM. To obtain the intensity at the detector, we define the $(J+1)$-th slice displaced by $\Delta z$ from the $J$-th slice (the latter is the exit surface of the object) to yield \begin{equation}\label{eq:forw} \mathbf{g}_n\equiv H_n(\mathbf{f})=\left|\psi_n^{[J+1]}\right|^2. \end{equation} The purpose of the Approximant, in general, is to produce a crude estimate of the volumetric reconstruction using the forward operator alone. This has been well established as a helpful form of preprocessing for subsequent treatment by machine learning algorithms\ \cite{goy2018low,goy2019high}. Previous works constructed the Approximant as a single-pass gradient descent algorithm \cite{kamilov2016optical,goy2019high}. Here, due to the sequential nature of our reconstruction algorithm, as each intensity diffraction pattern from a new angle of illumination $n$ is received, we instead construct a sequence of Approximants, indexed by $n$, by minimizing the functionals \begin{equation}\label{eq:new_loss_function} \mathcal{L}_n(\mathbf{f}) = \frac{1}{2}||H_n(\mathbf{f})-\mathbf{g}_n||_2^2,\quad n=1,2,\ldots,N. 
\end{equation} The gradient descent update rule for this functional is \begin{multline}\label{eq:new_approximants} \mathbf{f}_n^{[l+1]} = \mathbf{f}_n^{[l]} -s\left(\nabla_\mathbf{f}\mathcal{L}_n\left(\mathbf{f}_n^{[l]}\right)\right)^\dagger = \\ = \mathbf{f}_n^{[l]} -s\left(H_n^T\left(\mathbf{f}^{[l]}\right)\nabla_\mathbf{f} H_n\left(\mathbf{f}_n^{[l]}\right)-\mathbf{g}_n^T\nabla_\mathbf{f}H_n\left(\mathbf{f}_n^{[l]}\right)\right)^\dagger, \end{multline} where $\mathbf{f}_n^{[0]}=\mathbf{0}$; $s$ is the descent step size, set to $0.05$ in the numerical calculations; and the superscript $\dagger$ denotes the transpose. The single-pass, gradient descent-based Approximant was used for training of the RNN but with an additional pre-processing step that will be explained in (\ref{eq:moving_window}). We also implemented a denoised, Total Variation (TV)-based Approximant, to be used only at the testing stage of the RNN. In this case, the functional to be minimized is \begin{equation}\label{eq:TV_Approx} \mathcal{L}^{\text{TV}}_n(\mathbf{f}) = \frac{1}{2} ||H_n(\mathbf{f})-\mathbf{g}_n||_2^2 + \kappa\text{TV}_{l_1}(\mathbf{f}),\quad n=1,2,\ldots,N, \end{equation} where the TV-regularization parameter was chosen as $\kappa=10^{-3}$, and for $\mathbf{x}\in \mathcal{R}^{P\times Q}$ the anisotropic $l_1$-TV operator is \begin{equation} \begin{split} \text{TV}_{l_1}(\mathbf{x}) = &\sum_{p=1}^{P-1}\sum_{q=1}^{Q-1} \Big(\left|x_{p,q} - x_{p+1,q}\right| + \left|x_{p,q} - x_{p,q+1}\right|\Big)\\ & + \sum_{p=1}^{P-1} \left|x_{p,Q}-x_{p+1,Q}\right| +\sum_{q=1}^{Q-1} \left|x_{P,q}-x_{P,q+1}\right| \end{split} \end{equation} with reflexive boundary conditions \cite{beck2009fast,chambolle2004algorithm}. To produce the Approximants for testing from this functional, we first ran $3$ iterations of the gradient descent and then $2$ iterations of the FGP-FISTA (Fast Gradient Projection with Fast Iterative Shrinkage Thresholding Algorithm)\ \cite{beck2009fast,beck2009fista}. The sequence of $N$ Approximants for either training or testing procedure is a $4$D spatiotemporal sequence $\mathbf{F}=\left(\mathbf{f}_1^{[1]},\mathbf{f}_2^{[1]},\ldots,\mathbf{f}_N^{[1]}\right)$. As an additional processing step, to suppress unwanted artifacts in the Approximants of the experimentally captured intensities $\mathbf{g}_n$, we reduce the sequence size to $M$ by applying a moving average window as \begin{equation}\label{eq:moving_window} \tilde{\mathbf{f}}_m^{[1]} = \begin{dcases} \frac{1}{N_{\text{w}}+1}\sum_{n=m}^{m+N_{\text{w}}} \mathbf{f}_n^{[1]}, & 1\leq m\leq N_{\text{h}}\\ \frac{1}{N_{\text{w}}+1}\sum_{n=m}^{m+N_{\text{w}}} \mathbf{f}_{n+N_{\text{w}}}^{[1]}, & N_{\text{h}}+1\leq m\leq M. \end{dcases} \end{equation} To be consistent, the moving average window was applied to the Approximants for both training and testing. In this study, $N_{\text{w}}=15$, $N_{\text{h}}=6$ and $M=12$. These choices are motivated by the following considerations. We have $N=42$ diffraction patterns for each sequence: $21$ captured along the $x$ axis ($1-21$) and the remaining ones along the $y$ axis ($22-42$). The window is first applied to the $21$ patterns from the $x$-axis rotation, which thus generates $6$ averaged diffraction patterns, and then the window is applied to the remaining $21$ patterns from the $y$-axis rotation, resulting in the other $6$ patterns. 
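A minimal NumPy sketch of the windowing in (\ref{eq:moving_window}), assuming the $N=42$ single-pass Approximants are stacked along the first axis with the $x$-rotation patterns first, reads as follows (the function name and the stacking convention are illustrative only):
\begin{verbatim}
import numpy as np

def window_approximants(F, N_w=15, N_h=6, M=12):
    # F: array of shape (N, ...) holding the N = 42 single-pass Approximants,
    # patterns 1-21 from the x-axis rotation, 22-42 from the y-axis rotation.
    out = []
    for m in range(1, M + 1):               # m is 1-indexed as in the equation
        n = np.arange(m, m + N_w + 1)       # n = m, ..., m + N_w
        idx = n if m <= N_h else n + N_w    # second case of the equation uses n + N_w
        out.append(F[idx - 1].mean(axis=0)) # -1 converts to 0-based indexing
    return np.stack(out)                    # the M = 12 averaged Approximants
\end{verbatim}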
Therefore, the input sequence to the next step in the architecture of Figure~\ref{fig:architecture}, {\it i.e.} to the encoder (Section~\ref{subsec:conv_enc_dec}), consists of a sequence of $M=12$ averaged Approximants~$\tilde{\mathbf{f}}_m^{[1]}$. \subsection{Split-convolutional gated recurrent unit (SC-GRU)}\label{subsec:sc_gru} Recurrent neural networks involve a recurrent unit that retains memory and context based on previous inputs in a form of latent tensors or hidden units. It is well known that the Long Short-Term Memory (LSTM) is robust to instabilities in the training process. Moreover, in the LSTM, the weights applied to past inputs are updated according to usefulness, while less useful past inputs are forgotten. This encourages the most salient aspects of the input sequence to influence the output sequence\ \cite{hochreiter1997long}. Recently, the Gated Recurrent Unit (GRU) was proposed as an alternative to LSTM. The GRU effectively reduces the number of parameters by merging some operations inside the LSTM, without compromising quality of reconstructions; thus, it is expected to generalize better in many cases\ \cite{cho2014learning}. For this reason, we chose to utilize the GRU in this paper as well. The governing equations of the standard GRU are as follows: \begin{equation}\label{eq:gru_equations} \begin{gathered} r_m = W_r \xi_m + U_r h_{m-1}+b_r\\ z_m = W_z \xi_m + U_zh_{m-1} + b_z\\ \Tilde{h}_m = \text{tanh}\left(W\xi_m+U\left(r_m\circ h_{m-1}\right)+b_h\right)\\ h_m = (1-z_m)\circ \Tilde{h}_m + z_m\circ h_{m-1}, \end{gathered} \end{equation} where $\xi_m$, $h_m$, $r_m$, $z_m$ are the inputs, hidden features, reset states, and update states, respectively. Multiplication operations with weight matrices are performed in a fully connected fashion. We modified this architecture so as to take into account the asymmetry between the lateral and axial dimensions of optical field propagation. This is evident even in free-space propagation, where the lateral components of the Fresnel kernel \[ \expb{i\pi\frac{x^2+y^2}{\lambda z}} \] are shift invariant and, thus, convolutional, whereas the longitudinal axis $z$ is not. The asymmetry is also evident in nonlinear propagation, as in the BPM forward model (\ref{eq:BPM-iteration}) that we used here. This does not mean that space is anisotropic --- of course space is isotropic! The asymmetry arises because propagation and the object are 3D, whereas the sensor is 2D. In other words, the orientation of the image plane breaks the symmetry in object space so that the scattered field from a certain voxel within the object {\em apparently} influences the scattered intensity from its neighbors at the detector plane differently in the lateral direction than in the axial direction. To account for this asymmetry in a profitable way for our learning task, we first define the operators $W_r$, $U_r$, etc. as convolutional so as to keep the number of parameters down (even though in free space propagation the axial dimension is not convolutional and under strong scattering neither dimension is nonlinear); and we constrain the convolutional kernels of the operators to be the same in the lateral dimensions $x$ and $y$, and allow the axial $z$ dimension kernel to be different. This approach justifies the term Split-Convolutional, and we found it to be a good compromise between facilitating generalization and adhering to the physics of the problem. \begin{figure}[htbp!] 
\centering \includegraphics[width=\linewidth]{split_convolution_figure.pdf} \caption{Split convolution scheme: different convolution kernels are applied along the lateral $x,y$ axes {\it vs.} the longitudinal $z$ axis. In our present implementation, the kernels' respective dimensions are $3 \times 3 \times 1$ (or $1 \times 1 \times 1$) and $1 \times 1 \times 4$. The lateral and longitudinal convolutions are computed separately and the results are then added element-wise. The split convolution scheme is used in both the gated recurrent unit (Section~\protect{\ref{sec:comput_arch}}.\protect{\ref{subsec:sc_gru}}) and the encoder/decoder (Section~\protect{\ref{sec:comput_arch}}.\protect{\ref{subsec:conv_enc_dec}}).} \label{fig:split_convolution} \end{figure} We also replaced the tanh activation function of the standard GRU with a rectified linear unit (ReLU) activation \cite{dey2017gate} as the ReLU is computationally less expensive and helpful to avoid local minima with fewer vanishing gradient problems \cite{nair2010rectified,glorot2011deep}. The final form of our SC-GRU dynamics is \begin{equation}\label{eq:new_gru_equations} \begin{gathered} r_m = W_r*\xi_m + U_r*h_{m-1}+b_r\\ z_m = W_z*\xi_m + U_z*h_{m-1} + b_z\\ \Tilde{h}_m = \text{ReLU}\left(W*\xi_m+U*\left(r_m\circ h_{m-1}\right)+b_h\right)\\ h_m = (1-z_m)\circ \Tilde{h}_m + z_m\circ h_{m-1}, \end{gathered} \end{equation} where $*$ denotes our split convolution operation. \subsection{Convolutional encoder and decoder}\label{subsec:conv_enc_dec} Convolutional neural networks (CNNs) are placed before and after the SC-GRU as encoder and decoder, respectively. This architectural choice was inspired by \cite{sinha2017lensless,gehring2016convolutional,hori2017advances,zhao2017learning}. The encoder and decoder also utilize split convolution, as shown in Figure~\ref{fig:split_convolution}, in conjunction with residual learning, which is known to improve generalization in deep networks\ \cite{he2016deep}. As in \cite{sinha2017lensless}, the encoder and decoder utilize down-residual blocks (DRB), up-residual blocks (URB), and residual blocks (RB); however, there are no skip connections in our case, {\it i.e.} this is not a U-net\ \cite{ronneberger2015u} architecture. The encoder learns how to map its input ({\it i.e.} the $\tilde{\mathbf{f}}_m^{[1]}$ sequence) onto a low-dimensional nonlinear manifold. The compression factor is $16$ for the lateral input dimensions, but the axial dimension is left intact, as shown in Figure~\ref{fig:architecture}. This eases the burden on the training process as the number of parameters is reduced; more importantly, encoding abstracts features out of the high-dimensional inputs, passing latent tensors over to the recurrent unit. Letting the encoder for the $m$-th angle Approximant be symbolized as $\text{Enc}_m\left(\cdot\right)$, $\xi_m = \text{Enc}_m\left(\tilde{\mathbf{f}}_m^{[1]}\right)$ in (\ref{eq:new_gru_equations}). The decoder restores the output of the RNN to the native dimension of the object we are reconstructing. \subsection{Angular attention mechanism}\label{subsec:angular_att} Each intensity diffraction pattern from a new angle of illumination is combined at the SC-GRU input with the hidden feature $h_m$ from the same SC-GRU's previous output. After $M$ iterations, there are $M$ different hidden features resulting from $N$ illumination angles, as seen in (\ref{eq:moving_window}). 
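Before describing how these hidden features are merged, we illustrate the split convolution of Figure~\ref{fig:split_convolution} and the SC-GRU update of (\ref{eq:new_gru_equations}) with a minimal PyTorch-style sketch; the tensor ordering (batch, channel, $x$, $y$, $z$), the use of same-padding, and the class names are illustrative choices, and the gates are written exactly as in (\ref{eq:new_gru_equations}):
\begin{verbatim}
import torch
import torch.nn as nn

class SplitConv3d(nn.Module):
    # Split convolution: a lateral 3x3x1 kernel and an axial 1x1x4 kernel act on
    # the same input and their outputs are added element-wise.
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.lateral = nn.Conv3d(ch_in, ch_out, (3, 3, 1), padding="same")
        self.axial   = nn.Conv3d(ch_in, ch_out, (1, 1, 4), padding="same")
    def forward(self, x):
        return self.lateral(x) + self.axial(x)

class SCGRUCell(nn.Module):
    # One SC-GRU step, following eq:new_gru_equations as written
    # (bias terms b_r, b_z, b_h are carried by the Conv3d layers).
    def __init__(self, ch):
        super().__init__()
        self.Wr, self.Ur = SplitConv3d(ch, ch), SplitConv3d(ch, ch)
        self.Wz, self.Uz = SplitConv3d(ch, ch), SplitConv3d(ch, ch)
        self.W,  self.U  = SplitConv3d(ch, ch), SplitConv3d(ch, ch)
    def forward(self, xi_m, h_prev):
        r = self.Wr(xi_m) + self.Ur(h_prev)
        z = self.Wz(xi_m) + self.Uz(h_prev)
        h_tilde = torch.relu(self.W(xi_m) + self.U(r * h_prev))
        return (1 - z) * h_tilde + z * h_prev
\end{verbatim}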
Since the forward operator $H_n(\mathbf{f})$ is object dependent, the qualitative information that each such new angle conveys will vary with the object. It then becomes interesting to consider whether some angles of illumination convey more information than others. The analogue in temporal dynamical systems, the usual domain of application for RNNs, is the {\em attention} mechanism. It decides which elements of the system's state are the most informative. In our case, of course, time has been replaced by the angle of illumination, so we refer to the same mechanism as {\em angular attention:}\ it evaluates the contents of the previously received intensity diffraction patterns from different angles of illumination and assigns to each a compatibility function $e_m$, essentially a weight that is relevant to that illumination's importance for the overall reconstruction. Following the summation style attention mechanism\ \cite{bahdanau2014neural}, we compute the compatibility function $e_m$ as output of a neural network with hidden units (layers) $V_e$, $W_e$ and the weights $\alpha_m$ from the compatibility function as \begin{equation}\label{eq:attention-VeWe} \begin{gathered} e_m = V_e\:\text{tanh}\left(W_e h_m\right), \\ \alpha_m = \text{softmax}\left(e_m\right) = \frac{\text{exp}(e_m)}{\sum_{m=1}^{M} \text{exp}(e_m)}, \\ \quad m = 1,2,\ldots, M. \end{gathered} \end{equation} The final angular attention output $a$ is then computed from a linear combination of the hidden features as \begin{equation}\label{eq:attention} a=\sum_{m=1}^{M} \alpha_m h_m. \end{equation} For the ablation study of Section~\ref{sec:results}, only the last hidden feature $h_M$ is passed on to the decoder, {\it i.e.} the angular attention mechanism is not used. There is an alternative, dot-product attention mechanism\ \cite{vaswani2017attention}, but we chose not to implement it here. \section{Training and testing procedures}\label{sec:train_and_test} \subsection{Training the recurrent neural network}\label{subsec:training_rnn} For training and validation, $5000$ and $500$ layered objects were used, respectively. For each object, a sequence of intensity diffraction patterns from the $N=42$ angles of illumination was produced by BPM, as described earlier. The Approximants were obtained each as a single iteration of the gradient descent. All of the architectures were trained for $100$ epochs with a training loss function (TLF) of negative Pearson correlation coefficient (NPCC) \cite{li2018imaging}, defined as \begin{equation} \sst{\mathcal{E}}{NPCC}\big(f,\hat{f}\big) \equiv -\:\frac{\displaystyle{\sum_{x,y}}\Big(f(x,y)-\big<f\big>\Big)\Big(\hat{f}(x,y)-\big<\hat{f}\big>\Big)}{\sqrt{\displaystyle{\sum_{x,y}}\Big(f(x,y)-\big<f\big>\Big)^2}\sqrt{\displaystyle{\sum_{x,y}}\Big(\hat{f}(x,y)-\big<\hat{f}\big>\Big)^2}}, \label{eq:tlf-npcc} \end{equation} where $f$ and $\hat{f}$ are a ground truth image and its corresponding reconstruction. In this article, our NPCC function was defined to perform computation in $3$D. We used a stochastic gradient descent scheme with the \textit{Adam} optimizer \cite{kingma2014adam}. The learning rate was set to be $10^{-3}$ initially and halved whenever validation loss plateaued for $5$ consecutive epochs. Batch size was set to be $10$. The desktop computer used for training has Intel Xeon W-$2295$ CPU at $3.00$ GHz with $24.75$ MB cache, $128$ GB RAM, and dual NVIDIA Quadro RTX $8000$ GPUs with $48$ GB VRAM. 
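For reference, a minimal PyTorch-style sketch of this 3D NPCC training loss is given below; the small constant added to the denominator for numerical stability is an implementation choice and not part of (\ref{eq:tlf-npcc}):
\begin{verbatim}
import torch

def npcc_loss(f, f_hat, eps=1e-8):
    # Negative Pearson correlation coefficient computed over the whole 3D volume.
    fc, gc = f - f.mean(), f_hat - f_hat.mean()
    num = (fc * gc).sum()
    den = torch.sqrt((fc ** 2).sum()) * torch.sqrt((gc ** 2).sum()) + eps
    return -num / den
\end{verbatim}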
For comparison, we also re-trained the $3$D-DenseNet architecture with skip connections in \cite{goy2019high} with the same training scheme as above, \textit{i.e.} with \textit{Adam} for $100$ epochs, a batch size of $10$, and the same initial learning rate and halving strategy. This serves as the baseline; however, the number of parameters in this network is $0.5\:\text{M}$, whereas in our RNN architecture the number of parameters is $21\:\text{M}$. We also trained an enhanced version of the $3$D-DenseNet by tuning the number of dense blocks, the number of layers inside each dense block, filter size, and growth rate to match the total number of parameters with that of the RNN, {\it i.e.} $21\:\text{M}$. In the next section, we refer to these two versions of the $3$D-DenseNet as Baseline ($0.5\:\text{M}$) and Baseline ($21\:\text{M}$), respectively. \subsection{Testing procedures and metrics} A simple affine transform is first applied to the raw experimentally obtained intensity diffraction patterns to correct slight misalignment. Then we run up to $3$ iterations of the gradient descent (\ref{eq:new_approximants}) and up to $2$ iterations of the FGP-FISTA to test the trained network using the TV-based Approximants (\ref{eq:TV_Approx}). Even though training used NPCC as in (\ref{eq:tlf-npcc}), we investigated two additional metrics for testing: the probability of error (PE) and the Wasserstein distance \cite{villani2003topics,kolouri2017optimal}. We also quantified test performance using the SSIM (Structural Similarity Index Metric) \cite{wang2004image}, shown in the Supplementary material. PE is the mean absolute error between two binary objects; in the digital communication community it is instead referred to as Bit Error Rate (BER). To obtain the PE, we first threshold the reconstructions and then define \begin{equation} \text{PE} = \frac{\left(\text{\# false negatives}\right)\: + \:\left(\text{\# false positives}\right)}{\text{total \# pixels}}. \end{equation} We found that PE oftentimes helps to accentuate the differences between a binary phase ground truth object and its binarized reconstruction: even small residual artifacts, if they are above the threshold, are thresholded to one, and thus they contribute more to the probability of error than they would to other metrics. With these procedures, PE is a particularly suitable error metric for the kind of objects we consider in this paper. PE is also closely related to the two-dimensional Wasserstein distance as we will now show through an analytical derivation. The latter metric involves an optimization process in terms of a transport plan to minimize the total cost of transport from a source distribution to a target distribution. The two-dimensional Wasserstein distance is defined as \begin{equation} \begin{gathered} W_{p=1} = \min_P \langle P,C\rangle = \min_P\sum_{ij}\sum_{kl}\gamma_{ij,kl}C_{ij,kl},\\ \text{s.t.}\:\: \sum_{kl}\gamma_{ij,kl} = f_{ij},\: \sum_{ij}\gamma_{ij,kl}=g_{kl},\:\gamma_{ij,kl}\geq 0, \end{gathered} \end{equation} where $f_{ij}$ and $g_{kl}$ are a ground truth binary object and its binary reconstruction, \textit{i.e.} $f_{ij}, g_{kl}, \gamma_{ij,kl} \in\{0,1\}$; $P=\left(\gamma_{ij,kl}\right)$ is the coupling tensor; and $C_{ij,kl}=\left|x_{ij}-x_{kl}\right|$ is the cost tensor. PE can be reduced to have a similar, but not equivalent, form to that of the Wasserstein distance. 
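Before presenting that reduction, we note that PE is straightforward to evaluate; a minimal NumPy sketch, with an illustrative threshold of $0.5$, is:
\begin{verbatim}
import numpy as np

def probability_of_error(f_true, f_recon, threshold=0.5):
    # Fraction of pixels where the binarized reconstruction disagrees with the
    # binary ground truth (bit-error-rate style metric).
    g = (f_recon >= threshold)
    f = (f_true  >= threshold)  # ground truth is already binary
    return np.mean(f != g)
\end{verbatim}
We now return to the relation between PE and the Wasserstein distance.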
For $i,j,k,l$ where $\gamma_{ij,kl}\neq 0$, \begin{equation}\label{eq:prob} \begin{split} \text{PE} &= \frac{1}{N^2}\sum_{ij}\left|f_{ij}-g_{ij}\right|\\ &= \frac{1}{N^2}\sum_{ij}\left|f_{ij}-\sum_{kl}g_{kl}\:\delta\left[i-k,j-l\right]\right|\\ &= \frac{1}{N^2}\sum_{ij}\left|\sum_{kl}\gamma_{ij,kl}\left(1-\frac{g_{kl}\:\delta\left[i-k,j-l\right]}{\gamma_{ij,kl}}\right)\right|\\ &\equiv \sum_{ij}\left|\sum_{kl}\gamma_{ij,kl}\Tilde{C}_{ij,kl}\right|\\ &= \sum_{ij,kl;\gamma_{ij,kl}\neq 0} \gamma_{ij,kl}\Tilde{C}_{ij,kl}, \qquad \text{where} \end{split} \end{equation} \begin{equation} N^2\Tilde{C}_{ij,kl} = 1\!-\!\frac{g_{kl}\:\delta\left[i-k,j-l\right]}{\gamma_{ij,kl}}= \begin{dcases} \:1, & \:\text{if}\:\: ij \neq kl\\ \:1\! -\! g_{kl}, & \:\text{if}\:\: ij = kl. \end{dcases} \end{equation} This shows that the PE is a version of the Wasserstein distance with differently defined cost tensor. \section{Results}\label{sec:results} \begin{figure*}[htbp] \centering \includegraphics[width=0.55\textwidth]{number_of_patterns.pdf} \caption{Progress of 3D reconstruction performance as new windowed Approximants $m=1,\ldots, M\!\!=\!\!12$ according to (\protect{\ref{eq:moving_window}}) applied on experimental data are presented to the recurrent scheme. The same progression can be found in the Online Materials as a movie.} \label{fig:number_of_patterns} \end{figure*} \begin{table*}[htbp!] \begin{center} \begin{tabular}{c||c c c c|c} \hline \textbf{Probability of error ($\%$)} ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 6.604 & 5.255 & 7.837 & 3.204 & 5.725\\ \text{Baseline (21 M)} & 6.604 & 5.725 & 5.652 & 2.856 & 5.209\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{5.408} & \textbf{4.828} & \textbf{2.332} & \textbf{1.660} & \textbf{3.557}\\ \hline\hline \textbf{Wasserstein distance} ($\times\:10^{-2}$) ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 2.854 & 1.466 & 2.783 & 0.9900 & 2.023\\ \text{Baseline (21 M)} & 2.703 & 1.171 & 2.475 & 0.8112 & 1.790\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{1.999} & \textbf{1.093} & \textbf{1.749} & \textbf{0.6403} & \textbf{1.370}\\ \hline\hline \textbf{PCC} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 0.8818 & 0.6426 & 0.8658 & 0.6191 & 0.7523\\ \text{Baseline (21 M)} & 0.8859 & 0.6430 & 0.9021 & 0.6132 & 0.7611\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{0.8943} & \textbf{0.6612} & \textbf{0.9551} & \textbf{0.7039} & \textbf{0.8036}\\ \hline \iffalse \hline \textbf{SSIM} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Baseline (0.5 M)} & 0.7606 & 0.7409 & 0.7299 & 0.8046 & 0.7590\\ \text{Baseline (21 M)} & 0.7702 & 0.7557 & 0.7978 & 0.8357 & 0.7899\\ \hdashline \text{Proposed RNN (21 M)} & \textbf{0.7987} & \textbf{0.8128} & \textbf{0.8652} & \textbf{0.9154} & \textbf{0.8480}\\ \hline \fi \end{tabular} \end{center} \caption{Quantitative comparison between the baseline (static) and dynamic reconstruction from testing on experimental data, according to PE, Wasserstein distance ($p=1$), and PCC. 
SSIM comparisons are in the Supplementary materials.} \label{tab:quantitative_comparison} \end{table*} Our RNN is first trained as described in Section~\ref{sec:train_and_test}, and then tested with the TV-based Approximants (\ref{eq:TV_Approx}) applied to the experimentally obtained diffraction patterns. The evolution of the RNN output as more input patterns are presented is shown in Figure~\ref{fig:number_of_patterns}. When the recurrence starts with $m=1$, the volumetric reconstruction is quite poor; as more orientations are included, the reconstruction improves as expected. A movie version of this evolution for $m=1,\ldots, M$ is included in the online materials. \begin{figure*}[htbp!] \centering \includegraphics[width=0.72\textwidth]{qualitative_comparison.pdf} \caption{Qualitative comparison on test performance between the baseline and proposed architectures using experimental data. The baseline architectures are $3$D-DenseNet CNN architectures with $0.5$ M and $21$ M parameters. Our proposed architecture is a recurrent neural network with elements described in Section~\ref{sec:comput_arch}.} \label{fig:qualitative_comparison} \end{figure*} \begin{table*}[htbp!] \begin{center} \begin{tabular}{c||c c c c|c} \hline \textbf{Probability of error ($\%$)} ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{5.408} & 4.828 & \textbf{2.332} & \textbf{1.660} & \textbf{3.557}\\ \hdashline \text{-- ReLU activation (21 M)} & 6.262 & \textbf{4.718} & 3.241 & 1.904 & 4.031\\ \text{-- angular attention (21 M)} & 9.399 & 5.566 & 11.64 & 3.375 & 7.495\\ \text{-- split convolution (43 M)} & 9.674 & 6.342 & 14.43 & 2.405 & 8.212\\ \hline\hline \textbf{Wasserstein distance} ($\times\:10^{-2}$) ($\downarrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{1.999} & \textbf{1.093} & \textbf{1.749} & \textbf{0.6403} & \textbf{1.370}\\ \hdashline \text{-- ReLU activation (21 M)} & 2.291 & 1.156 & 1.886 & 0.6692 & 1.501\\ \text{-- angular attention (21 M)} & 3.016 & 1.587 & 3.672 & 1.063 & 2.335\\ \text{-- split convolution (43 M)} & 4.005 & 2.863 & 3.651 & 2.233 & 3.188\\ \hline\hline \textbf{PCC} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{0.8943} & 0.6612 & \textbf{0.9551} & \textbf{0.7039} & \textbf{0.8036}\\ \hdashline \text{-- ReLU activation (21 M)} & 0.8832 & \textbf{0.6836} & 0.9406 & 0.6725 & 0.7950\\ \text{-- angular attention (21 M)} & 0.8281 & 0.6252 & 0.8145 & 0.4657 & 0.6834\\ \text{-- split convolution (43 M)} & 0.8005 & 0.4525 & 0.7313 & 0.4910 & 0.6188\\ \hline \iffalse \hline \textbf{SSIM} ($\uparrow$) & \text{Layer} 1 & \text{Layer} 2 & \text{Layer} 3 & \text{Layer} 4 & \text{Overall}\\ \hline \text{Proposed RNN (21 M)} & \textbf{0.7987} & \textbf{0.8128} & \textbf{0.8652} & \textbf{0.9154} & \textbf{0.8480}\\ \hdashline \text{-- ReLU activation (21 M)} & 0.7787 & 0.8088 & 0.8459 & 0.8971 & 0.8326\\ \text{-- angular attention (21 M)} & 0.6876 & 0.7175 & 0.6612 & 0.7826 & 0.7122\\ \text{-- split convolution (43 M)} & 0.6205 & 0.4740 & 0.5953 & 0.5274 & 0.5543\\ \hline \fi \end{tabular} \end{center} \caption{Quantitative assessment of ablation effects. Values inside the parentheses in the first column indicate the number of parameters. 
When we ablate the split convolution, we instead use a uniform $3\times 3\times 3$ kernel, and, hence, the number of parameters increases. SSIM comparisons are in the Supplementary materials.} \label{tab:ablation_study_quantitative} \end{table*} Visual comparisons with the baseline $3$D-DenseNets with $0.5$ M and $21$ M parameters are shown in Figure~\ref{fig:qualitative_comparison}. The RNN results show substantial visual improvement, with fewer artifacts and distortions compared to static approaches, e.g. \cite{goy2019high}. Quantitative comparisons in terms of our chosen metrics PE, Wasserstein distance, and PCC are in Table~\ref{tab:quantitative_comparison}. \begin{figure*}[t!] \centering \includegraphics[width=0.61\textwidth]{ablation_study_qualitative.pdf} \caption{Visual quality assessment from the ablation study on elements described in Section~\ref{sec:comput_arch}. Rows $3-5$ show reconstructions based on experimental data for each layer upon ablation of ReLU activation (\ref{eq:new_gru_equations}), {\it i.e.}, using the more common tanh activation function instead (row 3); angular attention mechanism (row 4); and split convolution (row 5). The rows are ordered by increasing severity of the ablation effect.} \label{fig:ablation_study_qualitative} \end{figure*} We conducted an ablation study whose purpose is to isolate and quantitatively compare the contribution of each element described in Figure~\ref{fig:architecture} and Section~\ref{sec:comput_arch} to the reconstruction. We remove, one at a time, the split convolution, angular attention mechanism, and ReLU activation, and quantify performance again. Ablation in the case of ReLU activation means that we replace it with the tanh activation function, which is more common. The ablated architectures are also trained under the same training scheme as in Section~\ref{sec:train_and_test}.\ref{subsec:training_rnn} and tested with the same TV-based Approximants. Visually, the ablation of the split convolution degrades the testing performance the most, followed by the ablation of the angular attention mechanism and of the ReLU activation. These findings are also supported quantitatively in Table~\ref{tab:ablation_study_quantitative}. Note that the substitution of the ReLU with the tanh does not degrade performance as much as the other ablations, and is even slightly better in some cases (see the probability of error of Layer $2$ in Table~\ref{tab:ablation_study_quantitative}). Thus, we find that (1) the split convolution should be considered as a replacement for a general $3$D convolution when designing a recurrent unit and a convolutional encoder/decoder; (2) the angular attention mechanism is helpful when the inputs are formulated into temporal sequences; and (3) the choice of ReLU over tanh is still helpful but somewhat less significant and may be application-dependent. With respect to attention, in particular, even though the module's presence clearly contributes to good training quality, we found that the coefficients converge to $\alpha_m\approx 1/M$ for all $m$, consistent with the more-or-less angularly invariant class of samples---at least in the statistical sense, and for the small range of illumination angles that we used. A more detailed study of the angular attention module can be found in the Supplementary Material. 
\section{Conclusions and discussion}\label{sec:conclusion} We have proposed a radically new recurrent neural network scheme for processing raw inputs from different angles of illumination dynamically, {\it i.e.} as a sequence, with each new angle improving the 3D reconstruction. We have found this scheme to offer significant qualitative and quantitative improvement over static machine learning schemes, where the raw inputs from all angles are processed at once by a neural network. Through an ablation study, we found that sandwiching the recurrent structure between a convolutional encoder/decoder helps improve the reconstructions. Even more interestingly, an angular attention mechanism, rewarding raw inputs from certain angles as more informative and penalizing others, also contributes significantly to improving reconstruction fidelity albeit less than the encoder/decoder pair. Even though we used the dynamic machine learning approach in the most difficult case of 3D reconstruction when strong scattering is present, there is no reason to doubt that it would be applicable to less ill-posed cases as well, e.g. optical diffraction tomography and Radon inverse. Also possible are alternative implementations of the RNN, e.g. with LSTMs or Reservoir Computing \cite{lukovsevivcius2009reservoir,lukovsevivcius2012reservoir,schrauwen2007overview}, and further exploration of split convolutional variants or DenseNet variants for the encoder/decoder and dynamical units; we leave these investigations to future work. \section{Funding} \noindent Southern University of Science and Technology (6941806); Intelligence Advanced Research Projects Activity (FA8650-17-C-9113); Korea Foundation for Advanced Studies.~\\ \section{Acknowledgments} \noindent I. Kang acknowledges partial support from KFAS (Korea Foundation for Advanced Studies) scholarship. We are grateful to Jungmoon Ham for her assistance with drawing Figures~\ref{fig:introduction} and \ref{fig:split_convolution}, and to Subeen Pang, Mo Deng and Peter So for useful discussions and suggestions.~\\ \noindent\textbf{Disclosures.} The authors declare no conflicts of interest.
2024-02-18T23:39:40.296Z
2020-07-22T02:14:35.000Z
algebraic_stack_train_0000
21
7,788
proofpile-arXiv_065-174
\section{Introduction} Multi-body hadronic $D^{0(+)}$ decays provide an ideal laboratory to study strong and weak interactions. Amplitude analyses of these decays offer comprehensive information on quasi-two-body $D^{0(+)}$ decays, which are important for exploring $D\bar D^0$ mixing, charge-parity ($CP$) violation, and the quark SU(3)-flavor symmetry breaking phenomenon~\cite{ref5,theory_1,theory_2,chenghy1,yufs}. In particular, for the search for $CP$ violation, it is important to understand the intermediate structures for the singly Cabibbo-suppressed decays of $D^{0(+)}\to K\bar K\pi\pi$~\cite{xwkang,Charles:2009ig,yufs-cpv}. Current measurements of the $D^{0(+)}\to K\bar K\pi\pi$ decays containing $K^0_S$ or $\pi^0$ are limited~\cite{pdg2018}. The branching fractions (BFs) of $D^0\to K^0_SK^0_S\pi^+\pi^-$~\cite{FOCUS_kskspipi,ARGUS_kkpipi}, $D^+\to K^0_SK^-\pi^+\pi^+$~\cite{FOCUS_kskpipi}, $D^+\to K^0_SK^+\pi^+\pi^-$~\cite{FOCUS_kskpipi}, and $D^+\to K^+K^-\pi^+\pi^0$~\cite{ACCMOR_kkpipi0} were only determined relative to some well-known decays or via topological normalization, with poor precision. This paper presents the first direct measurements of the absolute BFs for the decays $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^+K^-\pi^+\pi^0$, $D^+\to K^0_SK^+\pi^0\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, $D^+\to K^0_SK^+\pi^+\pi^-$, and $D^+\to K^0_SK^0_S\pi^+\pi^0$. The $D^0\to K^0_SK^0_S\pi^0\pi^0$ decay is not included since it suffers from low statistics and high background. Throughout this paper, charge conjugate processes are implied. An $e^+e^-$ collision data sample corresponding to an integrated luminosity of 2.93~fb$^{-1}$~\cite{lum_bes3} collected at a center-of-mass energy of $\sqrt s=$ 3.773~GeV with the BESIII detector is used to perform this analysis. \section{BESIII detector and Monte Carlo simulation} The BESIII detector is a magnetic spectrometer~\cite{BESIII} located at the Beijing Electron Positron Collider (BEPCII)~\cite{Yu:IPAC2016-TUYA01}. The cylindrical core of the BESIII detector consists of a helium-based multilayer drift chamber (MDC), a plastic scintillator time-of-flight system (TOF), and a CsI(Tl) electromagnetic calorimeter (EMC), which are all enclosed in a superconducting solenoidal magnet providing a 1.0~T magnetic field. The solenoid is supported by an octagonal flux-return yoke with resistive plate counter muon identifier modules interleaved with steel. The acceptance of charged particles and photons is 93\% over $4\pi$ solid angle. The charged-particle momentum resolution at $1~{\rm GeV}/c$ is $0.5\%$, and the $dE/dx$ resolution is $6\%$ for the electrons from Bhabha scattering. The EMC measures photon energies with a resolution of $2.5\%$ ($5\%$) at $1$~GeV in the barrel (end cap) region. The time resolution of the TOF barrel part is 68~ps, while that of the end cap part is 110~ps. Simulated samples produced with the {\sc geant4}-based~\cite{geant4} Monte Carlo (MC) package, which includes the geometric description of the BESIII detector and the detector response, are used to determine the detection efficiency and to estimate the backgrounds. The simulation includes the beam-energy spread and initial-state radiation (ISR) in the $e^+e^-$ annihilations modeled with the generator {\sc kkmc}~\cite{kkmc}. 
The inclusive MC samples consist of the production of $D\bar{D}$ pairs with consideration of quantum coherence for all neutral $D$ modes, the non-$D\bar{D}$ decays of the $\psi(3770)$, the ISR production of the $J/\psi$ and $\psi(3686)$ states, and the continuum processes. The known decay modes are modeled with {\sc evtgen}~\cite{evtgen} using the BFs taken from the Particle Data Group (PDG)~\cite{pdg2018}, and the remaining unknown decays from the charmonium states are modeled with {\sc lundcharm}~\cite{lundcharm}. The final-state radiations from charged final-state particles are incorporated with the {\sc photos} package~\cite{photos}. \section{Measurement Method} The $D^0\bar D^0$ or $D^+D^-$ pair is produced without an additional hadron in $e^+e^-$ annihilations at $\sqrt s=3.773$ GeV. This process offers a clean environment to measure the BFs of the hadronic $D$ decay with the double-tag (DT) method. The single-tag (ST) candidate events are selected by reconstructing a $\bar D^0$ or $D^-$ in the following hadronic final states: $\bar D^0 \to K^+\pi^-$, $K^+\pi^-\pi^0$, and $K^+\pi^-\pi^-\pi^+$, and $D^- \to K^{+}\pi^{-}\pi^{-}$, $K^0_{S}\pi^{-}$, $K^{+}\pi^{-}\pi^{-}\pi^{0}$, $K^0_{S}\pi^{-}\pi^{0}$, $K^0_{S}\pi^{+}\pi^{-}\pi^{-}$, and $K^{+}K^{-}\pi^{-}$. The event in which a signal candidate is selected in the presence of an ST $\bar D$ meson, is called a DT event. The BF of the signal decay is determined by \begin{equation} \label{eq:br} {\mathcal B}_{{\rm sig}} = N^{\rm net}_{\rm DT}/(N^{\rm tot}_{\rm ST}\cdot\epsilon_{{\rm sig}}), \end{equation} where $N^{\rm tot}_{\rm ST}=\sum_i N_{{\rm ST}}^i$ and $N^{\rm net}_{\rm DT}$ are the total yields of the ST and DT candidates in data, respectively. $N_{{\rm ST}}^i$ is the ST yield for the tag mode $i$. For the signal decays involving $K^0_S$ meson(s) in the final states, $N^{\rm net}_{\rm DT}$ is the net DT yields after removing the peaking background from the corresponding non-$K^0_S$ decays. For the other signal decays, the variable corresponds to the fitted DT yields as described later. Here, $\epsilon_{{\rm sig}}$ is the efficiency of detecting the signal $D$ decay, averaged over the tag mode $i$, which is given by: \begin{equation} \label{eq:eff} \epsilon_{{\rm sig}} = \sum_i (N^i_{{\rm ST}}\cdot\epsilon^i_{{\rm DT}}/\epsilon^i_{{\rm ST}})/N^{\rm tot}_{\rm ST}, \end{equation} where $\epsilon^i_{{\rm ST}}$ and $\epsilon^i_{{\rm DT}}$ are the efficiencies of detecting ST and DT candidates in the tag mode $i$, respectively. \section{Event selection} The selection criteria of $K^\pm$, $\pi^\pm$, $K^0_S$, and $\pi^0$ are the same as those used in the analyses presented in Refs.~\cite{epjc76,cpc40,bes3-pimuv,bes3-Dp-K1ev,bes3-etaetapi,bes3-omegamuv,bes3-etamuv,bes3-etaX}. All charged tracks, except those from $K^0_{S}$ decays, are required to have a polar angle $\theta$ with respect to the beam direction within the MDC acceptance $|\rm{cos\theta}|<0.93$, and a distance of closest approach to the interaction point (IP) within 10~cm along the beam direction and within 1~cm in the plane transverse to the beam direction. Particle identification (PID) for charged pions, kaons, and protons is performed by exploiting TOF information and the specific ionization energy loss $dE/dx$ measured by the MDC. The confidence levels for pion and kaon hypotheses ($CL_{\pi}$ and $CL_{K}$) are calculated. Kaon and pion candidates are required to satisfy $CL_{K}>CL_{\pi}$ and $CL_{\pi}>CL_{K}$, respectively. 
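Returning briefly to the double-tag formulas, Eqs.~(\ref{eq:br}) and (\ref{eq:eff}) can be evaluated per signal mode as in the following NumPy sketch, where the function name and all inputs are placeholders rather than measured yields or efficiencies:
\begin{verbatim}
import numpy as np

def branching_fraction(N_DT_net, N_ST_per_mode, eff_ST, eff_DT):
    # Eq. (eq:br) with the tag-yield-weighted signal efficiency of Eq. (eq:eff);
    # N_ST_per_mode, eff_ST, eff_DT are arrays over the ST tag modes i.
    N_ST = np.asarray(N_ST_per_mode, dtype=float)
    eff_sig = np.sum(N_ST * np.asarray(eff_DT) / np.asarray(eff_ST)) / N_ST.sum()
    return N_DT_net / (N_ST.sum() * eff_sig)
\end{verbatim}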
The $K^0_S$ candidates are reconstructed from two oppositely charged tracks to which no PID criteria are applied and whose masses are assumed to be that of pions. The charged tracks from the $K^0_S$ candidate must satisfy $|\cos\theta|<0.93$. In addition, due to the long lifetime of the $K^0_S$ meson, the requirement on the distance of closest approach to the IP along the beam direction is relaxed to less than 20~cm, and no requirement is placed on the distance of closest approach in the plane transverse to the beam direction. Furthermore, the $\pi^+\pi^-$ pairs are constrained to originate from a common vertex and their invariant mass is required to be within $(0.486,0.510)~{\rm GeV}/c^2$, which corresponds to about three times the fitted resolution around the nominal $K^0_S$ mass. The decay length of the $K^0_S$ candidate is required to be larger than two standard deviations of the vertex resolution away from the IP. The $\pi^0$ candidate is reconstructed via its $\gamma\gamma$ decay. The photon candidates are selected using the information from the EMC shower. It is required that each EMC shower starts within 700~ns of the event start time and its energy is greater than 25 (50)~MeV in the barrel (end cap) region of the EMC~\cite{BESIII}. The energy deposited in the nearby TOF counters is included to improve the reconstruction efficiency and energy resolution. The opening angle between the candidate shower and the nearest charged track must be greater than $10^{\circ}$. The $\gamma\gamma$ pair is taken as a $\pi^0$ candidate if its invariant mass is within $(0.115,\,0.150)$\,GeV$/c^{2}$. To improve the resolution, a kinematic fit constraining the $\gamma\gamma$ invariant mass to the $\pi^{0}$ nominal mass~\cite{pdg2018} is imposed on the selected photon pair. \section{Yields of ST $\bar D$ mesons} To select $\bar D^0\to K^+\pi^-$ candidates, the backgrounds from cosmic rays and Bhabha events are rejected by using the same requirements described in Ref.~\cite{deltakpi}. In the selection of $\bar D^0\to K^+\pi^-\pi^-\pi^+$ candidates, the $\bar D^0\to K^0_SK^\pm\pi^\mp$ decays are suppressed by requiring the invariant masses of all $\pi^+\pi^-$ pairs to be outside $(0.478,0.518)$~GeV/$c^2$. The tagged $\bar D$ mesons are identified using two variables, namely the energy difference \begin{equation} \Delta E_{\rm tag} \equiv E_{\rm tag} - E_{\rm b}, \label{eq:deltaE} \end{equation} and the beam-constrained mass \begin{equation} M_{\rm BC}^{\rm tag} \equiv \sqrt{E^{2}_{\rm b}-|\vec{p}_{\rm tag}|^{2}}. \label{eq:mBC} \end{equation} Here, $E_{\rm b}$ is the beam energy, and $\vec{p}_{\rm tag}$ and $E_{\rm tag}$ are the momentum and energy of the $\bar D$ candidate in the rest frame of the $e^+e^-$ system, respectively. For each tag mode, if there are multiple candidates in an event, only the one with the smallest $|\Delta E_{\rm tag}|$ is kept. The tagged $\bar D$ candidates are required to satisfy $\Delta E_{\rm tag}\in(-55,40)$\,MeV for the tag modes containing $\pi^0$ in the final states and $\Delta E_{\rm tag}\in(-25,25)$\,MeV for the other tag modes, thereby taking into account the different resolutions. To extract the yields of ST $\bar D$ mesons for individual tag modes, binned maximum-likelihood fits are performed on the corresponding $M_{\rm BC}^{\rm tag}$ distributions of the accepted ST candidates following Refs.~\cite{epjc76,cpc40,bes3-pimuv,bes3-Dp-K1ev,bes3-etaetapi,bes3-omegamuv,bes3-etamuv}. 
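For concreteness, the two kinematic variables defined in Eqs.~(\ref{eq:deltaE}) and (\ref{eq:mBC}) and used in these fits can be evaluated from the reconstructed tag momentum and the beam energy as in the following schematic Python sketch; the numerical values in the example are purely illustrative and are not taken from data.
\begin{verbatim}
import math

def delta_e_and_mbc(p_tag, e_tag, e_beam):
    """Return (Delta E_tag, M_BC^tag) in GeV and GeV/c^2 (natural units, c = 1).

    p_tag  : magnitude of the tag-D momentum in the e+e- rest frame [GeV/c]
    e_tag  : energy of the tag-D candidate in the e+e- rest frame [GeV]
    e_beam : beam energy, i.e. sqrt(s)/2 [GeV]
    """
    delta_e = e_tag - e_beam                # energy difference
    mbc = math.sqrt(e_beam**2 - p_tag**2)   # beam-constrained mass
    return delta_e, mbc

# Illustrative values only: a candidate with |p| ~ 0.25 GeV/c and E ~ 1.886 GeV
# at E_b = 1.8865 GeV gives Delta E ~ 0 and M_BC close to the nominal D mass.
print(delta_e_and_mbc(p_tag=0.25, e_tag=1.886, e_beam=1.8865))
\end{verbatim}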
In the fits, the $\bar D$ signal is modeled by an MC-simulated shape convolved with a double-Gaussian function describing the resolution difference between data and MC simulation. The combinatorial background shape is described by an ARGUS function~\cite{ARGUS} defined as $c_f(f;E_{\rm end},\xi_f)=A_f\cdot f\cdot \sqrt{1 - \frac {f^2}{E^2_{\rm end}}} \cdot \exp\left[\xi_f \left(1-\frac {f^2}{E^2_{\rm end}}\right)\right]$, where $f$ denotes $M^{\rm tag}_{\rm BC}$, $E_{\rm end}$ is an endpoint fixed at 1.8865 GeV, $A_f$ is a normalization factor, and $\xi_f$ is a free parameter. The resulting fits to the $M_{\rm BC}$ distributions for each mode are shown in Fig.~\ref{fig:datafit_MassBC}. The total yields of the ST $\bar D^0$ and $D^-$ mesons in data are $2327839\pm1860$ and $1558159\pm2113$, respectively, where the uncertainties are statistical only. \begin{figure}[htp] \centering \includegraphics[width=1.0\linewidth]{massbc.eps} \caption{\small Fits to the $M_{\rm BC}$ distributions of the ST $\bar D^0$ (left column) and $D^-$ (middle and right columns) candidates, where the points with error bars are data, the blue solid and red dashed curves are the fit results and the fitted backgrounds, respectively.} \label{fig:datafit_MassBC} \end{figure} \section{Yields of DT events} On the side recoiling against the tagged $\bar D$ candidates, the signal $D$ decays are selected using the remaining tracks that have not been used to reconstruct the tagged $\bar D$ candidates. To suppress the $K^0_S$ contribution in the individual mass spectra for the $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^0_S\pi^{+}\pi^{-}$, and $D^+\to K^0_SK^+\pi^+\pi^-$ decays, the $\pi^{+}\pi^{-}$ and $\pi^{0}\pi^{0}$ invariant masses are required to be outside $(0.468,0.528)$~GeV/$c^2$ and $(0.438,0.538)$~GeV/$c^2$, respectively. To suppress the background from $D^0\to K^-\pi^+\omega$ in the identification of the $D^0\to K^0_SK^-\pi^+\pi^0$ process, the $K^0_S\pi^0$ invariant mass is required to be outside $(0.742,0.822)$ GeV/$c^2$. These requirements correspond to at least five times the fitted mass resolution away from the fitted mean of the mass peak. The signal $D$ mesons are identified using the energy difference $\Delta E_{\rm sig}$ and the beam-constrained mass $M_{\rm BC}^{\rm sig}$, which are calculated with Eqs.~(\ref{eq:deltaE}) and (\ref{eq:mBC}) by substituting ``tag'' with ``sig''. For each signal mode, if there are multiple candidates in an event, only the one with the smallest $|\Delta E_{\rm sig}|$ is kept. The signal decays are required to satisfy the mode-dependent $\Delta E_{\rm sig}$ requirements, as shown in the second column of Table~\ref{tab:DT}. To suppress incorrectly identified $D\bar D$ candidates, the opening angle between the tagged $\bar D$ and the signal $D$ is required to be greater than $160^\circ$, resulting in a loss of (2-6)\% of the signal and suppressing (8-55)\% of the background. Figure~\ref{fig:mBC2D} shows the $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution of the accepted DT candidates in data. The signal events concentrate around $M_{\rm BC}^{\rm tag} = M_{\rm BC}^{\rm sig} = M_{D}$, where $M_{D}$ is the nominal $D$ mass~\cite{pdg2018}. The events with correctly reconstructed $D$ ($\bar D$) and incorrectly reconstructed $\bar D$ ($D$), named BKGI, are spread along the lines around $M_{\rm BC}^{\rm tag} = M_{D}$ or $M_{\rm BC}^{\rm sig} = M_{D}$. The events smeared along the diagonal, named BKGII, are mainly from the $e^+e^- \to q\bar q$ processes. 
The events with uncorrelated and incorrectly reconstructed $D$ and $\bar D$, named BKGIII, disperse across the whole allowed kinematic region. For each signal $D$ decay mode, the yield of DT events ($N^{\rm fit}_{\rm DT}$) is obtained from a two-dimensional (2D) unbinned maximum-likelihood fit~\cite{cleo-2Dfit} on the $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution of the accepted candidates. In the fit, the probability density functions (PDFs) of signal, BKGI, BKGII, and BKGIII are constructed as \begin{itemize} \item signal: $a(x,y)$, \item BKGI: $b(x)\cdot c_y(y;E_{\rm b},\xi_{y}) + b(y)\cdot c_x(x;E_{\rm b},\xi_{x})$, \item BKGII: $c_z(z;\sqrt{2}E_{\rm b},\xi_{z}) \cdot g(k)$, and \item BKGIII: $c_x(x;E_{\rm b},\xi_{x}) \cdot c_y(y;E_{\rm b},\xi_{y})$, \end{itemize} respectively. Here, $x=M_{\rm BC}^{\rm sig}$, $y=M_{\rm BC}^{\rm tag}$, $z=(x+y)/\sqrt{2}$, and $k=(x-y)/\sqrt{2}$. The signal PDF $a(x,y)$ and the functions $b(x)$ and $b(y)$ are described by the corresponding MC-simulated shapes. $c_f(f;E_{\rm end},\xi_f)$ is an ARGUS function~\cite{ARGUS} defined above, where $f$ denotes $x$, $y$, or $z$; $E_{\rm b}$ is fixed at 1.8865 GeV. $g(k)$ is a Gaussian function with a mean of zero and standard deviation parametrized by $\sigma_k=\sigma_0 \cdot(\sqrt{2}E_{\rm b}/c^2-z)^p$, where $\sigma_0$ and $p$ are fit parameters. \begin{figure}[htp] \centering \includegraphics[width=1.0\linewidth]{2Dfit_2018.eps} \caption{ The $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution of the accepted DT candidates of $D^+\to K^+K^-\pi^+\pi^0$ in data. Here, ISR denotes the signal spreading along the diagonal direction. } \label{fig:mBC2D} \end{figure} Combinatorial $\pi^+\pi^-$ pairs from the decays $D^0\to K^0_S2(\pi^+\pi^-)$ [and $D^0\to 3(\pi^+\pi^-)$], $D^0\to K^-\pi^+\pi^+\pi^-\pi^0$, $D^0\to K^+\pi^+\pi^-\pi^-\pi^0$, $D^+\to K^-\pi^+\pi^+\pi^+\pi^-$, $D^+\to K^+2(\pi^+\pi^-)$, $D^+\to K^+\pi^+\pi^-\pi^0\pi^0$, $D^+\to K^0_S\pi^+\pi^+\pi^-\pi^0$ [and $D^+\to 2(\pi^+\pi^-)\pi^+\pi^0$] may also satisfy the $K^0_S$ selection criteria and form peaking backgrounds around $M_D$ in the $M_{\rm BC}^{\rm sig}$ distributions for $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^0_SK^+\pi^0\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, $D^+\to K^0_SK^+\pi^+\pi^-$, and $D^+\to K^0_SK^0_S\pi^+\pi^0$, respectively. This kind of peaking background is estimated by selecting events in the $K^0_S$ sideband region of $(0.454,0.478)\cup(0.518,0.542)~{\rm GeV}/c^2$. For $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, $D^+\to K^0_SK^+\pi^+\pi^-$, and $D^+\to K^0_SK^+\pi^0\pi^0$ decays, one-dimensional (1D) signal and sideband regions are used. For $D^0\to K^0_SK^0_S\pi^+\pi^-$ and $D^+\to K^0_SK^0_S\pi^+\pi^0$ decays, 2D signal and sideband regions are used. The 2D $K^0_S$ signal region is defined as the square region with both $\pi^+\pi^-$ combinations lying in the $K^0_S$ signal regions. The 2D $K^0_S$ sideband 1~(2) regions are defined as the square regions with 1~(2) $\pi^+\pi^-$ combination(s) located in the 1D $K^0_S$ sideband regions and the rest in the 1D $K^0_S$ signal region. Figure~\ref{fig:mks} shows 1D and 2D $\pi^+\pi^-$ invariant-mass distributions as well as the $K^0_S$ signal and sideband regions. 
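As an aside, the ARGUS threshold shape used for the combinatorial background in both the 1D and 2D fits can be written down in a few lines; the following Python sketch is a schematic illustration of the functional form quoted above, with illustrative parameter values, and is not the fit implementation used in the analysis.
\begin{verbatim}
import numpy as np

def argus(f, e_end=1.8865, xi=-20.0, a_norm=1.0):
    """ARGUS threshold function c_f(f; E_end, xi) used to model the
    combinatorial M_BC background (schematic; parameters are illustrative)."""
    f = np.asarray(f, dtype=float)
    t = 1.0 - (f / e_end) ** 2
    out = np.zeros_like(f)
    ok = t > 0.0                      # the shape vanishes beyond the endpoint
    out[ok] = a_norm * f[ok] * np.sqrt(t[ok]) * np.exp(xi * t[ok])
    return out

# Evaluate the shape over a typical M_BC range (GeV/c^2)
mbc = np.linspace(1.84, 1.8865, 100)
background_shape = argus(mbc)
\end{verbatim}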
\begin{figure}[htp] \centering \includegraphics[width=1.0\linewidth]{D0_ks2.eps} \caption{\small (a)~The $\pi^+\pi^-$ invariant-mass distributions of the $D^+\to K^0_SK^-\pi^+\pi^+$ candidate events for data (points with error bars) and the inclusive MC sample (histogram). Pairs of the red solid~(blue dashed) arrows denote the $K^0_S$ signal~(sideband) regions. (b)~Distribution of $M_{\pi^+\pi^-(1)}$ versus $M_{\pi^+\pi^-(2)}$ for the $D^0\to K^0_SK^0_S\pi^+\pi^-$ candidate events in data. Red solid box denotes the 2D signal region. Pink dot-dashed~(blue dashed) boxes indicate the 2D sideband 1~(2) regions. }\label{fig:mks} \end{figure} For the signal decays involving $K^0_S$ meson(s) in the final states, the net yields of DT events are calculated by subtracting the sideband contribution from the fitted DT yield according to \begin{equation} \label{eq:1} N^{\rm net}_{\rm DT} = N^{\rm fit}_{\rm DT} + \sum_{i=1}^{N} \left [\left (-\frac{1}{2} \right )^i N^{\rm fit}_{{\rm sid}i} \right ]. \end{equation} Here, $N=1$ for the decays with one $K^0_S$ meson while $N=2$ for the decays with two $K^0_S$ mesons. The combinatorial $\pi^+\pi^-$ backgrounds are assumed to be uniformly distributed, and double counting is avoided by the alternating signs with which the sideband~1 and sideband~2 yields enter Eq.~(\ref{eq:1}). $N^{\rm fit}_{\rm DT}$ and $N^{\rm fit}_{{\rm sid}i}$ are the fitted $D$ yields in the 1D or 2D signal region and sideband $i$ region, respectively. For the other signal decays, the net yields of DT events are $N^{\rm fit}_{\rm DT}$. Figure~\ref{fig:2Dfit} shows the $M^{\rm tag}_{\rm BC}$ and $M^{\rm sig}_{\rm BC}$ projections of the 2D fits to data. From these 2D fits, we obtain the DT yields for individual signal decays as shown in Table~\ref{tab:DT}. For each signal decay mode, the statistical significance is calculated according to $\sqrt{-2\ln({\mathcal L}_0/{\mathcal L}_{\rm max})}$, where ${\mathcal L}_{\rm max}$ and ${\mathcal L}_0$ are the maximum likelihoods of the fits with and without involving the signal component, respectively. The effect of combinatorial $\pi^+\pi^-$ backgrounds in the $K^0_S$-signal regions has been considered for the decays involving a $K^0_S$. The statistical significance for each signal decay is found to be greater than $8\sigma$. \section{Results} Each of the $D^0\to K^0_SK^-\pi^+\pi^0$, $D^+\to K^+K^-\pi^+\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, and $D^+\to K^0_SK^+\pi^+\pi^-$ decays is modeled by the corresponding mixed signal MC samples, in which the dominant decay modes containing resonances of $K^*(892)$, $\rho(770)$, and $\phi$ are mixed with the phase space (PHSP) signal MC samples. The mixing ratios are determined by examining the corresponding invariant mass and momentum spectra. The other decays, which are limited in statistics, are generated with the PHSP generator. The momentum and the polar angle distributions of the daughter particles and the invariant masses of each two- and three-body particle combination in data agree with those of the MC simulations. As an example, Fig.~\ref{add} shows the invariant mass distributions of two- or three-body particle combinations of $D^+\to K^+K^-\pi^+\pi^0$ candidate events for data and MC simulations. The measured values of $N^{\rm net}_{{\rm DT}}$, $\epsilon^{}_{{\rm sig}}$, and the obtained BFs are summarized in Table~\ref{tab:DT}. The current world-average values are also given for comparison. 
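To make the bookkeeping behind Table~\ref{tab:DT} explicit, the following schematic Python sketch combines Eqs.~(\ref{eq:br}), (\ref{eq:eff}), and (\ref{eq:1}) for a hypothetical signal mode; all yields and efficiencies below are placeholders rather than measured values.
\begin{verbatim}
import numpy as np

# Hypothetical per-tag-mode inputs (placeholders, not measured values)
n_st    = np.array([500000., 300000., 200000.])   # ST yields N_ST^i
eps_st  = np.array([0.60, 0.35, 0.40])            # ST efficiencies eps_ST^i
eps_dt  = np.array([0.030, 0.020, 0.025])         # DT efficiencies eps_DT^i

n_st_tot = n_st.sum()

# ST-yield-weighted signal efficiency: eps_sig = sum_i N_ST^i eps_DT^i/eps_ST^i / N_ST^tot
eps_sig = np.sum(n_st * eps_dt / eps_st) / n_st_tot

# Sideband subtraction for a decay with one K0S (N = 1):
# N_DT^net = N_DT^fit + sum_i (-1/2)^i N_sid_i^fit
n_dt_fit  = 200.0                # fitted DT yield in the K0S signal region
n_sid_fit = [30.0]               # fitted yields in the sideband regions i = 1..N
n_dt_net  = n_dt_fit + sum((-0.5) ** (i + 1) * n for i, n in enumerate(n_sid_fit))

# Branching fraction: B_sig = N_DT^net / (N_ST^tot * eps_sig)
bf_sig = n_dt_net / (n_st_tot * eps_sig)
print(eps_sig, n_dt_net, bf_sig)
\end{verbatim}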
The signal efficiencies have been corrected for the known data-MC differences in the $K^\pm$ and $\pi^\pm$ tracking and PID efficiencies and in the $\pi^0$ reconstruction efficiency. These efficiencies also include the BFs of the $K^0_S$ and $\pi^0$ decays. The efficiency for $D^+\to K^0_SK^+\pi^+\pi^-$ ($D^0\to K^0_SK^-\pi^+\pi^0$) is lower than that of $D^+\to K^0_SK^-\pi^+\pi^+$ ($D^0\to K^0_SK^+\pi^-\pi^0$) due to the $K^0_S$ $(\omega)$ rejection in the $\pi^+\pi^-$ ($K^0_S\pi^0$) mass spectrum. \begin{figure*}[htbp] \centering \includegraphics[width=0.49\linewidth]{2Dfit_tag33.eps} \includegraphics[width=0.49\linewidth]{2Dfit_sig33.eps} \caption{\small Projections onto the $M^{\rm tag}_{\rm BC}$ and $M^{\rm sig}_{\rm BC}$ distributions of the 2D fits to the DT candidate events with all $\bar D^0$ or $D^-$ tags. Data are shown as points with error bars. Blue solid, light blue dotted, blue dot-dashed, red dot-long-dashed, and pink long-dashed curves denote the overall fit results, signal, BKGI, BKGII, and BKGIII components (see text), respectively. } \label{fig:2Dfit} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\linewidth]{data_mc.eps} \caption{\small The invariant mass distributions of two- or three-body particle combinations of $D^+\to K^+K^-\pi^+\pi^0$ candidate events for data and MC simulations. Data are shown as points with error bars. Red solid histograms are mixed signal MC samples. Blue dashed histograms are PHSP signal MC samples. Yellow hatched histograms are the backgrounds estimated from the inclusive MC sample. } \label{add} \end{figure*} \section{Systematic uncertainties} The systematic uncertainties are estimated relative to the measured BFs and are discussed below. In the BF determinations using Eq.~(\ref{eq:br}), all uncertainties associated with the selection of the tagged $\bar D$ cancel in the ratio. The systematic uncertainties in the total yields of ST $\bar D$ mesons related to the $M_{\rm BC}$ fits to the ST $\bar D$ candidates were previously estimated to be 0.5\% for both neutral and charged $\bar D$~\cite{epjc76,cpc40,bes3-pimuv}. The tracking and PID efficiencies for $K^\pm$ or $\pi^\pm$, $\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm data}]$ and $\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm MC}]$, are investigated using DT $D\bar D$ hadronic events. The averaged ratios between data and MC efficiencies ($f_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}=\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm data}]/\epsilon_{K\,{\rm or}\,\pi}^{\rm tracking\,(PID)}[{\rm MC}]$) of tracking (PID) for $K^\pm$ or $\pi^\pm$ are weighted by the corresponding momentum spectra of signal MC events, giving $f_K^{\rm tracking}$ values of $1.022{\text -}1.031$ and $f_\pi^{\rm tracking}$ values close to unity. After correcting the MC efficiencies by $f_K^{\rm tracking}$, the residual uncertainties of $f_{K\,{\rm or}\,\pi}^{\rm tracking}$ are assigned as the systematic uncertainties of the tracking efficiencies, which are (0.4-0.7)\% per $K^\pm$ and (0.2-0.3)\% per $\pi^\pm$. $f_K^{\rm PID}$ and $f_\pi^{\rm PID}$ are both close to unity, and their individual uncertainties, (0.2-0.3)\%, are taken as the associated systematic uncertainties per $K^\pm$ or $\pi^\pm$. The systematic uncertainty related to the $K_{S}^{0}$ reconstruction efficiency is estimated from measurements of $J/\psi\to K^{*}(892)^{\mp}K^{\pm}$ and $J/\psi\to \phi K_S^{0}K^{\pm}\pi^{\mp}$ control samples~\cite{sysks} and found to be 1.6\% per $K^0_S$. 
The systematic uncertainty of the $\pi^0$ reconstruction efficiency is assigned as (0.7-0.8)\% per $\pi^0$ from a study of the DT $D\bar D$ hadronic decays $\bar D^0\to K^+\pi^-\pi^0$ and $\bar D^0\to K^0_S\pi^0$ tagged by either $D^0\to K^-\pi^+$ or $D^0\to K^-\pi^+\pi^+\pi^-$~\cite{epjc76,cpc40}. The systematic uncertainty in the 2D fit to the $M_{\rm BC}^{\rm tag}$ versus $M_{\rm BC}^{\rm sig}$ distribution is examined via repeated measurements in which the signal shape and the endpoint of the ARGUS function ($\pm0.2$\,MeV/$c^2$) are varied. Quadratically summing the changes of the BFs for these two sources yields the corresponding systematic uncertainties. The systematic uncertainty due to the $\Delta E_{\rm sig}$ requirement is assigned to be 0.3\%, which corresponds to the largest efficiency difference with and without smearing the data-MC Gaussian resolution of $\Delta E_{\rm sig}$ for signal MC events. Here, the smeared Gaussian parameters are obtained by using the samples of DT events $D^0\to K^0_S\pi^0$, $D^0\to K^-\pi^+\pi^0$, $D^0\to K^-\pi^+\pi^0\pi^0$, and $D^+\to K^-\pi^+\pi^+\pi^0$ versus the same $\bar D$ tags as in our nominal analysis. The systematic uncertainties due to the $K^0_S$ sideband choice and the $K^0_S$ rejection mass window are assigned by examining the changes of the BFs when varying the nominal $K^0_S$ sideband and the corresponding rejection window by $\pm5$~MeV/$c^2$. For the decays whose efficiencies are estimated with mixed signal MC events, the systematic uncertainty in the MC modeling is determined by comparing the signal efficiencies obtained when changing the relative fractions of the MC sample components. For the decays whose efficiencies are estimated with PHSP-distributed signal MC events, the uncertainties are assigned as the change of the signal efficiency after adding the possible decays containing $K^*(892)$ or $\rho(770)$. The imperfect simulations of the momentum and $\cos\theta$ distributions of charged particles are considered as a source of systematic uncertainty. The signal efficiencies are re-weighted by those distributions in data with background subtracted. The largest change of the re-weighted to nominal efficiencies, 0.9\%, is assigned as the corresponding systematic uncertainty. The measurements of the BFs of the neutral $D$ decays are affected by the quantum correlation effect. For each neutral $D$ decay, the $CP$-even component is estimated using the $CP$-even tag $D^0\to K^+K^-$ and the $CP$-odd tag $D^0\to K^0_S\pi^0$. Using the same method as described in Ref.~\cite{QC-factor} and the necessary parameters quoted from Refs.~\cite{R-ref1,R-ref2,R-ref3}, we find that the correction factors accounting for the quantum correlation effect on the measured BFs are $(98.3^{+1.6}_{-1.1{\,\rm stat}})\%$, $(98.1^{+2.8}_{-1.7{\,\rm stat}})\%$, $(95.9^{+3.4}_{-2.7{\,\rm stat}})\%$, and $(98.4^{+1.1}_{-1.0{\,\rm stat}})\%$ for $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^0\to K^0_SK^-\pi^+\pi^0$, and $D^0\to K^0_SK^+\pi^-\pi^0$, respectively. After correcting the signal efficiencies by the individual factors, the residual uncertainties are assigned as systematic uncertainties. The uncertainties due to the limited MC statistics for the various signal decays, (0.4-0.8)\%, are taken into account as a systematic uncertainty. The uncertainties of the quoted BFs of the $K^0_S\to \pi^+\pi^-$ and $\pi^0\to \gamma\gamma$ decays are 0.07\% and 0.03\%, respectively~\cite{pdg2018}. 
The efficiency of the $D\bar D$ opening angle requirement is studied using DT events of $D^0\to K^-\pi^+\pi^+\pi^-$, $D^0\to K^-\pi^+\pi^0\pi^0$, and $D^+\to K^-\pi^+\pi^+\pi^0$ tagged by the same tag modes as in our nominal analysis. The difference of the acceptance efficiencies between data and MC simulation, 0.4\% for the decays without a $\pi^0$, 0.8\% for the decays involving one $\pi^0$, and 0.3\% for the decays involving two $\pi^0$s, is assigned as the associated systematic uncertainty. Table~\ref{tab:relsysuncertainties1} summarizes the systematic uncertainties in the BF measurements. For each signal decay, the total systematic uncertainty, obtained by adding the above effects in quadrature, is (2.6-6.0)\% depending on the signal decay mode. \begin{table*}[htbp] \centering \caption{\small Requirements of $\Delta E_{\rm sig}$, net yields of DT candidates ($N^{\rm net}_{{\rm DT}}$), signal efficiencies ($\epsilon_{\rm sig}$), and the obtained BFs (${\mathcal B}_{\rm sig}$) for various signal decays as well as comparisons with the world-average BFs (${\mathcal B}_{\rm PDG}$). The first and second uncertainties for ${\mathcal B}_{\rm sig}$ are statistical and systematic, respectively, while the uncertainties for $N^{\rm net}_{\rm DT}$ and $\epsilon_{\rm sig}$ are statistical only. The world-average BF of $D^+\to K^+K^-\pi^+\pi^0$ is obtained by summing over the contributions of $D^+\to \phi(\to K^+K^-)\pi^+\pi^0$ and $D^+\to K^+K^-\pi^+\pi^0|_{{\rm non\text-}\phi}$. }\label{tab:DT} \begin{ruledtabular} \begin{tabular}{lccccc} \multicolumn{1}{c} {Signal mode}&$\Delta E_{\rm sig}$\,(MeV) &$N^{\rm net}_{\rm DT}$ & $\epsilon_{\rm sig}$\,(\%) & ${\mathcal B}_{\rm sig}$\,($\times10^{-3}$) & ${\mathcal B}_{\rm PDG}$\,($\times10^{-3}$) \\ \hline $D^0\to K^+K^-\pi^0\pi^0$ &$(-59,40)$&$ 132.1\pm13.9$&$ 8.20\pm0.07$&$0.69\pm0.07\pm0.04$&--\\ $D^0\to K^0_SK^0_S\pi^+\pi^-$&$(-22,22)$&$ 62.5\pm10.4$&$ 5.14\pm0.04$&$0.52\pm0.09\pm0.03$&$1.22\pm0.23$\\ $D^0\to K^0_SK^-\pi^+\pi^0$ &$(-43,32)$&$ 195.8\pm20.3$&$ 6.38\pm0.06$&$1.32\pm0.14\pm0.07$&--\\ $D^0\to K^0_SK^+\pi^-\pi^0$ &$(-44,33)$&$ 119.3\pm12.9$&$ 7.94\pm0.06$&$0.65\pm0.07\pm0.02$&--\\ $D^+\to K^+K^-\pi^+\pi^0$ &$(-39,30)$&$1311.7\pm40.4$&$12.72\pm0.08$&$6.62\pm0.20\pm0.25$&$26^{+9}_{-8}$\\ $D^+\to K^0_SK^+\pi^0\pi^0$ &$(-61,44)$&$ 34.7\pm 7.2$&$ 3.77\pm0.02$&$0.59\pm0.12\pm0.04$&--\\ $D^+\to K^0_SK^-\pi^+\pi^+$ &$(-22,21)$&$ 467.9\pm26.6$&$13.24\pm0.08$&$2.27\pm0.12\pm0.06$&$2.38\pm0.17$\\ $D^+\to K^0_SK^+\pi^+\pi^-$ &$(-21,20)$&$ 279.6\pm18.1$&$ 9.39\pm0.06$&$1.91\pm0.12\pm0.05$&$1.74\pm0.18$\\ $D^+\to K^0_SK^0_S\pi^+\pi^0$&$(-46,37)$&$ 80.4\pm12.0$&$ 3.84\pm0.03$&$1.34\pm0.20\pm0.06$&--\\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*}[htp] \centering \caption{ Systematic uncertainties (\%) in the measurements of the BFs of the signal decays (1) $D^0\to K^+K^-\pi^0\pi^0$, (2) $D^0\to K^0_SK^0_S\pi^+\pi^-$, (3) $D^0\to K^0_SK^-\pi^+\pi^0$, (4) $D^0\to K^0_SK^+\pi^-\pi^0$, (5) $D^+\to K^+K^-\pi^+\pi^0$, (6) $D^+\to K^0_SK^+\pi^0\pi^0$, (7) $D^+\to K^0_SK^-\pi^+\pi^+$, (8) $D^+\to K^0_SK^+\pi^+\pi^-$, and (9) $D^+\to K^0_SK^0_S\pi^+\pi^0$.} \label{tab:relsysuncertainties1} \centering \begin{ruledtabular} \begin{tabular}{cccccccccc} Source/Signal decay & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline $N^{\rm tot}_{\rm ST}$ &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 &0.5 \\ $(K/\pi)^\pm$ tracking &1.0 &0.6 &0.9 &0.9 &1.6 &0.4 &1.1 &1.2 &0.3 \\ $(K/\pi)^\pm$ PID &0.4 &0.4 &0.6 &0.6 &1.0 &0.2 &0.6 &0.7 &0.2 \\ $K^0_S$ reconstruction &... 
&3.2 &1.6 &1.6 &... &1.6 &1.6 &1.6 &3.2 \\ $\pi^0$ reconstruction &1.6 &... &0.7 &0.7 &0.8 &1.6 &... &... &0.7 \\ $\Delta E_{\rm sig}$ requirement &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 &0.7 \\ $K_{S}^{0}$ rejection &4.2 &2.4 &... &... &... &4.2 &... &0.8 &... \\ $K_{S}^{0}$ sideband &... &0.2 &1.1 &0.2 &... &1.3 &0.1 &0.1 &0.2 \\ Quoted BFs &0.0 &0.1 &0.1 &0.1 &0.0 &0.1 &0.1 &0.1 &0.1 \\ MC statistics &0.8 &0.6 &0.7 &0.6 &0.5 &0.4 &0.4 &0.5 &0.6 \\ MC modeling &1.3 &1.0 &0.5 &0.7 &2.1 &1.4 &0.5 &0.7 &0.5 \\ Imperfect simulation &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 &0.9 \\ $D\bar D$ opening angle &0.3 &0.4 &0.8 &0.8 &0.8 &0.3 &0.4 &0.4 &0.8 \\ 2D fit &1.3 &2.8 &3.1 &1.5 &1.9 &2.7 &0.5 &0.6 &3.0 \\ Quantum correlation effect &1.6 &2.8 &3.4 &1.1 &... &... &... &... &... \\ \hline Total &5.5 &5.9 &5.4 &3.3 &3.8 &6.0 &2.6 &2.8 &4.8 \\ \end{tabular} \end{ruledtabular} \end{table*} \section{Summary} In summary, by analyzing a data sample obtained in $e^+e^-$ collisions at $\sqrt{s}=3.773$~GeV with the BESIII detector and corresponding to an integrated luminosity of 2.93~fb$^{-1}$, we have obtained the first direct measurements of the absolute BFs of nine $D^{0(+)}\to K\bar K\pi\pi$ decays containing $K^0_S$ or $\pi^0$ mesons. The $D^0\to K^+K^-\pi^0\pi^0$, $D^0\to K^0_SK^-\pi^+\pi^0$, $D^0\to K^0_SK^+\pi^-\pi^0$, $D^+\to K^0_SK^+\pi^0\pi^0$, and $D^+\to K^0_SK^0_S\pi^+\pi^0$ decays are observed for the first time. Compared to the world-average values, the BFs of the $D^0\to K^0_SK^0_S\pi^+\pi^-$, $D^+\to K^+K^-\pi^+\pi^0$, $D^+\to K^0_SK^-\pi^+\pi^+$, and $D^+\to K^0_SK^+\pi^+\pi^-$ decays are measured with improved precision. Our BFs of $D^+\to K^0_SK^-\pi^+\pi^+$ and $D^+\to K^0_SK^+\pi^+\pi^-$ are in agreement with the individual world averages within $1\sigma$, while our BFs of $D^0\to K^0_SK^0_S\pi^+\pi^-$ and $D^+\to K^+K^-\pi^+\pi^0$ deviate from the individual world averages by $2.3\sigma$ and $2.8\sigma$, respectively. The precision of the BF of $D^+\to K^+K^-\pi^+\pi^0$ is improved by a factor of about seven. Future amplitude analyses of all these $D^{0(+)}\to K\bar K\pi\pi$ decays with larger data samples foreseen at BESIII~\cite{bes3-white-paper}, Belle~II~\cite{belle2-white-paper}, and LHCb~\cite{lhcb-white-paper} will supply rich information on the two-body decay modes containing scalar, vector, axial-vector, and tensor mesons, thereby benefiting the understanding of quark SU(3)-flavor symmetry. \section{Acknowledgement} The authors thank Prof.~Fu-sheng Yu for valuable discussions. The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos.~11775230, 11475123, 11625523, 11635010, 11735014, 11822506, 11835012, 11935015, 11935016, 11935018, 11961141012; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos.~U1532101, U1932102, U1732263, U1832207; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; ERC under Contract No. 758462; German Research Foundation DFG under Contracts Nos. 
Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Science and Technology fund; STFC (United Kingdom); The Knut and Alice Wallenberg Foundation (Sweden) under Contract No. 2016.0157; The Royal Society, UK under Contracts Nos. DH140054, DH160214; The Swedish Research Council; U. S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0012069.
proofpile-arXiv_065-176
\section{Introduction} Recently there have been many efforts to imbue deep-learning models with the ability to perform causal inference. This has been motivated primarily by the inability of traditional correlative models to make predictions on interventional and counterfactual questions \cite{pcrbook, pearlbook}, as well as the explainability of causal graphical models. These efforts have largely run in parallel to the developing trend of exploiting the non-local properties of graph neural networks \cite{DBLP:journals/corr/abs-1711-07971} to generate powerful and efficient representations of high-dimensional data. In this note we dichotomize the task of causal inference into a two-step process, illustrated in Figure \ref{fig:2step}. The first step involves inferring the graphical structure of a causal model associated with a given observational data set as a directed acyclic graph (DAG). Inferring the structure of causal DAG's from observational data has a long history and there have been many proposed techniques including constraint-based \cite{pcrbook, pearlbook, Zhang2008-ZHAOTC-3, 10.5555/2074158.2074204} and score-based methods \cite{10.1007/BFb0028180, Chickering2002OptimalSI, DBLP:journals/corr/abs-1302-3567, heckarticle}, recently developed masked-gradient methods \cite{zheng2018dags, zheng2019learning, DBLP:journals/corr/abs-1904-10098, ng2019graph, ng2019masked, fang2020low, ng2020role}, as well as hybrid methods \cite{DBLP:journals/corr/abs-1906-02226}. Notable novel alternatives also include methods based on reinforcement-learning \cite{DBLP:journals/corr/abs-1906-04477}, adversarial networks \cite{kalainathan2018structural} and restricted Boltzmann machines \cite{Sokolovska2020UsingUD}. Since the task of causal structure discovery is merely a means to an end for this work, we (rather arbitrarily) adopt the masked-gradient approach due to its parsimonious integration with the neural-network-based architectures for SEM-learning that are the subject of this note.\footnote{codebase: \url{http://github.com/q1park/spacetime}} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{twostep.PNG}\vspace{-0.75cm} \end{center} \caption{The causal inference steps in this note begin with existing DAG structure-learning algorithms to infer causal structures in latent representations of data. Using the learned DAG, neural networks are used to estimate the response of conditional probabilities under various graphical interventions.} \label{fig:2step} \end{figure} For the second step of causal inference, we develop a novel autoencoding architecture that applies generative moment-matching neural-networks \cite{DBLP:journals/corr/ZhaoSE17b, DBLP:journals/corr/RenLLZ16} to the edges of the learned causal graph, in order to estimate the functional dependence of the causally related observables as a structural equation model (SEM). Since their inception, generative moment-matching networks have been used for various tasks \cite{diane2017a, gaoproceed, briol2019statistical, lotfollahi2019conditional} related to the estimation of joint and conditional probability distributions, but to our knowledge this is their first application to an explicit causal graph structure. Our aim is to develop a fully unsupervised formalism that starts from purely observational tabular data and ends with a robust automated sampling procedure that generates an accurate functional estimate of conditional probability distributions for the associated SEM. 
Existing techniques for Bayesian sampling on the latent space of generative models are also numerous, including Monte Carlo and gradient-optimization-based methods \cite{ahn2012bayesian, 2001SPIE.4322..456H, DBLP:journals/corr/abs-1812-03285}. The work described in this note has also been inspired by several recent efforts to develop generative models that encode causal structure. For example, in \cite{DBLP:journals/corr/abs-1709-02023} the authors develop specific conditional adversarial loss functions for learning multi-step causal relations. Their goals are similar to those described in this note with a focus on linear relations within high-dimensional image vectors. In \cite{yang2020causalvae} the authors use supervised learning to endow the latent space distributions of a variational autoencoder with a causal graphical structure, with the aim of intervening on this latent space to control specific properties of their feature maps. In this note we perform experiments on simple low-dimensional feature maps, and examine the performance of our autoencoder in generating accurate conditional probability distributions from complex non-linear multi-step causal structures. These causal structures are assumed to exist as relations among dimensions in the latent representation of the data. Thus in principle, the methods described here should also be applicable to more complex feature maps such as those generated by image and language data. However, experimentation on these high-dimensional data types is beyond the scope of this note. In Section \ref{sec:bkg} we give a brief review of causal graphs and describe a vectorized formulation for structural equation models that is suited for deep-learning applications. In Section \ref{sec:exp} we give the results of our experiments on causal structure learning using existing masked-gradient methods. We then describe our algorithm for SEM-learning and provide results on its performance. In Section \ref{sec:disc} we conclude with a discussion on possible applications and future directions for this work. \raggedbottom \section{Background} \label{sec:bkg} \subsection{Causal Graphs} The identification of a causal effect between two variables is equivalent to measuring the response $\delta_0$ of some endogenous variable $X_0$ with respect to a controlled change $\delta_1$ in some exogenous variable $X_1$. If all of the variables are controlled, then the causal effect can be directly inferred via the conditional probability distribution $P(X_0 +\delta_0 | X_1+\delta_1)$. Inferring causal effects from uncontrolled observational data is challenging due to the existence of confounding variables $S_n$, which generate spurious correlations whose effects on the conditional probability $P(X_0 (S_n) | X_1 (S_n))$ may be statistically indistinguishable from true causal effects. This is illustrated diagrammatically in Figure \ref{fig:spurion}. Here we adopt the formalism of Pearl in which the effect of a controlled change in variable $X_1$ is represented on a causal graph by mutilating all of the arrows going into node $X_1$ as shown in Figure \ref{fig:intervention}. 
The result is referred to as the {\it intervened}\footnote{For notational simplicity we use slashes to indicate graph-mutilated variables in conditional probabilities rather than Pearl's original notation of $P(X_0|{\rm do}(X_1))$.} conditional probability distribution $P(X_0|\slashed{X}_1) \sim P(X_0 +\delta_0 | X_1+\delta_1)$. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{spurion.PNG} \vspace{-0.75cm} \end{center} \caption{Integrating out a confounding common cause variable $S_n$ generates a spurious correlation via a correction to the conditional probability distribution $P(X_0 | X_1)$.} \label{fig:spurion} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{intervention.PNG} \vspace{-0.75cm} \end{center} \caption{Observing a controlled change to some variable $X_1$ requires removing the effects of any possible external influences. This is represented graphically by mutilating all in-going arrows into node $X_1$.} \label{fig:intervention} \end{figure} There exists a rich literature describing the necessary and sufficient conditions for statistical distinguishability between causal and correlative effects, as well as methods for estimating causal responses when these conditions are met \cite{pcrbook, pearlbook}. Although the necessary conditions are beyond the scope of this brief review, the sufficient conditions amount to a requirement that the subset of measured confounding variables must be {\it sufficiently complete} so as to provide adequate control over the causal effects. In particular, the requirement of {\it sufficient completeness} can be succinctly dichotomized into two cases known as the {\it back-door} and {\it front-door} criteria. The {\it back-door criterion} can be used to estimate the causal response on a pair of nodes $X_1 \rightarrow X_0$, given an observation of a set of confounding variables $S = \{ S_0, S_1 \}$ as shown in Figure \ref{fig:backfront}. The intervened conditional probability can then be computed via the back-door adjustment formula given in Equation \ref{eq:adjustback}. \begin{align} P(X_i | \slashed{X}_j = x) &= \displaystyle\int d s \, P(X_i | X_j = x, S=s) \, P(S=s) \label{eq:adjustback} \end{align} The {\it front-door criterion} can be used to estimate the causal response on a pair of nodes $X_2 \rightarrow X_0$ in situations where there exists a chain of causal influences $X_2 \rightarrow X_1 \rightarrow X_0$ as shown in Figure \ref{fig:backfront}. The intervened conditional probability can then be computed via the front-door adjustment formula given in Equation \ref{eq:adjustfront}. \begin{align} P(X_i | \slashed{X}_j = x) &= \displaystyle\int d s \, P(S=s | X_j = x) \displaystyle\int d x^\prime \, P(X_i | X_j = x^\prime, S=s) \, P(X_j = x^\prime) \label{eq:adjustfront} \end{align} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{backfrontdoor.PNG} \vspace{-0.75cm} \end{center} \caption{(Left) Given the sufficiently complete set of measured confounding variables $S = \{ S_0, S_1 \}$, the back-door adjustment formula estimates the causal effect of $X_1$ on $X_0$. A measurement of only the set $S = \{ S_0 \}$ would be insufficient due to the existence of an unblocked ``back-door'' path between the observables given by $X_1 \rightarrow S_1 \rightarrow S_0 \rightarrow S_2 \rightarrow X_0$. 
(Right) If there exists a causal chain $X_2 \rightarrow X_1 \rightarrow X_0$, the front-door adjustment formula can be used to disentangle the causal effect of $X_2$ on $X_0$ from any measured or unmeasured confounding variables.} \label{fig:backfront} \end{figure} \subsection{Structural Equation Models} Structural equation models (SEM's) are a functional extension of causal graphical models in which the values of each node variable $X_{\mu}$ are determined as a function of its parent node variables $X_{{\rm pa}(\mu)}$ and noise $\xi_\mu$. Here we adopt a notation where each node in a causal graph with $V$ nodes is specified by a spacetime index $\mu = 1, ..., V$ and Einstein summation is assumed. The set of parent (child) nodes corresponding to $\mu$ is given by $X_{{\rm pa}(\mu)}$ ($X_{{\rm ch}(\mu)}$) as illustrated in Figure \ref{fig:pach}. The generic form for an SEM can then be expressed as shown in Equation \ref{eq:sem} \begin{equation} X_{\mu} = f \left( \xi_\mu, \, X_{ {\rm pa} (\mu) } \right) \label{eq:sem} \end{equation} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{pach.PNG} \vspace{-0.75cm} \end{center} \caption{Given some node in a causal graph $X_\mu$, we use $X_{{\rm pa}(\mu)}$ to refer to the set of all nodes that are parents of node $\mu$ and $X_{{\rm ch}(\mu)}$ to refer to the set of all nodes that are children of node $\mu$.} \label{fig:pach} \end{figure} If the contribution from noise is assumed to be additive, then each node variable $X_\mu$ can be expressed simply as a polynomial (or other) expansion in its parent nodes $X_{{\rm pa} (\mu)}$ as shown in Equation \ref{eq:polysem}. The leading order term in this expansion describes a linearized SEM, which is typically expressed in terms of a weighted graph adjacency matrix $W_{\mu \nu}$ in the form shown in Equation \ref{eq:linsem}. \begin{align} X_\mu &= -\xi_\mu + f \left( X_{ {\rm pa} (\mu) } \right) \nonumber \\ &\approx - \xi_\mu + \displaystyle\sum_{n=1}^\infty c_{n,{\rm pa} (\mu) } X_{ {\rm pa} (\mu) }^n \label{eq:polysem} \\ &\xrightarrow{\mathcal{O}(1)} - \xi_\mu + W_{\mu \nu} X_\nu \label{eq:linsem} \end{align} The linear SEM of Equation \ref{eq:linsem} has the unique property that its exact solution describes a generative model that predicts each variable from pure noise as shown in Equation \ref{eq:gensem}. The inverse operator can be expressed in closed-form as a degree-$d$ polynomial in terms of Cayley-Hamilton coefficients $c_n$, which describe the propagation of ancestral noise through the causal graph. Thus each node variable $X_\mu$ can be expressed as a linear combination of its noise $\xi_\mu$ and the noise of its $n^{\rm th}$ ancestors $\xi_{{\rm pa}_n (\mu)}$, as shown in Equation \ref{eq:noiseprop}. \begin{align} X_\mu &= \left( - \delta_{\mu \nu} + W_{\mu \nu} \right)^{-1} \xi_\nu \label{eq:gensem} \\ &= \left( - \delta_{\mu \nu} + \displaystyle\sum_{n=1}^d c_n W_{\mu \nu}^n \right) \xi_\nu \nonumber \\ &= - \xi_\mu + \displaystyle\sum_{n=1}^d c_n \, \xi_{{\rm pa}_n (\mu)} \label{eq:noiseprop} \end{align} The weighted adjacency matrix $W_{\mu \nu}$ serves the dual purpose of masking each node variable $X_\mu$ from its non-parent nodes through its zero-entries, while the non-zero entries define the strength of linear correlations between each pair of nodes in the causal graph. Unfortunately there is no standardized generalization to non-linear SEM's. 
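Before turning to non-linear generalizations, the closed-form solution of Equation \ref{eq:gensem} can be illustrated with a minimal numerical sketch, assuming a DAG (so that $W_{\mu\nu}$ is nilpotent) and standard-normal noise; the graph, weights, and variable names below are arbitrary choices for illustration rather than part of the formalism.
\begin{verbatim}
import numpy as np

# Arbitrary example: a three-node chain X2 -> X1 -> X0 with weighted edges.
# Row mu of W lists the parents of node mu, so that X = -xi + W @ X.
V = 3
W = np.zeros((V, V))
W[1, 2] = 0.8     # X1 depends on X2
W[0, 1] = -1.5    # X0 depends on X1

def sample_linear_sem(W, n_samples=1000, seed=0):
    """Draw samples of the linear SEM from pure noise, X = (-I + W)^{-1} xi."""
    V = W.shape[0]
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, V))
    # For a DAG the matrix (-I + W) is invertible, and its inverse is the
    # finite polynomial in W that propagates ancestral noise down the graph.
    A = np.linalg.inv(-np.eye(V) + W)
    return xi @ A.T   # each row is one sample of (X0, ..., X_{V-1})

X = sample_linear_sem(W)
print(X.mean(axis=0), X.std(axis=0))
\end{verbatim}
The same ancestral propagation of noise is what the causal block introduced later reproduces with learned non-linear functions in place of the fixed weights.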
One natural possibility is to define a separate weighted adjacency matrix $W_{\mu \nu}^{(n)}$ for each order $n$ in a functional expansion like the polynomial example in Equation \ref{eq:polysem}. While this interpretation nicely generalizes the linear approximation, its computational complexity is unbounded, and there have been various other suggested interpretations for the adjacency matrix weights, related to the mutual information between parent-child node variables \cite{fang2020low}. In this note we develop an alternative formalism for describing non-linear SEM's that is agnostic to the interpretation of the weights in the adjacency matrix. We thus define a causal mask matrix $M_{\mu \nu}$ which is just the unweighted adjacency matrix as shown in Equation \ref{eq:maskmatrix}, where $\odot$ refers to an element-wise multiplication. \begin{align} M_{\mu \nu} \equiv | W_{\mu \nu} | \odot \frac{1}{|W_{\mu \nu}| + \epsilon} \label{eq:maskmatrix} \end{align} We then define a procedure for extracting the data for the parents of each node in the following way. We first lift each node variable into an auxiliary dimension $\dot{\mu} = 1, ..., V$. Index contraction of the spacetime index with the mask matrix $M_{\mu \nu}$ then produces a vector $X_{{\rm pa} (\mu)}^{\dot{\mu}}$ for each node $\mu$ whose index in the auxiliary dimension contains its parent-node data as shown in Equation \ref{eq:nodehot}. This vectorized parental masking procedure is suitable for expressing functions of sets of parent-nodes in a generalized SEM as $X_{\mu}^{\dot{\mu}} = f ( \xi_\mu, \, X_{ {\rm pa} (\mu) }^{\dot{\mu}} )$. \begin{align} X_\mu &~\longrightarrow~ X_\mu^{\dot{\mu}} \equiv X_\mu \otimes \delta_\mu^{\dot{\mu}} = \quad \text{\normalsize $\mu$}\mymatrix{ \begin{pmatrix} X_V & 0 & \cdots & 0 & 0 \\ 0 & X_{V-1} & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & X_1 & 0 \\ 0 & 0 & \cdots & 0 & X_0 \end{pmatrix} } \nonumber \\ &~\longrightarrow~ M_{\mu \nu} X_\nu^{\dot{\mu}} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 0 \\ X_V & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ X_V & X_{V-1} & \cdots & 0 & 0 \\ X_V & X_{V-1} & \cdots & X_{1} & 0 \end{pmatrix} = \begin{pmatrix} X_{{\rm pa} (V)}^{\dot{\mu}} \\ ~~\ X_{{\rm pa} (V-1)}^{\dot{\mu}} \\ \vdots \\ X_{{\rm pa} (1)}^{\dot{\mu}} \\ X_{{\rm pa} (0)}^{\dot{\mu}} \end{pmatrix} = X_{{\rm pa} (\mu)}^{\dot{\mu}} \label{eq:nodehot} \end{align} \section{Experiments} \label{sec:exp} \subsection{Causal Structure Learning} The algorithms for SEM-learning described in this note rely on first inferring the correct causal graph structure for a given data set. Fortunately the last two years have seen exciting progress in applications of neural networks to the problem of causal graph structure-learning, particularly in the area of masked-gradient methods \cite{zheng2018dags, DBLP:journals/corr/abs-1904-10098, ng2019graph, fang2020low, ng2020role}. These methods center around an identity for acyclic weighted adjacency matrices, which was first derived in \cite{zheng2018dags} and is shown in Equation \ref{eq:acyclic}. This identity enables a re-formulation of acyclic graph-learning as a continuous optimization problem. Here again $\odot$ denotes element-wise multiplication. 
\begin{align} {\rm tr} \, e^{W \odot W} = {\rm tr} \, I \label{eq:acyclic} \end{align} The graph-learning network can then be constructed using an encoder/decoder framework with an objective function that attempts to minimize some reconstruction loss, subject to an acyclicity constraint $h=0$, where $h$ is a function of the weighted adjacency matrix given in Equation \ref{eq:acconstraint}. \begin{align} h(W) = - {\rm tr} \, I + {\rm tr} \, e^{W \odot W} = 0 \label{eq:acconstraint} \end{align} The original formulation for this continuous optimization, referred to as $\texttt{NO-TEARS}$ \cite{zheng2018dags}, uses a reconstruction loss inspired directly by the form of the linear SEM in Equation \ref{eq:linsem}. As illustrated in the first line of Table \ref{tab:structalgos}, the encoder $\mathcal{E}$ is just the identity function while the decoder $\mathcal{D}$ is an MLP that takes as input a weighted masked latent space vector $W \cdot Z$. \bgroup \def\arraystretch{1.5} \begin{table}[ht] \begin{tabular}{ccc} & Encoder & Decoder \\ \hline \texttt{NO-TEARS}: & \qquad $Z = X$ \qquad & \qquad $\widehat{X} = \mathcal{D}(W \cdot Z)$ \\ \texttt{GNN}: & \qquad $Z = (-I+W) \cdot \mathcal{E} (X)$ \qquad & \qquad $\widehat{X} = \mathcal{D}((-I+W)^{-1} \cdot Z)$ \\ \texttt{GAE}: & \qquad $Z = \mathcal{E}(X)$ \qquad & \qquad $\widehat{X} = \mathcal{D} ( W \cdot Z)$ \end{tabular} \caption{A comparison of functional structures for three well-known masked-gradient-based algorithms for causal structure learning.} \label{tab:structalgos} \end{table} \egroup In this note we focus our tests on two non-linear generalizations of the $\texttt{NO-TEARS}$ algorithm, referred to as $\texttt{GNN}$ and $\texttt{GAE}$. The encoder/decoder architectures are given in Table \ref{tab:structalgos}, where $\mathcal{E}$ and $\mathcal{D}$ refer to generic MLP-based function-learners. Both the $\texttt{GNN}$ and $\texttt{GAE}$ frameworks generalize the well-known closed-form solution for linear SEM's. However, the salient difference between them is the presence of a residual connection in \texttt{GNN} represented by the identity term in the second line of Table \ref{tab:structalgos}. The reconstruction loss function for $\texttt{GNN}$ is given by the usual evidence lower-bound (ELBO) for variational autoencoders while the reconstruction loss for $\texttt{GAE}$ is simply the mean-squared error (MSE). The above optimization can be implemented using the method of Lagrange multipliers with the Lagrangian defined in Equation \ref{eq:lagrangian}. \begin{align} \mathcal{L}_\texttt{GNN/GAE} &= -\mathcal{L}_{\rm ELBO/MSE} + \lambda \, | h(W_{\mu \nu}) | + \frac{c}{2} \, | h(W_{\mu \nu}) |^2 \label{eq:lagrangian} \end{align} Following the work in \cite{DBLP:journals/corr/abs-1904-10098, ng2019graph} we perform tests on four different toy data sets generated by structural equation models of increasing non-linear complexity, as shown in Equations \ref{eq:egsem1}-\ref{eq:egsem4}. \begin{align} \text{linear:}& \quad X = -\xi + W \cdot X \label{eq:egsem1} \\ \text{non-linear 1:}& \quad X = -\xi + W \cdot \cos (X + 1) \label{eq:egsem2} \\ \text{non-linear 2:}& \quad X = -\xi + 2 \, \sin \left( W \cdot (X + 1/2) \right) + W \cdot (X + 1/2) \label{eq:egsem3} \\ \text{non-linear 3:}& \quad X = -\xi + 2 \, \sin \left( W \cdot ( \cos (X + 1) + 1/2) \right) + W \cdot ( \cos (X + 1) + 1/2) \label{eq:egsem4} \end{align} In the original papers, both $\texttt{GNN}$ and $\texttt{GAE}$ were tested using randomly generated Erd\H os-R\'enyi graphs. 
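For reference, the acyclicity penalty $h(W)$ of Equation \ref{eq:acconstraint} and the four data-generating processes of Equations \ref{eq:egsem1}-\ref{eq:egsem4} can be sketched in a few lines of Python; this is a simplified illustration under our own conventions, not the implementation used in the original papers.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def h(W):
    """Acyclicity penalty h(W) = tr exp(W o W) - tr I, which vanishes iff W is a DAG."""
    return np.trace(expm(W * W)) - W.shape[0]   # W * W is the element-wise square

def sample_toy_sem(W, kind="linear", n_samples=1000, seed=0):
    """Generate toy data by iterating the defining equation V times,
    which is exact for a DAG (a schematic ancestral-sampling shortcut)."""
    V = W.shape[0]
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, V))
    X = np.zeros_like(xi)
    for _ in range(V):
        if kind == "linear":
            X = -xi + X @ W.T
        elif kind == "nonlinear1":
            X = -xi + np.cos(X + 1.0) @ W.T
        elif kind == "nonlinear2":
            Y = (X + 0.5) @ W.T
            X = -xi + 2.0 * np.sin(Y) + Y
        elif kind == "nonlinear3":
            Y = (np.cos(X + 1.0) + 0.5) @ W.T
            X = -xi + 2.0 * np.sin(Y) + Y
    return X
\end{verbatim}
Data sets of this form, generated for fixed graph structures, are what the algorithms of Table \ref{tab:structalgos} are benchmarked on in the experiments below.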
For graphs with $V$ nodes, the authors of $\texttt{GNN}$ reported structural Hamming distance (SHD) errors ranging from $0.2 \times V$ (for nonlinear 2) to $0.8 \times V$ (for nonlinear 1). Impressively, the performance of the $\texttt{GAE}$ algorithm exhibits a scaling that is roughly independent of the number of nodes in the graph for the Erd\H os-R\'enyi case, which we have verified in our own experiments. The primary reason for the difference in performance on large graphs is the presence of the residual connection in $\texttt{GNN}$, which enables an extremely accurate reconstruction of the data despite an incorrect causal graph structure. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.45]{graphAB.PNG}\vspace{-0.5cm} \caption{Two graph structures used for the experiments in this note, which we refer to as Graph A (left) and Graph B (right). Causal estimation for Graph A requires mutilating two edges independently of the number of confounders, while causal estimation for Graph B requires mutilating a number of edges equal to the number of confounders.} \label{fig:graph} \end{center} \end{figure} In this note we perform tests on the $\texttt{GNN}$ and $\texttt{GAE}$ algorithms using the two graph structures shown in Figure \ref{fig:graph}, referred to as Graph A and Graph B. These two graph structures form the baseline cases for our structural equation model tests described in the next section, and represent different configurations of confounding variables as their number increases. The results of our structure-learning experiments, shown in Figure \ref{fig:shd}, indicate that the explicit presence of numerous confounding variables presents a significant obstacle to the recovery of correct causal structures relative to the Erd\H os-R\'enyi case, even for simple graphs with as few as $\mathcal{O}(10)$ nodes. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.52]{shdA.png} \includegraphics[scale=0.52]{shdB.png}\vspace{-0.5cm} \end{center} \caption{Structural Hamming distances (SHD) for \texttt{GNN} and \texttt{GAE} as a function of the total number of nodes. Results are shown for Graph A (top row) and Graph B (bottom row) as defined in Figure \ref{fig:graph}. For each number of nodes we generate two graphs with different weights from different random seeds and perform 3 runs for each graph. The error bars indicate variations between the 3 runs on each seed.} \label{fig:shd} \end{figure} \subsection{Structural Equation Modeling} The network architecture for SEM-learning proposed in this note is illustrated in Figure \ref{fig:archi}, and can be factorized into two components. The first component is just a generic variational autoencoder that encodes each node feature $X_\mu$ into its latent representation $Z_\mu$ before decoding it back to the target representation $\widehat{X}_\mu$. The second component introduces a ``causal block'' $\mathcal{C}$ that performs ancestral sampling on the latent representation $Z_\mu$ and produces a latent representation for each child-node $\widehat{Z}_{{\rm ch} (\mu)}$ that is a function of \textit{only its parent-nodes} $Z_\mu$. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{archi.PNG}\vspace{-0.75cm} \end{center} \caption{The proposed network architecture is an extension of a generic variational autoencoder (blue). 
The generator for the latent space $Z_\mu$ is augmented with an additional causal network block $\mathcal{C}$ (orange) that uses a causal mask $M_{\mu \nu}$ as defined in Equation \ref{eq:maskmatrix} to generate a latent space distribution for each child node $\widehat{Z}_{{\rm ch} (\mu)}$ that is a function of only its parent nodes $Z_\mu$. The $n^{\rm th}$ child node of a latent variable $Z_\mu$ can thus be generated by cycling the inputs $n$ times through $\mathcal{C}$.} \label{fig:archi} \end{figure} For SEM-learning on a graph with $V$ nodes, the causal block $\mathcal{C}$ is correspondingly composed of $V$ neural networks, as illustrated diagrammatically in Figure \ref{fig:sampling}. Restricting the functional dependence of each node to only its parent nodes is crucial for the automated generation of intervened conditional probability distributions. This is achieved simply through the use of the causal mask $M_{\mu \nu}$ in the causal block $\mathcal{C}$, as well as the absence of any residual connection except for those nodes that are treated as having no parents. These include the nodes chosen for intervention, as well as the nodes with no parents in the graph, since the latter can be viewed as being intervened on by the environment. Ancestral sampling of an intervened distribution can then be performed simply by generating data for the intervened node $Z_\mu$ from a random-normal distribution, and cycling the data through the causal block $n$ times in order to obtain the data for its $n^{\rm th}$ child node $Z_{{\rm ch_n} (\mu)}$, as illustrated in Figure \ref{fig:archi}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.4]{sampling.PNG}\vspace{-0.75cm} \end{center} \caption{The causal block $\mathcal{C}$ takes inputs from the latent node variables $Z_\mu$. A single neural network for each latent dimension generates means and variances for the child nodes $\widehat{Z}_\mu$. Nodes with no parents, including the intervened node $Z_2$, contain a residual connection, and all nodes with parents are functions of only their parents.} \label{fig:sampling} \end{figure} The causal block $\mathcal{C}$ can be expressed as a sum of three terms, as shown in Equation \ref{eq:causalsem}. The first term $\xi_\mu$ describes the contribution from noise and is computed via the usual reparameterization trick \cite{kingma2013autoencoding} from neural-network-generated variances. The second term provides a residual connection only for node variables that have no parents. We thus define a delta function $\delta_{{\rm pa} (\mu)}$ whose argument for a specified node $\mu$ is the number of parents belonging to that node, normalized as shown in Equation \ref{eq:parentres}. \begin{align} \mathcal{C}(Z_\mu) &= - \xi_\mu - \delta_{{\rm pa}(\mu)} Z_\mu + \left( 1-\delta_{{\rm pa}(\mu)} \right) {\rm NN}_\mu^{\dot{\mu}} ( Z_{{\rm pa} (\mu)}^{\dot{\mu}} ) \label{eq:causalsem} \\ &\longrightarrow \widehat{Z}_{{\rm ch} (\mu)} \nonumber \end{align} \begin{equation} \delta_{{\rm pa}(\mu)}=\left\{ \begin{array}{@{}ll@{}} 1 & ~~\text{if \# parents = 0 for node}\ \mu \\ 0 & ~~\text{otherwise} \end{array}\right. \label{eq:parentres} \end{equation} The third and final term is generated by the set of $V$ neural networks ${\rm NN}_\mu^{\dot{\mu}}$ whose input is the vector containing the latent representation of $\mu$'s parent node data $Z_{{\rm pa} (\mu)}^{\dot{\mu}}$, as constructed according to Equation \ref{eq:nodehot}. 
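Putting Equations \ref{eq:maskmatrix}, \ref{eq:nodehot}, \ref{eq:causalsem}, and \ref{eq:parentres} together, a minimal PyTorch-style sketch of the causal block is given below; the hidden-layer size, the handling of the noise variances, and all class and variable names are illustrative assumptions rather than a description of the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class CausalBlock(nn.Module):
    """Schematic causal block C: one small network per node, each of which
    sees only the masked parent vector Z_pa(mu) built from the mask M."""

    def __init__(self, M, hidden=64):
        super().__init__()
        V = M.shape[0]
        self.register_buffer("M", M)                        # binary mask M_{mu nu}
        self.register_buffer("no_parents",
                             (M.sum(dim=1) == 0).float())   # delta_{pa(mu)}
        self.nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(V, hidden), nn.ReLU(), nn.Linear(hidden, 2))
             for _ in range(V)])                            # each outputs (mean, log-variance)

    def forward(self, Z):
        # Z: (batch, V).  Row mu of Z_pa keeps only the parents of node mu.
        Z_pa = Z.unsqueeze(1) * self.M.unsqueeze(0)          # (batch, V, V)
        means, logvars = [], []
        for mu, net in enumerate(self.nets):
            out = net(Z_pa[:, mu, :])
            means.append(out[:, 0])
            logvars.append(out[:, 1])
        mean = torch.stack(means, dim=1)
        std = torch.exp(0.5 * torch.stack(logvars, dim=1))
        xi = std * torch.randn_like(std)                     # reparameterized noise
        d = self.no_parents                                  # residual only if no parents
        # C(Z) = -xi - delta_pa * Z + (1 - delta_pa) * NN(Z_pa)
        return -xi - d * Z + (1.0 - d) * mean
\end{verbatim}
Repeated application of this block to an intervened latent vector implements the ancestral sampling described above.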
The loss function used is a combination of the joint \cite{DBLP:journals/corr/ZhaoSE17b} and conditional \cite{DBLP:journals/corr/RenLLZ16} maximum mean discrepancies (MMD and CMMD) as shown in Equation \ref{eq:caeloss}, with $\gamma \gg \beta$. The set of networks ${\rm NN}_\mu^{\dot{\mu}}$ thus forms a generative conditional moment-matching graph neural network. \begin{align} \mathcal{L} = &- \beta \, D_{\rm MMD} \big( Q(Z|X) || P(Z) \big) - \gamma \, D_{\rm CMMD} \big( Q(\widehat{Z}|Z_{\rm pa}) || P(Z | Z_{\rm pa}) \big) \nonumber \\ &+E_{Q(Z|X)} \big( \log P(\widehat{X} | Z ) \big) \label{eq:caeloss} \end{align} To measure the performance of interventional sampling, we perform tests using an MLP-based encoder and decoder $\mathcal{E}$/$\mathcal{D}$, each consisting of a single hidden layer with 16 neurons. The causal block $\mathcal{C}$ is composed of $V$ neural networks, each with input dimension $V$ and output dimension $1$, and each consisting of a single hidden layer containing 64 neurons. For the loss function we choose (rather arbitrarily) $\beta=1$ and $\gamma=300$, and each trial is run on 8000 data points. The performance metric used is the relative entropy (KL divergence) between the conditional probability distributions generated by the intervened and unintervened ground truth SEM's $D_{\rm KL} \left( P(X_i | \slashed{X}_j = x_j) || Q (X_i | \slashed{X}_j = x_j) \right)$. We then compare it with the relative entropy between the intervened SEM and the one predicted by the causal autoencoder $D_{\rm KL} \left( P(X_i | \slashed{X}_j = x_j) || Q(X_i | X_j = x_j) \right)$ at different standard deviations away from the distribution means, as illustrated in Figure \ref{fig:metric}. The autoencoder predictions for these results have been smoothed using a kernel density estimator with a normal reference bandwidth. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{wiggles_1.png} \includegraphics[scale=0.5]{wiggles_2a.png}\vspace{-0.75cm} \end{center} \caption{The performance metric adopted in this note is the relative entropy $D_{KL}$ between the conditional probability distribution for the predicted intervened SEM (top right) and the ground truth SEM (top middle). The $D_{KL}$ is computed along slices corresponding to points at various standard deviations away from the mean (bottom right). As a baseline we compare this against the $D_{KL}$ with respect to the unintervened conditional probability distribution (bottom left).} \label{fig:metric} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{dkl01A.png} \includegraphics[scale=0.5]{dkl02A.png}\vspace{-0.75cm} \end{center} \caption{Performance metrics for experiments on Graph A. $D_{KL}$'s are shown along contours of varying standard deviation $\sigma$ for the probability distributions $P(X_0 | X_1)$ (top row) and $P(X_0 | X_2)$ (bottom row). The solid and dashed lines represent averages for 4 randomly generated adjacency matrices.} \label{fig:resultsA} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[scale=0.5]{dkl01B.png} \includegraphics[scale=0.5]{dkl02B.png}\vspace{-0.75cm} \end{center} \caption{Performance metrics for Graph B along contours of varying standard deviation $\sigma$. Results are shown for the probability distributions $P(X_0 | X_1)$ (top row) and $P(X_0 | X_2)$ (bottom row). 
The solid and dashed lines represent averages for 4 randomly generated adjacency matrices.} \label{fig:resultsB} \end{figure} \section{Discussion} \label{sec:disc} The results of our experiments indicate that the proposed framework for simulating structural equation models is capable of capturing complex non-linear relationships among variables in a way that is amenable to multi-step counterfactual interventions. Importantly, the generated probability distributions appear faithful to the ground truth intervened SEMs, even when the intervened variables are fixed to values that are outside the range of values contained in the training data distributions. This capability implies a predictive ability that is manifestly beyond what is possible through analytical calculations via the back-door and front-door adjustment formulas, which can only be applied to intervened variables that take on values for which observable data exists. With 8000 data points in each of the training sets, the maximum and minimum values for the node variable $X_2$ typically fall within the range of $3.5 \sigma$ from the distribution mean, never exceeding $4.0 \sigma$. From Figures \ref{fig:resultsA} and \ref{fig:resultsB}, we can observe that the linearly correlated data sets are faithful to the ground truth well beyond the $4.0 \sigma$ mark. On the other hand, those data sets with strong non-linear components vary in their predictive performance beyond $3 \sigma$, but remain reliably closer to the ground truth than the unintervened distributions. This is unsurprising upon closer inspection of the predicted conditional (intervened) probabilities, which demonstrate a clear tendency for our generative model to perform simple linear extrapolations of the distributions in regimes outside those contained in the training data. Although the experiments performed in this note were restricted to the case of scalar-valued node variables, we expect that a very simple extension of these methods could make them applicable to complex high-dimensional image and language data. For example, in CausalVAE \cite{yang2020causalvae}, the authors use supervised learning to encode specific image labels into a single dimension of the latent space $Z_\mu$. In one example, they use the CelebA data set of facial images to encode causal relationships between features like $Age \rightarrow Beard$, thus allowing them to intervene on the latent space to produce images of unnaturally young bearded faces. Augmenting this procedure with the causal block $\mathcal{C}$ described in this note would in principle enable synthetic generation of image populations with features that accurately represent conditional probabilities under multiple steps of causal influence. For example, one could generate an accurate distribution of hair colors if the graph structure contained $Age \rightarrow Beard \rightarrow Hair \ Color$. Unfortunately, a detailed exploration of these high-dimensional data types is beyond the scope of this note. Another potential application of these methods could be for use with model-based reinforcement learning. In \cite{DBLP:journals/corr/abs-1901-08162} the authors performed several experiments in a model-free RL framework in which they trained agents to make causal predictions in simple one-step-querying scenarios. In these experiments, the agents were directed to sample points from joint and conditional probability distributions of SEM-generated data, as well as the corresponding distributions from arbitrarily mutilated SEM graphs. 
These experiments showed evidence that their agents learned to exploit interventional and counterfactual reasoning to accumulate significantly higher rewards compared to the relevant baselines. In \cite{nair2019causal} the authors expand on the previous work by successfully training RL agents to perform causal reasoning in a more complex multi-step relational scenario, with the ability to generalize to unseen causal structures that were held out during training. Their experiments involved two separate RL agents: one that used supervised learning to generate a causal graph model from ground truth graphs, and another that was directed to take ``goal-oriented'' actions based on the models learned by the first agent. The authors strongly hypothesized that the impressive level of generalizability displayed by their algorithm was a direct result of the explicit model-based approach. We find the possibility of performing such experiments using graphical models learned via the fully unsupervised approach described in this note to be an intriguing and plausibly practical direction for future exploration. \section{Acknowledgements} We thank Vincent Tang, Jiheum Park, Ignavier Ng, Jungwoo Lee, and Tim Lou for useful discussions. \bibliographystyle{unsrtnat}
2024-02-18T23:39:40.542Z
2020-07-28T02:38:45.000Z
algebraic_stack_train_0000
31
5,536
proofpile-arXiv_065-223
\section{Introduction} Prediction algorithms use data, necessarily sampled under specific conditions, to learn correlations that extrapolate to new or related data. If successful, the performance gap between these two environments is small, and we say that algorithms \textit{generalize} beyond their training data. Doing so is difficult, however: some form of uncertainty about the distribution of new data is unavoidable. The set of potential distributional changes that we may encounter is mostly unknown and in many cases may be large and varied. Some examples include covariate shifts \cite{bickel2009discriminative}, interventions in the underlying causal system \cite{pearl2009causality}, varying levels of noise \cite{fuller2009measurement} and confounding \cite{pearl1998there}. All of these feature in modern applications, and while learning systems are increasingly deployed in practice, generalization of predictions and their reliability in a broad sense remains an open question. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{Figures/Fig1.png} \caption[.]{\textbf{The challenges of generalization}. Each panel plots testing performance under different shifts. The proposed approach, Derivative Invariant Risk Minimization (DIRM, described in section \ref{sec_gen}), is a relaxation of the causal solution that naturally interpolates (as a function of hyperparameter $\lambda$, see eq. (\ref{robust_objective})) between the causal solution and Ordinary Least Squares (OLS).} \label{Fig1} \end{figure*} A common approach to formalize learning with uncertain data is, instead of optimizing for correlations in a \textit{fixed} distribution, to do so simultaneously for a \textit{range} of different distributions in an uncertainty set $\mathcal P$, \begin{align} \label{robust_pop} \underset{f}{\text{minimize }} \underset{P \in \mathcal P}{\sup}\hspace{0.1cm} \mathbb E_{(x,y)\sim P} [ \mathcal L(f(x),y)], \end{align} for some measure of error $\mathcal L$ of the function $f$ that relates input and output examples $(x,y)\sim P$. Choosing different sets $\mathcal P$ leads to estimators with different properties. It includes as special cases, for instance, many approaches in domain adaptation, covariate shift, robust statistics and optimization, see e.g. \cite{ben2009robust,kuhn2019wasserstein,bickel2009discriminative,duchi2016statistics,duchi2019distributionally,sinha2017certifying,wozabal2012framework,abadeh2015distributionally,duchi2018learning}. Robust solutions to problem (\ref{robust_pop}) are said to generalize if potentially shifted test distributions are contained in $\mathcal P$, but larger sets $\mathcal P$ also result in conservative solutions (i.e. with sub-optimal performance) on data sampled from distributions away from worst-case scenarios. One formulation of causality is also a version of this problem, with $\mathcal P$ defined as the set of distributions arising from interventions on observed covariates $x$ that shift their distribution $P_x$ (see e.g. sections 3.2 and 3.3 in \cite{meinshausen2018causality}). The invariance of causal solutions to changes in covariate distributions is powerful for generalization, but it implicitly assumes that all covariates or other drivers of the outcome subject to change at test time are observed. Often shifts occur elsewhere, for example in the distribution of unobserved confounders, in which case conditional distributions $P_{y|x}$ may also shift. 
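For readers who prefer a computational view of problem (\ref{robust_pop}), the following is a schematic sketch, our own rather than the authors', of how the inner supremum is instantiated when the uncertainty set $\mathcal P$ is simply a finite collection of empirical environment distributions; in that special case the supremum reduces to a maximum over per-environment average losses.
\begin{verbatim}
# Schematic sketch (not from the paper): problem (robust_pop) with a finite
# uncertainty set reduces to minimizing the worst per-environment average loss.
import torch

def worst_case_loss(model, loss_fn, environments):
    # environments: list of (x, y) tensor pairs, one batch per data source
    env_losses = [loss_fn(model(x), y).mean() for x, y in environments]
    return torch.stack(env_losses).max()   # the sup over this finite set
\end{verbatim}
Minimizing this quantity over model parameters corresponds to the group distributionally robust optimization baseline revisited in the experiments; richer choices of $\mathcal P$ are what the remainder of the paper is concerned with.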
In the presence of unobserved confounders, the goals of achieving robustness and learning a causal model can be \textit{different} (and similar behaviour also occurs with varying measurement noise). There is, in general, an inherent \textit{trade-off} in generalization performance: in the presence of unobserved confounders, causal and correlation-based solutions are each optimal in different regimes, depending on the shift in the underlying generating mechanism from which new data is generated. We consider next a simple example, illustrated in Figure \ref{Fig1}, to show this explicitly. \subsection{Introductory example} Assume access to observations of variables $(X_1,X_2,Y)$ in two training datasets, each dataset sampled with different variances ($\sigma^2=1$ and $\sigma^2 = 2$) from the following structural model $\mathbb F$, \begin{align*} X_2 := -H + E_{X_2}, \quad Y := X_2 + 3H + E_{Y},\quad X_1 := Y + X_2 + E_{X_1}. \end{align*} $E_{X_1}, E_{X_2}\sim\mathcal N(0,\sigma^2)$, $E_Y\sim\mathcal N(0,1)$ are exogenous. \begin{enumerate}[leftmargin=*, itemsep=0pt, topsep=0pt] \item In a first scenario (\textbf{leftmost panel}) all data (training and testing) is generated \textit{without} unobserved confounders, $H:=0$. \item In a second scenario (\textbf{remaining panels}) all data (training and testing) is generated \textit{with} unobserved confounders, $H:=E_H\sim\mathcal N(0,1)$. \end{enumerate} Each panel of Figure \ref{Fig1} shows performance on \textbf{new} data obtained after manipulating the underlying data generating system; the magnitude and type of intervention appear on the horizontal axis. We consider the minimum average error solution, Ordinary Least Squares (OLS), the causal solution, i.e. the linear model with coefficients $(0,1)$ for $(X_1,X_2)$, and Derivative Invariant Risk Minimization (DIRM, the proposed approach described in section \ref{sec_gen}) in different instantiations as a function of a hyperparameter $\lambda$, see eq. (\ref{robust_objective}). Three observations motivate this paper. \begin{enumerate}[leftmargin=*,itemsep=0pt] \item The presence of unobserved confounding hurts generalization performance in general, with higher errors for all methods (e.g. contrast the $y$-axis of the leftmost and middle panel), but also leads to heterogeneous behaviour between optimization objectives depending on the nature of the shift in new data (e.g. contrast the two rightmost panels). \item Minimum error solutions absorb spurious correlations (due to $H$ and the fact that $X_1$ is caused by $Y$) by construction, with unstable performance under shifts in $p(X_1,X_2)$ but, as a consequence, better performance under shifts in $p(H)$. Causal solutions, by contrast, are designed to be robust to shifts in observed covariates but completely abstract from variation in unobserved variables and are sub-optimal with moderate shifts in observed variables (e.g. middle panel). \item Minimum average error and causal solutions can be interpreted as two extremes of a distributionally robust optimization problem (\ref{robust_pop}), with a range of intermediate solutions that DIRM seeks to exploit and that in practice may have a more desirable performance profile. \end{enumerate} \subsection{Our Contributions} This work investigates generalization performance in the presence of unobserved confounding with data from multiple environments. 
Our first steps in section \ref{sec_2} emphasize a qualitative difference in the statistical invariances (which feature prominently in the field of domain generalization, see e.g. \cite{arjovsky2019invariant,krueger2020out,parascandolo2020learning,koyama2020out}) that can be expected in the presence of unobserved confounders, while keeping in mind the trade-offs in performance illustrated in Figure \ref{Fig1}. This trade-off and new invariance principles suggest a new objective, Derivative Invariant Risk Minimization (described in section \ref{sec_gen}), that defines a range of intermediate solutions between the causal and minimum error extremes. These solutions are robust in a well-defined sense, as they upper-bound a robust minimization problem (\ref{robust_pop}) that defines $\mathcal P$ as an \textit{affine} combination of training data distributions. This result, when $\mathcal P$ is interpreted as a set of distributions arising from shifts in the underlying causal model, confirms the interpolation behaviour found in Figure \ref{Fig1} but also defines robustness guarantees in a much broader sense, including robustness to interventions in unobserved and target variables that are only limited by the geometry of training environments (see section \ref{sec_rob_inter}). We conclude this paper with a discussion of related work and with performance comparisons on medical data and other benchmarks for domain generalization. \section{Invariances with Unobserved Confounders} \label{sec_2} This section introduces the problem of out-of-distribution generalization. We describe in greater detail the reasons that learning principles, such as Empirical Risk Minimization (ERM), underperform in general, and define invariances across environments to recover more robust solutions. We take the perspective that all potential distributions that may be observed over a system of variables arise from a structural causal model $\mathcal M = (\mathbb F, \mathbb V, \mathbb U)$, characterized by endogenous variables, $\mathbb V\in\mathcal V$, representing all variables determined by the system, either observed or not; exogenous variables, $\mathbb U\in\mathcal U$, representing independent sources of randomness; and a sequence of structural equations $\mathbb F: \mathcal U \rightarrow \mathcal V$, describing how endogenous variables are (deterministically) derived from the exogenous variables, see e.g. \cite{pearl2009causality}. An example is given in Figure \ref{Fig1}: $\mathbb V = (X_1,X_2,H,Y)$ are endogenous and $\mathbb U = (E_{X_1},E_{X_2},E_{H},E_{Y})$ are exogenous variables. Unseen test data is generated from such a system $\mathcal M$ after manipulating the distribution of exogenous variables $\mathbb U$, which propagates across the system shifting the joint distribution of all variables $\mathbb V$, whether observed or unobserved, but keeping the causal mechanisms $\mathbb F$ unchanged. Representative examples include changes in data collection conditions, such as due to different measurement devices, or new data sources, such as patients in different hospitals or countries. \textbf{Objective.} Our goal is to learn a representation $Z = \phi(X)$ acting on the set of observed variables $X \subset \mathbb V$ with the ability to extrapolate to new unseen data, doing so while acknowledging that all relevant variables in $\mathbb V$ are likely not observed. 
Unobserved confounders (say for predicting $Y\in \mathbb V$) simultaneously cause $X$ and $Y$, confounding or biasing the causal association between $X$ and $Y$ and giving rise to spurious correlations that do not reproduce in general, see e.g. \cite{pearl1998there} for an introduction. \subsection{The biases of unobserved confounding} Consider the following structural equation for observed variables $(X,Y)$, \begin{align} \label{nonlinear_model} Y := f\circ\phi(X) + E, \end{align} where $f := f(\cdot;\beta_0)$ is a predictor acting on a representation $Z:=\phi(X)$ and $E$ stands for potential sources of misspecification and unexplained sources of variability. For a given sample of data $(x,y)$ and $z = \phi(x)$, the optimal prediction rule $\hat\beta$ is often taken to minimize squared residuals, with $\hat\beta$ the solution to the normal equations: $\nabla_{\beta} f(z;\hat\beta)y = \nabla_{\beta} f(z;\hat\beta)f(z;\hat\beta)$, where $\nabla_{\beta} f(z;\hat\beta)$ denotes the column vector of gradients of $f$ with respect to parameters $\beta$ evaluated at $\hat\beta$. Consider the Taylor expansion of $f(z;\beta_0)$ around an estimate $\hat\beta$ sufficiently close to $\beta_0$, $f(z;\beta_0) \approx f(z;\hat\beta) + \nabla_{\beta} f(z;\hat\beta)^T (\beta_0 - \hat\beta)$. Using this approximation in our first order optimality condition we find, \begin{align} \label{least_squares_consistency} \nabla_{\beta} f(z;\hat\beta)\nabla_{\beta} f(z;\hat\beta)^T(\beta_0 - \hat\beta) + v = \nabla_{\beta} f(z;\hat\beta) \epsilon, \end{align} where $v$ is a scaled disturbance term that includes the rest of the linear approximation of $f$ and is small asymptotically; $\epsilon:= y - f(z;\hat\beta)$ is the residual. $\hat \beta$ is consistent for the true $\beta_0$ if and only if $\nabla_{\beta} f(z;\hat\beta) \epsilon \rightarrow 0$ in probability. Consistency is satisfied if $E$ (all sources of variation in $Y$ not captured by $X$) is independent of $X$ (i.e. exogenous), or in other words if all common causes or confounders of both $X$ and $Y$ have been observed. If this is not the case, conventional regression may assign significant associations to variables that are neither directly nor indirectly related to the outcome, and as a consequence we have no performance guarantees on new data with changes in the distribution of these variables. \vspace{-0.1cm} \subsection{Invariances with multiple environments} \label{sec_invariances} The underlying structural mechanism $\mathbb F$, which also relates unobserved with observed variables, even if unknown, is stable irrespective of manipulations in exogenous variables that may give rise to heterogeneous data sources. Under certain conditions, statistical footprints that are testable from data emerge from this structural invariance across different data sources, see e.g. \cite{peters2016causal,ghassami2017learning,rothenhausler2019causal}. \textbf{Assumption 1}. We assume that we have access to input and output pairs $(X,Y)$ observed across heterogeneous data sources or environments $e$, defined as a probability distribution $P_e$ over an observation space $\mathcal X \times \mathcal Y$ that arises, just like new unseen data, from manipulations in the distribution of exogenous variables in an underlying model $\mathcal M$. \textbf{Assumption 2}. For the remainder of this section \textit{only}, consider restricting ourselves to data sources emerging from manipulations in exogenous $E_X$ (i.e. 
manipulations of observed variables) in an underlying additive noise model with unobserved confounding. It may be shown then, by considering the distribution of the error term $Y - f\circ\phi(X)$ and its correlation with any function of $X$, that the inner product $\nabla_{\beta} f(z;\beta_0) \epsilon$, even if \textit{non-zero} due to unobserved confounding as shown in (\ref{least_squares_consistency}), converges to a \textit{fixed unknown value equal across training environments}. \textbf{Proposition 1} (Derivative invariance). \textit{For any two environment distributions $P_i$ and $P_j$ generated under assumption 2, it holds that, up to disturbance terms, the causal parameter $\beta_0$ satisfies,} \begin{align} \label{optimal_beta} \underset{(x,y)\sim P_i}{\mathbb E}\nabla_{\beta} f(z;\beta_0)(y - f(z;\beta_0)) - \underset{(x,y)\sim P_j}{\mathbb E}\nabla_{\beta} f(z;\beta_0)(y - f(z;\beta_0)) = 0. \end{align} \textit{Proof}. All proofs are given in the Appendix. This \textit{invariance} across environments must hold for causal parameters (under certain conditions) \textit{even} in the presence of unobserved confounders. A few remarks are necessary concerning this relationship and its extrapolation properties. \subsection{Remarks} \begin{itemize}[leftmargin=*] \item The first remark is based on the observation that, up to a constant, each inner product in (\ref{optimal_beta}) is the gradient of the squared error with respect to $\beta$. This reveals that the optimal predictor, in the presence of unobserved confounding, is not one that produces minimum loss but one that produces a \textit{non-zero} loss gradient \textit{equal} across environments. Therefore, seeking minimum error solutions, even in the population case, produces estimators with \textit{necessarily} unstable correlations because the variability due to unobserved confounders is not explainable from observed data. Forcing gradients to be zero then \textit{forces} models to utilize artifacts of the specific data collection process that are not related to the input-output relationship and, for this reason, such models will not in general perform well outside the training data. \item From (\ref{optimal_beta}) we may pose a sequence of moment conditions for each pair of available environments. We may then seek solutions $\beta$ that make all of them small simultaneously. Solutions are unique if the set of moments is sufficient to identify $\beta^{\star}$ exactly (and given our model assumptions may be interpreted as causal and robust to certain interventions). In the Appendix, we revisit our introductory example to show that indeed this is the case, and that other invariances exploited for causality and robustness (such as \cite{arjovsky2019invariant,krueger2020out}) do not hold in the presence of unobserved confounding and give biased results. \item In practice, only a \textit{set} of solutions may be identified with the moment conditions in Proposition 1, with no performance guarantees for any individual solution, and no guarantees if assumptions fail to hold. Moreover, even if accessible, we have seen in Figure \ref{Fig1} that causal solutions may not always be desirable under more general shifts (for example shifts in unobserved variables). \end{itemize} \section{A Robust Optimization Perspective} \label{sec_gen} In this section we motivate a relaxation of the ideas presented using the language of robust optimization. 
One strategy is to optimize for the worst-case loss across environments, which ensures accurate prediction on any convex mixture of training environments \cite{ben2009robust}. The space of convex mixtures, however, can be restrictive. For instance, in high-dimensional systems perturbed data is likely to occur at a new vertex not represented as a linear combination of training environments. We desire performance guarantees outside this convex hull. We consider in this section problems of the form of (\ref{robust_pop}) over an \textit{affine} combination of training losses, similarly to \cite{krueger2020out}, and show that they relate closely to the invariances presented in Proposition 1. Let $\Delta_{\eta}:=\{\{\alpha_e\}_{e\in\mathcal E}: \alpha_e \geq -\eta, \sum_{e\in\mathcal E} \alpha_e = 1\}$ be a collection of scalars and consider the set of distributions defined by $\mathcal P := \{\sum_{e\in\mathcal E} \alpha_e P_e : \{\alpha_e\} \in\Delta_{\eta}\}$, all affine combinations of distributions defined by the available environments. $\eta \in \mathbb R$ defines the strength of the extrapolation: $\eta = 0$ corresponds to the convex hull of training distributions, but above that value the space of distributions is richer, going beyond what has been observed, since affine combinations amplify the strength of the manipulations that generated the observed training environments. The following theorem presents an interesting upper bound on the robust problem (\ref{robust_pop}) with affine combinations of errors. \textbf{Theorem 1} \textit{Let $\{P_e\}_{e \in \mathcal E}$ be a set of available training environments. Further, let the parameter space of $\beta$ be open and bounded. Then, the following inequality holds,} \begin{align*} \underset{\{\alpha_e\} \in \Delta_{\eta}}{\sup}\hspace{0.1cm} &\sum_{e\in\mathcal E} \alpha_e \underset{(x,y)\sim P_e}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) \leq \underset{(x,y)\sim P_e, e\sim \mathcal E}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) \\ &+ (1 + n\eta) \cdot C \cdot \Big|\Big| \hspace{0.1cm} \underset{e\in \mathcal E}{\sup}\hspace{0.1cm}\underset{(x,y)\sim P_e}{\mathbb E} \nabla_{\beta}\mathcal L\left(f \circ \phi(x),y \right) - \underset{(x,y)\sim P_e, e\sim \mathcal E}{\mathbb E} \nabla_{\beta}\mathcal L\left(f \circ \phi(x),y \right)\hspace{0.1cm} \Big |\Big|_{L_2}, \end{align*} \textit{where $||\cdot||_{L_2}$ denotes the $L_2$-norm, $C$ is a constant that depends on the domain of $\beta$, $n:= |\mathcal E|$ is the number of available environments and $e\sim\mathcal E$ loosely denotes sampling indices with equal probability from $\mathcal E$.} \textbf{Interpretation.} This bound illustrates the trade-off between the invariance of Proposition 1 (second term of the inequality above) and prediction in-sample (the first term). A combination of them upper-bounds a robust optimization problem over affine combinations of training environments, and depending on how much we weight each objective (prediction versus invariance) we can expect solutions to be more or less robust. Specifically, for $\eta = -1/n$ the objective reduces to ERM, but otherwise the upper bound increasingly weights differences in loss derivatives (violations of the invariances of section \ref{sec_invariances}), and in the limit ($\eta\rightarrow\infty$) can be interpreted to be robust at least to \textit{any} affine combination of training losses. 
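To make the role of $\eta$ concrete, note that the left-hand side of Theorem 1 is a linear program over the weights in $\Delta_{\eta}$, so its supremum is attained at a vertex: the worst environment receives weight $1+(n-1)\eta$ and every other environment receives weight $-\eta$. The following short sketch (our own illustration, not code from the paper, and assuming $\eta \geq 0$) computes this worst-case affine mixture from per-environment average losses.
\begin{verbatim}
# Illustrative sketch: worst-case affine mixture of per-environment losses.
import torch

def worst_affine_mixture(env_losses, eta):
    # env_losses: 1-D tensor of per-environment average losses; eta >= 0
    n = env_losses.numel()
    worst = env_losses.max()
    rest = env_losses.sum() - worst
    return (1 + (n - 1) * eta) * worst - eta * rest

# eta = 0 recovers the worst convex mixture (the maximum of the losses);
# larger eta extrapolates further beyond the convex hull of training losses.
\end{verbatim}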
\textbf{Remark on assumptions.} Note that the requirement that $\mathbb F$ be fixed, or Assumption 2, is not necessary for generalization guarantees. As long as new data distributions can be represented as affine combinations of training distributions, we can expect performance to be at least as good as that observed for the robust problem in Theorem 1. \subsection{Proposed objective} Our proposed learning objective is to guide the optimization of $\phi$ and $\beta$ towards solutions that minimize the upper bound in Theorem 1. Using Lagrange multipliers we define the general objective, \begin{align} \label{robust_objective} \underset{\beta,\phi}{\text{minimize }}\underset{(x,y)\sim P_e, e\sim \mathcal E}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) + \lambda\cdot \underset{e\sim \mathcal E}{\text{Var}}\left(|| \underset{(x,y)\sim P_e}{\mathbb E}\nabla_{\beta}\mathcal L\left(f\circ \phi(x),y \right)||_{L_2}\right), \end{align} where $\lambda \geq 0$. We call this problem Derivative Invariant Risk Minimization (DIRM). This objective shares similarities with the objective proposed in \cite{krueger2020out}. The authors considered enforcing equality in environment-specific losses, rather than derivatives, as regularization, which can also be related to a robust optimization problem over an affine combination of errors. We have seen in section \ref{sec_invariances}, however, that equality in losses is not expected to hold in the presence of unobserved confounders. \textbf{Remark on optimization.} The $L_2$ norm in the regularizer is an integral over the domain of values of $\beta$ and is in general intractable. We approximate this objective in practice with norms on functional evaluations at each step of the optimization rather than explicitly computing the integral. We give more details and show this approximation to be justified empirically in the Appendix. \subsection{Robustness in terms of interventions} \label{sec_rob_inter} In this section we give a causal perspective on the robustness achieved by our objective in (\ref{robust_objective}). As is apparent in Theorem 1, performance guarantees on data from a new environment depend on the relationship of new distributions with those observed during training. Let $f\circ\phi_{\lambda \rightarrow \infty}$ minimize $\mathcal L$ among all functions that satisfy all pairs of moment conditions defined in (\ref{optimal_beta}); that is, a solution to our proposed objective in (\ref{robust_objective}) with $\lambda\rightarrow\infty$. At optimality, it holds that gradients evaluated at this solution are equal across environments. As a consequence of Theorem 1, the loss evaluated at this solution with respect to \textit{any} affine combination of environments is bounded by the average loss computed in-sample (denoted $L$, say), \begin{align} \sum_{e\in\mathcal E} \alpha_e \underset{(x,y)\sim P_e}{\mathbb E} \mathcal L\left(f \circ \phi(x),y \right) \leq L, \qquad\text{for any set of } \alpha_e \in \Delta_{\eta}. \end{align} From the perspective of interventions in the underlying causal mechanism, this can be seen as a form of data-driven predictive stability across a range of distributions whose perturbations occur in the same direction as those observed during training. 
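For concreteness, one way to implement a training step for objective (\ref{robust_objective}) is sketched below in PyTorch. This is our own reading of the objective rather than the authors' released code: we assume the predictor parameters $\beta$ correspond to a final layer \texttt{f} acting on the representation \texttt{phi}, and, following the remark on optimization above, the functional $L_2$ norm is approximated by the Euclidean norm of the per-environment average gradient evaluated at the current parameters.
\begin{verbatim}
# Hedged sketch of one DIRM update (assumptions stated above).
import torch

def dirm_step(phi, f, beta_params, loss_fn, environments, lam, optimizer):
    env_losses, grad_norms = [], []
    for x, y in environments:                   # one (x, y) batch per environment
        loss_e = loss_fn(f(phi(x)), y).mean()
        grads = torch.autograd.grad(loss_e, beta_params, create_graph=True)
        flat = torch.cat([g.reshape(-1) for g in grads])
        env_losses.append(loss_e)
        grad_norms.append(flat.norm(p=2))
    objective = (torch.stack(env_losses).mean()
                 + lam * torch.stack(grad_norms).var())  # variance across environments
    optimizer.zero_grad()
    objective.backward()
    optimizer.step()
    return float(objective.detach())
\end{verbatim}
Setting \texttt{lam} to zero recovers ERM, while large values increasingly enforce the derivative invariance of Proposition 1, mirroring the interpolation shown in Figure \ref{Fig1}.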
\textbf{Example.} Consider distributions $P$ of a univariate random variable $X$ given by affine combinations of training distributions $P_0$ with mean $0$ and $P_1$ which, due to intervention, has mean $1$ so that, using our notation, $\mathbb E_PX = \alpha_0\mathbb E_{P_0}X + \alpha_1\mathbb E_{P_1}X$, $\alpha_0=1-\alpha_1\geq -\eta$. $\mathbb E_PX\in[-\eta,\eta]$ and thus we may expect DIRM to be robust to distributions subject to interventions of magnitude $\pm\eta$ on $X$ and any magnitude in the limit $\eta\rightarrow\infty$ (or equivalently $\lambda\rightarrow\infty$). With this reasoning, however, note that the ``diversity'' of training environments has a large influence both on whether we can interpret solutions to be causal (for which we need interventions on all observed variables and unique minimizers) and on the robustness guarantees: for instance, with equal means in $P_0$ and $P_1$, affine combinations would not extrapolate to interventions in the mean of $X$. This is why we say that interventions in test data must have the same ``direction'' as interventions in training data (but interventions can occur on observed, unobserved or target variables). \begin{minipage}{.6\textwidth} Using our simple example in Figure \ref{Fig1} to verify this fact empirically, we consider 3 scenarios corresponding to interventions on exogenous variables of $X, H$ and $Y$. In each, training data from two environments is generated with means in the distribution of the concerned variables set to a value of 0 and 1 respectively (that is, interventions occur on the same variables during training and testing), everything else being equal ($\sigma^2 := 1, H:= E_H\sim \mathcal N(0,1)$). Performance is evaluated on data generated by increasing the shift in the variable being studied up to a mean of 5. In all cases, we see in Figure \ref{stability} that the performance of $f\circ\phi_{\lambda \rightarrow \infty}$ is stable to increasing perturbations in the system. No other learning paradigm has this property. \end{minipage} \hfill \begin{minipage}{.32\textwidth} \begin{figure}[H] \captionsetup{skip=1pt} \centering \includegraphics[width=0.9\textwidth]{Figures/stability.png} \caption{Stability to general shifts.} \label{stability} \end{figure} \end{minipage} \subsection{Stability of certain optimal solutions} \label{stability_section} A special case may also be considered when the underlying system of variables and the available environments allow for optimal solutions $f\circ\phi_{\lambda \rightarrow \infty}$ and $f\circ\phi_{\lambda = 0}$ to coincide. In this case, the learned representation $\phi(x)$ results in a predictor $f$ optimal on average \textit{and} simultaneously with equal gradient in each environment, thus, \begin{align*} ||\underset{(x,y)\sim P_e}{\mathbb E}\nabla_{\beta}\mathcal L\left(f\circ \phi(x),y \right)||_{L_2} = 0, \qquad \text{for all } e\in\mathcal E. \end{align*} For this representation $\phi$, it follows that the optimal solutions $f$ learned on any new dataset sampled from an affine combination of training distributions coincide with this special solution. This gives us a sense of reproducibility of learning: if a specific feature is significant for predictions on the whole range of $\lambda$ with the available data then it will likely be significant on new (related) data. We explore this further in section \ref{sec_reproducibility}. \textbf{Contrast with IRM} \cite{arjovsky2019invariant}. 
The above special case where all solutions in our hyperparameter range agree has important parallels with IRM. The authors proposed a learning objective enforcing representations of data with minimum error on average and across environments, such that at optimum $\mathbb E_{P_i} Y|\phi^{\star}(X) = \mathbb E_{P_j} Y|\phi^{\star}(X)$ for any pair $(i,j)\in\mathcal E$. \textit{Without} unobserved confounding, our proposal and IRM agree. But, \textit{with} unobserved confounding, minimum error solutions of IRM by design converge to spurious associations (see remarks after Proposition 1) and are not guaranteed to generalize to more general environments. For example, in the presence of additive unobserved confounding $H$, irrespective of $\phi$, we may have $\mathbb E_{P_i} Y|\phi^{\star}(X) = \phi^{\star}(X) + \mathbb E_{P_i} H \neq \phi^{\star}(X) + \mathbb E_{P_j} H = \mathbb E_{P_j} Y|\phi^{\star}(X)$ if the means of $H$ differ. The sought invariance then does not hold. \section{Related work} There has been growing interest in interpreting shifts in distribution as fundamentally arising from interventions in the causal mechanisms of data. Peters et al. \cite{peters2016causal} exploited this link for causal inference: causal relationships are by definition invariant to the observational regime. Invariant solutions, as a result of this connection, may be interpreted also as robust to certain interventions \cite{meinshausen2018causality}, and recent work has explored learning invariances in various problem settings from a causal perspective \cite{arjovsky2019invariant,rothenhausler2019causal,krueger2020out,gimenez2020identifying}. Among those, we note the invariance proposed in \cite{rothenhausler2019causal}, where the authors seek to recover causal solutions under unobserved confounding. Generalization properties of these solutions were rarely studied, with one exception being Anchor regression \cite{rothenhausler2018anchor}. The authors proposed to interpolate between empirical risk minimization and causal solutions with explicit robustness to certain interventions in a linear model. The present work may be interpreted as a non-linear formulation of this principle with a more general study of generalization. Notions of invariance have been found useful in the broader field of domain generalization without necessarily referring to an underlying causal model. For instance, recent work has included the use of data augmentation \cite{volpi2018generalizing,shankar2018generalizing}, meta-learning to simulate domain shift \cite{li2018learning,zhang2020adaptive}, contrastive learning \cite{kim2021selfreg}, adversarial learning of representations invariant to the environment \cite{ganin2016domain, albuquerque2019adversarial}, and applications in structured medical domains \cite{jin2020enforcing}. Closest to DIRM are \cite{koyama2020out} and, more recently, \cite{shi2021gradient}, which explicitly use loss derivatives with respect to model parameters to regularize ERM solutions, without however deriving their objectives from shifts in an underlying causal model or from an underlying robust optimization problem. A further line of research, instead of appealing explicitly to invariances between environments, proposes to solve directly a worst-case optimization problem (\ref{robust_pop}). One popular approach is to define $\mathcal P$ as a ball around the empirical distribution $\hat P$, for example using $f$-divergences or Wasserstein balls of a defined radius, see e.g. 
\cite{kuhn2019wasserstein,duchi2016statistics,duchi2019distributionally,sinha2017certifying,wozabal2012framework,abadeh2015distributionally,duchi2018learning}. These are general and do not require multiple environments, but this also means that the sets are defined agnostic to the geometry of plausible shifted distributions, and may therefore lead to solutions, when tractable, that are overly conservative or do not satisfy generalization requirements \cite{duchi2019distributionally}. \section{Experiments} In this section, we conduct an analysis of generalization performance on shifted image, speech and tabular data from the medical domain. Data linkages, electronic health records, and bio-repositories are increasingly being collected to inform medical practice. As a result, prediction models derived from healthcare data are also being put forward as potentially revolutionizing decision-making in hospitals. Recent studies \cite{cabitza2017unintended,venugopalan2019s}, however, suggest that their performance may reflect not only their ability to identify disease-specific features, but also their ability to exploit spurious correlations due to unobserved confounding (such as varying data collection practices): a major challenge for the reliability of decision support systems. In our comparisons we consider the following baseline algorithms: \begin{itemize}[leftmargin=*, itemsep=0pt] \item Empirical Risk Minimization (\textbf{ERM}) that optimizes for minimum loss agnostic of data source. \item Group Distributionally Robust Optimization (\textbf{DRO}) \cite{sagawa2019distributionally} that optimizes for minimum loss across the worst convex mixture of training environments. \item Domain Adversarial Neural Networks (\textbf{DANN}) \cite{ganin2016domain} that use domain adversarial training to facilitate transfer by augmenting the neural network architecture with an additional domain classifier to enforce the distribution of $\phi(X)$ to be the same across training environments. \item Invariant Risk Minimization (\textbf{IRM}) \cite{arjovsky2019invariant} that regularizes ERM ensuring representations $\phi(X)$ be optimal in every observed environment. \item Risk Extrapolation (\textbf{REx}) \cite{krueger2020out} that regularizes for equality in environment losses instead of considering their derivatives. \end{itemize} \textbf{Appendix.} We make additional comparisons in the Appendix on domain generalization benchmarks including VLCS \cite{fang2013unbiased}, PACS \cite{li2017deeper} and Office-Home \cite{venkateswara2017deep} using the DomainBed platform \cite{gulrajani2020search}. All experimental details are standardized across experiments and algorithms (equal network architectures and hyperparameter optimization techniques), and all specifications can be found in the Appendix. 
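Since several of these baselines differ from DIRM only in the penalty they add to the average loss, the following shorthand (our own summary, not code from any of the cited implementations) may help fix ideas; \texttt{env\_losses} is a vector of per-environment average losses and \texttt{env\_grads} a list of per-environment gradient vectors with respect to the predictor parameters, computed as in the DIRM sketch above.
\begin{verbatim}
# Schematic comparison of regularizers (our own shorthand).
import torch

def rex_penalty(env_losses):   # REx: enforce equal losses across environments
    return env_losses.var()

def dirm_penalty(env_grads):   # DIRM: enforce equal loss derivatives
    return torch.stack([g.norm(p=2) for g in env_grads]).var()
\end{verbatim}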
\begin{table*}[t] \fontsize{9.5}{9.5}\selectfont \centering \begin{tabular}{ |p{1.2cm}|C{1.6cm}|C{1.6cm}||C{1.6cm}|C{1.6cm}||C{1.6cm}|C{1.6cm}| } \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{2}{c||}{\textbf{Pneumonia Prediction}} & \multicolumn{2}{c||}{\textbf{Parkinson Prediction}}&\multicolumn{2}{c|}{\textbf{Survival Prediction}}\\ \cline{2-7} \multicolumn{1}{c|}{} & \textbf{Training} & \textbf{Testing} & \textbf{Training} & \textbf{Testing}&\textbf{Training} & \textbf{Testing}\\ \hline ERM & 91.6 ($\pm$ .7) & 52.7 ($\pm$ 1) & 95.5 ($\pm$ .5) & 62.8 ($\pm$ 1) & 93.2 ($\pm$ .4) & 75.4 ($\pm$ .9)\\ \hline DRO & 91.2 ($\pm$ .5) & 53.0 ($\pm$ .6) & 94.0 ($\pm$ .3) & 69.9 ($\pm$ 2)& 90.4 ($\pm$ .4) & 75.2 ($\pm$ .8) \\ \hline DANN & 91.3 ($\pm$ 1) & 57.7 ($\pm$ 2) & 91.6 ($\pm$ 2) & 51.4 ($\pm$ 5) & 89.0 ($\pm$ .8) & 73.8 ($\pm$ .9)\\ \hline IRM & 89.3 ($\pm$ 1) & 58.6 ($\pm$ 2) & 93.7 ($\pm$ 1) & 71.4 ($\pm$ 2)& 91.7 ($\pm$ .6) & 75.6 ($\pm$ .8)\\ \hline REx & 87.6 ($\pm$ 1) & 57.7 ($\pm$ 2) & 92.1 ($\pm$ 1) & 72.5 ($\pm$ 2)& 91.1 ($\pm$ .5) & 75.1 ($\pm$ .9)\\ \hline \textbf{DIRM} & 84.4 ($\pm$ 1) & 63.1 ($\pm$ 3) & 93.0 ($\pm$ 2)& 72.4 ($\pm$ 2) & 91.2 ($\pm$ .6) & 77.6 ($\pm$ 1) \\ \hline \end{tabular} \caption{Accuracy of predictions in percentages ($\%$). Uncertainty intervals are standard deviations. All datasets are approximately balanced, so $50\%$ performance is as good as random guessing.} \label{perf} \end{table*} \subsection{Diagnosis of Pneumonia with Chest X-ray Data} In this section, we attempt to replicate the study in \cite{zech2018confounding}. The authors observed a tendency of image models to exploit spurious correlations for the diagnosis of pneumonia from patient chest X-rays that do not reproduce outside of training data. We use publicly available data from the National Institutes of Health (NIH) \cite{wang2017chestx} and the Guangzhou Women and Children’s Medical Center (GMC) \cite{kermany2018identifying}. Differences in distribution are manifest, and can be seen for example in the top edge of mean pneumonia-diagnosed X-rays shown in Figure \ref{x_ray}. In this experiment, we exploit this (spurious) pathology correlation to demonstrate the need for solutions robust to changes in site-specific features. \begin{minipage}{.6\textwidth} \textbf{Experiment design.} We construct two training sets that will serve as training environments. In the two environments, $90\%$ and $80\%$ of pneumonia-diagnosed patients, respectively, were drawn from the NIH dataset, and the remaining $10\%$ and $20\%$ were drawn from the GMC dataset. The reverse logic ($10\%$ NIH / $90\%$ GMC split) was followed for the test set. This encourages algorithms to use NIH-specific correlations for prediction during training, which are not expected to extrapolate during testing. \end{minipage} \hfill \begin{minipage}{.32\textwidth} \begin{figure}[H] \vspace{-0.4cm} \captionsetup{skip=5pt} \centering \includegraphics[width=1\textwidth]{Figures/x_ray.png} \caption{Mean pneumonia X-ray.} \label{x_ray} \end{figure} \end{minipage} Our results (Table \ref{perf}) show that DIRM significantly outperforms the baselines, suggesting that the proposed invariance guides the algorithm towards better solutions in the case of changes due to unobserved factors. \subsection{Diagnosis of Parkinson's Disease with Speech} Parkinson's disease is a progressive nervous system disorder that affects movement. Symptoms develop gradually, sometimes starting with a barely noticeable tremor in a patient's voice. 
This section investigates the performance of predictive models for the detection of Parkinson's disease, trained on voice recordings of vowels, numbers and individual words and tested on vowel recordings of unseen patients. \textbf{Experiment design.} We used the UCI Parkinson Speech Dataset with given training and testing splits \cite{sakar2013collection}. Even though the distributions of features will differ in different types of recordings and patients, we would expect the underlying patterns in speech to reproduce across different samples. However, this is not the case for correlations learned with baseline training paradigms (Table \ref{perf}). This suggests that spurious correlations due to the specific type of recording (e.g. different vowels or numbers), or even chance associations emphasized due to low sample sizes (120 examples), may be responsible for poor generalization performance. Our results show that correcting for spurious differences between recording types (DIRM, IRM, REx) can improve performance substantially over ERM, although the gain of DIRM over competing methods is less pronounced. \subsection{Survival Prediction with Health Records} This section investigates whether predictive models transfer across data from different medical studies \cite{meta2012survival}, all containing patients that experienced heart failure. The problem is to predict survival within 3 years of experiencing heart failure from a total of 33 demographic variables. We introduce a twist, however, explicitly introducing unobserved confounding by omitting certain predictive variables. The objective is to test performance on new studies with \textit{shifted} distributions, while knowing that these shifts occur predominantly due to variability in unobserved variables. \textbf{Experiment design.} Confounded data is constructed by omitting a patient's age from the data, found in a preliminary correlation analysis to be associated with the outcome as well as other significant predictors such as blood pressure and body mass index (that is, it confounds the association between blood pressure, body mass index, and survival). This example explicitly introduces unobserved confounding, but similar situations are expected in many other settings and across application domains. For instance, such a shift might occur if a prediction model is taken to patients in a different hospital or country than it was trained on. Often the distribution of very relevant variables (e.g. socio-economic status, ethnicity, diet) will differ even though this information is rarely recorded in the data. We consider the 5 studies in MAGGIC of over 500 patients with balanced death rates. Performance results are averages over 5 experiments; in each case, one study is used for testing and the remaining four are used for training. DIRM's performance in this case is competitive with all other methods, which serves to confirm the desirable performance profile of DIRM. \subsubsection{Reproducibility of variable selection} \label{sec_reproducibility} Prediction algorithms are often used to infer influential features in outcome prediction. It is important that this inference be consistent across environments, even if these are perturbed or shifted in some variables. Healthcare is challenging in this respect because patient heterogeneity is high. 
We showed in section \ref{stability_section} that, in the event that the optimal predictor is invariant as a function of $\lambda\in[0,\infty)$, optimal predictors estimated in \textit{every} new dataset in the span of observed distributions should be \textit{stable}. We test this aspect in this section, considering a form of diluted stability for feature selection ($\lambda\in[0,1]$ instead of $\lambda\in[0,\infty)$). \begin{minipage}{.6\textwidth} \textbf{Experiment design.} For a single layer network, we consider significant those covariates with estimated parameters bounded away from zero in all solutions in the range $\lambda\in[0,1]$. Comparisons are made with ERM (conventional logistic regression), and both methods are trained separately on 100 different random pairs of the 33 MAGGIC studies, that is, 100 different environments on which the algorithms may select different relevant features. Figure \ref{maggic} shows how many features (among the top 10 discovered features) in each of the 100 experiments intersect. For instance, approximately $6$ features intersect across $80/100$ runs for DIRM, while only $4$ do for ERM. DIRM thus recovers influential features more consistently than ERM. \end{minipage} \hfill \begin{minipage}{.37\textwidth} \begin{figure}[H] \vspace{-1.5em} \captionsetup{font=small,skip=0pt} \centering \includegraphics[width=0.7\textwidth]{Figures/stability_maggic.png} \caption{Reproducibility of variable selection.} \label{maggic} \end{figure} \end{minipage} \section{Conclusions} We have studied the problem of out-of-sample generalization from a new perspective, grounded in the underlying causal mechanism generating new data that may arise from shifts in observed, unobserved or target variables. Our proposal is a new objective, DIRM, that is provably robust to certain shifts in distribution, and is informed by new statistical invariances in the presence of unobserved confounders. Our experiments show that we may expect better generalization performance and also better reproducibility of influential features in problems of variable selection. A limitation of DIRM is that robustness guarantees crucially depend on the (unobserved) properties of available data: DIRM generally does not guarantee protection against unsuspected events. For example, in Theorem 1, the supremum contains distributions that lie in the affine combination of training environments, as opposed to arbitrary distributions. \section*{Acknowledgements} This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1, the ONR and the NSF grants number 1462245 and number 1533983.
2024-02-18T23:39:40.738Z
2021-05-26T02:14:14.000Z
algebraic_stack_train_0000
41
6,422
proofpile-arXiv_065-236
"\\section{Introduction}\n\nDielectric multilayers constitute one of the simplest and most common cl(...TRUNCATED)
2024-02-18T23:39:40.774Z
2020-07-22T02:10:59.000Z
algebraic_stack_train_0000
43
5,125
proofpile-arXiv_065-324
"\\section{Introduction}\r\nLet $\\mathbb{N}$ be the set of all nonnegative integers. For any sequen(...TRUNCATED)
2024-02-18T23:39:41.119Z
2020-07-22T02:08:34.000Z
algebraic_stack_train_0000
63
4,189
proofpile-arXiv_065-355
"\\section{Introduction}\\label{sec:intro} \nStudies of $0^{+} \\rightarrow 0^{+}$ superallowed $\\(...TRUNCATED)
2024-02-18T23:39:41.242Z
1996-09-10T18:08:26.000Z
algebraic_stack_train_0000
70
3,900
proofpile-arXiv_065-371
"\\section{Introduction}\\label{sec:intro}\n\nThe theoretical possibility that strange quark matter (...TRUNCATED)
2024-02-18T23:39:41.273Z
1996-09-09T11:46:30.000Z
algebraic_stack_train_0000
73
5,750
proofpile-arXiv_065-444
"\\section{INTRODUCTION}\n\nA precise theoretical framework is needed for \nthe study of the quark m(...TRUNCATED)
2024-02-18T23:39:41.503Z
1996-08-31T13:17:48.000Z
algebraic_stack_train_0000
85
2,581
proofpile-arXiv_065-456
"\\subsection*{Acknowledgments}\nThis work was supported by the US Department of Energy, Nuclear Phy(...TRUNCATED)
2024-02-18T23:39:41.530Z
1996-09-16T21:51:35.000Z
algebraic_stack_train_0000
88
25